So let’s talk about how OpenAI’s CEO, Sam Altman, spent three hours with the Senate, urging the U.S. government to license advanced AI models. And guess what? The senators agreed that AI’s potential is huge, comparing it to big stuff like the internet, the Industrial Revolution, and the atomic bomb. Sure, it’s kinda cool that both sides of the political aisle are working together on this, but I’ve got my doubts. Here’s what you need to know about the meeting:
- The U.S. is just starting to get into the whole AI regulation thing, while places like the EU and China are way ahead. The EU’s about to wrap up their AI Act, and China’s dropping its second round of AI rules.
- Altman’s all for AI regulation and licensing. He even wants a government agency to oversee AI safety. This agency would give licenses to companies that develop advanced AI models and could take them away if they mess up on safety.
- Altman wants the U.S. to take the lead in global AI regulation. He thinks we need something like the International Atomic Energy Agency to help set up international guidelines for AI development and use.
- Altman admitted that regulation, like licensing requirements for AI models, could actually benefit OpenAI. They’re about to release a new open-source language model, and regulation could help protect their interests.
- The conversation touched on copyright and compensation. AI models often use artists’ works, which brings up the question of how to compensate creators. Altman is down for paying creators, but he didn’t go into details.
- Altman agrees that Section 230, which protects social media companies from being responsible for user-generated content, doesn’t really apply to AI models. He thinks new rules are needed to deal with what AI models produce and their legal responsibilities.
- Altman said that AI could pose big risks to the world. He’s especially worried about AI-powered personalized disinformation campaigns that could mess with democracy and society.
- Senator Cory Booker expressed worries about the concentration of AI power within the OpenAI-Microsoft alliance. Other AI researchers, like Timnit Gebru, criticized the meeting as an example of big corporations having too much influence over the rules.
In short, Altman pushed for government oversight and licensing of AI models, called for international cooperation, and highlighted the risks of unregulated AI. He also brought up OpenAI’s own interests and the need to handle copyright issues related to AI models.
Personally, it feels like we’re just handing more power to big corporations and limiting consumer control. We need to find a better way forward. Sure, we do need some amount of regulation, but to what end? Unregulated AI can be scary, but AI controlled by big interests is downright terrifying. Are we too late to put the genie back in the bottle?
AI could let anyone with the brains create something amazing, without needing to be rich or well-connected. But the real problem here is our own fear and greed, and we’re not even close to dealing with that. Some rules might help slow a possible collapse of our economy, but our economic systems are totally unprepared for this technology. Big corporations will replace humans with AI as soon as they can, keeping people around only when it’s cheaper. This will make the rich even richer, and we might end up needing a universal basic income.
The future could be amazing, but we need to make big changes, like spreading power around more evenly. The fact that anyone with the smarts can now compete is something we should be cheering about!
Photo created using Midjourney