Silicon Valley stifled the AI doom movement in 2024


For several years now, technology experts have sounded alarm bells about the potential for advanced AI systems to cause catastrophic damage to the human race.

But in 2024, those cautionary calls have been drowned out by a practical, thriving vision of generative AI promoted by the tech industry – a vision that has also benefited their wallets.

Those who warn of AI’s catastrophic dangers are often called “AI doomers,” though it’s not a name they’re fond of. They worry that AI systems will make decisions to kill people, or be used by the powerful to oppress the masses, or contribute to the downfall of society in one way or another.

In 2023, it seemed like we were at the beginning of a renaissance for technology regulation. AI doom and AI safety — a broader subject that can encompass hallucinations, inadequate content moderation, and other ways AI can harm society — went from a niche topic discussed in San Francisco coffee shops to a conversation appearing on MSNBC, CNN, and the front pages of The New York Times.

To sum up the warnings issued in 2023: Elon Musk and more than 1,000 technologists and scientists called for a pause on AI development, asking the world to prepare for the technology’s profound risks. Shortly after, senior scientists at OpenAI, Google, and other labs signed an open letter saying the threat of AI causing human extinction should be given more credence. Months later, President Biden signed an executive order on AI with the broad goal of protecting Americans from AI systems. In November 2023, the board of the nonprofit behind the world’s leading AI developer, OpenAI, fired Sam Altman, claiming its CEO had a reputation for lying and couldn’t be trusted with a technology as important as artificial general intelligence, or AGI — once the imagined endpoint of AI, meaning systems that actually exhibit self-awareness. (Although the definition is now shifting to meet the business needs of those talking about it.)

For a moment, it looked as if the dreams of Silicon Valley entrepreneurs would take a backseat to the overall health of society.

But for these entrepreneurs, the narrative around AI doom was a more pressing problem than the AI models themselves.

In response, a16z co-founder Marc Andreessen published “Why AI Will Save the World” in June 2023, a 7,000-word essay dismantling the AI doomers’ agenda and presenting a more optimistic vision of how the technology will play out.

Entrepreneur Marc Andreessen speaks onstage during TechCrunch Disrupt SF 2016 at Pier 48 on September 13, 2016 in San Francisco, California. Image credits: Steve Jennings/Getty Images for TechCrunch

“The era of artificial intelligence has arrived, and people are afraid. Fortunately, I am here to bring good news: AI will not destroy the world, and in fact it may save it,” Andreessen wrote in the essay.

In his conclusion, Andreessen offers a convenient solution to our fears of AI: move fast and break things—essentially the same ideology that has defined all other 21st-century technologies (and their attendant problems). He said big tech companies and startups should be allowed to build AI as quickly and aggressively as possible, with little or no regulatory barriers. He added that this would ensure that artificial intelligence does not fall into the hands of a few powerful companies or governments, and would allow America to compete effectively with China.

Of course, this would also allow a16z’s many AI startups to make a lot more money, and some found his techno-optimism crude in an era of extreme income inequality, pandemics, and housing crises.

While Andreessen doesn’t always agree with Big Tech, making money is one area the entire industry can agree on. a16z’s co-founders wrote a letter with Microsoft CEO Satya Nadella this year, essentially asking the government not to regulate the AI industry at all.

Meanwhile, despite all the frantic hand-waving in 2023, Musk and other technologists did not slow down to focus on safety in 2024 — quite the opposite: AI investment in 2024 outpaced anything we’ve seen before. Altman quickly returned to the helm of OpenAI, and a large number of safety researchers left the company in 2024 while ringing alarm bells about its dwindling safety culture.

Biden’s safety-focused AI executive order fell out of favor this year in Washington, D.C.: incoming President-elect Donald Trump announced plans to repeal Biden’s order, arguing that it hinders AI innovation. Andreessen says he has been advising Trump on AI and technology in recent months, and a longtime venture capitalist at a16z, Sriram Krishnan, is now Trump’s official senior adviser on AI.

Republicans in Washington have several AI-related priorities that outrank AI doom today, according to Dean Ball, an AI-focused research fellow at George Mason University’s Mercatus Center. These include building out data centers to power AI, using AI in the government and military, competing with China, limiting content moderation from center-left tech companies, and protecting children from AI chatbots.

“I think [the movement to prevent catastrophic AI risk] has lost ground at the federal level. At the state and local level, they also lost the one major fight they had,” Ball said in an interview with TechCrunch. He is, of course, referring to California’s controversial AI safety bill, SB 1047.

Part of the reason AI doom fell out of favor in 2024 is simply that, as AI models became more popular, we also saw how unintelligent they can be. It’s hard to imagine Google Gemini becoming Skynet when it just told you to put glue on your pizza.

But at the same time, 2024 was a year when many AI products seemed to bring concepts from science fiction to life. For the first time this year, OpenAI showed how we could talk with our phones and not through them, and Meta unveiled smart glasses with real-time visual understanding. The ideas underlying catastrophic AI risk largely come from science fiction films, and while there’s obviously a limit, the AI era is proving that some ideas from sci-fi may not stay fictional forever.

The biggest AI doom fight of 2024: SB 1047

State Senator Scott Wiener, a Democrat from California, during the Bloomberg BNEF Summit in San Francisco, California, on Wednesday, January 31, 2024. Image credits: David Paul Morris/Bloomberg via Getty Images

The AI safety battle of 2024 came to a head with SB 1047, a bill supported by two highly regarded AI researchers: Geoffrey Hinton and Yoshua Bengio. The bill tried to prevent advanced AI systems from causing mass human extinction events and cyberattacks that could cause more damage than 2024’s CrowdStrike outage.

SB 1047 passed the California Legislature and landed on the desk of Gov. Gavin Newsom, who called it a “high-impact” bill. The bill tried to prevent the very things that Musk, Altman, and many other Silicon Valley leaders warned about in 2023 when they signed those open letters on AI.

But Newsom vetoed SB 1047. In the days before his decision, he talked about AI regulation on stage in downtown San Francisco, saying, “I can’t solve everything. What can we solve?”

That pretty neatly sums up how many policymakers think about catastrophic AI risk today: it’s simply not a problem with a practical solution.

However, SB 1047 was flawed beyond its focus on catastrophic AI risk. The bill regulated AI models based on size, in an attempt to regulate only the largest players. But that didn’t account for new techniques such as test-time compute or the rise of small AI models, which leading AI labs are already pivoting toward. Moreover, the bill was widely viewed as an assault on open-source AI — and, by proxy, the research world — because it would have limited the ability of companies like Meta and Mistral to release highly customizable frontier AI models.

But according to the bill’s author, state Sen. Scott Wiener, Silicon Valley played dirty to sway public opinion about SB 1047. He previously told TechCrunch that venture capitalists from Y Combinator and a16z engaged in a propaganda campaign against the bill.

Specifically, these groups spread a claim that SB 1047 would send software developers to prison for perjury. Y Combinator asked young founders to sign a letter saying as much in June 2024. Around the same time, Andreessen Horowitz general partner Anjney Midha made a similar claim on a podcast.

The Brookings Institution labeled this as one of the bill’s many misrepresentations. SB 1047 did mention that tech executives would need to submit reports identifying shortcomings of their AI models, and it noted that lying on a government document is perjury. However, the venture capitalists spreading these fears failed to mention that people are rarely charged with perjury, and even more rarely convicted.

YC rejected the idea that it was spreading misinformation, previously telling TechCrunch that SB 1047 was vague and not as concrete as Sen. Wiener made it out to be.

More broadly, there was a growing sentiment during the SB 1047 fight that AI doomers were not just anti-technology, but also delusional. Famed investor Vinod Khosla called Wiener ignorant of the real dangers of AI at TechCrunch’s 2024 Disrupt event.

Meta’s chief AI scientist, Yann LeCun, has long opposed the ideas underlying AI doom, but he became more outspoken this year.

“The idea that somehow [intelligent] systems will come up with their own goals and take over humanity is preposterous, it’s ridiculous,” LeCun said at Davos in 2024, noting that we are very far from developing superintelligent AI systems. “There are many, many ways to build [any technology] in ways that would be dangerous, or wrong, or kill people, etc… But as long as there is one way to do it right, that’s all we need.”

The battle ahead in 2025

The policymakers behind SB 1047 have hinted they may come back in 2025 with a modified bill to address long-term AI risks. One of the bill’s sponsors, Encode, says the national attention SB 1047 attracted was a positive signal.

“The AI safety movement made very encouraging progress in 2024, despite the veto of SB 1047,” Sunny Gandhi, Encode’s vice president of political affairs, said in an email to TechCrunch. “We are optimistic that public awareness of long-term AI risks is growing, and there is increasing willingness among policymakers to tackle these complex challenges.”

Gandhi says Encode expects “significant efforts” in 2025 to regulate AI-related catastrophic risks, though he did not disclose any specific ones.

On the other side, Martin Casado, a general partner at a16z, is one of the people leading the fight against regulating catastrophic AI risk. In a December op-ed on AI policy, Casado argued that we need a more reasonable AI policy going forward, declaring that “AI seems very safe.”

“We are largely past the first wave of stupid AI policy efforts,” Casado said in a December tweet. “Hopefully we can be smarter going forward.”

Calling AI “very safe” and attempts to regulate it “stupid” is something of an oversimplification. For example, Character.AI, a startup a16z has invested in, is currently being sued and investigated over concerns about child safety. In one ongoing lawsuit, a 14-year-old Florida boy killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual conversations with. This case shows how our society must prepare for new types of AI-related risks that may have sounded ridiculous just a few years ago.

There are more bills floating around that address long-term AI risks, including one just introduced at the federal level by Sen. Mitt Romney. But for now, it looks like AI doomers face an uphill battle in 2025.
