California lawmakers are about to make a huge decision on the future of AI 

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

California’s hotly contested AI bill nears a decisive moment

A new California bill requiring AI companies to enact stricter safety standards has made its way through California's statehouse, much to Silicon Valley's chagrin.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (otherwise known as SB 1047) would require developers of large "frontier" models (those requiring massive computing power or at least $100 million in training costs) to implement a safeguarding and safety-testing framework, undergo audits, and give "reasonable assurance" that the models won't cause a catastrophe. Developers would report their safety work to state agencies. The bill also calls for the creation of a new agency, called the Frontier Model Division, which would help the Attorney General and Labor Commissioner with enforcement and with creating new safety standards.

A long list of industry, academic, and political names has lined up to voice their disapproval of SB 1047. The VC firm Andreessen Horowitz and its influence network have produced the loudest voices in opposition. Stanford professors Fei-Fei Li and Andrew Ng have also come out against the bill following meetings with its author, Senator Scott Wiener. (Li has a billion-dollar AI company funded by Andreessen Horowitz. Ng is CEO of Landing AI.) Meta's Yann LeCun has come out against the bill. So have Reps. Ro Khanna and Zoe Lofgren.

SB 1047's opponents argue that tech companies have already pledged to lawmakers and the public that they'll take steps to make sure their models can't be used to cause catastrophic harm. They also claim that the bill would create burdensome rules that slow progress toward smarter and more capable models, and worry that the new Frontier Model Division would struggle to gain enough expertise to develop safety guidelines for a technology as young and fast-moving as AI. Many have said the bill would make it too risky to open-source AI models, because open-source developers could be held liable if someone modified or fine-tuned a model for harm later on.

The bill’s sponsors cite a recent survey showing that 65% of Californians support SB 1047 as currently written. Two of AI’s godfathers, Geoffrey Hinton and Yoshua Bengio, also support the bill. But unsurprisingly, the list of big-name opponents to the bill has grown faster than the list of supporters. 

This week, the California Assembly’s Appropriations Committee is expected to add a series of amendments to SB 1047. Many of the amendments are focused on cost considerations. If the committee can agree on the amendments, the bill will go to a floor vote in the Assembly, the final step before heading to the governor’s desk. 

Google wisely puts DeepMind at the center of its smartphone story

AI is quickly becoming central to the smartphone business, and the speed and manner in which companies including Google, Apple, and Samsung embed new and useful AI features into their phones could reshape the marketplace. As I watched Google's product event on Tuesday, I thought about what makes people switch between iPhones and Androids. It's through that lens that I gauge the importance of the AI features Google announced for its Android operating system and for its line of Pixel smartphones: Only about 5% of people in Google's home country, the U.S., use Pixels, and only about 1% globally. If Google is ever going to steal some of the iPhone's mojo, it'll be because it loaded the Pixel phones with more and better AI features.

Google is putting its Gemini AI models, created by Demis Hassabis and DeepMind, at the center of Android, and at the center of its smartphone story. The company emphasized at Tuesday's event that DeepMind played a central role in the development of the Tensor G4 chip that powers the Pixel 9 phones. And it's in the smartphone that Google's AI has the most potential. Because our smartphone is (almost) always with us, it can collect all kinds of data on everything from fitness to search history to financial information. Google is just now starting to feed that data into its phone-based AI. The company announced an Android feature that lets the AI "see" what's on the screen (say, a YouTube video or webpage) and take actions on it—for example, letting you know that an artist whose video you're watching will soon be playing a live show nearby. The Gemini model can now gather data from other apps, such as Calendar and Keep. And the new Gemini Live (subscription only) lets the AI talk to you in a natural way.

Those features are just the start. Google is laying the groundwork for the phone to become a personal assistant that develops a deep understanding of the user over time—so much so that it may know what the user needs before the user does. Smartphones have been pretty boring for a while now; AI could give them new life.

RAND study: Why AI projects fail

The biggest U.S. tech companies lost a combined $1.3 trillion of market capitalization from July 31 through August 5. This selloff sparked lots of conversation and hand-wringing about how soon AI, which all the companies are developing, could start pushing up revenues instead of just R&D costs. While some generative AI projects have indeed saved companies money or increased worker productivity, lots of others—according to some estimates, up to 80%—have failed. To find out why, RAND interviewed 65 data scientists and engineers, each with at least five years of experience building AI/ML models. 

These are the five leading root causes of AI-project failures, according to the RAND report: 

  • Stakeholders often misunderstand or miscommunicate the problem that needs to be solved using AI.
  • The organization lacks the necessary data to adequately train an effective AI model.
  • The organization focuses more on using the latest and greatest technology than on solving real problems for its intended users.
  • Organizations lack adequate infrastructure to manage their data and deploy completed AI models.
  • AI technology is applied to problems that are too difficult for AI to solve at present.


Published Aug 15, 2024

