California lawmakers are about to make a huge decision on the future of AI 

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

California’s hotly contested AI bill nears a decisive moment

A new California bill requiring AI companies to enact stricter safety standards has made its way through California’s state house, much to Silicon Valley’s chagrin.  

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (otherwise known as SB 1047) would require developers of large "frontier" models (those requiring massive compute power or at least $100 million in training costs) to implement a safeguarding and safety-testing framework, undergo audits, and give "reasonable assurance" that the models won't cause a catastrophe. Developers would report their safety work to state agencies. The bill also calls for the creation of a new agency, called the Frontier Model Division, which would help the Attorney General and Labor Commissioner with enforcement and with creating new safety standards.

A long list of industry, academic, and political names has lined up to voice their disapproval of SB 1047. The VC firm Andreessen Horowitz and its influence network have produced the loudest voices in opposition. Stanford professors Fei-Fei Li and Andrew Ng have also come out against the bill following meetings with its author, Senator Scott Wiener. (Li has a billion-dollar AI company funded by Andreessen Horowitz. Ng is CEO of Landing AI.) Meta's Yann LeCun has come out against the bill. So have Reps. Ro Khanna and Zoe Lofgren.

SB 1047’s opponents argue that tech companies have already pledged to lawmakers and the public that they’ll take steps to make sure their models can’t be used to cause catastrophic harm. They also claim that the bill would create burdensome rules that slow progress toward smarter and more capable models, and worry that the new Frontier Model Division would struggle to gain enough expertise to develop safety guidelines for such a young and fast-moving technology as AI. Many have said the bill would make it too risky to open-source AI models because open-source model developers could be held liable if someone modified or fine-tuned the model for harm later on. 

The bill’s sponsors cite a recent survey showing that 65% of Californians support SB 1047 as currently written. Two of AI’s godfathers, Geoffrey Hinton and Yoshua Bengio, also support the bill. But unsurprisingly, the list of big-name opponents to the bill has grown faster than the list of supporters. 

This week, the California Assembly’s Appropriations Committee is expected to add a series of amendments to SB 1047. Many of the amendments are focused on cost considerations. If the committee can agree on the amendments, the bill will go to a floor vote in the Assembly, the final step before heading to the governor’s desk. 

Google wisely puts DeepMind at the center of its smartphone story

AI is quickly becoming central to the smartphone business, and the speed and manner in which companies including Google, Apple, and Samsung embed new and useful AI features into their phones could reshape the marketplace. As I watched Google's product event on Tuesday, I thought about what makes people switch between iPhones and Androids. It's through that lens that I gauge the importance of the AI features Google announced for its Android operating system and for its line of Pixel smartphones: Only about 5% of people in Google's home country (the U.S.) use Pixels, and only about 1% globally. If Google is ever going to steal some of the iPhone's mojo, it'll be because it loaded the Pixel phones with more and better AI features.

Google is putting its Gemini AI models, created by Demis Hassabis and DeepMind, at the center of Android, and at the center of its smartphone story. The company emphasized at Tuesday's event that DeepMind played a central role in the development of the Tensor G4 chip that powers the Pixel 9 phones. And it's in the smartphone that Google's AI has the most potential. Because our smartphone is (almost) always with us, it can collect all kinds of data on everything from fitness to search history to financial information. Google is just now starting to feed that data into its phone-based AI. The company announced an Android feature that lets the AI "see" what's on the screen (say, a YouTube video or webpage) and take actions on it—for example, letting you know that an artist whose video you're watching will soon be playing a live show nearby. The Gemini model can now gather data from other apps, such as Calendar and Keep. The new Gemini Live (subscription only) lets the AI talk to you in a natural way.

Those features are just the start. Google is laying the groundwork for the phone to become a personal assistant that develops a deep understanding of the user over time—so much so that it may know what the user needs before the user does. Smartphones have been pretty boring for a while now; AI could give them new life.

RAND study: Why AI projects fail

The biggest U.S. tech companies lost a combined $1.3 trillion of market capitalization from July 31 through August 5. This selloff sparked lots of conversation and hand-wringing about how soon AI, which all the companies are developing, could start pushing up revenues instead of just R&D costs. While some generative AI projects have indeed saved companies money or increased worker productivity, lots of others—according to some estimates, up to 80%—have failed. To find out why, RAND interviewed 65 data scientists and engineers, each with at least five years of experience building AI/ML models. 

These are the five leading root causes of AI-project failures, according to the RAND report: 

  • Stakeholders often misunderstand or miscommunicate the problem that needs to be solved using AI.
  • The organization lacks the necessary data to adequately train an effective AI model.
  • The organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
  • Organizations lack adequate infrastructure to manage their data and deploy completed AI models.
  • AI technology is applied to problems that are too difficult for AI to solve at present.

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.


Created Aug. 15, 2024, 13:50:05

