What to make of JD Vance’s speech at the Paris AI summit 

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Vance’s Paris speech shows a brash American exceptionalism for the AI age

Vice President JD Vance’s speech to world leaders at the Artificial Intelligence Action Summit was by turns warm and conciliatory, and strident to the point of offensiveness. Vance emphasized that AI has the potential to bring significant benefits to the world, and that its risks can be effectively managed—provided that the U.S. and its tech companies take the lead.

Vance argued that the U.S. remains the leader when it comes to developing cutting-edge AI models, and suggested that other countries should collaborate with the U.S. on AI rather than competing against it. (Vance also said AI companies shouldn’t try to dictate the political tenor of content or dialog their models will accept, citing the Google Gemini model’s failed attempt at generating “correct” images that resulted in a Black George Washington and female popes.)

“This administration will ensure that AI developed in the United States continues to be the gold standard worldwide,” he said. And key to the U.S.’s approach, according to Vance: leaving tech companies to regulate themselves on safety and security issues. 

Vance said the U.S. should take a “collaborative” and “open” approach to AI with other Western countries, but stressed that the world needed “an international regulatory regime that fosters the creation of revolutionary AI technology rather than strangles it.” He went on to criticize the E.U. for its more intrusive regulatory approach, warning the assembly that it would be a “terrible mistake for your own countries” if they “tightened the screws on U.S. tech companies.”

But not everyone agrees: every country attending the Paris summit signed a declaration ensuring artificial intelligence is “safe, secure, and trustworthy”—except for the U.S. and the U.K.

Vance said that his administration will take a different approach—using protectionist tactics to favor U.S. AI companies. The White House will continue the Biden-era chip bans, which restrict the sale of the most advanced AI chips to other countries. (The goal right now for the Trump administration is to hinder Chinese companies like DeepSeek.) It’s possible that the Trump administration could tighten these restrictions further or explore additional measures to slow down foreign AI competitors.

“To safeguard America’s advantage, the Trump administration will ensure that the most powerful AI systems are built in the U.S. with American designed and manufactured chips,” he said.

OpenAI’s models will no longer shy away from sensitive topics

In his Paris speech, JD Vance said his administration believes that AI companies shouldn’t try to restrict speech—even disinformation or outright propaganda—from their models and chatbots. That’s music to Silicon Valley bigwigs’ ears, many of whom don’t love the expensive, demanding, and human-intensive work of content moderation. Two days after the speech, OpenAI announced that it’s pushing a new, more permissive code of conduct (a “model spec”) into its AI models. Going forward, its models will be less conservative about what they will and won’t talk about.

“The updated Model Spec explicitly embraces intellectual freedom–the idea that AI should empower people to explore, debate, and create without arbitrary restrictions–no matter how challenging or controversial a topic may be,” the company said in a blog post published Wednesday. As an example, OpenAI said that an AI model should be kept from outputting detailed instructions for building a bomb or violating personal privacy, but should be trained not to default to simply saying “I can’t help you with that” when given politically or culturally sensitive questions. “In essence, we’ve reinforced the principle that no idea is inherently off limits for discussion,” the blog post said, “so long as the model isn’t causing significant harm to the user or others (e.g., carrying out acts of terrorism).”

This policy shift sounds very much in line with the permissive posture adopted first by right-wing sites such as Gab and Parler, then by X, and more recently by Meta’s Facebook. Now OpenAI is getting in on Big Tech’s vibe shift on content moderation. Stay tuned for the results.

PwC champions agentic AI as the next major workplace disruptor

The professional services firm PwC recently released a report asserting that AI agents could “dwarf even the transformative effects of the internet.” PwC predicts these agents will reshape workforce strategies, business models, and competitive advantages, while combining with human creativity to form “augmented intelligence,” enabling unprecedented innovation and productivity. The report emphasizes collaboration between humans and AI: “While AI agents offer remarkable autonomy, an effective model is one of collaboration and dynamic oversight. This principle of human-at-the-helm can guide the development of clear protocols that define the boundaries of AI autonomy and enable appropriate human intervention.”

PwC warns that businesses must reimagine work to adapt to this agentic world. But, the PwC authors stress, that shift is a necessary one, as evidenced by AI agents’ successful deployment in areas like software development and customer service. To facilitate this transition, PwC suggests a five-step approach: strategize, reimagine work, structure the workforce, help workers redefine their roles, and unleash responsible AI. “The question is,” the report states, “have you transformed to become a winner in the age of AI-enhanced work, or are you racing and perhaps too late to catch up?”

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
