How the U.S. chip bans led to a monster called DeepSeek

The Chinese AI company DeepSeek has put the AI industry in an uproar. Denied the most powerful chips thought necessary to create state-of-the-art AI models, DeepSeek pulled off a series of engineering masterstrokes that allowed its researchers to do more with less. The DeepSeek-V3 and DeepSeek-R1 models the company recently released achieved state-of-the-art performance on benchmark tests and required far less time and money to train and operate than comparable models.

And the cherry on top: The company’s researchers showed their work—they explained the breakthroughs in research papers and open-sourced the models so others can use them to make their own models and agents.

The main reason DeepSeek had to do more with less is that the Biden administration imposed a series of restrictions on chip exports, barring U.S. chipmakers such as Nvidia from shipping their most powerful GPUs (graphics processing units, the go-to chips for training AI models) to China.

This effort started in October 2022 and has been updated and fine-tuned several times to close loopholes; Biden issued an executive order shortly before leaving office that tightened the restrictions further. DeepSeek apparently played by the rules. It made do with the H800 chips the U.S. allowed Nvidia to sell in China, instead of the more powerful H100s that U.S. tech and AI companies use.

With less powerful chips, the researchers were forced to find ways of training and operating AI models using less memory and computing power. 

The DeepSeek models use a “mixture of experts” approach, which lets them activate only the subset of the model’s parameters that specializes in a given type of query. This economizes on computing power and increases speed. DeepSeek didn’t invent the approach (OpenAI’s GPT-4 reportedly uses it, as does Databricks’s DBRX model), but the company found new ways of using the architecture to reduce the processing required during pretraining (the stage in which the model ingests huge amounts of data to optimize its parameters to respond correctly to user queries).
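To make the idea concrete, here is a toy Python sketch of how top-k expert routing works in general. It is not DeepSeek’s code; the names (moe_layer, NUM_EXPERTS, TOP_K) and sizes are invented for illustration, and each “expert” is just a random matrix standing in for a feed-forward block.

# A minimal, illustrative sketch of mixture-of-experts routing -- not DeepSeek's
# actual implementation; all names and sizes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8      # small feed-forward sub-networks
TOP_K = 2            # only this many experts run for each token
DIM = 16             # toy hidden size

# Each "expert" is just a random linear layer standing in for a feed-forward block.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))     # scores each expert for a token

def moe_layer(token_vec):
    """Route one token through only its top-k experts, skipping the rest."""
    scores = token_vec @ router                      # affinity of this token to each expert
    top_k = np.argsort(scores)[-TOP_K:]              # pick the best-matching experts
    weights = np.exp(scores[top_k])
    weights /= weights.sum()                         # softmax over the chosen experts only
    # Only TOP_K of the NUM_EXPERTS matrix multiplies actually run,
    # which is where the compute savings come from.
    return sum(w * (token_vec @ experts[i]) for w, i in zip(weights, top_k))

token = rng.standard_normal(DIM)
out = moe_layer(token)
print(out.shape)  # (16,) -- same output shape as a dense layer, at a fraction of the compute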

In DeepSeek-R1, a reasoning model comparable to OpenAI’s most recent o1 series of models (announced in September), DeepSeek found ways to economize at inference time, when the model is “thinking” through various routes to a good answer. During this process of trial and error, the system must collect and store more and more information about the problem and its possible solutions in its “context window” (in effect, its working memory) as it goes.

As the context window fills with more information, the memory and processing power required climb quickly. Perhaps DeepSeek’s biggest innovation is dramatically reducing the amount of memory needed to store all that data. In general terms, the R1 system keeps the context data in a compressed form, which saves memory and improves speed without affecting the quality of the answer the user sees.
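The general idea of caching a compressed representation of past context, rather than full-size vectors, can be illustrated with a toy sketch like the one below. This is not DeepSeek’s actual attention mechanism; the projections, sizes, and names (compress, expand, step) are invented purely to show why storing a smaller latent vector per token shrinks the memory that grows with the context window.

# A toy illustration of caching a compressed, lower-dimensional representation of
# past context instead of full-size vectors. A sketch of the concept only, not
# DeepSeek's actual method; all names and sizes are invented.
import numpy as np

rng = np.random.default_rng(1)

HIDDEN = 64      # full per-token vector size
LATENT = 8       # compressed size actually kept in the cache

# Learned (here: random) projections: compress on the way in, expand on the way out.
compress = rng.standard_normal((HIDDEN, LATENT)) / np.sqrt(HIDDEN)
expand = rng.standard_normal((LATENT, HIDDEN)) / np.sqrt(LATENT)

cache = []  # grows by one LATENT-sized entry per token, not one HIDDEN-sized entry

def step(token_vec):
    """Add one token's compressed state to the cache, then rebuild the full context."""
    cache.append(token_vec @ compress)          # store only the small latent vector
    latents = np.stack(cache)                   # (num_tokens, LATENT)
    return latents @ expand                     # approximate full-size context on demand

for _ in range(100):
    step(rng.standard_normal(HIDDEN))

full_bytes = 100 * HIDDEN * 8    # what caching full 64-float vectors would take
cached_bytes = 100 * LATENT * 8  # what the compressed cache actually holds
print(f"cache holds {cached_bytes} bytes instead of {full_bytes}")  # 8x smaller here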

DeepSeek said in a research paper that its V3 model cost a mere $5.576 million to train. By comparison, OpenAI CEO Sam Altman said that the cost to train its GPT-4 model was more than $100 million.

Since the release of DeepSeek’s V3, developers have been raving about the model’s performance and utility. Consumers are now embracing a new DeepSeek chatbot (powered by the V3 and R1 models), which has climbed to number one on Apple’s App Store ranking of free apps. (However, that success has attracted cyberattacks against DeepSeek and caused the company to temporarily limit new user registrations.)

For the past two years, the narrative in the industry has been that creating state-of-the-art frontier models requires billions of dollars, huge numbers of the fastest Nvidia chips, and large teams of top researchers. That assumption is now being challenged across the industry and in investment circles. Nvidia stock fell nearly 17% on Monday as investors questioned their assumptions about demand for the expensive GPUs. And it’s all happening because a small shop of Chinese researchers knew they’d need some big engineering breakthroughs to create state-of-the-art models with less-than-state-of-the-art chips.
