Curious about DeepSeek but worried about privacy? These apps let you use an LLM without the internet

Most of us are used to using internet chatbots like ChatGPT and DeepSeek in one of two ways: via a web browser or via their dedicated smartphone apps. There are two drawbacks to this. First, using them requires an internet connection. Second, everything you type into the chatbot is sent to the company’s servers, where it is analyzed and retained. In other words: the more you use the chatbot, the more the company knows about you. American lawmakers have voiced this worry about DeepSeek in particular.

But thanks to two innovative and easy-to-use desktop apps, LM Studio and GPT4All, you can bypass both of these drawbacks. With these apps, you can run various LLMs directly on your computer. I’ve spent the last week playing around with them, and thanks to each, I can now use DeepSeek without the privacy concerns. Here’s how you can, too.

Run DeepSeek locally on your computer without an internet connection

To get started, simply download LM Studio or GPT4All on your Mac, Windows PC, or Linux machine. Once the app is installed, download the LLM of your choice from its in-app menu. I chose to run DeepSeek’s R1 model, but the apps support myriad open-source LLMs.
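If you’d rather script these steps than click through them, GPT4All also ships a Python SDK that follows the same flow: name a model, let it download once, then chat with it offline. Here’s a minimal sketch, with the caveat that the exact GGUF filename below is my assumption; check the in-app model catalog for the DeepSeek R1 distill actually on offer.

```python
# Minimal sketch using GPT4All's Python SDK (pip install gpt4all).
# The model filename is an assumption; browse the in-app catalog
# for the exact DeepSeek R1 distill available to you.
from gpt4all import GPT4All

# Downloads the model on first use, then runs entirely offline.
model = GPT4All("DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain what a reasoning model is.", max_tokens=300)
    print(reply)
```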

LM Studio can run DeepSeek’s reasoning model privately on your computer.

Once you’ve done the above, you’ve essentially turned your personal computer into an AI server capable of running numerous open-source LLMs, including ones from DeepSeek and Meta. Next, simply open a new chat window and type away, just as you would when using an AI chatbot on the web.
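In LM Studio’s case, “AI server” is more than a metaphor: the app can expose whatever model you’ve loaded through a local, OpenAI-compatible HTTP endpoint (by default at http://localhost:1234/v1), so any OpenAI client library can talk to it without traffic ever leaving your machine. A rough sketch, assuming you’ve started LM Studio’s local server; the model identifier is a placeholder for whichever model you loaded:

```python
# Talks to LM Studio's local OpenAI-compatible server (pip install openai).
# Assumes the server is running at its default address; the model name
# is a placeholder for whichever model is loaded in LM Studio.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",
    messages=[{"role": "user", "content": "Can you teach me how to make a birthday cake?"}],
)
print(resp.choices[0].message.content)
```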

The best thing about both of these apps is threefold: they’re free for general consumer use; they can run several open-source LLMs (you get to choose which, and can swap between them at will); and if you already know how to use an AI chatbot in a web browser, you already know how to use the chatbot in these apps.

But there are additional benefits to running LLMs locally on your computer, too.

The benefits of using an LLM locally

I’ve been running DeepSeek’s reasoning model on my MacBook for the past week without so much as a hiccup in either LM Studio or GPT4All. One of the coolest things about interacting with DeepSeek in this way is that no internet is required. Since the LLM is hosted directly on your computer, you don’t need any kind of data connection to the outside world to use it.

Running LLMs like DeepSeek in apps like GPT4All can help keep your data secure.

Or as GPT4All’s lead developer, Adam Treat, puts it, “You can use it on an airplane or at the top of Mount Everest.” This is a major boon to business travelers stuck on long flights and those working in remote, rural areas. 

But if Treat had to sum up the biggest benefit of running DeepSeek locally on your computer, he would do it in one word: “Privacy.”

“Every online LLM is hosted by a company that has access to whatever you input into the LLM. For personal, legal, and regulatory reasons this can be less than optimal or simply not possible,” Treat explains. 

For individuals, this can present privacy risks; for those who upload business or legal documents into an LLM to summarize, it could put their company and its data in jeopardy.

“Uploading that [kind of data] to an online server risks your data in a way that using it with an offline LLM will not,” Treat notes. The reason an offline LLM running locally on your own computer doesn’t put your data at risk is that “Your data simply never leaves your machine,” says Treat.

This means, for example, that if you want to use DeepSeek to help you summarize a report you wrote, you can feed it to the DeepSeek model stored locally on your computer via GPT4All or LM Studio and rest assured that the information in the report isn’t being sent to the LLM maker’s servers.
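As a sketch of what that looks like in practice with GPT4All’s Python SDK (the report path and model filename are placeholders of mine), nothing here touches the network once the model has been downloaded:

```python
# Hypothetical example: summarize a local report without it leaving the machine.
# The model filename and report path are placeholders.
from pathlib import Path

from gpt4all import GPT4All

report = Path("quarterly_report.txt").read_text(encoding="utf-8")

model = GPT4All("DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf")
with model.chat_session():
    summary = model.generate(
        "Summarize the following report in five bullet points:\n\n" + report,
        max_tokens=400,
    )
print(summary)
```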

The drawbacks of using an LLM locally

However, there are drawbacks to running an LLM locally. The first is that you’re limited to the open-source models on offer, which may lag behind the latest model available through a chatbot’s official website. And because only open-source models can be installed, you can’t use apps like GPT4All or LM Studio to run a proprietary model like OpenAI’s ChatGPT locally on your computer.

Another disadvantage is speed. 

“Because you are using your own hardware (your laptop or desktop) to power the AI, the speed of responses will be generally slower than an online server,” Treat says. And since AI models rely heavily on RAM to perform their computations, the amount of RAM you have in your computer can limit which models you can install in apps like GPT4All and LM Studio.
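That RAM constraint is easy to estimate with a back-of-the-envelope rule of thumb (my own approximation, not a figure from the article or the apps): the weights of a quantized model take roughly params × bits ÷ 8 bytes, plus a couple of gigabytes of working overhead for the context window and runtime.

```python
# Rough RAM estimate for quantized local models (rule of thumb, not an
# official formula): weights ≈ params × bits/8, plus runtime overhead.
def approx_ram_gb(params_billion: float, bits_per_weight: int,
                  overhead_gb: float = 1.5) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8-bit ≈ 1 GB
    return weights_gb + overhead_gb

for size_b, bits in [(7, 4), (14, 4), (32, 4)]:
    print(f"{size_b}B model @ {bits}-bit ≈ {approx_ram_gb(size_b, bits):.1f} GB RAM")
# A 16 GB machine comfortably fits a 4-bit 7B or 14B distill; 32B does not.
```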

“As online servers are usually powered by very high-end hardware they are generally going to be faster and have more memory allowing for very fast responses by very large models,” explains Treat.

Still, in my testing of both LM Studio and GPT4All over the past week, I don’t think the slower replies are a dealbreaker. On the web, DeepSeek’s R1 reasoning model, hosted on servers in China, took 32 seconds to answer the prompt “Can you teach me how to make a birthday cake?” The local R1 model took 84 seconds in LM Studio and 82 seconds in GPT4All.

I’ve found that the benefits of running DeepSeek locally on my device using LM Studio and GPT4All far outweigh the extra wait for a response. Being able to access a powerful AI model like DeepSeek’s R1 on my computer anywhere, at any time, without an internet connection, and knowing that the data I enter into it remains private, is without a doubt a trade-off worth making.
