A while back I wrote about how you can run your own local ChatGPT-like experience for free using Ollama and OpenWebUI, with support for LLMs like Llama 3, Microsoft Phi, Mistral, and more. With the recent open-source release of DeepSeek R1, it too can be run locally with Ollama. This article walks you through the steps. If you’re looking for an intro to getting started with Ollama on your local machine, I recommend you read my “Run Your Own Local, Private, ChatGPT-like AI Experience with Ollama and OpenWebUI” article first, then come back here.
Why Run DeepSeek R1 Locally?
DeepSeek R1 is a powerful and efficient open-source large language model (LLM) that offers state-of-the-art reasoning, problem-solving, and coding abilities. Running it locally provides several advantages:
- Privacy: No data is sent to external servers, ensuring complete control over your interactions.
- Performance: Get faster responses by leveraging your local hardware rather than relying on cloud-based APIs.
- Cost-Efficiency: Avoid ongoing API costs associated with cloud-based AI services.
- Customization: Fine-tune and integrate the model into your specific workflows without third-party limitations.
Prerequisites
Before installing DeepSeek R1, ensure you have the following:
- A compatible operating system (macOS, Linux, or Windows with WSL2)
- At least 16GB RAM for the smaller distilled models; larger variants need considerably more memory (and ideally a capable GPU)
- Ollama installed on your system
If you haven’t installed Ollama yet, you can download it from Ollama’s official website and follow their installation instructions.
Installing and Running DeepSeek R1 with Ollama
Step 1: Install Ollama
If you haven’t already installed Ollama, follow these steps:
- Download and install Ollama from the official website.
- Once installed, verify the installation with the command:
ollama --version
Step 2: Download the DeepSeek R1 Model
To pull the DeepSeek R1 model using Ollama, run the following command in your terminal:
ollama pull deepseek-r1
This will automatically download the DeepSeek R1 model to your local machine. The download time depends on your internet speed.
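DeepSeek R1 is also published in several distilled sizes in the Ollama model library. If the default is too large for your hardware, you can pull a specific variant by tag (tag names as listed in the Ollama library; RAM guidance here is a rough assumption):

```shell
# Pull a specific distilled variant by tag instead of the default
ollama pull deepseek-r1:1.5b   # smallest; suitable for modest hardware
ollama pull deepseek-r1:14b    # stronger reasoning, needs much more RAM/VRAM
```

Any tag you pull this way is then run with the same tag, e.g. ollama run deepseek-r1:1.5b.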
Step 3: Verify Installation
To ensure the model was downloaded successfully, run:
ollama list
If installed correctly, you should see deepseek-r1 in the list of available models.
Step 4: Running DeepSeek R1 Locally
Once downloaded, you can run the model locally with:
ollama run deepseek-r1
This starts an interactive chat session with the model in your terminal. Type /bye to exit the session.
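One thing to know when scripting against DeepSeek R1: it emits its chain-of-thought inside &lt;think&gt;...&lt;/think&gt; tags before the final answer. If you call the model programmatically (for example through Ollama’s local REST API) and only want the answer, you may want to strip that block. A minimal sketch (the helper name and sample text are illustrative):

```python
import re

def strip_think(text: str) -> str:
    """Remove the <think>...</think> reasoning block DeepSeek R1
    prepends to its responses, leaving only the final answer."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

# Example response shaped like typical R1 output
raw = "<think>The user asks 2 + 2. That is 4.</think>The answer is 4."
print(strip_think(raw))  # → The answer is 4.
```

This is handy in pipelines where the reasoning trace is interesting for debugging but shouldn’t leak into downstream output.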
Optional: Using OpenWebUI for a GUI Experience
If you prefer a graphical interface instead of using the terminal, you can pair Ollama with OpenWebUI:
- Install Docker if you haven’t already.
- Run the OpenWebUI Docker container:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- Access OpenWebUI at http://localhost:3000 and configure it to use Ollama as the backend.
Note: This is a local instance of OpenWebUI. Since it runs in Docker, though, you can also host OpenWebUI on a cloud server if you want to make it available from other machines.
Fine-Tuning and Customization
For advanced users, you may want to customize DeepSeek R1 for specific tasks. Ollama allows you to create custom models based on DeepSeek R1 by writing a Modelfile, where you can set a system prompt and adjust sampling parameters. Once your Modelfile is ready, build the custom model with:
ollama create deepseek-custom -f Modelfile
You can then run your custom AI assistant with ollama run deepseek-custom.
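As a sketch, a Modelfile might look like the following (the custom model name, parameter value, and system prompt are illustrative examples, not requirements):

```
# Modelfile — builds a customized model on top of deepseek-r1
FROM deepseek-r1

# Sampling parameter (value here is illustrative)
PARAMETER temperature 0.6

# A system prompt that shapes every response
SYSTEM "You are a concise coding assistant. Prefer short, working examples."
```

Saving this as a file named Modelfile in the current directory and running ollama create deepseek-custom -f Modelfile produces a new local model that behaves according to these settings.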
Conclusion
With Ollama, running DeepSeek R1 locally is simple and offers a powerful, private, and cost-effective AI experience. Whether you’re a developer, researcher, or enthusiast, having access to a cutting-edge model like DeepSeek R1 on your local machine opens up endless possibilities.
By running DeepSeek R1 locally, you not only enhance privacy and security but also gain full control over AI interactions without the requirement of cloud services. This setup is particularly beneficial for enterprises looking to integrate AI into their internal systems, researchers requiring offline capabilities, and developers interested in experimenting with AI models efficiently. Furthermore, the combination of DeepSeek R1 and Ollama allows users to create highly customized AI applications tailored to specific needs.
As AI continues to evolve, the ability to run sophisticated models locally will become an increasingly valuable asset. Whether you’re exploring AI for personal use, professional development, or business applications, DeepSeek R1 provides a robust and accessible solution. Try it today and take your AI experiments to the next level!
Original Article Source: Run DeepSeek R1 Locally for Free with Ollama and OpenWebUI by Chris Pietschmann (If you’re reading this somewhere other than Build5Nines.com, it was republished without permission.)