
The Power of Local LLMs: Hello Ollama

3 min read · Mar 9, 2025

Introduction

Running Large Language Models (LLMs) online can present several significant challenges, particularly when it comes to handling personal or proprietary data. Here are some key considerations:

Data Privacy: Sharing your proprietary data online can be risky. When using online LLMs, your sensitive information may be transmitted over the internet, increasing the risk of unauthorized access or data breaches. Offline models, on the other hand, allow you to keep your data secure within your own infrastructure.

Cost: Utilizing LLM API keys in your daily applications can become expensive over time. The cost of API calls can add up quickly, especially if you are using these models frequently.

Customization: Online LLMs often offer limited customization options. If you need a model tailored to specific business or personal needs, offline models provide more flexibility and control over the model’s architecture and training data.

Offline Capability: Online LLMs require a stable internet connection to function. This can be a limitation in areas with unreliable internet access. Offline models, however, can operate independently of internet connectivity, making them more versatile for various environments.

Solution: Use Ollama

Ollama is a free, open-source platform for running LLMs locally.

1. Download Ollama for your OS from ollama.com/download.

2. Run the downloaded installer and follow the installation wizard.


Once the installation completes successfully, you should see a confirmation screen.


3. Run an LLM (in my case, deepseek-r1) via the CLI:

PS C:\Windows\System32> ollama run deepseek-r1

You should see Ollama pulling the deepseek-r1 model. This step may take a while to complete, depending on the size of the model.
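If you only want to download the model without immediately starting a chat session, the pull command performs the same download step:

PS C:\Windows\System32> ollama pull deepseek-r1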


Once the model is downloaded, you should see a ‘success’ message.


4. Now you are ready to chat with your locally running deepseek-r1 model.
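For example, you can type a question directly at the >>> prompt (the sample below mirrors the prompt used in Ollama’s own README):

>>> Why is the sky blue?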


You can also play around with the other in-session commands, like /show.


/show info: gives the details of the current LLM model.

/bye: exits the prompt mode of the LLM.

Once you exit the prompt mode and return to PowerShell, you can run ollama on its own to see the available command options.
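For example, the list command shows the models currently installed on your machine:

PS C:\Windows\System32> ollama list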

We will now use the rm command to remove the deepseek-r1 model:
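PS C:\Windows\System32> ollama rm deepseek-r1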


You can find the full list of Ollama commands in the project’s documentation on GitHub (github.com/ollama/ollama).

Further reading

Ollama also provides an official Python package (ollama) for use in your code, and it exposes a REST API, served at http://localhost:11434 by default, for interacting with the models.
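Below is a minimal sketch of the Python route, assuming the package is installed with pip install ollama, the local Ollama server is running, and the deepseek-r1 model has already been pulled:

# Minimal chat example using the official `ollama` Python package.
# Assumes the local Ollama server is running and deepseek-r1 is pulled.
import ollama

response = ollama.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])

You can exercise the REST API directly from PowerShell as well; for example, a single non-streaming completion against the /api/generate endpoint:

Invoke-RestMethod -Uri http://localhost:11434/api/generate -Method Post -ContentType "application/json" -Body '{"model": "deepseek-r1", "prompt": "Why is the sky blue?", "stream": false}'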

Conclusion

This article demonstrated how you can run LLMs locally through the Ollama platform. It also explored the different commands you can use to manage those models.

If you liked my article, consider giving multiple claps. Follow my LinkedIn profile for more such interesting content.


Written by Ashish Agarwal

Engineer and Water Color Artist @toashishagarwal
