9  Setup Ollama

9.1 Install Ollama

Download and install Ollama for your operating system from the official website, ollama.com.

9.2 Download Ollama Models

Download any model from Ollama's extensive model library.

For example, to download the popular Qwen 3 model with 8 billion parameters, run:

ollama pull qwen3:8b

9.3 Serve Ollama

Ollama includes a server that runs on your local system and makes any models you have downloaded available to other applications. You can access these models in several ways:

  • Using Ollama’s clean, minimal chatbot interface.
  • Using Ollama’s command line utility.
  • Using the Ollama Python, R, or JavaScript libraries.
  • Through a wide range of open-source and proprietary tools and web applications.

There are two main ways to serve Ollama on your system:

  • Use the Ollama app and toggle the “Expose Ollama to the network” setting to ON.
  • Run the server on the command line:
ollama serve
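
Once the server is running, you can verify it from any language that speaks HTTP. The sketch below queries Ollama's REST endpoint for locally downloaded models, assuming the default port 11434; it returns an empty list rather than crashing if the server is not reachable:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError


def list_local_models(base_url: str = "http://localhost:11434") -> list:
    """Return the names of locally downloaded models via Ollama's
    /api/tags endpoint, or an empty list if the server is unreachable."""
    try:
        with urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        # Server not running or not exposed to the network.
        return []
```

If `ollama serve` is active and you have pulled qwen3:8b, the list should contain an entry like "qwen3:8b".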

9.4 Install the Ollama Python library

Use your preferred way to manage the installation of Python versions, virtual environments, and packages. This guide’s recommendation is to use the powerful and highly performant uv, which can handle all these tasks.

9.4.1 Set up a new project

Initialize a new project:

uv init my-ollama-project

Change directory:

cd my-ollama-project

uv automatically adds a simple main.py file. You can run it to verify that the setup was successful. The first time you run a Python script like this, uv creates the project-specific virtual environment:

uv run main.py

Add the ollama package:

uv add ollama

You can now import the ollama package when using the newly created virtual environment.
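
As a minimal sketch of using the library, the helper below sends a single prompt to a model via ollama.chat. The model name qwen3:8b matches the earlier pull example; the import is deferred into the function so the file also loads in environments where the package is not yet installed:

```python
def ask(model: str, prompt: str) -> str:
    """Send one user message to a local Ollama model and return its reply.
    Requires the ollama package (uv add ollama) and a running server."""
    import ollama

    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]


if __name__ == "__main__":
    print(ask("qwen3:8b", "Why is the sky blue?"))
```

Running this with `uv run main.py` should print the model's answer, assuming the server from section 9.3 is up and qwen3:8b has been pulled.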

9.5 Install the Ollama R package

Use your preferred way to install the ollamar package, for example:

install.packages("ollamar")

or:

pak::pak("ollamar")