Follow these steps to set up and run Ollama, the Kwaai Assistant backend, and the Kwaai frontend:
**Process 1: Set Up Ollama**

1. **Pull the Ollama Docker Image**

   Use the following command to pull the Ollama image:

   ```bash
   docker pull ollama/ollama
   ```

   Example output:

   ```
   Using default tag: latest
   latest: Pulling from ollama/ollama
   a186900671ab: Pull complete
   b0130a66c113: Pull complete
   16dfb65baac7: Pull complete
   03856fb3ee73: Pull complete
   898a890e221d: Pull complete
   db1a326c8c34: Pull complete
   Digest: sha256:7e672211886f8bd4448a98ed577e26c816b9e8b052112860564afaa2c105800e
   Status: Downloaded newer image for ollama/ollama:latest
   docker.io/ollama/ollama:latest
   ```
   Optionally, you can:

   - Sign in to your Docker account:

     ```bash
     docker login
     ```

   - View a summary of image vulnerabilities and recommendations:

     ```bash
     docker scout quickview ollama/ollama
     ```
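   Before moving on, you can confirm the image is available locally (a quick sanity check; the exact columns and sizes shown will vary with your Docker version):

   ```bash
   docker images ollama/ollama
   ```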
2. **Run the Ollama Docker Container**

   Start the container with the following command:

   ```bash
   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
   ```

   Example output:

   ```
   d236da51b6995b0ba4981869ad30fa4ea97d02b844e1e971487604b04ca47af0
   ```

   - `-d`: Runs the container in detached mode.
   - `-v ollama:/root/.ollama`: Maps a volume to persist installed models.
   - `-p 11434:11434`: Exposes port 11434 for communication.
   - `--name ollama`: Assigns the container a name for easier management.
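   Before continuing, it can help to confirm that the container is up and the server is answering on the mapped port. A minimal sanity check (a plain GET against the Ollama server returns a short status message):

   ```bash
   # Confirm the container is listed as running
   docker ps --filter name=ollama

   # The Ollama server responds on the mapped port
   curl http://localhost:11434/
   ```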
3. **Verify Container Network and Retrieve IP Address**

   Ensure Ollama is running on the default Docker network `bridge` and retrieve its IP address. Run the following command:

   ```bash
   docker network inspect bridge
   ```

   Example output:

   ```
   [
       {
           "Name": "bridge",
           "Id": "2cf7eaa6e7aae4836088f42a362fab3f01d75ce1b871ac3fb7256609bc707e84",
           "Containers": {
               "f19cb3ac564924b0102834a125d4707f116194173fec6d082bd30492eb8098ab": {
                   "Name": "ollama",
                   "IPv4Address": "172.17.0.2/16"
               }
           }
       }
   ]
   ```

   Identify the IP address for Ollama from the `IPv4Address` field (e.g., `172.17.0.2`).
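   If you prefer to extract the address directly, `docker inspect` accepts a Go-template format string (a convenience, not required by the steps above):

   ```bash
   # Print only the ollama container's IP address
   docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ollama
   ```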
4. **Install Models in the Container**

   Once the container is running, install the required models using:

   ```bash
   docker exec -it ollama ollama pull llama3.2
   ```

   Example output:

   ```
   pulling manifest
   pulling dde5aa3fc5ff... 100% ▕██████████████████████████████████████████████▏ 2.0 GB
   pulling 966de95ca8a6... 100% ▕██████████████████████████████████████████████▏ 1.4 KB
   pulling fcc5a6bec9da... 100% ▕██████████████████████████████████████████████▏ 7.7 KB
   verifying sha256 digest
   writing manifest
   success
   ```
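   To confirm the model actually loads and responds, you can run a one-off prompt through it (an optional smoke test; the first call may take a while as the model is loaded into memory):

   ```bash
   docker exec -it ollama ollama run llama3.2 "Say hello in one sentence."
   ```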
5. **Verify Installed Models**

   List all available models with:

   ```bash
   docker exec -it ollama ollama list
   ```
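   The same list is exposed over HTTP at `/api/tags`, the endpoint that the `OLLAMA_LOCAL_MODELS_URL` setting points at in Process 2, so this doubles as a connectivity check:

   ```bash
   # Query installed models over the REST API from the Docker host
   curl http://localhost:11434/api/tags
   ```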
**Process 2: Set Up the Assistant Backend**

1. **Clone the Repository**

   Clone the Assistant repository using the following command:

   ```bash
   git clone https://github.com/Kwaai-AI-Lab/assistant.git
   ```
2. **Navigate to the Assistant Directory**

   Move into the `assistant` directory:

   ```bash
   cd assistant
   ```
3. **Prepare the `.env` File**

   Create a `.env` file in the `assistant` directory with the following content. Update the `OLLAMA_LOCAL_MODELS_URL` and `OLLAMA_IP` values using the IP address obtained in Process 1, Step 3:

   ```env
   PAIOS_ALLOW_ORIGINS='http://localhost:5173,https://0.0.0.0:8443,https://localhost:3000'
   PAIOS_DB_ENCRYPTION_KEY='your_db_encryption_key_here'
   CHUNK_SIZE='2000'
   CHUNK_OVERLAP='400'
   ADD_START_INDEX='True'
   EMBEDDER_MODEL='llama3.2:latest'
   SYSTEM_PROMPT='You are a helpful assistant for students learning needs.'
   MAX_TOKENS='200'
   TEMPERATURE='0.2'
   TOP_K='40'
   TOP_P='0.9'
   PAIOS_SCHEME='https'
   PAIOS_HOST='0.0.0.0'
   PAIOS_EXPECTED_RP_ID='localhost'
   PAIOS_PORT='8443'
   PAIOS_URL='https://localhost:8443'
   PAI_ASSISTANT_URL='https://localhost:3000'
   PAIOS_JWT_SECRET='your_jwt_secret_here'

   # Eleven Labs
   XI_API_URL='https://api.elevenlabs.io'
   XI_API_KEY='sample_api_key'
   XI_CHUNK_SIZE='1024'

   # Ollama
   OLLAMA_LOCAL_MODELS_URL='http://172.17.0.2:11434/api/tags'
   OLLAMA_MODELS_DESCRIPTION_URL='https://ollama.com/library/'
   OLLAMA_IP='http://172.17.0.2:11434'
   ```
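   `PAIOS_DB_ENCRYPTION_KEY` and `PAIOS_JWT_SECRET` are placeholders and must be replaced with real values. One common way to generate random secrets is `openssl` (the 32-byte length here is an assumption, not a requirement stated by the repository):

   ```bash
   openssl rand -hex 32   # paste the result into PAIOS_DB_ENCRYPTION_KEY
   openssl rand -hex 32   # paste the result into PAIOS_JWT_SECRET
   ```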
4. **Build and Run the Docker Image**

   Build the Docker image and run the container:

   ```bash
   docker build -t assistant . && docker run -p 8443:8443 assistant
   ```
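   The command above runs the backend in the foreground. If you prefer a background container with a predictable name (which also makes it easier to spot in the network check at the end of this guide), a detached variant looks like this (the name `assistant` is a choice, not a requirement):

   ```bash
   docker build -t assistant . && docker run -d -p 8443:8443 --name assistant assistant
   ```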
**Process 3: Set Up the Frontend**

1. **Clone the Repository**

   Clone the Frontend repository using the following command:

   ```bash
   git clone https://github.com/Kwaai-AI-Lab/kwaai-ui.git
   ```
2. **Navigate to the Frontend Directory**

   Move into the `kwaai-ui` directory:

   ```bash
   cd kwaai-ui
   ```
3. **Build and Run the Docker Image**

   Use the following command to build and run the frontend Docker container:

   ```bash
   docker build -t kwaai_frontend . && docker run -p 3000:3000 kwaai_frontend
   ```
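   Once the container starts, the UI should be reachable in a browser at `https://localhost:3000`, matching the `PAI_ASSISTANT_URL` value in the backend's `.env`. As with the backend, a detached, named run keeps the container visible in `docker ps` (the name `kwaai_frontend` is a choice, not a requirement):

   ```bash
   docker build -t kwaai_frontend . && docker run -d -p 3000:3000 --name kwaai_frontend kwaai_frontend
   ```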
**Verify Container Communication**

To ensure the three containers (`ollama`, `assistant`, and `kwaai_frontend`) can communicate with each other, verify they are all running on the default Docker network `bridge`. Run the following command:

```bash
docker network inspect bridge
```

Ensure all three containers are listed under the `Containers` section and retrieve the IP address for Ollama (e.g., `172.17.0.2`). If any container is missing, troubleshoot its network connectivity to ensure proper communication.
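A compact way to check all three at once is to format the inspect output so it prints only container names and addresses; if a container is missing, `docker network connect` attaches it to the network (the container name in the second command is an example):

```bash
# Print "name: IPv4" for every container attached to the bridge network
docker network inspect bridge -f '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{println}}{{end}}'

# Attach a missing container to the bridge network (example name)
docker network connect bridge assistant
```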