Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Check the Ollama version to verify the installation
ollama --version
Pull the llama3.2 model
ollama pull llama3.2
Run the llama3.2 model (this starts an interactive chat session; type /bye to exit)
ollama run llama3.2
Ollama API reference link: https://github.com/ollama/ollama/blob/main/docs/api.md
Access the API locally using curl
curl http://localhost:11434/api/generate -d '{ "model": "llama3.2", "prompt": "How are you today?"}'
By default, the API streams the response as a series of JSON objects, one per generated chunk, which is not very readable. Adding "stream": false returns a single JSON object whose response field contains the full reply.
curl http://localhost:11434/api/generate -d '{ "model": "llama3.2", "prompt": "How are you today?", "stream": false}'
Ollama Python library link: https://github.com/ollama/ollama-python
The Ollama Python library is the easiest way to integrate Python 3.8+ projects with Ollama. First install python3-pip and the venv module, create a Python virtual environment, and install the library into it.
apt install python3-pip python3.12-venv
python3 -m venv myvenv
./myvenv/bin/python3 -m pip install ollama
There are many ways to call the Ollama API from Python. Here are two relatively simple ones; for more details, refer to the link above.
1. Create a Python file (see the sketch below)
2. Use the Python shell (see the interactive example after the sketch)
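For the first approach, here is a minimal sketch of a Python file, assuming the virtual environment created above and the llama3.2 model pulled earlier (the file name chat.py is just an example):

# chat.py - minimal example using the ollama library
import ollama

# Send one chat message to the local Ollama server (default: http://localhost:11434)
response = ollama.chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'How are you today?'}],
)
print(response['message']['content'])

Run it with the interpreter from the virtual environment:
./myvenv/bin/python3 chat.py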
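For the second approach, the same calls work interactively. Start the shell with ./myvenv/bin/python3, then, for example:

>>> import ollama
>>> result = ollama.generate(model='llama3.2', prompt='How are you today?')
>>> print(result['response'])

Here ollama.generate() wraps the same /api/generate endpoint used in the curl examples above.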