How to run Mistral using Ollama

Learn how to run the Mistral model locally with Ollama and Docker in this step-by-step guide, and get a local LLM into your workflow with minimal setup.

What is Ollama?

Ollama is an open-source tool that lets you run, create, and share large language models locally through a command-line interface on macOS and Linux. As the name suggests, Ollama began by supporting Llama 2, then expanded its model library to include models like Mistral and Phi-2. Ollama makes it easy to get started with running LLMs on your own hardware with very little setup.
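
As a point of reference before the Docker-based walkthrough below: if you install Ollama natively on macOS or Linux instead of using Docker, running Mistral comes down to two CLI commands (the prompt text here is only an illustration):

ollama pull mistral
ollama run mistral "Explain the difference between a process and a thread."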

What is Mistral?

Mistral is an open-weight large language model released by Mistral AI for natural language understanding and generation. The mistral model in Ollama's library is Mistral 7B, a compact model optimized for tasks that demand both accuracy and efficiency in understanding and generating human language; compared with Llama 2, it also produces less than a third as many false "refusals".

Prerequisites

  • CPU: 4 cores or more
  • RAM: 16 GB or more
  • Disk: 100 GB or more of free space
  • Docker version 18.06 or higher
  • OS: Ubuntu 20.04 LTS or later, or CentOS 7/8
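
You can confirm the machine meets these requirements with a few standard commands (exact output varies by distribution):

nproc             # number of CPU cores
free -h           # total RAM
df -h /           # free disk space
docker --version  # installed Docker version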

How to run Mistral with Ollama

Step 1: Pull Docker Image

docker pull ollama/ollama
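
Once the pull finishes, the image should appear in your local image list. A quick sanity check:

docker images ollama/ollama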

Step 2: Run the Ollama Container

docker run -it --name ollama_container ollama/ollama
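
The command above runs the container in the foreground with an interactive terminal. If you would rather run it in the background, keep downloaded models on a named volume, and expose Ollama's HTTP API on its default port 11434, a common alternative looks like the sketch below (the volume name ollama is just an example; add --gpus=all if the NVIDIA Container Toolkit is installed):

docker run -d --name ollama_container -v ollama:/root/.ollama -p 11434:11434 ollama/ollama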

Step 3: Enter the Container

Keep the Ollama Docker container running and open another terminal.

docker exec -it ollama_container /bin/bash
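
Inside the container you have direct access to the Ollama CLI. To confirm it responds and see which models (if any) have already been downloaded:

ollama list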

Step 4: Run the Mistral Model

ollama run mistral
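
The first run downloads the Mistral weights (roughly 4 GB for the default quantized build) and then opens an interactive prompt where you can chat with the model. If you started the container with port 11434 published, as in the detached example above, you can also call the model from the host through Ollama's HTTP API; the prompt text here is only an example:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Summarize what Ollama does in one sentence.",
  "stream": false
}'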

Next time: How to run Mistral with Ollama

Once the image and model are already on the server, subsequent runs take just three steps:

1. Log in to the server.

2. docker exec -it ollama_container /bin/bash

3. ollama run mistral

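If the container itself has been stopped in the meantime (for example, after a server reboot), start it again before running docker exec:

docker start ollama_container
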
Additional - Some Good GPU Plans for Ollama AI

Professional GPU VPS - A4000

$90.30/mo (50% off, regular price $179.00)
  • 32GB RAM
  • 24 CPU Cores
  • 320GB SSD
  • 300Mbps Unmetered Bandwidth
  • Once per 2 Weeks Backup
  • OS: Linux / Windows 10
  • Dedicated GPU: Quadro RTX A4000
  • CUDA Cores: 6,144
  • Tensor Cores: 192
  • GPU Memory: 16GB GDDR6
  • FP32 Performance: 19.2 TFLOPS
  • Available for Rendering, AI/Deep Learning, Data Science, CAD/CGI/DCC.

Advanced GPU - A4000

$209.00/mo
  • 128GB RAM
  • Dual 12-Core E5-2697v2
  • 240GB SSD + 2TB SSD
  • 100Mbps-1Gbps
  • OS: Windows / Linux
  • GPU: Nvidia Quadro RTX A4000
  • Microarchitecture: Ampere
  • Max GPUs: 2
  • CUDA Cores: 6144
  • Tensor Cores: 192
  • GPU Memory: 16GB GDDR6
  • FP32 Performance: 19.2 TFLOPS
  • Good Choice for 3D Rendering, Video Editing, AI/Deep Learning, Data Science, etc.

Advanced GPU - A5000

$242.10/mo (31% off, regular price $349.00)
  • 128GB RAM
  • Dual 12-Core E5-2697v2
  • 240GB SSD + 2TB SSD
  • 100Mbps-1Gbps
  • OS: Windows / Linux
  • GPU: Nvidia Quadro RTX A5000
  • Microarchitecture: Ampere
  • Max GPUs: 2
  • CUDA Cores: 8192
  • Tensor Cores: 256
  • GPU Memory: 24GB GDDR6
  • FP32 Performance: 27.8 TFLOPS

Enterprise GPU - RTX A6000

$409.00/mo
  • 256GB RAM
  • Dual 18-Core E5-2697v4
  • 240GB SSD + 2TB NVMe + 8TB SATA
  • 100Mbps-1Gbps
  • OS: Windows / Linux
  • GPU: Nvidia Quadro RTX A6000
  • Microarchitecture: Ampere
  • Max GPUs: 1
  • CUDA Cores: 10,752
  • Tensor Cores: 336
  • GPU Memory: 48GB GDDR6
  • FP32 Performance: 38.71 TFLOPS

Enterprise GPU - A40

$439.00/mo
  • 256GB RAM
  • Dual 18-Core E5-2697v4
  • 240GB SSD + 2TB NVMe + 8TB SATA
  • 100Mbps-1Gbps
  • OS: Windows / Linux
  • GPU: Nvidia A40
  • Microarchitecture: Ampere
  • Max GPUs: 1
  • CUDA Cores: 10,752
  • Tensor Cores: 336
  • GPU Memory: 48GB GDDR6
  • FP32 Performance: 37.48 TFLOPS

Multi-GPU - 3xRTX A5000

$539.00/mo
  • 256GB RAM
  • Dual 18-Core E5-2697v4
  • 240GB SSD + 2TB NVMe + 8TB SATA
  • 100Mbps-1Gbps
  • OS: Windows / Linux
  • GPU: 3 x Quadro RTX A5000
  • Microarchitecture: Ampere
  • Max GPUs: 3
  • CUDA Cores: 8192
  • Tensor Cores: 256
  • GPU Memory: 24GB GDDR6
  • FP32 Performance: 27.8 TFLOPS

Multi-GPU - 3xRTX A6000

$899.00/mo
  • 256GB RAM
  • Dual 18-Core E5-2697v4
  • 240GB SSD + 2TB NVMe + 8TB SATA
  • 100Mbps-1Gbps
  • OS: Windows / Linux
  • GPU: 3 x Quadro RTX A6000
  • Microarchitecture: Ampere
  • Max GPUs: 3
  • CUDA Cores: 10,752
  • Tensor Cores: 336
  • GPU Memory: 48GB GDDR6
  • FP32 Performance: 38.71 TFLOPS