Zack Saadioui
8/27/2024
Ollama lets you run large language models locally on your own machine. To install it on Linux, run the official install script:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```
If you prefer containers, Ollama also ships an official Docker image:

```bash
docker pull ollama/ollama
```
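Once the image is pulled, you need to start the server before you can talk to it. A minimal CPU-only invocation, following the Ollama Docker docs (the volume name and container name `ollama` here are conventional choices, not requirements), looks like this:

```bash
# Persist downloaded models in a named volume and expose the API on port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```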
With Ollama installed, the `ollama` CLI is all you need to start chatting with a model. For example, to download and run Llama 3.1:

```bash
ollama run llama3.1
```
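`ollama run` drops you into an interactive prompt. Two small conveniences from the Ollama docs: multiline input can be wrapped in triple quotes, and typing `/bye` ends the session. For example:

```
>>> """Hello,
... world!
... """
```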
Here are some of the models available in the Ollama library, along with the command to run each one:

Model | Parameters | Size | Command |
---|---|---|---|
Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
Llama 3.1 | 70B | 40GB | `ollama run llama3.1:70b` |
Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
Phi 3 Mini | 3.8B | 2.3GB | `ollama run phi3` |
Mistral | 7B | 4.1GB | `ollama run mistral` |
Gemma 2 | 9B | 5.5GB | `ollama run gemma2` |
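As a rough guide from the Ollama README, you should have at least 8 GB of RAM available to run the 7B models, 16 GB for 13B models, and 32 GB for 33B models; the larger Llama 3.1 variants need correspondingly more serious hardware.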
Ollama can also import models distributed in GGUF format. Create a file named `Modelfile` with a `FROM` instruction pointing at the local path of the model you want to import:

```
FROM ./vicuna-33b.Q4_0.gguf
```
Then create the model in Ollama:

```bash
ollama create example -f Modelfile
```
And run it:

```bash
ollama run example
```
Models from the library can also be customized with your own prompt. For example, to customize `llama3.1`, first pull the base model:

```bash
ollama pull llama3.1
```
Then write a `Modelfile` that builds on it.
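A minimal sketch, modeled on the example in the Ollama README (the temperature setting and the Mario system prompt are placeholders to replace with your own):

```
FROM llama3.1

# Set the temperature (higher is more creative, lower is more coherent)
PARAMETER temperature 1

# Set a custom system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```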
Next, create and run the customized model:

```bash
ollama create mymodel -f ./Modelfile
ollama run mymodel
```
The CLI also covers everyday model management. To pull a model from the library:

```bash
ollama pull llama3.1
```
To remove a model:

```bash
ollama rm llama3.1
```
To copy a model:

```bash
ollama cp llama3.1 my-model
```
And to list the models installed on your machine:

```bash
ollama list
```
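If you also want to see which models are currently loaded in memory, recent Ollama releases include a `ps` subcommand:

```bash
ollama ps
```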
Beyond the CLI, Ollama exposes a REST API on port 11434, so you can drive it from any HTTP client, including plain `curl`. To generate a completion:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```
To hold a chat conversation:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
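Both endpoints stream newline-delimited JSON objects by default. If you would rather receive a single JSON response, the API accepts a `stream` field set to `false`:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ],
  "stream": false
}'
```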
Many of the tools built on top of Ollama also let you bring your own data, accepting common document formats such as `.pdf`, `.txt`, and `.csv` files.