---
title: "Running Local LLMs with Ollama"
date: 2026-04-02
draft: false
tags: ['ollama', 'llm', 'tools', 'linux']
---
Ollama lets you run large language models locally with a single command.
## Quick start
```bash
ollama run llama3
```
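
Beyond the CLI, Ollama also serves a local HTTP API (by default on `localhost:11434`), which is how editors and scripts usually talk to it. A minimal sketch of a non-streaming call to its `/api/generate` endpoint, assuming the server is running with `llama3` pulled; the host and port are Ollama's defaults:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local address

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks for a single JSON response instead of a stream of chunks
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the completed text comes back in the "response" field
        return json.loads(resp.read())["response"]

# generate("llama3", "Why is the sky blue?")  # requires `ollama serve` to be running
```

Because the API is plain HTTP with JSON bodies, the same call works from curl or any language with an HTTP client.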
## Why local?
- No API keys or rate limits.
- Data stays on your machine.
- Works offline.
## Models I use
- `llama3` — general purpose
- `codellama` — code generation
- `nomic-embed-text` — embeddings for vector search
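
For vector search, the embeddings themselves are just float vectors you compare by angle. A sketch of the ranking step, assuming the vectors were fetched from Ollama's local embeddings endpoint (`POST http://localhost:11434/api/embeddings` with a `model` and `prompt`, response in an `embedding` field):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Embedding vectors are compared by direction, not magnitude,
    # so normalize the dot product by both vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# With Ollama running, each vector would come from nomic-embed-text, e.g.:
#   POST http://localhost:11434/api/embeddings
#   {"model": "nomic-embed-text", "prompt": "some document text"}
# Rank documents for a query by cosine_similarity(query_vec, doc_vec), descending.
```

For small collections this brute-force ranking is plenty; a vector database only becomes worthwhile once the number of documents makes a full scan slow.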