---
title: "Running Local LLMs with Ollama"
date: 2026-04-02
draft: false
tags: ['ollama', 'llm', 'tools', 'linux']
---
Ollama lets you run large language models locally with a single command.
## Quick start
```bash
ollama run llama3
```
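
Beyond the interactive CLI, the Ollama daemon also serves an HTTP API on localhost. A minimal sketch of a non-streaming generation request (the prompt is illustrative, and this assumes the daemon is running on its default port, 11434):

```bash
# Build a non-streaming generation request for the local Ollama API.
# The prompt is just an example.
payload='{"model":"llama3","prompt":"Why is the sky blue?","stream":false}'

# POST to the default local endpoint; `|| true` tolerates the daemon being down.
curl -s http://localhost:11434/api/generate -d "$payload" || true
```

With `"stream":false`, the whole response arrives as a single JSON object instead of line-delimited chunks, which is easier to pipe into tools like `jq`.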
## Why local?
- No API keys or rate limits.
- Data stays on your machine.
- Works offline.
## Models I use
- `llama3` — general purpose
- `codellama` — code generation
- `nomic-embed-text` — embeddings for vector search
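
As a sketch of how the embeddings model fits in (again assuming the daemon on its default port 11434; the input text is illustrative):

```bash
# Request an embedding vector from the local Ollama API.
# The prompt text here is only an example.
payload='{"model":"nomic-embed-text","prompt":"Ollama runs LLMs locally"}'

# The response JSON carries the vector; `|| true` tolerates the daemon being down.
curl -s http://localhost:11434/api/embeddings -d "$payload" || true
```

The returned vector can then be stored in whatever vector database you use for similarity search.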