---
title: "Running Local LLMs with Ollama"
date: 2026-04-02
draft: false
tags:
---
Ollama lets you run large language models locally with a single command.
## Quick start

```shell
ollama run llama3
```
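Beyond the CLI, the Ollama daemon serves an HTTP API on `localhost:11434`; its `/api/generate` endpoint takes a model name and a prompt. A minimal sketch of calling it from Python's standard library (the helper names and the example prompt are my own; the endpoint, fields, and `stream` flag are from Ollama's API):

```python
import json
import urllib.request

# Default address of a locally running Ollama daemon.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body /api/generate expects.

    stream=False asks for one complete JSON response instead of a
    stream of newline-delimited chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama daemon and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running daemon, e.g.:
# reply = generate("llama3", "Say hello in one word.")
```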
## Why local?
- No API keys or rate limits.
- Data stays on your machine.
- Works offline.
## Models I use

- `llama3` – general purpose
- `codellama` – code generation
- `nomic-embed-text` – embeddings for vector search
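For the embeddings case, Ollama also exposes an `/api/embeddings` endpoint that returns a float vector for a prompt; vector search then reduces to comparing those vectors, typically by cosine similarity. A sketch of that comparison step, assuming you have already fetched two embeddings from `nomic-embed-text` (the vectors below are toy values, not real model output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors:
    1.0 means same direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for nomic-embed-text output.
doc = [0.1, 0.9, 0.2]
query = [0.2, 0.8, 0.1]
similarity = cosine_similarity(doc, query)
```

In a real vector search you would embed every document once, store the vectors, embed the query at lookup time, and rank documents by this score.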