What Is Hermes Agent?
Hermes Agent is an open-source personal AI agent built by Nous Research. It runs on your own machine, connects to an LLM provider of your choice, and talks to you through a terminal, a TUI, or 15+ messaging platforms. This lesson walks you through the official install path, the critical provider step, and the recommended 'don't run before you can walk' adoption principle.
Primary source: the official Hermes docs at https://hermes-agent.nousresearch.com/docs/getting-started/quickstart. This lesson summarises that page in our own words, for SetupClaw readers who are used to the OpenClaw learning format. For the canonical, always-up-to-date version, follow the link above.
Prerequisites
The hard prerequisite is a language model with at least a 64,000-token context window. That's because Hermes packs system prompt, tools, memory and conversation history into a single prompt every turn — if the window is too small, multi-step tool calls fall over. All mainstream hosted models (Claude, GPT, Gemini, Qwen, DeepSeek) clear that bar. Local models need to be chosen carefully.
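To see why the window size matters, here is a back-of-envelope sketch (all numbers are illustrative, not Hermes' real prompt sizes): the fixed parts of the prompt eat into the context window every turn, and only what's left can hold conversation history and tool results.

```shell
# Illustrative budget only - the real sizes depend on your tools and memory.
window=64000          # minimum context window from the docs
system=4000           # hypothetical system-prompt size
tools=6000            # hypothetical tool-definition size
memory=2000           # hypothetical memory block
history=$((window - system - tools - memory))
echo "tokens left for history and tool output: $history"
```

With a smaller window, that remainder shrinks fast, which is exactly why multi-step tool calls fail on short-context models.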
Operating system support is pragmatic rather than universal:
- Linux and macOS are first-class — the installer script just works.
- Windows is supported via WSL2. Install WSL2 first, then follow the Linux path.
- Android is possible via Termux, but has its own dedicated guide and known limitations.
Install in One Line
Nous ships a single install script. Run it, then reload your shell so the new binary is on your PATH:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
source ~/.zshrc   # or: source ~/.bashrc
After the script finishes you'll have a hermes binary. Running hermes version is a quick sanity check — if it prints a version string, the install worked.
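A hedged version of that sanity check you can paste as-is (it only inspects your PATH, so it's safe to run even if the install failed):

```shell
# Check that the `hermes` binary is reachable before doing anything else.
if command -v hermes >/dev/null 2>&1; then
  hermes version   # should print a version string
else
  echo "hermes not on PATH yet - try: source ~/.zshrc (or ~/.bashrc)"
fi
```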
Pick a Provider — the Single Most Important Step
The official docs are blunt on this: provider choice is the single most important setup step, because every subsequent feature inherits its quality, latency and cost. Run the interactive wizard and pick the one that matches your situation:
hermes model
- Minimal friction, no existing credentials: Nous Portal or OpenRouter.
- You already have a Claude or Codex subscription: go direct via Anthropic or OpenAI.
- Privacy-first / air-gapped: Ollama or any OpenAI-compatible local endpoint.
- Want automatic resilience across many providers: OpenRouter.
Hermes separates secrets from plain config by design. API keys go to ~/.hermes/.env, everything else lives in ~/.hermes/config.yaml. When you run hermes config set, the CLI automatically writes each value to the right file — so secrets don't accidentally end up committed in a config repo.
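As a rough illustration of that split (the key and field names below are hypothetical; in practice hermes config set writes the real ones for you), the two files might look like this. The sketch uses a temp directory standing in for ~/.hermes so it never touches your real config:

```shell
# Stand-in for ~/.hermes so this sketch is safe to run anywhere.
demo="$(mktemp -d)"

# Secrets live in .env and should never be committed.
cat > "$demo/.env" <<'EOF'
PROVIDER_API_KEY=sk-replace-me
EOF

# Everything else lives in config.yaml and is safe for a dotfiles repo.
cat > "$demo/config.yaml" <<'EOF'
model: your-chosen-model
EOF

ls -A "$demo"   # shows the two files side by side
```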
Your First Conversation
Launch the chat in whichever interface you prefer:
hermes # classic terminal prompt
hermes --tui # modern TUI with mouse support and modals
Both modes share the same sessions, slash commands and memory — you can flip between them freely. A good first-conversation checklist:
- The welcome banner shows your chosen model and the toolset — this is your proof that provider + tools both loaded.
- The agent replies to a plain chat message, end to end, without an error.
- When you ask it to run a terminal command, write a file, or search the web, the tool actually fires — watch the streaming output.
- Exit, then run hermes --continue to resume. If the last session reappears, session persistence is working.
The 'Layered Adoption' Principle
This is the single rule most newcomers miss, and the Hermes docs call it out explicitly: if Hermes can't complete a normal chat, don't add more features yet. Don't plug in Telegram, don't install a skill, don't turn on voice mode. Stabilise the baseline first, then layer.
A reasonable adoption order looks like this:
- Plain chat in the CLI with your chosen provider.
- Slash commands — /help, /tools, /model — so you're comfortable navigating.
- Multi-line input with Alt+Enter and session resume with hermes --continue.
- One messaging gateway (e.g. Telegram), with GATEWAY_ALLOWED_USERS strictly limited.
- Skills from the official skills hub, one at a time, not a dozen at once.
- Voice mode, Docker sandbox for the terminal tool, MCP servers — only once everything above is boring and stable.
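When you reach the gateway step, set the allow-list before the bot goes live. A minimal sketch, assuming the variable lives in your environment or ~/.hermes/.env (the user ID here is a placeholder, not a real account):

```shell
# Placeholder ID - replace with your own messaging-platform user ID.
# Keeping this list to exactly the people you trust is the whole point.
GATEWAY_ALLOWED_USERS="123456789"
export GATEWAY_ALLOWED_USERS
echo "allowed users: $GATEWAY_ALLOWED_USERS"
```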
When Things Go Wrong: the Diagnostic Sequence
If something feels off, don't guess — the docs publish a deterministic sequence that moves from ambiguous failure to a known-good state:
- hermes doctor — validates config, dependencies, and API keys.
- hermes model — reverifies the provider is reachable.
- hermes setup — nuclear option: re-runs the full wizard.
- hermes sessions list — confirms session storage is intact.
- hermes gateway status — only relevant once you've wired a messaging bot.
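The non-destructive steps of the sequence can be wrapped in a small script that reports every failing step rather than stopping at the first one. A sketch, which deliberately skips hermes setup (it re-runs the whole wizard) and notes that some steps may prompt interactively:

```shell
# Run the non-destructive diagnostics in order and collect failures.
failed=""
for step in "hermes doctor" "hermes model" "hermes sessions list"; do
  echo "==> $step"
  $step || failed="$failed '$step'"
done
if [ -n "$failed" ]; then
  echo "Failed:$failed - fix these before layering more features"
else
  echo "All diagnostics passed"
fi
```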
Next Steps
Once your baseline is solid, the next lesson walks through the CLI itself — slash commands, session management, multi-line input and the background-task pattern. After that, pick whichever lesson matches your immediate goal (messaging, integrations, developer extension) rather than reading them in order.