CLI Reference

The lf CLI is your control center for LlamaFarm projects. This reference captures global flags, command behaviours, and examples you can copy into your shell. Each subcommand shares the same auto-start logic: if the server or RAG worker is not running locally, the CLI will launch them (unless you override --server-url).

Global Flags

lf [command] [flags]
| Flag | Description |
| --- | --- |
| `--debug`, `-d` | Enable verbose logging. |
| `--server-url` | Override the server endpoint (default `http://localhost:8000`). |
| `--server-start-timeout` | How long to wait for local server startup (default `45s`). |
| `--cwd` | Treat another directory as the working project root. |
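The global flags above compose with any subcommand. A quick sketch (the `90s` timeout value and the target project path are illustrative; the flags themselves are the ones listed in the table):

```shell
# Run with verbose logging against an already-running server;
# pointing --server-url at an existing endpoint skips local auto-start.
lf chat --debug --server-url http://localhost:8000

# Operate on a project in another directory and allow a slower startup.
lf start --cwd ./my-project --server-start-timeout 90s
```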

Environment helpers:

  • LLAMAFARM_SESSION_ID – reuse a session for lf chat.
  • OLLAMA_HOST – point lf start to a different Ollama endpoint.
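Both helpers are plain environment variables, so they can be exported for a shell session or set inline for a single command. A sketch (the session ID, prompt syntax, and Ollama hostname are illustrative; `11434` is Ollama's default port):

```shell
# Reuse the same chat session across invocations.
export LLAMAFARM_SESSION_ID=my-session
lf chat "Summarize the latest dataset"

# Point lf start at a remote Ollama instance for this run only.
OLLAMA_HOST=http://gpu-box:11434 lf start
```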

Command Matrix

| Command | Description |
| --- | --- |
| `lf init` | Scaffold a project and generate `llamafarm.yaml`. |
| `lf start` | Launch the server and RAG services and open the dev chat UI. |
| `lf chat` | Send single prompts, preview REST calls, and manage sessions. |
| `lf models` | List available models and manage multi-model configurations. |
| `lf datasets` | Create, upload, process, and delete datasets. |
| `lf rag` | Query documents and access RAG maintenance tools. |
| `lf projects` | List projects by namespace. |
| `lf version` | Print CLI version and build info. |
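A minimal first-run sequence using only the commands in the matrix might look like this (run from the directory where you want the project to live):

```shell
# Scaffold a new project; this generates llamafarm.yaml.
lf init

# Launch the server and RAG services and open the dev chat UI.
# If the local stack is not running, the CLI starts it automatically.
lf start

# Confirm which CLI build you are running.
lf version
```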

Troubleshooting CLI Output

  • “Server is degraded” – At least one dependency (Celery, RAG worker, Ollama) is slow or offline. Commands may still succeed; check logs if they hang.
  • “No response received” – The runtime streamed nothing; run with --no-rag or switch models if the provider struggles with tool output.
  • Dataset processing timeouts – The CLI times out after waiting for Celery. Re-run once ingestion finishes or increase worker availability.
  • Authorization redaction – lf chat --curl hides API keys automatically; replace <redacted> with your real key before running the generated request.
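For the “No response received” case, the retry is a single flag on the same command (the positional prompt syntax here is illustrative):

```shell
# Retry the same prompt with retrieval disabled to isolate the RAG pipeline.
lf chat --no-rag "Summarize the latest dataset"
```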

Looking to add a new command? See Extending LlamaFarm for a Cobra walkthrough.