# YAML guidance

Use `faraday.yaml` for stable defaults. Use environment variables and CLI flags for secrets and one-off overrides.
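In practice that split looks like this: the checked-in file names the environment variable, and the secret itself never appears in YAML (key names here match the provider examples later on this page):

```yaml
# faraday.yaml — stable, checked-in defaults
llm:
  provider: openai
  model: gpt-5
  api_key_env: OPENAI_API_KEY  # the key value is read from the environment, not from this file
```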
## Start with the mental model

There are two knobs that matter:
| Knob | What it controls | Key |
|---|---|---|
| App | Where the Faraday process runs | app.mode |
| Sandbox | Where agent-generated code runs | sandbox.backend |
The three common combinations:
| Setup | app.mode | sandbox.backend |
|---|---|---|
| Local + Docker sandbox | host | docker |
| Docker app + sidecar sandbox | docker | docker |
| Local + Modal | host | modal |
If you only remember one thing: app = where Faraday lives, sandbox = where code runs.
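In YAML, the two knobs are the first key of the `app` and `sandbox` blocks. A minimal sketch of the default combination:

```yaml
app:
  mode: host        # app: where the Faraday process runs

sandbox:
  backend: docker   # sandbox: where agent-generated code runs
```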
## Recommended baseline

This is a good starting point for most users — local Faraday with a Docker sandbox:
```yaml
llm:
  provider: openai
  model: gpt-5
  api_key_env: OPENAI_API_KEY

backends:
  db: sqlite
  rag: in-memory

app:
  mode: host
  workspace:
    source_root: .

sandbox:
  backend: docker
  workspace:
    container_path: /workspace

outputs:
  root: ./run_outputs

storage:
  sqlite_path: ~/.faraday/faraday.db
  save_messages: true
  save_trajectory: true
```

Why this is a good default:

- `sqlite` works without extra infrastructure.
- `in-memory` gives you workspace-aware retrieval without extra services.
- `app.mode: host` keeps the app easy to debug.
- `sandbox.backend: docker` isolates code execution from your main Python environment.
- `outputs.root` makes run artifacts predictable.
- `storage` keeps everything about persistence in one place.
## Provider setup patterns

### OpenAI

```yaml
llm:
  provider: openai
  model: gpt-5
  api_key_env: OPENAI_API_KEY
```

### Azure OpenAI

```yaml
llm:
  provider: azure
  model: gpt-5
  api_key_env: AZURE_OPENAI_API_KEY
  base_url_env: AZURE_OPENAI_BASE_URL
  api_version: preview
```

### OpenRouter

```yaml
llm:
  provider: openrouter
  model: openai/gpt-4.1-mini
  api_key_env: OPENROUTER_API_KEY
```

Guidance:

- Prefer `api_key_env` over storing secrets in YAML.
- Keep provider-specific values inside the `llm` block.
## Switching between modes

Start from the recommended baseline above. Each mode only requires changing a small number of lines.
### Mode 1: Local + Docker sandbox (the default)

No changes needed — this is the baseline.

```yaml
app:
  mode: host
  workspace:
    source_root: .

sandbox:
  backend: docker
  workspace:
    container_path: /workspace
```

Best for: everyday development, repository work where code should be isolated.
### Mode 2: Docker app + sidecar sandbox

Starting from the baseline, change these lines:

```yaml
app:
  mode: docker               # ← was: host
  app_image: faraday-oss     # ← add: which image to run Faraday in
  workspace:
    source_root: /workspace  # ← was: . (absolute path inside the container)

outputs:
  root: /workspace/run_outputs  # ← add: so outputs land on the bind-mounted volume
```

The sandbox block stays the same.

Best for: containerized deployments, consistent runtime across machines, demo environments.
### Mode 3: Local + Modal

Starting from the baseline, change one line and add the modal config:

```yaml
sandbox:
  backend: modal   # ← was: docker
  modal:           # ← add: Modal-specific settings
    cloud_storage_mode: optional
    bucket_name: my-faraday-bucket
```

The app block stays the same. Also set `features.modal: true`.

Best for: remote compute, workflows that already depend on Modal.
### Bonus: Local app, local execution (no Docker)

For the simplest possible setup when Docker is unavailable:

```yaml
app:
  mode: host

sandbox:
  backend: host  # ← was: docker
```

Trade-offs: installed packages and filesystem writes affect the host directly, and results may be less reproducible across machines.
## Workspace guidance

### Bind the workspace directly

This is the default behavior and the right choice for most runs. The agent reads and writes files directly at `source_root`.

```yaml
app:
  workspace:
    source_root: .
```

### Use isolated workspace copies

Use this when each run should start from the same clean workspace snapshot.

```yaml
app:
  workspace:
    source_root: .
    init_mode: copy
    copy_root: ./.faraday_runtime/workspace-copies
    keep_copy: false
```

Best for:
- repeated benchmark runs
- long-running experiments
- comparisons where mutations to the workspace should not leak between runs
## Output and storage guidance

Faraday writes per-run artifacts under `outputs.root`.

Typical layout:

```
run_outputs/
  run_{timestamp}_{chat_id}_{query_id}/
    agent_outputs/
    run_artifacts/
```

Use these defaults unless you have a clear reason not to:

```yaml
outputs:
  root: ./run_outputs

storage:
  sqlite_path: ~/.faraday/faraday.db
  save_messages: true
  save_trajectory: true
```

What each setting is for:

- `save_messages`: persist conversation history so it can be restored across runs
- `save_trajectory`: write `trajectory.json` for replay, debugging, and handoff
- `previous_context`: path to a prior `trajectory.json`; use when a follow-up run should resume from a known trace
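Resuming from a prior trace then looks roughly like this. The trajectory path below is illustrative (point it at a `trajectory.json` from one of your own runs), and `previous_context` is assumed to live in the `storage` block alongside the other persistence keys:

```yaml
storage:
  sqlite_path: ~/.faraday/faraday.db
  save_messages: true
  save_trajectory: true
  previous_context: ./run_outputs/run_example/run_artifacts/trajectory.json  # illustrative path
```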
## Retrieval and storage guidance

### Simple local retrieval

```yaml
backends:
  db: sqlite
  rag: in-memory

storage:
  sqlite_path: ~/.faraday/faraday.db
```

Use this when:
- you want easy local setup
- you want conversation history
- you want retrieval over the current workspace
### Disable retrieval

```yaml
backends:
  db: sqlite
  rag: none
```

Use this when:
- you want simpler behavior
- you do not want workspace retrieval influencing answers
If you need retrieval over an external corpus instead of the current workspace, see Bring your own document store.
## Batch-run guidance

If you regularly run the same prompt set, define it in YAML:

```yaml
batch:
  enabled: true
  prompts:
    - "Summarize the main modules in this repository."
    - "Identify the biggest operational risks."
  max_concurrency: 1
  max_retries: 2
  continue_on_error: false
```

Use `prompts_file` when the list is large:

```yaml
batch:
  enabled: true
  prompts_file: ./prompts.txt
```

Guidance:
- keep `max_concurrency: 1` until you know your provider limits
- raise concurrency only when prompts are independent
- enable `continue_on_error` when you care more about batch completion than fail-fast behavior
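A `prompts_file` is assumed here to be plain text with one prompt per line (check the YAML reference if your version expects a different format); the prompts themselves are illustrative:

```
Summarize the main modules in this repository.
Identify the biggest operational risks.
List the external services this codebase depends on.
```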
## Feature flags

Most users can leave `features` unset.

Reach for it when you need to enable or disable optional runtime dependencies:

```yaml
features:
  modal: false
  exa: false
  python_science_stack: true
  cheminformatics_stack: false
```

Rules of thumb:

- enable Modal only when `sandbox.backend: modal`
- enable the science stack for common plotting and analysis workflows
- enable the cheminformatics stack only when you actually need RDKit or Datamol
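For example, the Modal mode described earlier pairs the sandbox backend with its feature flag; a sketch combining the two settings already shown above:

```yaml
sandbox:
  backend: modal

features:
  modal: true  # keep this in sync with sandbox.backend
```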
## Tool extension guidance

Use `tool_modules` when you want to load custom tools before each run.

```yaml
tool_modules:
  - my_project.faraday_tools
  - ./tools/custom_tools.py
```

This is the right approach when:
- you want reusable project-specific tools
- the same custom tools should be available from both the CLI and SDK
## Keep YAML small

The best Faraday YAML is usually shorter than you expect.
Add a key only when:
- you want behavior different from the default
- a runtime boundary needs an explicit path
- the environment is shared and you want predictable outputs
- a workflow will be reused enough that it deserves a checked-in config
## Common mistakes

### Mixing up app.mode and sandbox.backend

Symptom:

- Faraday launches in the wrong place, or code executes somewhere unexpected.

Fix:

- `app.mode` = where Faraday runs.
- `sandbox.backend` = where code runs.
- set both blocks explicitly when you care about predictability
### Storing secrets in YAML

Symptom:

- API keys end up in version control

Fix:

- use `llm.api_key_env`
- export secrets in the shell or `.env`
### Using a relative source_root with app.mode: docker

Symptom:

- The agent cannot find files, or workspace reads return empty results.

Fix:

- use an absolute path for `source_root` when running inside a container, for example `source_root: /workspace`
## Related pages

- YAML reference for the full field-by-field schema
- Environment variables for overrides and secrets
- Docker for a container example
- Python SDK for programmatic usage