Configuration

Remind is configured via config files, environment variables, or CLI arguments. Settings resolve with this priority (highest first):

  1. CLI arguments (--llm, --embedding)
  2. Environment variables
  3. Project-local config file (<project>/.remind/remind.config.json)
  4. Global config file (~/.remind/remind.config.json)
  5. Defaults

Config files

Global config

Create ~/.remind/remind.config.json:

```json
{
  "llm_provider": "anthropic",
  "embedding_provider": "openai",
  "consolidation_threshold": 5,
  "auto_consolidate": true,

  "anthropic": {
    "api_key": "sk-ant-...",
    "model": "claude-sonnet-4-20250514",
    "ingest_model": "claude-haiku-4-20250414"
  },

  "openai": {
    "api_key": "sk-...",
    "base_url": null,
    "model": "gpt-4.1",
    "embedding_model": "text-embedding-3-small",
    "ingest_model": "gpt-4.1-mini"
  },

  "azure_openai": {
    "api_key": "...",
    "base_url": "https://your-resource.openai.azure.com",
    "deployment_name": "gpt-4",
    "embedding_deployment_name": "text-embedding-3-small",
    "embedding_size": 1536,
    "ingest_deployment_name": "gpt-4-mini"
  },

  "ollama": {
    "url": "http://localhost:11434",
    "llm_model": "llama3.2",
    "embedding_model": "nomic-embed-text",
    "ingest_model": "llama3.2:1b"
  },

  "decay": {
    "enabled": true,
    "decay_interval": 20,
    "decay_rate": 0.1
  },

  "ingest_buffer_size": 4000,

  "db_url": null,

  "logging_enabled": false,

  "episode_types": ["observation", "decision", "question", "meta", "preference",
                    "spec", "plan", "task", "outcome", "fact"]
}
```

You only need to include settings you want to change from defaults. A minimal config:

```json
{
  "llm_provider": "anthropic",
  "embedding_provider": "openai",
  "anthropic": { "api_key": "sk-ant-..." },
  "openai": { "api_key": "sk-..." }
}
```

Project-local config

You can place a remind.config.json inside a project's .remind/ directory to override global settings for that project:

```
myproject/
├── .remind/
│   ├── remind.config.json   ← project-local config
│   └── remind.db            ← project-local database
└── ...
```

Project-local config uses the same format as the global config. Settings in the project-local file override the global file, but are themselves overridden by environment variables and CLI arguments.

A typical use case is selecting a different provider or model for a specific project:

```json
{
  "llm_provider": "ollama",
  "ollama": { "llm_model": "deepseek-coder-v2" }
}
```

The CLI automatically reads <cwd>/.remind/remind.config.json. When using the Python API, pass project_dir to create_memory() to enable project-local config loading.

Do not commit secrets

If your project-local config contains API keys or other secrets, make sure .remind/ is in your .gitignore. Better yet, keep secrets in the global config (~/.remind/remind.config.json) or in environment variables, and use the project-local file only for non-sensitive settings like provider choice and model selection.

Environment variables

Every config-file setting has a corresponding environment variable. Environment variables take precedence over both config files.

Complete reference

General

| Env variable | Config field | Type | Default |
|---|---|---|---|
| LLM_PROVIDER | llm_provider | string | anthropic |
| EMBEDDING_PROVIDER | embedding_provider | string | openai |
| CONSOLIDATION_THRESHOLD | consolidation_threshold | int | 5 |
| CONCEPTS_PER_PASS | concepts_per_pass | int | 64 |
| AUTO_CONSOLIDATE | auto_consolidate | bool | true |
| EXTRACTION_BATCH_SIZE | extraction_batch_size | int | 50 |
| EXTRACTION_LLM_BATCH_SIZE | extraction_llm_batch_size | int | 10 |
| CONSOLIDATION_BATCH_SIZE | consolidation_batch_size | int | 25 |
| LLM_CONCURRENCY | llm_concurrency | int | 3 |
| INGEST_BUFFER_SIZE | ingest_buffer_size | int | 4000 |
| REMIND_DB_URL | db_url | string | null (SQLite default) |
| REMIND_LOGGING_ENABLED | logging_enabled | bool | false |
| REMIND_EPISODE_TYPES | episode_types | comma-separated list | all built-in types |

Anthropic (Claude)

| Env variable | Config field | Type | Default |
|---|---|---|---|
| ANTHROPIC_API_KEY | anthropic.api_key | string | |
| ANTHROPIC_MODEL | anthropic.model | string | claude-sonnet-4-20250514 |
| ANTHROPIC_INGEST_MODEL | anthropic.ingest_model | string | |

OpenAI

| Env variable | Config field | Type | Default |
|---|---|---|---|
| OPENAI_API_KEY | openai.api_key | string | |
| OPENAI_BASE_URL | openai.base_url | string | |
| OPENAI_MODEL | openai.model | string | gpt-4.1 |
| OPENAI_EMBEDDING_MODEL | openai.embedding_model | string | text-embedding-3-small |
| OPENAI_INGEST_MODEL | openai.ingest_model | string | |

Azure OpenAI

| Env variable | Config field | Type | Default |
|---|---|---|---|
| AZURE_OPENAI_API_KEY | azure_openai.api_key | string | |
| AZURE_OPENAI_API_BASE_URL | azure_openai.base_url | string | |
| AZURE_OPENAI_DEPLOYMENT_NAME | azure_openai.deployment_name | string | |
| AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME | azure_openai.embedding_deployment_name | string | |
| AZURE_OPENAI_EMBEDDING_SIZE | azure_openai.embedding_size | int | 1536 |
| AZURE_OPENAI_INGEST_DEPLOYMENT_NAME | azure_openai.ingest_deployment_name | string | |

Ollama (local)

| Env variable | Config field | Type | Default |
|---|---|---|---|
| OLLAMA_URL | ollama.url | string | http://localhost:11434 |
| OLLAMA_LLM_MODEL | ollama.llm_model | string | llama3.2 |
| OLLAMA_EMBEDDING_MODEL | ollama.embedding_model | string | nomic-embed-text |
| OLLAMA_INGEST_MODEL | ollama.ingest_model | string | |

No API keys needed. Install Ollama and pull models:

```bash
ollama pull llama3.2           # LLM
ollama pull nomic-embed-text   # Embeddings
```

Memory decay

| Env variable | Config field | Type | Default |
|---|---|---|---|
| REMIND_DECAY_ENABLED | decay.enabled | bool | true |
| REMIND_DECAY_INTERVAL | decay.decay_interval | int | 20 |
| REMIND_DECAY_RATE | decay.decay_rate | float | 0.1 |

Boolean env vars accept true, 1, yes (case-insensitive) as truthy values; anything else is falsy.
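That parsing rule is small enough to sketch directly (an illustrative helper, not Remind's actual code):

```python
def env_bool(value: str) -> bool:
    """true, 1, yes (case-insensitive) are truthy; anything else is falsy."""
    return value.lower() in {"true", "1", "yes"}
```

Note that this means unconventional truthy spellings like `on` or `y` are treated as false.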

Quick-start examples

Anthropic + OpenAI embeddings (most common):

```bash
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
```

OpenAI for everything:

```bash
export OPENAI_API_KEY=sk-...
export LLM_PROVIDER=openai
export EMBEDDING_PROVIDER=openai
```

Azure OpenAI:

```bash
export AZURE_OPENAI_API_KEY=...
export AZURE_OPENAI_API_BASE_URL=https://your-resource.openai.azure.com
export AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4
export AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=text-embedding-3-small
export LLM_PROVIDER=azure_openai
export EMBEDDING_PROVIDER=azure_openai
```

Ollama (fully local):

```bash
export LLM_PROVIDER=ollama
export EMBEDDING_PROVIDER=ollama
```

Database

Remind uses SQLite by default but supports any database backend via SQLAlchemy (PostgreSQL, MySQL, etc.).

Database location (SQLite)

| Context | Default path |
|---|---|
| CLI (no --db flag) | <cwd>/.remind/remind.db (project-local) |
| CLI with --db name | ~/.remind/name.db |
| MCP Server / Python API | ~/.remind/{name}.db |

Using PostgreSQL or MySQL

Set db_url in config, the REMIND_DB_URL environment variable, or use the --db CLI flag with a full URL:

```bash
# Via environment variable
export REMIND_DB_URL="postgresql+psycopg://user:pass@localhost:5432/remind"

# Via CLI flag
remind --db "postgresql+psycopg://user:pass@localhost:5432/remind" remember "hello"

# Via config file
{
  "db_url": "postgresql+psycopg://user:pass@localhost:5432/remind"
}
```

Install the appropriate driver extra:

```bash
pip install "remind-mcp[postgres]"   # PostgreSQL (psycopg)
pip install "remind-mcp[mysql]"      # MySQL (PyMySQL)
```

Memory decay

Concepts that are rarely recalled gradually lose retrieval priority, mimicking how human memory fades.

| Option | Default | Description |
|---|---|---|
| decay.enabled | true | Set false to disable |
| decay.decay_interval | 20 | Recalls between decay passes |
| decay.decay_rate | 0.1 | How much decay_factor drops per interval (0.0-1.0) |

When a concept is recalled, it gets rejuvenated — its decay factor gets a boost proportional to match strength. Recently recalled concepts are protected by a 60-second grace window.

View decay stats with remind stats.

Consolidation

| Option | Default | Description |
|---|---|---|
| consolidation_threshold | 5 | Episodes before auto-consolidation triggers |
| concepts_per_pass | 64 | Max concepts included per consolidation LLM pass |
| auto_consolidate | true | Whether to auto-consolidate after remember |
| extraction_batch_size | 50 | Episodes fetched per extraction loop pass (independent of consolidation batch size) |
| extraction_llm_batch_size | 10 | Episodes grouped into each extraction LLM call |
| consolidation_batch_size | 25 | Episodes fetched and generalized per consolidation loop pass |
| llm_concurrency | 3 | Max concurrent LLM calls across extraction + consolidation; also bounds topic-group parallelism |

Legacy aliases remain supported: consolidation_concepts_per_pass, entity_extraction_batch_size, and consolidation_llm_concurrency.

Auto-ingest

Settings for the ingest() pipeline, which buffers raw text and automatically extracts memory-worthy episodes. The LLM decides directly what is worth remembering; no numeric density threshold is needed.

| Option | Default | Description |
|---|---|---|
| ingest_buffer_size | 4000 | Character threshold before buffer flushes and triggers triage |

Each provider config has an optional ingest_model field (or ingest_deployment_name for Azure) to use a cheaper/faster model for triage without affecting consolidation quality. When unset, triage uses the same model as consolidation.

| Provider | Config field | Env var | Example |
|---|---|---|---|
| Anthropic | anthropic.ingest_model | ANTHROPIC_INGEST_MODEL | claude-haiku-4-20250414 |
| OpenAI | openai.ingest_model | OPENAI_INGEST_MODEL | gpt-4.1-mini |
| Azure OpenAI | azure_openai.ingest_deployment_name | AZURE_OPENAI_INGEST_DEPLOYMENT_NAME | gpt-4-mini |
| Ollama | ollama.ingest_model | OLLAMA_INGEST_MODEL | llama3.2:1b |
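The fallback rule reduces to "ingest model if set, else the main model." A sketch under that assumption (illustrative helper, shown here for the non-Azure providers that use a `model` field):

```python
def triage_model(provider_cfg: dict) -> str:
    """Use the cheaper ingest model when configured, else the consolidation model."""
    return provider_cfg.get("ingest_model") or provider_cfg["model"]
```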

Logging

When enabled, Remind writes detailed debug logs to remind.log in the same directory as the database. This includes full LLM prompts and responses for triage, extraction, and consolidation — useful for debugging why episodes were scored a certain way or how concepts were derived.

| Option | Default | Description |
|---|---|---|
| logging_enabled | false | Write debug logs to remind.log next to the database |

The log file location follows the database:

| Database path | Log path |
|---|---|
| ~/.remind/myproject.db | ~/.remind/remind.log |
| <project>/.remind/remind.db | <project>/.remind/remind.log |
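Since the log always sits next to the database file, the mapping is just a directory swap (an illustrative one-liner, not Remind's actual code):

```python
from pathlib import Path

def log_path_for(db_path: str) -> str:
    """remind.log lives in the same directory as the database file."""
    return str(Path(db_path).parent / "remind.log")
```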

Enable via config file:

```json
{
  "logging_enabled": true
}
```

Or environment variable:

```bash
REMIND_LOGGING_ENABLED=true
```

Episode types

Control which episode types are available. This affects which CLI commands and MCP tools are registered.

| Option | Default | Description |
|---|---|---|
| episode_types | all built-in types | List of enabled episode types |

Built-in types: observation, decision, question, meta, preference, spec, plan, task, outcome, fact.

By default all types are enabled. To restrict to a subset:

```json
{
  "episode_types": ["observation", "decision", "question", "outcome", "fact"]
}
```

Or via environment variable (comma-separated):

```bash
REMIND_EPISODE_TYPES=observation,decision,question,outcome,fact
```

Custom type names are also accepted — they will be used in LLM prompts for triage and extraction with generic descriptions.

Feature gating

When spec, plan, or task types are excluded from episode_types:

  • CLI: The corresponding commands (specs, plans, tasks, task add/start/done/block/unblock) are hidden from remind --help and unavailable
  • MCP: The corresponding tools (list_specs, list_plans, task_add, task_update_status, list_tasks) are not registered and won't appear in the tool list
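The gating logic amounts to a membership check per type before registering its commands. A sketch, where the always-on command list is a placeholder assumption:

```python
def gated_cli_commands(episode_types: set[str]) -> list[str]:
    """Register spec/plan/task commands only when their episode type is enabled."""
    commands = ["remember", "recall", "stats"]  # always-on commands (illustrative)
    if "spec" in episode_types:
        commands.append("specs")
    if "plan" in episode_types:
        commands.append("plans")
    if "task" in episode_types:
        commands += ["tasks", "task add", "task start", "task done",
                     "task block", "task unblock"]
    return commands
```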

Released under the Apache 2.0 License.