Python API
Use Remind as a library in your own Python applications.
Basic usage
```python
import asyncio

from dotenv import load_dotenv

from remind import create_memory, EpisodeType

load_dotenv()


async def main():
    memory = create_memory(
        llm_provider="openai",  # or "anthropic", "azure_openai", "ollama"
        embedding_provider="openai",  # or "azure_openai", "ollama"
    )

    # Log experiences — fast, no LLM calls
    memory.remember("User mentioned they prefer Python for backend work")
    memory.remember("User is building a distributed system")
    memory.remember("User values type safety")

    # Typed episodes
    memory.remember("Chose PostgreSQL over MySQL for persistence", episode_type=EpisodeType.DECISION)
    memory.remember("User prefers dark mode in the IDE", episode_type=EpisodeType.PREFERENCE)

    # Topic-scoped episodes
    memory.remember("Use event sourcing for audit trail", topic="architecture")
    memory.remember("Users want offline mode", topic="product", source_type="slack")

    # Run consolidation — this is where the LLM does its work
    result = await memory.consolidate(force=True)
    print(f"Created {result.concepts_created} concepts")

    # Retrieve relevant concepts
    context = await memory.recall("What programming preferences?")
    print(context)

    # Topic-scoped retrieval
    context = await memory.recall("database design", topic="architecture")

    # Explore topics
    topics = memory.list_topics()
    overview = memory.get_topic_overview("architecture")


asyncio.run(main())
```

Auto-ingest
Stream raw text and let Remind decide what's worth remembering:
```python
# Stream conversation fragments — no topic → episodes get topic_id=None
await memory.ingest("User: How should we handle rate limiting?")
await memory.ingest("Assistant: I'd suggest a token bucket at the gateway...")

# With explicit topic — all episodes assigned to "architecture"
await memory.ingest("Chose Redis for caching", topic="architecture")

# With instructions — steer what gets extracted
await memory.ingest(transcript, instructions="extract all config values and version numbers")
await memory.ingest(meeting_notes, instructions="focus on decisions and risks")

# At session end, flush remaining buffer
await memory.flush_ingest()
```

`ingest()` buffers text until a threshold (~4000 chars) is reached, then runs triage: an LLM extracts memory-worthy episodes. A density score may be produced for logging only; extraction is not gated by a numeric threshold. When `topic` is given, all episodes go to that topic. When it is omitted, episodes get `topic_id=None` — there is no automatic inference. When `instructions` is given, the triage LLM uses those instructions to decide what to extract — useful for focused ingestion of meeting notes, transcripts, or documentation. Use `remember()` when you already know what's important; use `ingest()` when you want the triage LLM to filter and distill.
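The buffer-then-triage flow can be sketched as follows. This is a minimal illustration of the documented behavior, not Remind's implementation: the `IngestBuffer` class, the exact threshold constant, and the `triage` callback are all assumptions.

```python
from typing import Callable, List

INGEST_THRESHOLD = 4000  # assumed ~4000-char threshold from the docs


class IngestBuffer:
    """Accumulates raw text; runs a triage callback once the buffer is large enough."""

    def __init__(self, triage: Callable[[str], List[str]]):
        self.triage = triage  # stand-in for the LLM triage step
        self.chunks: List[str] = []
        self.size = 0

    def ingest(self, text: str) -> List[str]:
        self.chunks.append(text)
        self.size += len(text)
        if self.size >= INGEST_THRESHOLD:
            return self.flush()
        return []  # below threshold: nothing extracted yet

    def flush(self) -> List[str]:
        """Triage whatever is buffered, even below the threshold (session end)."""
        if not self.chunks:
            return []
        text, self.chunks, self.size = "".join(self.chunks), [], 0
        return self.triage(text)


# Toy triage: keep only lines that record a decision
buf = IngestBuffer(lambda t: [l for l in t.splitlines() if "Chose" in l])
assert buf.ingest("User: hello\n") == []  # buffered, no triage yet
episodes = buf.ingest("Chose Redis for caching\n" + "x" * INGEST_THRESHOLD)
assert episodes == ["Chose Redis for caching"]  # threshold hit: triage ran
```

The same shape explains why `flush_ingest()` exists: without it, text below the threshold would never reach triage.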
Fact and outcome episodes
```python
# Facts: concrete values preserved verbatim through consolidation
memory.remember("Redis TTL is 300s for auth tokens", episode_type=EpisodeType.FACT)

# Outcomes: action-result pairs for causal pattern learning
memory.remember(
    "Grep search for 'auth' missed verify_credentials",
    episode_type=EpisodeType.OUTCOME,
    metadata={"strategy": "grep search", "result": "partial", "prediction_error": "high"},
)
```

Key design decisions
- `remember()` is synchronous and fast — no LLM calls; it just stores the episode. This keeps the write path non-blocking.
- `ingest()` is async with LLM triage — buffers raw text, extracts memory-worthy episodes, and consolidates automatically. Episodes get `topic_id=None` unless you pass a `topic`.
- `consolidate()` is async — this is where all LLM work happens (extraction, generalization). Call it explicitly or let auto-consolidation handle it.
- `recall()` is async — uses embeddings and spreading activation.
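These properties suggest a usage pattern: keep calling `remember()` on the hot path and let a background task run consolidation periodically. The sketch below uses a `FakeMemory` stub standing in for the object returned by `create_memory()`; the interval and round count are arbitrary.

```python
import asyncio


class FakeMemory:
    """Stub with the same call shape as the memory object (an assumption for this sketch)."""

    def __init__(self):
        self.episodes = []
        self.consolidations = 0

    def remember(self, text):
        # Synchronous: just record the episode, never block the caller
        self.episodes.append(text)

    async def consolidate(self):
        # Async: stands in for the expensive LLM extraction/generalization step
        self.consolidations += 1


async def periodic_consolidation(memory, interval=0.01, rounds=3):
    """Consolidate on a timer so the write path stays non-blocking."""
    for _ in range(rounds):
        await asyncio.sleep(interval)
        await memory.consolidate()


async def main():
    memory = FakeMemory()
    task = asyncio.create_task(periodic_consolidation(memory))
    for i in range(5):
        memory.remember(f"episode {i}")  # returns immediately
    await task
    assert memory.consolidations == 3


asyncio.run(main())
```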
Database and project config
```python
from pathlib import Path

# Default db_path "memory" → ~/.remind/memory.db (not the same default as the CLI)
memory = create_memory()

# Named SQLite database under ~/.remind/
memory = create_memory(db_path="my-project")

# Any SQLAlchemy URL (PostgreSQL, MySQL, etc.) — overrides db_path
memory = create_memory(db_url="postgresql+psycopg://user:pass@localhost:5432/remind")

# Load <project>/.remind/remind.config.json (and merge with global config / env)
memory = create_memory(project_dir=Path("/path/to/myproject"))
```

A `db_url` set in `remind.config.json` or via `REMIND_DB_URL` is used when you do not pass `db_url` to `create_memory()`. See Configuration for driver extras and URL examples.
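The fallback behavior can be pictured as a simple precedence chain. This is illustrative only, not Remind's code: the `resolve_db_url` name is hypothetical, and the assumption that the environment variable beats the config file follows common convention rather than a documented guarantee.

```python
import os


def resolve_db_url(explicit=None, config=None):
    """First match wins: explicit argument, then REMIND_DB_URL, then the config file."""
    config = config or {}
    return explicit or os.environ.get("REMIND_DB_URL") or config.get("db_url")


os.environ.pop("REMIND_DB_URL", None)

# The explicit keyword argument always wins
assert resolve_db_url("sqlite:///x.db", {"db_url": "postgresql://cfg"}) == "sqlite:///x.db"

# Otherwise fall back to the environment, then the config file
os.environ["REMIND_DB_URL"] = "postgresql://env"
assert resolve_db_url(None, {"db_url": "postgresql://cfg"}) == "postgresql://env"
del os.environ["REMIND_DB_URL"]
assert resolve_db_url(None, {"db_url": "postgresql://cfg"}) == "postgresql://cfg"
```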
Provider selection
Providers can be set via create_memory(), config file, or environment variables:
```python
# Explicit
memory = create_memory(llm_provider="anthropic", embedding_provider="openai")

# From config file / env vars
memory = create_memory()  # Uses ~/.remind/remind.config.json or env vars
```

See Configuration and Providers for all options.
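Under the hood, a provider string typically selects a client factory from a registry. The following is a generic sketch of that pattern, not Remind's actual implementation; the `PROVIDERS` table and the factory return values are placeholders.

```python
from typing import Callable, Dict

# Placeholder factories; real ones would construct OpenAI/Anthropic/Ollama clients
PROVIDERS: Dict[str, Callable[[], str]] = {
    "openai": lambda: "OpenAIClient",
    "anthropic": lambda: "AnthropicClient",
    "ollama": lambda: "OllamaClient",
}


def make_llm_client(name: str) -> str:
    """Look up the factory for a provider name; fail loudly on typos."""
    try:
        return PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"unknown llm_provider {name!r}; choose from {sorted(PROVIDERS)}") from None


assert make_llm_client("anthropic") == "AnthropicClient"
```

A registry like this is also why an unrecognized provider name fails at construction time rather than at the first LLM call.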