Memento MCP: A Knowledge Graph Memory System for LLMs
Scalable, high-performance knowledge graph memory system with semantic retrieval, contextual recall, and temporal awareness. Provides any LLM client that supports the Model Context Protocol (e.g., Claude Desktop, Cursor, GitHub Copilot) with resilient, adaptive, and persistent long-term ontological memory.
Entities are the primary nodes in the knowledge graph. Each entity has a unique name, an entity type, and a list of associated observations, as in the example below.
Example:
{
  "name": "John_Smith",
  "entityType": "person",
  "observations": ["Speaks fluent Spanish"]
}
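For illustration, the entity shape above can be modeled as a small TypeScript interface. This is a minimal sketch based only on the fields shown in the example; the types actually shipped with Memento MCP may carry additional properties (embeddings, version metadata, timestamps).

// Minimal sketch of an entity record, limited to the fields shown above.
interface Entity {
  name: string;           // unique identifier, e.g. "John_Smith"
  entityType: string;     // classification, e.g. "person"
  observations: string[]; // free-form facts attached to the entity
}

const john: Entity = {
  name: 'John_Smith',
  entityType: 'person',
  observations: ['Speaks fluent Spanish'],
};
console.log(john);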
Relations define directed connections between entities, with enhanced properties such as strength, confidence, and custom metadata:
Example:
{
  "from": "John_Smith",
  "to": "Anthropic",
  "relationType": "works_at",
  "strength": 0.9,
  "confidence": 0.95,
  "metadata": {
    "source": "linkedin_profile",
    "last_verified": "2025-03-21"
  }
}
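The relation shape can be sketched the same way. Again, this is an illustrative approximation of the fields shown above, not the library's exported type.

// Illustrative relation record matching the example above.
interface Relation {
  from: string;                       // source entity name
  to: string;                         // target entity name
  relationType: string;               // e.g. "works_at"
  strength?: number;                  // 0.0-1.0
  confidence?: number;                // 0.0-1.0
  metadata?: Record<string, unknown>; // custom fields, e.g. source, last_verified
}

const employment: Relation = {
  from: 'John_Smith',
  to: 'Anthropic',
  relationType: 'works_at',
  strength: 0.9,
  confidence: 0.95,
  metadata: { source: 'linkedin_profile', last_verified: '2025-03-21' },
};
console.log(employment);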
Memento MCP uses Neo4j as its storage backend, providing a unified solution for both graph storage and vector search capabilities.
The easiest way to get started with Neo4j is to use Neo4j Desktop. Create a local database and set its password to memento_password (or your preferred password).

The Neo4j database will be available at:
bolt://127.0.0.1:7687 (for driver connections)
http://127.0.0.1:7474 (for the Neo4j Browser UI)
Username: neo4j, password: memento_password (or whatever you configured)

Alternatively, you can use Docker Compose to run Neo4j:
# Start Neo4j container
docker-compose up -d neo4j
# Stop Neo4j container
docker-compose stop neo4j
# Remove Neo4j container (preserves data)
docker-compose rm neo4j
When using Docker, the Neo4j database will be available at:
bolt://127.0.0.1:7687 (for driver connections)
http://127.0.0.1:7474 (for the Neo4j Browser UI)
Username: neo4j, password: memento_password
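Either way, you can sanity-check the connection from Node before wiring up the MCP server. The sketch below assumes the official neo4j-driver package and the default credentials shown above.

// check-connection.ts (assumes `npm install neo4j-driver`)
import neo4j from 'neo4j-driver';

async function main(): Promise<void> {
  const driver = neo4j.driver(
    'bolt://127.0.0.1:7687',
    neo4j.auth.basic('neo4j', 'memento_password')
  );
  try {
    await driver.verifyConnectivity(); // throws if Neo4j is unreachable
    console.log('Neo4j is reachable');
  } finally {
    await driver.close();
  }
}

main().catch(console.error);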
Neo4j data persists across container restarts and even version upgrades thanks to the Docker volume configuration in the docker-compose.yml file:
volumes:
- ./neo4j-data:/data
- ./neo4j-logs:/logs
- ./neo4j-import:/import
These mappings ensure that:
- The /data directory (contains all database files) persists on your host at ./neo4j-data
- The /logs directory persists on your host at ./neo4j-logs
- The /import directory (for importing data files) persists at ./neo4j-import
You can modify these paths in your docker-compose.yml file to store data in different locations if needed.
You can change Neo4j editions and versions without losing data:
1. Update the Neo4j image in your docker-compose.yml file
2. Restart the container: docker-compose down && docker-compose up -d neo4j
3. Reinitialize the schema: npm run neo4j:init
The data will persist through this process as long as the volume mappings remain the same.
If you need to completely reset your Neo4j database:
# Stop the container
docker-compose stop neo4j
# Remove the container
docker-compose rm -f neo4j
# Delete the data directory contents
rm -rf ./neo4j-data/*
# Restart the container
docker-compose up -d neo4j
# Reinitialize the schema
npm run neo4j:init
To back up your Neo4j data, you can simply copy the data directory:
# Make a backup of the Neo4j data
cp -r ./neo4j-data ./neo4j-data-backup-$(date +%Y%m%d)
Memento MCP includes command-line utilities for managing Neo4j operations:
Test the connection to your Neo4j database:
# Test with default settings
npm run neo4j:test
# Test with custom settings
npm run neo4j:test -- --uri bolt://127.0.0.1:7687 --username myuser --password mypass --database neo4j
For normal operation, Neo4j schema initialization happens automatically when Memento MCP connects to the database. You don't need to run any manual commands for regular usage.
The following commands are only necessary for development, testing, or advanced customization scenarios:
# Initialize with default settings (only needed for development or troubleshooting)
npm run neo4j:init
# Initialize with custom vector dimensions
npm run neo4j:init -- --dimensions 768 --similarity euclidean
# Force recreation of all constraints and indexes
npm run neo4j:init -- --recreate
# Combine multiple options
npm run neo4j:init -- --vector-index custom_index --dimensions 384 --recreate
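For context, the initialization step essentially creates the constraints and the Neo4j vector index that semantic search relies on. The sketch below shows roughly what that amounts to in Cypher, run through neo4j-driver; the Entity label and embedding property names are illustrative assumptions rather than the exact schema Memento MCP uses, and the CREATE VECTOR INDEX syntax requires a recent Neo4j 5.x release.

import neo4j from 'neo4j-driver';

// Roughly what schema initialization amounts to: a vector index sized to the
// embedding model. Label and property names are assumptions for illustration.
async function createVectorIndex(dimensions = 1536, similarity = 'cosine'): Promise<void> {
  const driver = neo4j.driver(
    'bolt://127.0.0.1:7687',
    neo4j.auth.basic('neo4j', 'memento_password')
  );
  const session = driver.session({ database: 'neo4j' });
  try {
    await session.run(`
      CREATE VECTOR INDEX entity_embeddings IF NOT EXISTS
      FOR (e:Entity) ON (e.embedding)
      OPTIONS { indexConfig: {
        \`vector.dimensions\`: ${dimensions},
        \`vector.similarity_function\`: '${similarity}'
      }}`);
  } finally {
    await session.close();
    await driver.close();
  }
}

createVectorIndex().catch(console.error);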
Semantic search: Find semantically related entities based on meaning rather than just keywords.
Temporal awareness: Track the complete history of entities and relations with point-in-time graph retrieval.
Confidence decay: Relations automatically decay in confidence over time based on a configurable half-life (sketched below).
Advanced metadata: Rich metadata support for both entities and relations with custom fields.
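The half-life mechanic is plain exponential decay. The sketch below illustrates the idea; parameter names and the exact implementation inside Memento MCP may differ.

// Exponential confidence decay: after one half-life the confidence halves,
// after two it quarters, and so on. Parameter names are illustrative.
function decayedConfidence(
  confidence: number,    // stored confidence, 0.0-1.0
  updatedAt: number,     // ms timestamp of the last update
  referenceTime: number, // ms timestamp to evaluate at (e.g. Date.now())
  halfLifeMs: number     // configurable half-life in milliseconds
): number {
  const elapsed = Math.max(0, referenceTime - updatedAt);
  return confidence * Math.pow(0.5, elapsed / halfLifeMs);
}

// A relation stored 30 days ago at confidence 0.95, with a 30-day half-life,
// is treated as roughly 0.475 today.
const thirtyDays = 30 * 24 * 60 * 60 * 1000;
console.log(decayedConfidence(0.95, Date.now() - thirtyDays, Date.now(), thirtyDays));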
The following tools are available to LLM client hosts through the Model Context Protocol:

create_entities
  Input: entities (array of objects), each containing:
    - name (string): Entity identifier
    - entityType (string): Type classification
    - observations (string[]): Associated observations

add_observations
  Input: observations (array of objects), each containing:
    - entityName (string): Target entity
    - contents (string[]): New observations to add

delete_entities
  Input: entityNames (string[])

delete_observations
  Input: deletions (array of objects), each containing:
    - entityName (string): Target entity
    - observations (string[]): Observations to remove

create_relations
  Input: relations (array of objects), each containing:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type
    - strength (number, optional): Relation strength (0.0-1.0)
    - confidence (number, optional): Confidence level (0.0-1.0)
    - metadata (object, optional): Custom metadata fields

get_relation
  Input:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type

update_relation
  Input: relation (object), containing:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type
    - strength (number, optional): Relation strength (0.0-1.0)
    - confidence (number, optional): Confidence level (0.0-1.0)
    - metadata (object, optional): Custom metadata fields

delete_relations
  Input: relations (array of objects), each containing:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type

read_graph

search_nodes
  Input: query (string)

open_nodes
  Input: names (string[])

semantic_search
  Input:
    - query (string): The text query to search for semantically
    - limit (number, optional): Maximum results to return (default: 10)
    - min_similarity (number, optional): Minimum similarity threshold (0.0-1.0, default: 0.6)
    - entity_types (string[], optional): Filter results by entity types
    - hybrid_search (boolean, optional): Combine keyword and semantic search (default: true)
    - semantic_weight (number, optional): Weight of semantic results in hybrid search (0.0-1.0, default: 0.6)

get_entity_embedding
  Input: entity_name (string): The name of the entity to get the embedding for

get_entity_history
  Input: entityName (string)

get_relation_history
  Input:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type

get_graph_at_time
  Input: timestamp (number): Unix timestamp (milliseconds since epoch)

get_decayed_graph
  Input: options (object, optional), containing:
    - reference_time (number): Reference timestamp for decay calculation (milliseconds since epoch)
    - decay_factor (number): Optional decay factor override
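As a concrete illustration of how an MCP host invokes one of these tools, the sketch below uses the official TypeScript SDK (@modelcontextprotocol/sdk) to launch the server over stdio and call semantic_search. Clients such as Claude Desktop do this for you; the example only makes the tool contract concrete.

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function main(): Promise<void> {
  // Launch the Memento MCP server over stdio, as an MCP host would.
  const transport = new StdioClientTransport({
    command: 'npx',
    args: ['-y', '@gannonh/memento-mcp'],
    env: { MEMORY_STORAGE_TYPE: 'neo4j', NEO4J_URI: 'bolt://127.0.0.1:7687' /* ... */ },
  });

  const client = new Client({ name: 'example-host', version: '1.0.0' }, { capabilities: {} });
  await client.connect(transport);

  // Invoke the semantic_search tool with the parameters documented above.
  const result = await client.callTool({
    name: 'semantic_search',
    arguments: { query: 'programming languages for web development', limit: 5, min_similarity: 0.6 },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);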
Configure Memento MCP with these environment variables:
# Neo4j Connection Settings
NEO4J_URI=bolt://127.0.0.1:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=memento_password
NEO4J_DATABASE=neo4j
# Vector Search Configuration
NEO4J_VECTOR_INDEX=entity_embeddings
NEO4J_VECTOR_DIMENSIONS=1536
NEO4J_SIMILARITY_FUNCTION=cosine
# Embedding Service Configuration
MEMORY_STORAGE_TYPE=neo4j
OPENAI_API_KEY=your-openai-api-key
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
# Debug Settings
DEBUG=true
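These are plain process environment variables, so they can come from your shell, a .env file, or the MCP client configuration shown later. As an illustration of how they resolve, the sketch below falls back to the documented defaults when a variable is unset; the server's actual loader may differ.

// Illustrative resolution of connection settings from the environment.
// Fallback values mirror the documented defaults, not necessarily the server's exact logic.
const config = {
  uri: process.env.NEO4J_URI ?? 'bolt://127.0.0.1:7687',
  username: process.env.NEO4J_USERNAME ?? 'neo4j',
  password: process.env.NEO4J_PASSWORD ?? 'memento_password',
  database: process.env.NEO4J_DATABASE ?? 'neo4j',
  vectorIndex: process.env.NEO4J_VECTOR_INDEX ?? 'entity_embeddings',
  vectorDimensions: Number(process.env.NEO4J_VECTOR_DIMENSIONS ?? 1536),
  similarityFunction: process.env.NEO4J_SIMILARITY_FUNCTION ?? 'cosine',
};
console.log(config);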
The Neo4j CLI tools support the following options:
--uri <uri> Neo4j server URI (default: bolt://127.0.0.1:7687)
--username <username> Neo4j username (default: neo4j)
--password <password> Neo4j password (default: memento_password)
--database <name>       Neo4j database name (default: neo4j)
--vector-index <name>   Vector index name (default: entity_embeddings)
--dimensions <number> Vector dimensions (default: 1536)
--similarity <function> Similarity function (cosine|euclidean) (default: cosine)
--recreate Force recreation of constraints and indexes
--no-debug Disable detailed output (debug is ON by default)
Available OpenAI embedding models:
- text-embedding-3-small: Efficient, cost-effective (1536 dimensions)
- text-embedding-3-large: Higher accuracy, more expensive (3072 dimensions)
- text-embedding-ada-002: Legacy model (1536 dimensions)

To use semantic search, you'll need to configure OpenAI API credentials:
# OpenAI API Key for embeddings
OPENAI_API_KEY=your-openai-api-key
# Default embedding model
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
Note: For testing environments, the system will mock embedding generation if no API key is provided. However, using real embeddings is recommended for integration testing.
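For reference, this is roughly the call the embedding service makes under the hood, using the official openai Node package. It is shown for illustration only; Memento MCP manages embedding generation internally.

import OpenAI from 'openai';

async function embed(text: string): Promise<number[]> {
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const response = await openai.embeddings.create({
    model: process.env.OPENAI_EMBEDDING_MODEL ?? 'text-embedding-3-small',
    input: text,
  });
  // text-embedding-3-small returns 1536-dimensional vectors.
  return response.data[0].embedding;
}

embed('Python is a high-level programming language').then((v) => console.log(v.length));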
Add this to your claude_desktop_config.json:
{
  "mcpServers": {
    "memento": {
      "command": "npx",
      "args": [
        "-y",
        "@gannonh/memento-mcp"
      ],
      "env": {
        "MEMORY_STORAGE_TYPE": "neo4j",
        "NEO4J_URI": "bolt://127.0.0.1:7687",
        "NEO4J_USERNAME": "neo4j",
        "NEO4J_PASSWORD": "memento_password",
        "NEO4J_DATABASE": "neo4j",
        "NEO4J_VECTOR_INDEX": "entity_embeddings",
        "NEO4J_VECTOR_DIMENSIONS": "1536",
        "NEO4J_SIMILARITY_FUNCTION": "cosine",
        "OPENAI_API_KEY": "your-openai-api-key",
        "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small",
        "DEBUG": "true"
      }
    }
  }
}
Alternatively, for local development, you can use:
{
  "mcpServers": {
    "memento": {
      "command": "/path/to/node",
      "args": [
        "/path/to/memento-mcp/dist/index.js"
      ],
      "env": {
        "MEMORY_STORAGE_TYPE": "neo4j",
        "NEO4J_URI": "bolt://127.0.0.1:7687",
        "NEO4J_USERNAME": "neo4j",
        "NEO4J_PASSWORD": "memento_password",
        "NEO4J_DATABASE": "neo4j",
        "NEO4J_VECTOR_INDEX": "entity_embeddings",
        "NEO4J_VECTOR_DIMENSIONS": "1536",
        "NEO4J_SIMILARITY_FUNCTION": "cosine",
        "OPENAI_API_KEY": "your-openai-api-key",
        "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small",
        "DEBUG": "true"
      }
    }
  }
}
Important: Always explicitly specify the embedding model in your Claude Desktop configuration to ensure consistent behavior.
For optimal integration with Claude, add these statements to your system prompt:
You have access to the Memento MCP knowledge graph memory system, which provides you with persistent memory capabilities.
Your memory tools are provided by Memento MCP, a sophisticated knowledge graph implementation.
When asked about past conversations or user information, always check the Memento MCP knowledge graph first.
You should use semantic_search to find relevant information in your memory when answering questions.
Once configured, Claude can access the semantic search capabilities through natural language:
To create entities with semantic embeddings:
User: "Remember that Python is a high-level programming language known for its readability and JavaScript is primarily used for web development."
To search semantically:
User: "What programming languages do you know about that are good for web development?"
To retrieve specific information:
User: "Tell me everything you know about Python."
The power of this approach is that users can interact naturally, while the LLM handles the complexity of selecting and using the appropriate memory tools.
Memento's adaptive search capabilities provide practical benefits:
Query Versatility: Users don't need to worry about how to phrase questions - the system adapts to different query types automatically
Failure Resilience: Even when semantic matches aren't available, the system can fall back to alternative methods without user intervention
Performance Efficiency: By intelligently selecting the optimal search method, the system balances performance and relevance for each query
Improved Context Retrieval: LLM conversations benefit from better context retrieval as the system can find relevant information across complex knowledge graphs
For example, when a user asks "What do you know about machine learning?", the system can retrieve conceptually related entities even if they don't explicitly mention "machine learning" - perhaps entities about neural networks, data science, or specific algorithms. But if semantic search yields insufficient results, the system automatically adjusts its approach to ensure useful information is still returned.
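Conceptually, this hybrid behavior amounts to blending the two result sets by semantic_weight and falling back when semantic matches don't clear the similarity threshold. The sketch below illustrates that scoring logic only; the real implementation lives in the storage backend and may differ in detail.

interface ScoredResult {
  entityName: string;
  keywordScore: number;  // 0.0-1.0 from keyword matching
  semanticScore: number; // 0.0-1.0 cosine similarity
}

// Blend keyword and semantic scores; if no semantic match clears the
// threshold, fall back to pure keyword ranking so something useful returns.
function rankHybrid(
  results: ScoredResult[],
  semanticWeight = 0.6,
  minSimilarity = 0.6
): ScoredResult[] {
  const anySemantic = results.some((r) => r.semanticScore >= minSimilarity);
  const score = (r: ScoredResult) =>
    anySemantic
      ? semanticWeight * r.semanticScore + (1 - semanticWeight) * r.keywordScore
      : r.keywordScore;
  return [...results].sort((a, b) => score(b) - score(a));
}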
Memento MCP includes built-in diagnostic capabilities to help troubleshoot vector search issues. Additional diagnostic tools become available when debug mode is enabled.
To completely reset your Neo4j database during development:
# Stop the container (if using Docker)
docker-compose stop neo4j
# Remove the container (if using Docker)
docker-compose rm -f neo4j
# Delete the data directory (if using Docker)
rm -rf ./neo4j-data/*
# For Neo4j Desktop, right-click your database and select "Drop database"
# Restart the database
# For Docker:
docker-compose up -d neo4j
# For Neo4j Desktop:
# Click the "Start" button for your database
# Reinitialize the schema
npm run neo4j:init
# Clone the repository
git clone https://github.com/gannonh/memento-mcp.git
cd memento-mcp
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Check test coverage
npm run test:coverage
To install memento-mcp for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @gannonh/memento-mcp --client claude
You can run Memento MCP directly using npx without installing it globally:
npx -y @gannonh/memento-mcp
This method is recommended for use with Claude Desktop and other MCP-compatible clients.
For development or contributing to the project:
# Install locally
npm install @gannonh/memento-mcp
# Or clone the repository
git clone https://github.com/gannonh/memento-mcp.git
cd memento-mcp
npm install
MIT