Get up and running in minutes with our Python SDK.
pip install agentcache
import agentcache

# Drop-in replacement for OpenAI
response = agentcache.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is Python?"}],
    provider="openai"
)

if response.get('hit'):
    print(f"💚 Cache hit! Saved ${response.get('billing', {}).get('cost_saved', 0)}")
    print(response['response'])
else:
    print("Cache miss - call your LLM provider normally")
AgentCache sits between your application and AI providers, acting as an intelligent caching layer.
┌─────────────┐
│ Your App │
└──────┬──────┘
│
▼
┌─────────────────┐
│ AgentCache.ai │◄──── Check cache first
└──────┬──────────┘
│
┌──┴──┐
│ │
▼ ▼
┌───────┐ ┌──────────┐
│ Hit │ │ Miss │
│ <50ms │ │ Call LLM │
│ $0 │ │ + Cache │
└───────┘ └──────────┘
For air-gapped or high-compliance environments, deploy AgentCache Edge using Docker or Kubernetes.
version: '3'
services:
  agentcache:
    image: xinetex/agentcache-edge:latest
    ports:
      - "3000:3000"
    environment:
      - REDIS_URL=redis://redis:6379
      - AGENTCACHE_API_KEY=your_secret_key
      - NODE_ENV=production
    depends_on:
      - redis
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
volumes:
  redis-data:
Run docker-compose up -d to start the service. Your local instance will be available at
http://localhost:3000.
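Before routing traffic to the local instance, you may want to wait until it actually answers. A small standard-library polling sketch (the /health path is an assumption; substitute whatever endpoint your deployment exposes):

```python
import time
import urllib.error
import urllib.request


def wait_for_ready(url, attempts=10, delay=1.0):
    """Poll `url` until it answers HTTP 200, or give up after `attempts` tries."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short delay
        time.sleep(delay)
    return False


# e.g. wait_for_ready("http://localhost:3000/health")
```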
For production environments, deploy using Kubernetes with Helm or raw manifests.
helm repo add agentcache https://charts.agentcache.ai
helm install agentcache agentcache/agentcache-edge \
  --set redis.enabled=true \
  --set apiKey=your_secret_key \
  --set replicas=3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agentcache-edge
spec:
  replicas: 3
  selector:
    matchLabels:
      app: agentcache
  template:
    metadata:
      labels:
        app: agentcache
    spec:
      containers:
        - name: agentcache
          image: xinetex/agentcache-edge:latest
          ports:
            - containerPort: 3000
          env:
            - name: REDIS_URL
              value: redis://redis-service:6379
            - name: AGENTCACHE_API_KEY
              valueFrom:
                secretKeyRef:
                  name: agentcache-secrets
                  key: api-key
---
apiVersion: v1
kind: Service
metadata:
  name: agentcache-service
spec:
  selector:
    app: agentcache
  ports:
    - port: 3000
      targetPort: 3000
  type: LoadBalancer
Deploy AgentCache in fully isolated environments without external internet access.
# Save Docker images
docker save xinetex/agentcache-edge:latest -o agentcache-edge.tar
docker save redis:alpine -o redis.tar
# Transfer to air-gapped environment, then load:
docker load -i agentcache-edge.tar
docker load -i redis.tar
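When carrying the image tarballs across the air gap, it is worth verifying they arrived intact before loading them. A small sha256 checksum helper (plain standard-library Python, not part of AgentCache):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in 1 MiB chunks and return its hex sha256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# On the connected side:  record sha256_of("agentcache-edge.tar")
# On the air-gapped side: compare the digest before running `docker load`
```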
# docker-compose-airgapped.yml
version: '3'
services:
  agentcache:
    image: xinetex/agentcache-edge:latest
    ports:
      - "3000:3000"
    environment:
      - REDIS_URL=redis://redis:6379
      - AGENTCACHE_API_KEY=${AGENTCACHE_API_KEY}
      - AIR_GAPPED_MODE=true
      - DISABLE_TELEMETRY=true
      - DISABLE_UPDATES=true
    networks:
      - isolated
  redis:
    image: redis:alpine
    networks:
      - isolated
    volumes:
      - redis-data:/data
networks:
  isolated:
    internal: true
volumes:
  redis-data:
Configure AgentCache Edge with these environment variables.
REDIS_URL
Redis connection string (required)
REDIS_URL=redis://localhost:6379
AGENTCACHE_API_KEY
API key for authentication (required)
AGENTCACHE_API_KEY=your_secret_key_here
DATABASE_URL
PostgreSQL connection for analytics (optional)
DATABASE_URL=postgresql://user:pass@localhost:5432/agentcache
AIR_GAPPED_MODE
Enable for isolated environments (default: false)
AIR_GAPPED_MODE=true
LOG_LEVEL
Logging verbosity: debug, info, warn, error (default: info)
LOG_LEVEL=info
CACHE_TTL
Default cache TTL in seconds (default: 86400)
CACHE_TTL=86400
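In application code, a typical way to consume these variables is to fail fast on the required ones and fall back to the documented defaults for the rest. A sketch of such a loader (the variable names and defaults come from the table above; the loader itself is illustrative, not AgentCache code):

```python
import os


def load_config(env=os.environ):
    """Read AgentCache Edge settings, enforcing required vars and defaults."""
    def require(name):
        value = env.get(name)
        if not value:
            raise RuntimeError(f"{name} is required")
        return value

    return {
        "redis_url": require("REDIS_URL"),
        "api_key": require("AGENTCACHE_API_KEY"),
        "database_url": env.get("DATABASE_URL"),  # optional analytics DB
        "air_gapped": env.get("AIR_GAPPED_MODE", "false").lower() == "true",
        "log_level": env.get("LOG_LEVEL", "info"),
        "cache_ttl": int(env.get("CACHE_TTL", "86400")),
    }
```

Passing `env` explicitly keeps the loader easy to test; in production it simply reads `os.environ`.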
GET /health/metrics
For enterprise deployments, compliance questions, or technical support: