# Troubleshooting

Solutions to common issues.
## Authentication Issues

### 403 Forbidden - Missing API Key

**Symptoms:** Requests are rejected with `403 Forbidden`.

**Cause:** Missing or invalid `X-API-Key` header.

**Solution:**

```bash
# Ensure the API key header is included
curl -H "X-API-Key: your-secret-key-here" ...

# Verify API_ACCESS_KEY is set in .env
grep API_ACCESS_KEY backend/.env
```
### 401 Unauthorized

**Symptoms:** Requests are rejected with `401 Unauthorized`.

**Cause:** API key doesn't match the server configuration.

**Solution:**

1. Check `API_ACCESS_KEY` in `backend/.env`.
2. Regenerate the key if compromised, as sketched below:
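A minimal regeneration sketch, assuming `openssl` is available and that the backend reads the key from `backend/.env` (the restart command follows the Docker Compose setup used elsewhere in this guide):

```bash
# Generate a new random key (openssl assumed available)
NEW_KEY=$(openssl rand -hex 32)

# Replace the old value in backend/.env
# (GNU sed shown; on macOS use: sed -i '' ...)
sed -i "s/^API_ACCESS_KEY=.*/API_ACCESS_KEY=${NEW_KEY}/" backend/.env

# Restart the backend so the new key takes effect
docker-compose restart backend
```

Remember to update the key in any clients that call the API.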
## Database Issues

### Connection Refused

**Symptoms:** The backend fails to start or logs `connection refused` errors when reaching PostgreSQL.

**Cause:** PostgreSQL not running or incorrect connection string.

**Solution:**

```bash
# Check PostgreSQL is running
docker-compose ps postgres

# Verify DATABASE_URL in .env
grep DATABASE_URL backend/.env

# Restart PostgreSQL
docker-compose restart postgres
```
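If the container is up but connections still fail, a direct readiness check can distinguish a down server from a bad connection string. A sketch, using the `postgres` service and `greengovrag` database names from this guide:

```bash
# Ask PostgreSQL itself whether it accepts connections
docker-compose exec postgres pg_isready

# Try connecting with the same credentials the app uses
# (-U postgres is an assumption; use the user from DATABASE_URL)
docker-compose exec postgres psql -U postgres -d greengovrag -c "SELECT 1;"
```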
### pgvector Extension Not Found

**Symptoms:** Errors referencing a missing `vector` extension or type.

**Cause:** pgvector extension not installed.

**Solution:**

```bash
# For Docker Compose
docker-compose down -v
docker-compose up  # Will run init-pgvector.sql

# For local PostgreSQL
psql -d greengovrag -c "CREATE EXTENSION vector;"
```
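To confirm the extension is actually installed, query PostgreSQL's system catalog:

```bash
# Prints the installed pgvector version; empty output means not installed
psql -d greengovrag -c "SELECT extversion FROM pg_extension WHERE extname = 'vector';"
```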
### Migration Errors

**Symptoms:** `alembic upgrade` fails or the application reports a schema mismatch.

**Cause:** Database schema out of sync.

**Solution:**

```bash
cd backend

# Check current revision
alembic current

# Upgrade to latest
alembic upgrade head

# If still failing, reset database (WARNING: data loss)
alembic downgrade base
alembic upgrade head
```
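Before resorting to a reset, it can help to compare the database's revision against the migration history. If the schema is actually correct but the version table is stale, `alembic stamp` realigns it without touching data (a sketch, not part of the standard flow above):

```bash
# List all known revisions to see where `alembic current` sits
alembic history --verbose

# If the schema already matches head but the version table is behind,
# mark the database as up to date without running migrations
alembic stamp head
```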
## Vector Store Issues

### Qdrant Connection Timeout

**Symptoms:** Requests to the vector store time out.

**Cause:** Qdrant not running or wrong URL.

**Solution:**

```bash
# Check Qdrant is running
curl http://localhost:6333/health

# Verify QDRANT_URL in .env
grep QDRANT_URL backend/.env

# Restart Qdrant
docker-compose restart qdrant
```
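If the health check passes but queries still fail, listing collections confirms the service is reachable and shows what it contains (`/collections` is a standard Qdrant REST endpoint):

```bash
# List all collections known to this Qdrant instance
curl http://localhost:6333/collections
```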
### Collection Not Found

**Symptoms:** Queries fail because the `greengovrag` collection does not exist.

**Cause:** Collection hasn't been created.

**Solution:**

```bash
# Run ETL pipeline to create collection
docker-compose up etl

# Or manually create via API
curl -X PUT http://localhost:6333/collections/greengovrag \
  -H "Content-Type: application/json" \
  -d '{
    "vectors": {
      "size": 384,
      "distance": "Cosine"
    }
  }'
```
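After creation, you can verify the collection exists and check its point count (standard Qdrant endpoint):

```bash
# Returns collection status, vector config, and point count
curl http://localhost:6333/collections/greengovrag
```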
### FAISS Index Missing

**Symptoms:** Queries fail because no FAISS index is found on disk.

**Cause:** FAISS index not yet created.

**Solution:**

```bash
# Create data directory
mkdir -p data/vectors

# Run ETL to build index
greengovrag-cli etl run-pipeline --config configs/documents_config.yml
```
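Once the pipeline finishes, confirm the index files were actually written. The `data/vectors` path follows the command above; exact file names depend on the FAISS setup:

```bash
# Index files should appear here after a successful ETL run
ls -lh data/vectors/
```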
## LLM Provider Issues

### OpenAI Rate Limit

**Symptoms:** Requests fail with HTTP 429 rate-limit errors.

**Cause:** Exceeded OpenAI rate limits.

**Solution:**

- Wait and retry with exponential backoff
- Upgrade OpenAI plan for higher limits
- Switch to Azure OpenAI (higher limits), as sketched below:
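A minimal `.env` sketch for the Azure switch. Apart from `LLM_MODEL` and the `AZURE_OPENAI` prefix, which appear elsewhere in this guide, the variable names are assumptions; check the Configuration Guide for the authoritative list:

```env
# Hypothetical provider switch; confirm exact variable names
# against the Configuration Guide
LLM_PROVIDER=azure_openai
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=your-azure-key
LLM_MODEL=your-deployment-name  # Azure uses the deployment name, not the model name
```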
### Invalid API Key

**Symptoms:** LLM calls fail with authentication errors from the provider.

**Cause:** Invalid or expired LLM API key.

**Solution:**

```bash
# Verify key is set
grep OPENAI_API_KEY backend/.env

# Test key directly
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# For Azure, check endpoint and key
grep AZURE_OPENAI backend/.env
```
### Model Not Found

**Symptoms:** LLM calls fail with a model-not-found error.

**Cause:** Model name doesn't exist or isn't available.

**Solution:**

```env
# For Azure, use the deployment name (not the model name)
LLM_MODEL=your-deployment-name  # e.g., "gpt-4o"

# For OpenAI, use a valid model name
LLM_MODEL=gpt-4o  # or gpt-4o-mini
```
## ETL Issues

### Document Download Failed

**Symptoms:** The ETL pipeline logs download errors for a source document.

**Cause:** Source URL is broken or moved.

**Solution:**

- Check the document source URL in `configs/documents_config.yml`
- Manually verify the URL in a browser
- Update the config with the new URL
- Disable the source if permanently unavailable, as sketched below:
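A hypothetical `documents_config.yml` fragment; the `enabled` flag and the surrounding structure are assumptions, so match them to the actual schema of your config:

```yaml
# configs/documents_config.yml (structure is illustrative)
sources:
  - name: example-source
    url: https://example.org/document.pdf
    enabled: false  # hypothetical flag to skip this source
```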
### PDF Parsing Errors

**Symptoms:** The ETL pipeline fails while parsing a specific PDF.

**Cause:** Corrupted PDF or unsupported format.

**Solution:**

```bash
# Check PDF is valid
file document.pdf

# Try a different parser strategy
UNSTRUCTURED_STRATEGY=fast  # Options: hi_res, fast, auto

# Skip problematic document
# Edit configs/documents_config.yml to exclude it
```
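For a deeper validity check than `file`, poppler's `pdfinfo` (if installed) fails loudly on a corrupted file:

```bash
# Prints metadata for a valid PDF; errors out on a corrupted one
pdfinfo document.pdf
```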
### Out of Memory During Indexing

**Symptoms:** The ETL process is killed or raises memory errors during indexing.

**Cause:** Too many chunks being processed at once.

**Solution:**

```bash
# Reduce batch size in .env
CHUNK_BATCH_SIZE=50  # Down from 100

# Increase Docker memory limit
# docker-compose.yml: mem_limit: 4g
```
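The memory-limit change spelled out as a `docker-compose.yml` fragment (the `etl` service name follows the ETL examples above; adjust if indexing runs in a different service):

```yaml
# docker-compose.yml
services:
  etl:
    mem_limit: 4g  # raise from the previous limit
```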
## Performance Issues

### Slow Query Response

**Symptoms:** Queries taking >5 seconds.

**Possible causes and solutions:**

- **Cache disabled:** re-enable response caching in `.env` (see the Configuration Guide for the relevant setting)
- **Too many sources requested:** reduce the number of sources per query
- **Vector store not optimized:**
    - Switch from FAISS to Qdrant for production
    - Enable HNSW index in Qdrant (see the sketch after this list)
- **LLM response slow:**
    - Use a faster model: `gpt-4o-mini` instead of `gpt-4o`
    - Reduce `max_tokens` in the query
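A sketch of tuning the HNSW index through Qdrant's REST API. The `PATCH /collections/{name}` endpoint with `hnsw_config` is standard Qdrant; the specific `m` and `ef_construct` values here are illustrative:

```bash
# Tune HNSW parameters on the existing collection
curl -X PATCH http://localhost:6333/collections/greengovrag \
  -H "Content-Type: application/json" \
  -d '{
    "hnsw_config": {
      "m": 16,
      "ef_construct": 100
    }
  }'
```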
### High Memory Usage

**Symptoms:** Container OOM killed.

**Solution:**

Increase Docker memory in `docker-compose.yml`:

```yaml
services:
  backend:
    mem_limit: 4g  # Up from 2g
```

Reduce vector store memory usage in `.env`:

```env
VECTOR_STORE_TYPE=qdrant  # Qdrant uses less memory than FAISS
```
## Docker Issues

### Port Already in Use

**Symptoms:** Startup fails because a mapped port (e.g., 8000) is already bound.

**Solution:**

```bash
# Find process using port
lsof -i :8000

# Kill process
kill -9 <PID>
```

Or change the port in `docker-compose.yml`:

```yaml
ports:
  - "8001:8000"  # Use port 8001 instead
```
### Container Won't Start

**Symptoms:** Container exits immediately after starting.

**Solution:**

```bash
# Check logs
docker-compose logs backend

# Common causes:
# 1. Missing environment variables
#    Solution: Verify .env file exists and is complete

# 2. Database migration needed
#    Solution: Run migrations manually
docker-compose exec backend alembic upgrade head

# 3. Dependency conflict
#    Solution: Rebuild images
docker-compose build --no-cache
```
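For the missing-variable case, `docker-compose config` renders the final configuration and warns about any unset variables before you start anything:

```bash
# Warns (on stderr) about unset environment variables and validates the compose file
docker-compose config > /dev/null
```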
## Network Issues

### Cannot Reach API from Host

**Symptoms:** Connection refused when calling the API.

**Solution:**

```bash
# Check API is running
docker-compose ps

# Check API is listening
docker-compose exec backend netstat -tlnp | grep 8000

# Test from inside container
docker-compose exec backend curl http://localhost:8000/api/health

# If working inside, check port mapping
docker-compose ps  # Should show 0.0.0.0:8000->8000/tcp
```
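If the port mapping looks right, repeat the health check from the host; a failure here combined with success inside the container points at the mapping or a firewall rather than the app:

```bash
# Run on the host, not inside the container
curl -v http://localhost:8000/api/health
```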
## Getting Help

If you're still stuck:

- Check logs: `docker-compose logs -f backend`
- Enable debug logging: `LOG_LEVEL=DEBUG` in `.env`
- Search existing issues: GitHub Issues
- Create an issue: include logs, `.env` (redacted), and error messages
## See Also

- Monitoring Guide - System health monitoring
- Configuration Guide - Environment variables
- Deployment Guide - Production deployment