Overview
A production-ready Retrieval-Augmented Generation template you can fork, customize, and deploy on bult.ai. Upload documents, ask questions, get answers with source citations. Supports multiple LLM providers (OpenAI, Anthropic, Google, Ollama), hybrid search with reranking, OCR for scanned PDFs, multi-user authentication, and conversation export.

Features
- Multi-model LLM support — switch between OpenAI, Anthropic, Google AI, or local Ollama models via environment variable
- Advanced RAG pipeline — hybrid search (BM25 + vector), cross-encoder reranking, HyDE query transformation, multi-query retrieval, query decomposition (see the sketch after this list)
- Document processing — PDF, DOCX, PPTX, TXT, MD, CSV, JSON, HTML; automatic OCR for scanned PDFs via Tesseract
- Inline citations — every response cites source documents with relevance scores
- Authentication — JWT login/register + optional Google OAuth
- Analytics dashboard — usage metrics, cost tracking, query latency, top projects
- Conversation export — Markdown, JSON, and PDF export with full Unicode support
- Background processing — async job queue with progress tracking and retry logic
- Single-page frontend — clean UI with streaming responses, markdown rendering, dark mode
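To make the hybrid-search feature concrete, here is a minimal sketch of how BM25 and vector results can be fused and then reranked with a cross-encoder. It is illustrative only and not the template's actual code: the rank_bm25 and sentence-transformers packages, the model names, and the reciprocal-rank-fusion constant are all assumptions.

```python
# Illustrative sketch of hybrid retrieval + reranking (not the template's actual code).
# Assumes the rank_bm25 and sentence-transformers packages; model names are examples.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder

corpus = [
    "pgvector adds vector similarity search to PostgreSQL.",
    "Tesseract performs OCR on scanned PDF pages.",
    "JWT tokens are issued at login and checked on every request.",
]

# Sparse index (BM25) over whitespace-tokenized documents
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Dense index: embed the corpus once with a sentence-transformer
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

# Cross-encoder used to rerank the fused candidate set
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def hybrid_search(query: str, k: int = 3) -> list[str]:
    # Rank documents separately with BM25 and with cosine similarity
    bm25_rank = np.argsort(-bm25.get_scores(query.lower().split()))
    q_vec = embedder.encode([query], normalize_embeddings=True)[0]
    dense_rank = np.argsort(-(doc_vecs @ q_vec))

    # Reciprocal rank fusion: combine the two rankings without score normalization
    rrf = np.zeros(len(corpus))
    for rank_list in (bm25_rank, dense_rank):
        for position, doc_idx in enumerate(rank_list):
            rrf[doc_idx] += 1.0 / (60 + position)
    candidates = [corpus[i] for i in np.argsort(-rrf)[:k]]

    # Cross-encoder reranking of the fused candidates
    scores = reranker.predict([(query, doc) for doc in candidates])
    return [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]

print(hybrid_search("How does the app search PostgreSQL vectors?"))
```

Reciprocal rank fusion is shown here because it combines the two rankings without normalizing BM25 and cosine scores onto a common scale; the template may weight or fuse results differently.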
Deploy on bult.ai
bult.ai is a PaaS that deploys from GitHub with built-in database templates and Docker support. You need three services: the app (GitHub), a PostgreSQL database, and a pgvector instance.

Prerequisites

- GitHub account
- OpenAI API key
- bult.ai account
App service

- Fork this repository to your GitHub account
- On bult.ai, click Create > GitHub
- Select your forked repository
- Go to the Git tab and change the build settings from Nixpacks to Dockerfile. Set Dockerfile Path to Dockerfile and Dockerfile Context to .
- Inside the service settings, set the port to 8002
- Go to Environment Variables and add the variables from .env.example; a sketch of the typical required ones follows
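Exactly which variables are required is defined by .env.example in the repository; the snippet below is only a hedged example. Apart from PG_CONN, which this guide names, the variable names are assumptions.

```
# Illustrative only; use the names and values from .env.example
OPENAI_API_KEY=sk-...                                                   # LLM provider key (assumed name)
LLM_PROVIDER=openai                                                     # provider switch from the Features list (assumed name)
DATABASE_URL=postgresql://<user>:<password>@<postgres-host>:5432/<db>   # main Postgres database (assumed name)
PG_CONN=postgresql://postgres:<password>@pgvector:5432/postgres         # pgvector instance (named later in this guide)
JWT_SECRET=<random-string>                                              # JWT authentication (assumed name)
```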
PostgreSQL database

- Click Create > Databases > Postgres
- This creates a Postgres instance from a built-in template — it’s automated
- The only thing you need to configure is the environment variables that connect the app to this database (see the example below)
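A hedged example, assuming the app reads a standard Postgres connection URL; the variable name is an assumption, and the host, user, password, and database name come from the template's settings on bult.ai.

```
# Set on the app service; values come from the bult.ai Postgres template
DATABASE_URL=postgresql://<user>:<password>@<postgres-hostname>:5432/<database>
```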
pgvector instance

- Click Create > Docker
- Docker image: ankane/pgvector:latest
- Name this service to match the hostname in your PG_CONN (e.g., pgvector)
- Deploy the service
- After it’s running, add a volume mounted at /var/lib/postgresql/data for persistent storage
- Add internal port 5432
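With the service named pgvector and internal port 5432, the app's PG_CONN should resolve to that hostname and port. The values below are examples, not defaults; since ankane/pgvector builds on the official postgres image, the Docker service will also expect a POSTGRES_PASSWORD variable.

```
# On the pgvector Docker service (standard postgres image variable; value is an example)
POSTGRES_PASSWORD=<choose-a-password>

# On the app service: host matches the service name, port matches the internal port
PG_CONN=postgresql://postgres:<choose-a-password>@pgvector:5432/postgres
```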
Verify the deployment

- All three services should show as running
- Check the app service logs — you should see database migrations and the worker starting
- Open the public URL for your app service. Register a user and start chatting.

