A web-based simulation game where you manage a vault full of dwellers, balancing their needs and resources to keep it thriving. Built with modern Python tooling.
See ROADMAP.md for recent updates and upcoming features.
Backend: FastAPI · SQLModel · PostgreSQL 18 · Celery · Redis · MinIO · PydanticAI
Frontend: Vue 3.5 · TypeScript · Vite · Pinia · TailwindCSS v4 · Vitest
Tooling: uv · ruff · Rolldown · Oxlint · Docker/Podman
Required:
- Python 3.12+ (3.13 recommended)
- Node.js 22 LTS
- Docker Compose v2 (use `docker compose`, not `docker-compose`)
Installation:
- uv (Python package manager):
  - macOS/Linux: `curl -LsSf https://astral.sh/uv/install.sh | sh`
  - Windows: `powershell -c "irm https://astral.sh/uv/install.ps1 | iex"`
- pnpm (via Corepack): `corepack enable && corepack use pnpm@latest`
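To confirm the prerequisites are met before continuing, a small version check can be sketched with `sort -V`. The `version_ge` helper and the `python3` invocation below are illustrative, not part of the project:

```shell
# Minimal sketch: check an installed tool against a minimum version.
# Uses `sort -V` (GNU version sort); assumes `python3` is on PATH.
version_ge() {
    # True (exit 0) when $1 >= $2 in version order
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

py_version=$(python3 -c 'import sys; print(".".join(map(str, sys.version_info[:2])))')
version_ge "$py_version" "3.12" \
    && echo "Python $py_version OK" \
    || echo "Python $py_version too old (need 3.12+)"
```

The same helper works for Node.js (`node --version | tr -d v`) against the 22 LTS minimum.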
Recommended setup: Run infrastructure in Docker; run backend + frontend locally for hot reload.
# 1. Clone and setup environment
git clone https://github.com/ElderEvil/falloutProject && cd falloutProject
cp .env.example .env # Edit with your settings (keep localhost hostnames)
# 2. Start infrastructure services (PostgreSQL, Redis, MinIO, Mailpit)
docker compose -f docker-compose.infra.yml up -d
# 3. Setup and run backend (http://localhost:8000)
cd backend
cp ../.env .env
uv sync --all-extras --dev
uv run alembic upgrade head
uv run fastapi dev main.py
# 4. In separate terminals, start Celery workers
# Terminal 2:
uv run celery -A app.core.celery worker -l info
# Terminal 3:
uv run celery -A app.core.celery beat -l info --scheduler sqlalchemy_celery_beat.schedulers:DatabaseScheduler
# 5. Setup and run frontend (http://localhost:5173)
# ⚠️ IMPORTANT: Backend API must be accessible at http://localhost:8000
cd ../frontend
pnpm install
pnpm run dev

Verify everything works:
# Backend health check
curl -sf http://localhost:8000/healthcheck
# Frontend (open in browser)
# Windows (PowerShell): Start-Process http://localhost:5173
# Mac: open http://localhost:5173
# Linux: xdg-open http://localhost:5173

Optional: Ollama for Local AI (Hybrid Mode)
# Install Ollama: https://ollama.ai/download
# Pull a model (run once):
ollama pull llama2
# Ollama runs as service after install (http://localhost:11434)
# Update .env: AI_PROVIDER=ollama

Platform Notes:
- Windows: Use PowerShell, Git Bash, or WSL2. Commands work identically.
- Mac/Linux: All commands work as-is in Terminal.
- First run: Backend will create database schema automatically via migrations
Run everything in containers (no local Node/Python needed):
# 1. Clone and setup environment
git clone https://github.com/ElderEvil/falloutProject && cd falloutProject
cp .env.example .env # Edit SECRET_KEY, passwords, API keys as needed
# 2. Start all services (environment overrides handled automatically)
docker compose up -d
# 3. Wait for services to be ready (30-60 seconds)
docker compose logs -f fastapi # Watch startup (Ctrl+C to exit)

Access:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000/docs (Swagger UI)
- Mailpit (email testing): http://localhost:8025
- Flower (Celery monitor): http://localhost:5555
- MinIO Console: http://localhost:9001 (login: minioadmin/minioadmin)
Notes:
- No need to edit hostnames in `.env` - Docker Compose overrides them automatically
- First build takes 5-10 minutes (downloads images + builds backend/frontend)
- Subsequent starts are fast (~30 seconds)
cd backend
uv sync --all-extras --dev && prek install # Install deps + git hooks
uv run pytest app/tests/ # Run tests
uv run ruff check . && uv run ruff format . # Lint & format
uv run alembic upgrade head # Migrations

cd frontend
pnpm install
pnpm test # Run tests
pnpm run lint # Lint
pnpm run build # Build for production

See frontend/README.md and frontend/STYLEGUIDE.md for details.
# Hybrid development (infra only)
docker compose -f docker-compose.infra.yml up -d
# Full stack (all services)
docker compose up -d
# Access frontend: http://localhost:3000
# Access backend: http://localhost:8000
# Local dev with hot reload
docker compose -f docker-compose.local.yml up -d
# TrueNAS staging
# See docs/deployment/TRUENAS_SETUP.md

Pre-built images (automated by CI/CD):
- Backend: `elerevil/fo-shelter-be:latest`
- Frontend: `elerevil/fo-shelter-fe:latest`
See docs/DEPLOYMENT.md for complete deployment guide.
A backup script is provided at scripts/backup-db.sh:
# Set environment variables (or use .env file)
export POSTGRES_DB=fallout_db
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=your_password
export POSTGRES_SERVER=localhost
# Run backup
./scripts/backup-db.sh
# Backups are stored in: /mnt/dead-pool/backups/fallout/
# - Timestamped filenames (fallout_YYYYMMDD_HHMMSS.sql.gz)
# - Automatic compression
# - 14-day retention (old backups auto-deleted)

# Using pg_dump directly
docker exec -t fallout-postgres pg_dump -U postgres fallout_db > backup.sql
# Or with compression
docker exec -t fallout-postgres pg_dump -U postgres fallout_db | gzip > backup.sql.gz

# Stop the application
docker compose stop fastapi
# Restore from backup (uncompressed)
gunzip backup.sql.gz # if compressed
docker exec -i fallout-postgres psql -U postgres -d fallout_db < backup.sql
# Or restore to a fresh database
docker exec -i fallout-postgres psql -U postgres -c "DROP DATABASE fallout_db; CREATE DATABASE fallout_db;"
docker exec -i fallout-postgres psql -U postgres -d fallout_db < backup.sql
# Restart application
docker compose start fastapi

Environment files:
- `.env.example` - Template with localhost hostnames (for hybrid development)
- `.env` - Your local copy (create from `.env.example`)
- `.env.local` - Used by `docker-compose.local.yml` (dev with volume mounts)
- `backend/.env` - Required by the backend at runtime (copy from the root `.env`)
Configuration strategy:
- Hybrid mode: Use `.env` with localhost hostnames (as-is from `.env.example`)
- Full Docker mode: Use `.env` as-is - Docker Compose auto-overrides hostnames
- Do NOT manually edit hostnames for Docker - the compose files handle it
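For hybrid mode, the database/cache/storage hostnames in `.env` should stay pointed at localhost. An illustrative fragment (example values, not the full file):

```env
POSTGRES_SERVER=localhost
REDIS_HOST=localhost
MINIO_HOSTNAME=localhost
```

In full Docker mode the compose files override these to the internal service names (e.g. `db`), so the same file works for both setups.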
Key variables:
- Required:
  - `SECRET_KEY` - Change in production (use `openssl rand -hex 32`)
  - `POSTGRES_PASSWORD` - Database password
  - `FIRST_SUPERUSER_PASSWORD` - Admin account password
- Optional:
  - `AI_PROVIDER` - `openai` (default), `anthropic`, or `ollama` (local/free)
  - `OPENAI_API_KEY` - Only needed if using OpenAI (leave empty for `ollama`)
- Database: `POSTGRES_SERVER`, `POSTGRES_DB`, `POSTGRES_USER`
- Redis: `REDIS_HOST`, `REDIS_PORT`
- MinIO: `MINIO_HOSTNAME`, `MINIO_ROOT_USER`, `MINIO_ROOT_PASSWORD`
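As a sanity check, `openssl rand -hex 32` yields a 64-character hex string suitable for `SECRET_KEY`:

```shell
# Generate a SECRET_KEY and confirm it is 64 hex characters
SECRET_KEY=$(openssl rand -hex 32)
echo "${#SECRET_KEY}"   # prints 64
```

Paste the value into `.env`; never commit it to version control.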
AI Setup Notes:
- Ollama (free): In full Docker mode it already runs in the `ollama` container; for hybrid mode, install it locally
- OpenAI: Set `AI_PROVIDER=openai` and add your `OPENAI_API_KEY`
- No AI: The app works without AI (conversations/chat features are disabled)
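Putting the notes above together, typical `.env` fragments per provider look like this (illustrative; `your-key-here` is a placeholder):

```env
# Local Ollama (free, no API key needed)
AI_PROVIDER=ollama

# OpenAI (hosted) - uncomment and fill in instead:
# AI_PROVIDER=openai
# OPENAI_API_KEY=your-key-here
```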
"Connection refused" errors in Docker:
# Check all services are running
docker compose ps
# View logs for specific service
docker compose logs fastapi
docker compose logs db
# Restart services
docker compose restart

Port already in use:
# Check what's using port 8000 (backend)
# Linux/Mac: lsof -i :8000
# Windows: netstat -ano | findstr :8000
# Stop conflicting service or change port in docker-compose.yml

Backend can't connect to database (hybrid mode):
# Verify infrastructure is running
docker compose -f docker-compose.infra.yml ps
# Check .env has localhost (not 'db')
grep POSTGRES_SERVER .env # Should show: POSTGRES_SERVER=localhost

Frontend can't generate types:
# Ensure backend is running and accessible
curl http://localhost:8000/docs
# If backend is in Docker, ensure port 8000 is exposed
docker compose ps fastapi # Should show 0.0.0.0:8000->8000/tcp

AI features not working:
- Check that `AI_PROVIDER` in `.env` matches your setup
- For OpenAI: Verify `OPENAI_API_KEY` is set correctly
- For Ollama: Ensure the service is running (`ollama serve`, or check the Docker container)
- The app works without AI - conversation features will simply be disabled
- ROADMAP.md - Changelog and upcoming features
- docs/DEPLOYMENT.md - Deployment guide
- docs/deployment/TRUENAS_SETUP.md - TrueNAS staging setup
- frontend/README.md - Frontend architecture
- frontend/STYLEGUIDE.md - Design system
MIT License - See LICENSE file for details.
Built by ElderEvil · Inspired by Fallout Shelter (Bethesda)