Database Setup
This guide covers setting up PostgreSQL and Redis for TMI deployment.
TMI requires two database systems:
- PostgreSQL: Primary data storage for threat models, users, and persistent data
- Redis: Session storage, caching, and real-time collaboration coordination
[TMI Server]
↓
[Connection Pool]
↓
[PostgreSQL] ←→ [Read Replicas (optional)]
↓
[Persistent Storage]
[TMI Server] → [Redis] → [In-Memory Storage]
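
For local evaluation, both stores can be started together with Docker Compose. This is a minimal sketch that mirrors the individual docker run commands shown later in this guide; the service and volume names are examples, and the credentials are placeholders you should change:

```yaml
# docker-compose.yml — sketch only; mirrors the docker run commands below
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: tmi_user
      POSTGRES_PASSWORD: secure_password
      POSTGRES_DB: tmi
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
volumes:
  postgres_data:
  redis_data:
```

Named volumes keep data across container restarts, matching the `-v` flags used in the standalone Docker commands.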
# Add PostgreSQL repository
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Install PostgreSQL
sudo apt update
sudo apt install postgresql-15 postgresql-contrib-15
# Start and enable service
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Install PostgreSQL repository (RHEL/CentOS)
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# Disable built-in PostgreSQL module
sudo dnf -qy module disable postgresql
# Install PostgreSQL
sudo yum install -y postgresql15-server postgresql15-contrib
# Initialize database
sudo /usr/pgsql-15/bin/postgresql-15-setup initdb
# Start and enable service
sudo systemctl start postgresql-15
sudo systemctl enable postgresql-15

# Install PostgreSQL (macOS)
brew install postgresql@15
# Start service
brew services start postgresql@15

# Run PostgreSQL container (Docker)
docker run -d \
--name tmi-postgres \
-e POSTGRES_USER=tmi_user \
-e POSTGRES_PASSWORD=secure_password \
-e POSTGRES_DB=tmi \
-v postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:15
# Verify running
docker ps | grep tmi-postgres

# Connect as postgres superuser
sudo -u postgres psql
# Or with Docker
docker exec -it tmi-postgres psql -U postgres

-- Create application user
CREATE USER tmi_user WITH PASSWORD 'secure_password';
-- Create database
CREATE DATABASE tmi OWNER tmi_user;
-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE tmi TO tmi_user;
-- Connect to tmi database
\c tmi
-- Grant schema privileges
GRANT ALL ON SCHEMA public TO tmi_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO tmi_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO tmi_user;
-- Exit
\q

Edit /etc/postgresql/15/main/postgresql.conf:
# Listen on all interfaces (or specific IP)
listen_addresses = '*'
# Connection limits
max_connections = 100
shared_buffers = 256MB
# Performance tuning
effective_cache_size = 1GB
maintenance_work_mem = 128MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1

Edit /etc/postgresql/15/main/pg_hba.conf:
# TYPE DATABASE USER ADDRESS METHOD
# Allow local connections
local all postgres peer
local all all md5
# IPv4 local connections
host all all 127.0.0.1/32 md5
# Remote connections (adjust IP range as needed)
host tmi tmi_user 10.0.0.0/8 md5
host tmi tmi_user 172.16.0.0/12 md5
host tmi tmi_user 192.168.0.0/16 md5
# SSL connections for production
hostssl tmi tmi_user 0.0.0.0/0 md5
Restart PostgreSQL:
sudo systemctl restart postgresql

For production deployments, enable SSL:
# Generate self-signed certificate (or use proper certificate)
sudo openssl req -new -x509 -days 365 -nodes -text \
-out /var/lib/postgresql/15/main/server.crt \
-keyout /var/lib/postgresql/15/main/server.key \
-subj "/CN=postgres.example.com"
# Set permissions
sudo chown postgres:postgres /var/lib/postgresql/15/main/server.*
sudo chmod 600 /var/lib/postgresql/15/main/server.key

Edit postgresql.conf:
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'

TMI includes automated database migrations.
From TMI server directory:
# Run all pending migrations
cd cmd/migrate && go run main.go up
# Or use environment file
go run cmd/migrate/main.go --env=.env.production
# Check migration status
go run cmd/migrate/main.go --status

Migrations are in auth/migrations/:
| Migration | Description |
|---|---|
| 000001_initial_schema.up.sql | Base tables |
| 000002_add_indexes.up.sql | Performance indexes |
| 000003_add_constraints.up.sql | Foreign keys |
| 000004_add_refresh_tokens.up.sql | OAuth refresh tokens |
| 000005_add_user_providers.up.sql | Multi-provider auth |
| 000006_update_schema.up.sql | Schema refinements |
| 000007_add_missing_indexes.up.sql | Additional indexes |
| 000008_add_additional_constraints.up.sql | CHECK constraints |
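
The zero-padded numeric prefixes determine apply order: a plain lexicographic sort yields the sequence in which the files run. This is a general property of the naming scheme rather than TMI-specific behavior; a quick illustration:

```shell
# The prefixes sort lexicographically, so sorting recovers the apply order
# regardless of the order the files are listed in.
FIRST=$(printf '%s\n' \
  000003_add_constraints.up.sql \
  000001_initial_schema.up.sql \
  000002_add_indexes.up.sql | sort | head -n 1)
echo "$FIRST"
```

This is why new migrations must always use the next unused prefix.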
# Use check-db tool
go run cmd/check-db/main.go
# Expected output:
# ✓ Database connection successful
# ✓ All tables exist
# ✓ All columns validated
# ✓ All indexes present
# ✓ All constraints verified

Configure PostgreSQL connection in TMI server:
Environment Variables:
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=tmi_user
POSTGRES_PASSWORD=secure_password
POSTGRES_DATABASE=tmi
POSTGRES_SSL_MODE=require  # or disable for local dev

Configuration File (config-production.yml):
database:
  postgres:
    host: "postgres.example.com"
    port: "5432"
    user: "tmi_user"
    password: "${POSTGRES_PASSWORD}"
    database: "tmi"
    sslmode: "require"
    max_connections: 25
    max_idle_connections: 5
    connection_lifetime: "5m"

# Test connection from command line
psql -h localhost -U tmi_user -d tmi -c "SELECT version();"
# Test from TMI server
curl http://localhost:8080/version
# Server will log database connection status

# Install Redis (Ubuntu/Debian)
sudo apt update
sudo apt install redis-server
# Start and enable service
sudo systemctl start redis-server
sudo systemctl enable redis-server

# Install EPEL repository (RHEL/CentOS)
sudo yum install epel-release
# Install Redis
sudo yum install redis
# Start and enable service
sudo systemctl start redis
sudo systemctl enable redis

# Install Redis (macOS)
brew install redis
# Start service
brew services start redis

# Run Redis container (Docker)
docker run -d \
--name tmi-redis \
-p 6379:6379 \
-v redis_data:/data \
redis:7-alpine redis-server --appendonly yes
# Or with password
docker run -d \
--name tmi-redis \
-p 6379:6379 \
-v redis_data:/data \
redis:7-alpine redis-server \
--appendonly yes \
--requirepass "secure_redis_password"

Edit /etc/redis/redis.conf:
# Bind to specific interfaces
bind 127.0.0.1 ::1
# For remote access (be careful!)
# bind 0.0.0.0
# Set password
requirepass your_redis_password
# Persistence
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
# Memory management
maxmemory 1gb
maxmemory-policy allkeys-lru
# Disable dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command CONFIG ""
# Performance
tcp-backlog 511
timeout 300
tcp-keepalive 300

Restart Redis:
sudo systemctl restart redis

To enable TLS, generate certificates:
openssl req -x509 -nodes -newkey rsa:4096 \
-keyout redis.key -out redis.crt -days 365

Configure in redis.conf:
tls-port 6380
port 0 # Disable non-TLS
tls-cert-file /etc/redis/redis.crt
tls-key-file /etc/redis/redis.key
tls-ca-cert-file /etc/redis/ca.crt

# Allow Redis only from application servers
sudo ufw allow from 10.0.0.0/24 to any port 6379
sudo ufw deny 6379

Configure Redis connection:
Environment Variables:
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
REDIS_DB=0

Configuration File (config-production.yml):
database:
  redis:
    host: "redis.example.com"
    port: "6379"
    password: "${REDIS_PASSWORD}"
    db: 0
    max_retries: 3
    pool_size: 10
    idle_timeout: "5m"

# Test Redis connection
redis-cli -h localhost -p 6379 -a your_redis_password ping
# Expected: PONG
# Test from application
redis-cli -h localhost -p 6379 -a your_redis_password
> SET test "Hello"
> GET test
> DEL test
> QUIT

TMI creates these tables:
| Table | Purpose |
|---|---|
| users | User accounts and profiles |
| user_providers | OAuth provider associations |
| threat_models | Threat model metadata |
| threats | Individual threats |
| diagrams | Data flow diagrams |
| diagram_cells | Diagram elements |
| documents | Reference documents |
| sources | Source code repositories |
| metadata | Key-value metadata |
| threat_model_access | Access control (RBAC) |
| schema_migrations | Migration tracking |
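
For scripted verification, the expected table names can be kept as a plain shell list (names taken from the table above) and compared against what the database reports:

```shell
# The 11 tables TMI is expected to create, as a space-separated list for use
# in verification loops (e.g. checking each against psql \dt output).
TABLES="users user_providers threat_models threats diagrams diagram_cells \
documents sources metadata threat_model_access schema_migrations"
echo "$TABLES" | wc -w
```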
TMI uses these Redis key patterns:
| Pattern | Purpose | TTL |
|---|---|---|
| session:{user_id}:{session_id} | User sessions | 24h |
| auth:token:{token_id} | JWT token cache | Token expiry |
| auth:refresh:{refresh_token_id} | Refresh tokens | 30 days |
| cache:user:{user_id} | User profile cache | 15m |
| cache:threat_model:{id} | Threat model cache | 10m |
| cache:diagram:{id} | Diagram cache | 2m |
| cache:auth:{threat_model_id} | Authorization cache | 15m |
| rate_limit:* | Rate limiting | 1m-1h |
| lock:{resource}:{id} | Distributed locks | 30s |
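
Because the patterns are colon-delimited, individual components can be recovered in maintenance scripts with POSIX parameter expansion. The key value here is a made-up example:

```shell
# Split a session key of the documented form session:{user_id}:{session_id}
# into its parts.
KEY="session:42:abc123"
REST="${KEY#session:}"     # strip the "session:" prefix
USER_ID="${REST%%:*}"      # everything before the first remaining colon
SESSION_ID="${REST#*:}"    # everything after it
echo "$USER_ID $SESSION_ID"
```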
Create /usr/local/bin/backup-tmi-db.sh:
#!/bin/bash
BACKUP_DIR="/var/backups/postgresql/tmi"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DB_NAME="tmi"
DB_USER="tmi_user"
DB_HOST="localhost"
# Create backup directory
mkdir -p $BACKUP_DIR
# Perform backup (compressed)
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME -Fc \
-f "$BACKUP_DIR/tmi_$TIMESTAMP.dump"
# Keep only last 7 days
find $BACKUP_DIR -name "tmi_*.dump" -mtime +7 -delete
# Log completion
echo "$(date): Backup completed - tmi_$TIMESTAMP.dump" >> /var/log/tmi-backup.log

Make executable and schedule:
chmod +x /usr/local/bin/backup-tmi-db.sh
# Add to crontab (daily at 2 AM)
crontab -e
0 2 * * * /usr/local/bin/backup-tmi-db.sh

# Full database backup (compressed)
pg_dump -h localhost -U tmi_user -d tmi -Fc \
-f tmi_backup_$(date +%Y%m%d).dump
# Schema only
pg_dump -h localhost -U tmi_user -d tmi --schema-only \
-f tmi_schema.sql
# SQL format (human-readable)
pg_dump -h localhost -U tmi_user -d tmi \
-f tmi_backup_$(date +%Y%m%d).sql

# From compressed dump
pg_restore -h localhost -U tmi_user -d tmi_new \
tmi_backup_20251112.dump
# From SQL file
psql -h localhost -U tmi_user -d tmi_new \
-f tmi_backup_20251112.sql
# Drop and recreate database (DANGEROUS!)
dropdb -h localhost -U postgres tmi
createdb -h localhost -U postgres -O tmi_user tmi
pg_restore -h localhost -U tmi_user -d tmi tmi_backup_20251112.dump

Redis persistence is configured in redis.conf:
# RDB snapshots
save 900 1 # After 900 sec if at least 1 key changed
save 300 10 # After 300 sec if at least 10 keys changed
save 60 10000 # After 60 sec if at least 10000 keys changed
# AOF persistence
appendonly yes
appendfsync everysec

# Trigger RDB snapshot
redis-cli -a your_redis_password BGSAVE
# Copy RDB file
cp /var/lib/redis/dump.rdb /backup/redis_dump_$(date +%Y%m%d).rdb
# Copy AOF file
cp /var/lib/redis/appendonly.aof /backup/redis_aof_$(date +%Y%m%d).aof

# Stop Redis
sudo systemctl stop redis
# Restore RDB file
sudo cp /backup/redis_dump_20251112.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb
# Start Redis
sudo systemctl start redis

For PostgreSQL performance tuning, edit postgresql.conf:
# Memory (adjust based on server RAM)
shared_buffers = 256MB # 25% of RAM
effective_cache_size = 1GB # 50-75% of RAM
maintenance_work_mem = 128MB
work_mem = 16MB
# Checkpoints
checkpoint_completion_target = 0.9
wal_buffers = 16MB
max_wal_size = 2GB
min_wal_size = 1GB
# Query planner
default_statistics_target = 100
random_page_cost = 1.1 # For SSD storage
effective_io_concurrency = 200  # For SSD storage

Use PgBouncer for connection pooling:
# Install PgBouncer
sudo apt install pgbouncer
# Configure /etc/pgbouncer/pgbouncer.ini
[databases]
tmi = host=localhost port=5432 dbname=tmi
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 100
default_pool_size = 25

Connect TMI server to PgBouncer:
POSTGRES_HOST=localhost
POSTGRES_PORT=6432  # PgBouncer port

For Redis memory management, configure in redis.conf:
# Set memory limit
maxmemory 1gb
# Eviction policy
maxmemory-policy allkeys-lru
# Memory sampling
maxmemory-samples 5

# Log slow commands in production (threshold in microseconds)
slowlog-log-slower-than 10000
slowlog-max-len 128
# Client output buffer limits
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

-- Active connections
SELECT count(*) FROM pg_stat_activity;
-- Database size
SELECT pg_size_pretty(pg_database_size('tmi'));
-- Table sizes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- Slow queries (requires pg_stat_statements)
SELECT
query,
calls,
mean_exec_time,
max_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

#!/bin/bash
# postgresql-health.sh
# Check if PostgreSQL is running
if ! pg_isready -h localhost -U tmi_user > /dev/null 2>&1; then
echo "CRITICAL: PostgreSQL is not responding"
exit 2
fi
# Check connection count
CONN_COUNT=$(psql -h localhost -U tmi_user -d tmi -t -c "SELECT count(*) FROM pg_stat_activity;")
if [ $CONN_COUNT -gt 80 ]; then
echo "WARNING: High connection count: $CONN_COUNT"
exit 1
fi
echo "OK: PostgreSQL is healthy"
exit 0

# Memory usage
redis-cli -a password info memory | grep used_memory_human
# Connected clients
redis-cli -a password info clients | grep connected_clients
# Keys by pattern
redis-cli -a password --scan --pattern "cache:*" | wc -l
# Slow log
redis-cli -a password slowlog get 10

#!/bin/bash
# redis-health.sh
# Check if Redis is running
if ! redis-cli -a your_redis_password ping > /dev/null 2>&1; then
echo "CRITICAL: Redis is not responding"
exit 2
fi
# Check memory usage
MEMORY=$(redis-cli -a your_redis_password info memory | grep used_memory_rss_human | cut -d: -f2)
echo "OK: Redis is healthy (Memory: $MEMORY)"
exit 0

# Check if PostgreSQL is running
sudo systemctl status postgresql
# Check logs
sudo tail -f /var/log/postgresql/postgresql-15-main.log
# Test connection
psql -h localhost -U tmi_user -d tmi -c "SELECT 1;"

-- Check for locks
SELECT * FROM pg_locks WHERE NOT granted;
-- Check for long-running queries
SELECT pid, now() - pg_stat_activity.query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
AND now() - pg_stat_activity.query_start > interval '1 minute';
-- Vacuum tables
VACUUM ANALYZE;

# Check if Redis is running
sudo systemctl status redis
# Check logs
sudo tail -f /var/log/redis/redis-server.log
# Test connection
redis-cli -a your_redis_password ping

# Check memory usage
redis-cli -a password info memory
# Clear cache if needed (DANGEROUS in production!)
redis-cli -a password --scan --pattern "cache:*" | xargs redis-cli -a password DEL

Next steps:

- Component Integration - Connect server to databases
- Post-Deployment - Verify database functionality
- Database Operations - Ongoing maintenance