Sasha Studio V1 Security Essentials Guide
Generated: 2025-01-05 UTC
Purpose: Practical security hardening for V1 deployment without adding complexity
Applicable To: Single-container Sasha Studio deployment
Overview
This guide provides essential security measures for Sasha Studio V1 that maintain architectural simplicity while addressing critical vulnerabilities. Every recommendation can be implemented through configuration changes or minimal code additions - no new infrastructure components required.
Related Guides:
- Sasha Studio Implementation Guide - Core architecture
- Local LLM Integration Guide - Secure local model deployment
- PII Scanning and Removal Guide - Data privacy
Security Philosophy for V1
- Simplicity First: No complex systems or additional infrastructure
- Essential Protection: Focus on critical vulnerabilities
- Easy Implementation: Configuration changes and simple code
- Incremental Improvement: Foundation for V2 enhancements
Container Security Basics
Run as Non-Root User
Simple one-line change with significant security impact:
# In your Dockerfile, after installing dependencies:
USER node:node # Switch from root to node user
# Or create a dedicated user:
RUN useradd -m -u 1001 -s /bin/bash sasha
USER sasha
Remove Unnecessary Tools
Reduce attack surface by removing tools after setup:
# After using curl/wget for setup
RUN apt-get remove --purge -y curl wget netcat && \
apt-get autoremove -y && \
rm -rf /var/lib/apt/lists/*
Read-Only Container Filesystem
# docker-compose.yml
services:
sasha:
security_opt:
- no-new-privileges:true # Prevent privilege escalation
read_only: true # Make filesystem read-only
tmpfs:
- /tmp
- /app/uploads
- /var/run
- /var/cache
Secrets Management
File-Based Secrets (Simple & Secure)
Replace environment variables with file-based secrets:
// config/secrets.js - Simple secret loading
const fs = require('fs');
const path = require('path');
class SecretManager {
constructor() {
this.secretsPath = process.env.SECRETS_PATH || '/run/secrets';
}
loadSecret(name, required = true) {
try {
const secretPath = path.join(this.secretsPath, name);
return fs.readFileSync(secretPath, 'utf8').trim();
} catch (error) {
if (required) {
throw new Error(`Required secret '${name}' not found`);
}
return null;
}
}
loadSecrets() {
return {
jwtSecret: this.loadSecret('jwt_secret'),
sessionSecret: this.loadSecret('session_secret'),
encryptionKey: this.loadSecret('encryption_key'),
anthropicKey: this.loadSecret('anthropic_key', false),
openaiKey: this.loadSecret('openai_key', false),
databaseUrl: this.loadSecret('database_url')
};
}
}
module.exports = new SecretManager();
Docker Secrets Setup
# docker-compose.yml
services:
sasha:
secrets:
- jwt_secret
- session_secret
- encryption_key
- anthropic_key
- database_url
secrets:
jwt_secret:
file: ./secrets/jwt_secret.txt
session_secret:
file: ./secrets/session_secret.txt
encryption_key:
file: ./secrets/encryption_key.txt
anthropic_key:
file: ./secrets/anthropic_key.txt
database_url:
file: ./secrets/database_url.txt
Creating Strong Secrets
#!/bin/bash
# generate-secrets.sh
mkdir -p ./secrets
# Generate strong random secrets
openssl rand -hex 32 > ./secrets/jwt_secret.txt
openssl rand -hex 32 > ./secrets/session_secret.txt
openssl rand -hex 32 > ./secrets/encryption_key.txt
# Set proper permissions
chmod 600 ./secrets/*.txt
Nginx Security Headers
Add these essential headers to your existing nginx.conf:
# /etc/nginx/nginx.conf or sites-available/default
server {
listen 80;
server_name _;
# Redirect all HTTP to HTTPS
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name your-domain.com;
# SSL configuration
ssl_certificate /etc/ssl/certs/sasha.crt;
ssl_certificate_key /etc/ssl/private/sasha.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Security headers
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
# Basic CSP (adjust as needed)
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self';" always;
# Hide version information
server_tokens off;
proxy_hide_header X-Powered-By;
# Limit request sizes
client_max_body_size 10M;
client_body_buffer_size 1M;
# Rate limiting at nginx level
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req zone=api burst=20 nodelay;
# Proxy settings
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /api/ {
proxy_pass http://localhost:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# Apply rate limiting to API
limit_req zone=api;
}
}
π¦ Rate Limiting & Request Protection
API Rate Limiting
// middleware/rateLimiter.js
const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis');
// Different limits for different endpoints
const createRateLimiter = (options) => {
const defaults = {
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // default requests
message: 'Too many requests, please try again later.',
standardHeaders: true,
legacyHeaders: false,
};
return rateLimit({
...defaults,
...options,
store: new RedisStore({
client: redis,
prefix: 'rl:',
}),
keyGenerator: (req) => {
// Use user ID if authenticated, otherwise IP
return req.user?.id || req.ip;
}
});
};
// Apply different limits
app.use('/api/auth/', createRateLimiter({ max: 5, windowMs: 15 * 60 * 1000 })); // 5 requests per 15 min
app.use('/api/chat/', createRateLimiter({ max: 50, windowMs: 60 * 1000 })); // 50 per minute
app.use('/api/', createRateLimiter({ max: 100 })); // 100 per 15 min default
Request Size Limiting
// Prevent large payload attacks
app.use(express.json({ limit: '1mb' }));
app.use(express.urlencoded({ extended: true, limit: '1mb' }));
// File upload limits
const multer = require('multer');
const upload = multer({
limits: {
fileSize: 10 * 1024 * 1024, // 10MB
files: 5 // max 5 files at once
},
fileFilter: (req, file, cb) => {
// Whitelist safe file types
const allowedTypes = /jpeg|jpg|png|gif|pdf|docx|xlsx|txt|md/;
const extname = allowedTypes.test(path.extname(file.originalname).toLowerCase());
const mimetype = allowedTypes.test(file.mimetype);
if (mimetype && extname) {
return cb(null, true);
} else {
cb(new Error('Invalid file type'));
}
}
});
LLM Security
Input Sanitization
// security/llmSecurity.js
class LLMSecurityFilter {
constructor() {
// Patterns that might indicate prompt injection
this.dangerousPatterns = [
/ignore\s+(previous|all|above)/gi,
/system\s*:/gi,
/assistant\s*:/gi,
/\[INST\]/gi,
/\[\/INST\]/gi,
/###\s*instruction/gi,
/###\s*response/gi,
/forget\s+everything/gi,
/new\s+instructions?:/gi,
/you\s+are\s+now/gi,
/act\s+as\s+(a|an)/gi,
/roleplay\s+as/gi
];
// Patterns that might leak sensitive data
this.sensitivePatterns = [
/password\s*[:=]/gi,
/api[_\s-]?key\s*[:=]/gi,
/secret\s*[:=]/gi,
/token\s*[:=]/gi,
/private[_\s-]?key/gi
];
}
sanitizeInput(input) {
if (!input || typeof input !== 'string') {
return '';
}
// Length limit
let cleaned = input.substring(0, 10000);
// Check for dangerous patterns
for (const pattern of this.dangerousPatterns) {
if (pattern.test(cleaned)) {
// Log security event
this.logSecurityEvent('prompt_injection_attempt', {
pattern: pattern.source,
input: cleaned.substring(0, 100) + '...'
});
// Replace dangerous content
cleaned = cleaned.replace(pattern, '[BLOCKED]');
}
}
// Check for sensitive data
for (const pattern of this.sensitivePatterns) {
if (pattern.test(cleaned)) {
this.logSecurityEvent('sensitive_data_blocked', {
pattern: pattern.source
});
cleaned = cleaned.replace(pattern, '[REDACTED]');
}
}
return cleaned;
}
sanitizeOutput(output) {
// Remove any accidentally included secrets
const secrets = this.detectSecrets(output);
if (secrets.length > 0) {
this.logSecurityEvent('output_contains_secrets', {
count: secrets.length
});
let cleaned = output;
for (const secret of secrets) {
cleaned = cleaned.replace(secret, '[REDACTED]');
}
return cleaned;
}
return output;
}
detectSecrets(text) {
const patterns = [
/sk-[a-zA-Z0-9]{48}/g, // OpenAI keys
/anthropic-[a-zA-Z0-9]{32}/g, // Anthropic keys
/[a-f0-9]{64}/g, // SHA256 hashes (possible secrets)
/[a-zA-Z0-9+/]{40,}={0,2}/g // Base64 encoded secrets
];
const found = [];
for (const pattern of patterns) {
const matches = text.match(pattern) || [];
found.push(...matches);
}
return [...new Set(found)];
}
logSecurityEvent(event, details) {
const logEntry = {
timestamp: new Date().toISOString(),
event,
details,
severity: this.getSeverity(event)
};
// Write to security log
require('./logger').security(logEntry);
// Alert on critical events
if (logEntry.severity === 'critical') {
// Send alert (webhook, email, etc.)
this.sendSecurityAlert(logEntry);
}
}
getSeverity(event) {
const severityMap = {
'prompt_injection_attempt': 'high',
'sensitive_data_blocked': 'medium',
'output_contains_secrets': 'critical'
};
return severityMap[event] || 'low';
}
}
module.exports = new LLMSecurityFilter();
Safe LLM Integration
// Wrap LLM calls with security
async function safeLLMCall(messages, options = {}) {
const security = require('./security/llmSecurity');
// Sanitize all message content
const sanitizedMessages = messages.map(msg => ({
...msg,
content: security.sanitizeInput(msg.content)
}));
try {
// Make LLM call
const response = await llmClient.chat({
messages: sanitizedMessages,
...options,
// Safety settings
temperature: Math.min(options.temperature || 0.7, 0.9),
max_tokens: Math.min(options.max_tokens || 2000, 4000)
});
// Sanitize output
const sanitizedResponse = security.sanitizeOutput(response.content);
return {
...response,
content: sanitizedResponse
};
} catch (error) {
// Don't leak error details
console.error('LLM call failed:', error.message);
throw new Error('AI service temporarily unavailable');
}
}
Session Security
Secure Session Configuration
// config/session.js
const session = require('express-session');
const RedisStore = require('connect-redis')(session);
const sessionConfig = {
store: new RedisStore({
client: redis,
prefix: 'sess:',
ttl: 60 * 60 // 1 hour
}),
secret: secrets.sessionSecret,
name: 'sasha.sid', // Don't use default 'connect.sid'
resave: false,
saveUninitialized: false,
rolling: true, // Reset expiry on activity
cookie: {
secure: true, // HTTPS only
httpOnly: true, // No JS access
maxAge: 1000 * 60 * 60, // 1 hour
sameSite: 'strict', // CSRF protection
domain: process.env.COOKIE_DOMAIN || undefined
}
};
// In production, trust proxy
if (process.env.NODE_ENV === 'production') {
app.set('trust proxy', 1);
}
app.use(session(sessionConfig));
// Session security middleware
app.use((req, res, next) => {
// Regenerate session ID on login
if (req.session.justLoggedIn) {
req.session.regenerate((err) => {
if (err) next(err);
req.session.justLoggedIn = false;
next();
});
} else {
next();
}
});
// Concurrent session limiting
const activeSessions = new Map();
app.use((req, res, next) => {
if (req.session.userId) {
const userSessions = activeSessions.get(req.session.userId) || new Set();
// Limit to 3 concurrent sessions
if (userSessions.size >= 3 && !userSessions.has(req.sessionID)) {
return res.status(403).json({ error: 'Maximum concurrent sessions exceeded' });
}
userSessions.add(req.sessionID);
activeSessions.set(req.session.userId, userSessions);
}
next();
});
Backup & Recovery
Automated Daily Backups
#!/bin/bash
# scripts/backup.sh - Run via cron daily
BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=7
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup database
echo "Backing up database..."
docker exec sasha-studio pg_dump -U postgres sasha | gzip > $BACKUP_DIR/sasha-db-$DATE.sql.gz
# Backup uploaded files
echo "Backing up files..."
tar -czf $BACKUP_DIR/sasha-files-$DATE.tar.gz /data/uploads
# Backup configuration
echo "Backing up configuration..."
tar -czf $BACKUP_DIR/sasha-config-$DATE.tar.gz /config
# Test backup integrity
echo "Verifying backups..."
gzip -t $BACKUP_DIR/sasha-db-$DATE.sql.gz || echo "Database backup corrupted!"
tar -tzf $BACKUP_DIR/sasha-files-$DATE.tar.gz > /dev/null || echo "Files backup corrupted!"
# Clean old backups
echo "Cleaning old backups..."
find $BACKUP_DIR -name "*.gz" -mtime +$RETENTION_DAYS -delete
# Log backup status
echo "Backup completed: $DATE" >> $BACKUP_DIR/backup.log
# Optional: Upload to cloud storage
# aws s3 sync $BACKUP_DIR s3://your-backup-bucket/ --exclude "*" --include "*.gz"
Backup Cron Setup
# Add to crontab
0 2 * * * /app/scripts/backup.sh >> /logs/backup.log 2>&1
Quick Recovery Script
#!/bin/bash
# scripts/restore.sh
if [ $# -eq 0 ]; then
echo "Usage: ./restore.sh YYYY-MM-DD"
exit 1
fi
DATE=$1
BACKUP_DIR="/backups"
# Restore database
echo "Restoring database from $DATE..."
gunzip -c $BACKUP_DIR/sasha-db-$DATE*.sql.gz | docker exec -i sasha-studio psql -U postgres sasha
# Restore files
echo "Restoring files..."
tar -xzf $BACKUP_DIR/sasha-files-$DATE*.tar.gz -C /
echo "Restore completed!"
Security Monitoring
Security Event Logging
// utils/securityLogger.js
const fs = require('fs');
const path = require('path');
class SecurityLogger {
constructor() {
this.logPath = '/logs/security.log';
this.alertThreshold = {
failed_login: 5,
rate_limit: 20,
file_upload_blocked: 10
};
this.eventCounts = new Map();
}
log(event) {
const entry = {
timestamp: new Date().toISOString(),
type: event.type,
severity: event.severity || 'info',
user: event.user || 'anonymous',
ip: event.ip,
userAgent: event.userAgent,
details: event.details || {}
};
// Write to log file
fs.appendFileSync(this.logPath, JSON.stringify(entry) + '\n');
// Check for alert conditions
this.checkAlerts(event.type);
// Log critical events to console
if (entry.severity === 'critical') {
console.error('π¨ SECURITY EVENT:', entry);
}
}
checkAlerts(eventType) {
const count = (this.eventCounts.get(eventType) || 0) + 1;
this.eventCounts.set(eventType, count);
// Check threshold
const threshold = this.alertThreshold[eventType];
if (threshold && count >= threshold) {
this.sendAlert({
message: `Security threshold exceeded: ${eventType} (${count} events)`,
severity: 'high'
});
// Reset counter
this.eventCounts.set(eventType, 0);
}
}
sendAlert(alert) {
// Simple console alert for V1
console.error('π¨ SECURITY ALERT:', alert.message);
// V1.1: Add email/webhook notification
// sendEmail(admin@example.com, alert);
// postToWebhook(process.env.SECURITY_WEBHOOK, alert);
}
// Common security events
failedLogin(username, ip) {
this.log({
type: 'failed_login',
severity: 'warning',
user: username,
ip: ip,
details: { timestamp: Date.now() }
});
}
suspiciousActivity(userId, activity, ip) {
this.log({
type: 'suspicious_activity',
severity: 'high',
user: userId,
ip: ip,
details: { activity }
});
}
fileUploadBlocked(userId, filename, reason) {
this.log({
type: 'file_upload_blocked',
severity: 'medium',
user: userId,
details: { filename, reason }
});
}
}
module.exports = new SecurityLogger();
Daily Security Report
// scripts/dailySecurityReport.js
const fs = require('fs');
const readline = require('readline');
async function generateSecurityReport() {
const events = [];
const summary = {
total: 0,
byType: {},
bySeverity: {},
topIPs: {},
topUsers: {}
};
// Read security log
const fileStream = fs.createReadStream('/logs/security.log');
const rl = readline.createInterface({
input: fileStream,
crlfDelay: Infinity
});
for await (const line of rl) {
try {
const event = JSON.parse(line);
events.push(event);
// Update summary
summary.total++;
summary.byType[event.type] = (summary.byType[event.type] || 0) + 1;
summary.bySeverity[event.severity] = (summary.bySeverity[event.severity] || 0) + 1;
summary.topIPs[event.ip] = (summary.topIPs[event.ip] || 0) + 1;
summary.topUsers[event.user] = (summary.topUsers[event.user] || 0) + 1;
} catch (e) {
// Skip invalid lines
}
}
// Generate report
const report = {
date: new Date().toISOString().split('T')[0],
summary,
criticalEvents: events.filter(e => e.severity === 'critical'),
recommendations: generateRecommendations(summary)
};
// Save report
fs.writeFileSync(
`/logs/security-report-${report.date}.json`,
JSON.stringify(report, null, 2)
);
console.log('Security Report:', report.date);
console.log('Total Events:', summary.total);
console.log('Critical Events:', report.criticalEvents.length);
return report;
}
function generateRecommendations(summary) {
const recommendations = [];
if (summary.byType.failed_login > 50) {
recommendations.push('High number of failed logins detected. Consider implementing CAPTCHA.');
}
if (summary.bySeverity.critical > 0) {
recommendations.push('Critical security events detected. Immediate review required.');
}
// Check for suspicious IPs
const suspiciousIPs = Object.entries(summary.topIPs)
.filter(([ip, count]) => count > 100)
.map(([ip]) => ip);
if (suspiciousIPs.length > 0) {
recommendations.push(`Consider blocking IPs: ${suspiciousIPs.join(', ')}`);
}
return recommendations;
}
// Run if called directly
if (require.main === module) {
generateSecurityReport().catch(console.error);
}
Health Monitoring
Comprehensive Health Check
// routes/health.js
const si = require('systeminformation');
router.get('/health', async (req, res) => {
const checks = {
status: 'checking',
timestamp: new Date().toISOString(),
services: {}
};
try {
// Database check
checks.services.database = await checkDatabase();
// Redis check
checks.services.redis = await checkRedis();
// Disk space check
const disk = await si.fsSize();
const mainDisk = disk.find(d => d.mount === '/') || disk[0];
checks.services.disk = {
healthy: mainDisk.use < 90,
usage: `${mainDisk.use}%`,
available: `${Math.round(mainDisk.available / 1024 / 1024 / 1024)}GB`
};
// Memory check
const mem = await si.mem();
checks.services.memory = {
healthy: (mem.used / mem.total) < 0.85,
usage: `${Math.round((mem.used / mem.total) * 100)}%`,
available: `${Math.round(mem.available / 1024 / 1024 / 1024)}GB`
};
// LLM service check
checks.services.llm = await checkLLMService();
// Overall health
checks.healthy = Object.values(checks.services)
.every(service => service.healthy !== false);
checks.status = checks.healthy ? 'healthy' : 'unhealthy';
res.status(checks.healthy ? 200 : 503).json(checks);
} catch (error) {
checks.status = 'error';
checks.error = error.message;
res.status(503).json(checks);
}
});
async function checkDatabase() {
try {
const result = await db.query('SELECT 1');
return { healthy: true, latency: `${Date.now() - start}ms` };
} catch (error) {
return { healthy: false, error: error.message };
}
}
async function checkRedis() {
try {
const start = Date.now();
await redis.ping();
return { healthy: true, latency: `${Date.now() - start}ms` };
} catch (error) {
return { healthy: false, error: error.message };
}
}
async function checkLLMService() {
try {
// Check local Ollama
const response = await fetch('http://localhost:11434/api/tags');
const data = await response.json();
return {
healthy: true,
models: data.models.length,
provider: 'ollama'
};
} catch (error) {
// Fallback to cloud check
try {
// Simple connectivity check
const response = await fetch('https://api.anthropic.com/v1/messages', {
method: 'OPTIONS'
});
return { healthy: true, provider: 'cloud' };
} catch (e) {
return { healthy: false, error: 'No LLM service available' };
}
}
}
Data Privacy Features
PII Detection on Upload
// middleware/piiDetection.js
const PIIScanner = require('../utils/piiScanner');
const piiDetectionMiddleware = (options = {}) => {
const scanner = new PIIScanner();
return async (req, res, next) => {
if (!req.file && !req.files) {
return next();
}
const files = req.files || [req.file];
const warnings = [];
for (const file of files) {
try {
const scanResult = await scanner.scanFile(file.path);
if (scanResult.hasPII) {
warnings.push({
filename: file.originalname,
piiCount: scanResult.findings.length,
types: [...new Set(scanResult.findings.map(f => f.type))],
severity: scanResult.severity
});
// Log PII detection
securityLogger.log({
type: 'pii_detected',
severity: 'medium',
user: req.user?.id,
details: {
filename: file.originalname,
findings: scanResult.findings.length
}
});
}
} catch (error) {
console.error('PII scan failed:', error);
}
}
// Add warnings to request
if (warnings.length > 0) {
req.piiWarnings = warnings;
// Optionally block upload based on severity
const criticalPII = warnings.some(w => w.severity === 'critical');
if (options.blockOnCritical && criticalPII) {
// Clean up uploaded files
for (const file of files) {
fs.unlinkSync(file.path);
}
return res.status(400).json({
error: 'Upload blocked: Critical PII detected',
warnings
});
}
}
next();
};
};
// Usage
app.post('/api/upload',
upload.single('file'),
piiDetectionMiddleware({ blockOnCritical: true }),
async (req, res) => {
// Warn user about PII
if (req.piiWarnings) {
res.json({
success: true,
file: req.file,
warnings: req.piiWarnings,
message: 'File uploaded but contains PII. Please review.'
});
} else {
res.json({
success: true,
file: req.file
});
}
}
);
Data Retention Policy
// scripts/dataRetention.js - Run daily via cron
const moment = require('moment');
async function enforceDataRetention() {
const policies = {
chat_sessions: 90, // days
uploaded_files: 30,
audit_logs: 730, // 2 years
security_logs: 365,
temp_files: 1
};
console.log('Enforcing data retention policies...');
// Clean chat sessions
const sessionCutoff = moment().subtract(policies.chat_sessions, 'days').toDate();
await db.query('DELETE FROM sessions WHERE created_at < $1', [sessionCutoff]);
// Clean uploaded files
const fileCutoff = moment().subtract(policies.uploaded_files, 'days');
const uploadDir = '/data/uploads';
const files = fs.readdirSync(uploadDir);
for (const file of files) {
const filePath = path.join(uploadDir, file);
const stats = fs.statSync(filePath);
if (moment(stats.mtime).isBefore(fileCutoff)) {
fs.unlinkSync(filePath);
console.log(`Deleted old file: ${file}`);
}
}
// Archive old logs
const logCutoff = moment().subtract(policies.audit_logs, 'days');
// Implementation depends on your log rotation strategy
console.log('Data retention enforcement completed');
}
// Run if called directly
if (require.main === module) {
enforceDataRetention()
.then(() => process.exit(0))
.catch(err => {
console.error(err);
process.exit(1);
});
}
Security Checklist for V1
Essential Security Measures
Container Security:
- Run as non-root user
- Remove unnecessary tools after setup
- Enable read-only filesystem with specific writable mounts
- Set resource limits in docker-compose
Secrets Management:
- Replace environment variables with file-based secrets
- Generate strong random secrets
- Set proper file permissions (600) on secret files
- Never commit secrets to version control
Network Security:
- Enable HTTPS only (redirect HTTP)
- Configure all security headers in Nginx
- Hide version information
- Implement request size limits
- Enable rate limiting at Nginx level
Application Security:
- Implement API rate limiting with Redis
- Add request size validation
- Sanitize all LLM inputs
- Filter LLM outputs for secrets
- Validate file uploads (type and size)
Session Security:
- Use secure session cookies (httpOnly, secure, sameSite)
- Implement session timeout (1 hour)
- Regenerate session ID on login
- Limit concurrent sessions per user
Data Protection:
- Set up automated daily backups
- Test backup restoration process
- Implement data retention policies
- Scan uploads for PII
- Log all security events
Monitoring:
- Implement health check endpoint
- Set up security event logging
- Generate daily security reports
- Monitor disk space and memory usage
- Alert on critical security events
Quick Implementation Priority
Day 1 (Critical):
- Non-root user
- HTTPS with security headers
- File-based secrets
- Rate limiting
Week 1 (Important):
- LLM input sanitization
- Session security
- Backup automation
- Health monitoring
Month 1 (Enhancement):
- PII scanning
- Security reporting
- Data retention
- Advanced monitoring
Next Steps for V2
Once V1 is stable, consider these enhancements:
- Multi-Factor Authentication (MFA)
- Web Application Firewall (WAF)
- Intrusion Detection System (IDS)
- Centralized logging with ELK stack
- Automated security scanning
- Compliance automation (SOC2, ISO27001)
- Zero-trust network architecture
- Hardware security module (HSM) for encryption keys
This guide provides practical security hardening for Sasha Studio V1 without adding architectural complexity. Every recommendation can be implemented incrementally while maintaining the simple single-container design.