Sliplane Container-Per-User Deployment Strategy
Generated: 2025-01-12 UTC
Purpose: Complete deployment strategy for independent container-per-user architecture
Architecture: Direct port access, no nginx, no central authentication
Executive Summary
This document provides the deployment strategy for Sasha Studio on Sliplane using independent container-per-user architecture, where each user/organization receives their own completely isolated Sasha container instance.
Key Principles
- Complete Isolation: Each container is independent with its own database, auth, and storage
- Direct Access: No nginx reverse proxy - users connect directly to their container port
- No Central Services: No shared authentication or shared services between containers
- Independent Cloud Storage: Each container can mount its own Google Drive/SharePoint/S3
Key Findings
- Optimal Starting Point: Medium tier (€24/month) supporting 8-10 isolated users
- Best Scale Value: Large tier (€44/month) supporting 20-25 isolated users
- Enterprise Scale: X-Large tier (€76/month) supporting 40-50 isolated users
- Cost Per User: €1.52-3.00 depending on tier and utilization
Architecture Overview
Independent Container Architecture
Each user/organization receives a completely isolated Sasha container with dedicated resources.
```mermaid
graph TB
    subgraph Server["Sliplane Server"]
        subgraph Containers["Isolated Containers"]
            C1["Organization 1<br/>Port 3001<br/>Own Auth<br/>Own Storage<br/>Own Cloud Drives"]
            C2["Organization 2<br/>Port 3002<br/>Own Auth<br/>Own Storage<br/>Own Cloud Drives"]
            C3["Organization 3<br/>Port 3003<br/>Own Auth<br/>Own Storage<br/>Own Cloud Drives"]
            CN["Organization N<br/>Port 300N<br/>Own Auth<br/>Own Storage<br/>Own Cloud Drives"]
        end
    end
    U1[User 1] -->|Direct: server:3001| C1
    U2[User 2] -->|Direct: server:3002| C2
    U3[User 3] -->|Direct: server:3003| C3
    UN[User N] -->|Direct: server:300N| CN
    GD1[Google Drive 1] -.->|Mounted| C1
    SP2[SharePoint 2] -.->|Mounted| C2
    S33[S3 Bucket 3] -.->|Mounted| C3
    CDN[Cloud Storage N] -.->|Mounted| CN
```
What Each Container Has
- Own Database: SQLite database for user management and settings
- Own Authentication: Independent login system, no shared auth
- Own Workspace: Isolated file system and workspace
- Own Cloud Mounts: Can connect to their own Google Drive, SharePoint, S3
- Own Configuration: Independent API keys and settings
- Own Port: Direct access via unique port number
Capacity Analysis
Resource Requirements Per Container
- RAM: 300-400MB typical, 600MB peak (during Claude CLI operations)
- CPU: 0.1-0.2 vCPU average usage
- Disk: 2-5GB per container (app + workspace + documents)
- Network: Minimal bandwidth, burst during file sync
Sliplane Tier Capacity
| Tier | Specs | Price | Max Containers | Safe Containers | Cost Per User |
|---|---|---|---|---|---|
| Base | 2 vCPU, 2GB RAM, 40GB | €9/mo | 5-6 | 3-4 | €2.25-3.00 |
| Medium | 3 vCPU, 4GB RAM, 80GB | €24/mo | 12-13 | 8-10 | €2.40-3.00 |
| Large | 4 vCPU, 8GB RAM, 160GB | €44/mo | 26-28 | 20-25 | €1.76-2.20 |
| X-Large | 8 vCPU, 16GB RAM, 240GB | €76/mo | 53-55 | 40-50 | €1.52-1.90 |
| XX-Large | 16 vCPU, 32GB RAM, 360GB | €224/mo | 106-110 | 80-100 | €2.24-2.80 |
Note: "Safe Containers" accounts for:
- Operating system overhead (~500MB)
- Docker daemon overhead (~200MB)
- Peak usage spikes
- Buffer for good performance
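As a rough back-of-envelope check on the "Safe Containers" column, the overhead and per-container figures from this document can be plugged into a small helper (these are the document's estimates, not measured values):

```shell
# Estimate the "safe" container count for a tier: subtract OS (~500MB) and
# Docker daemon (~200MB) overhead, then divide the remainder by the 400MB
# mem_limit each container is given in the compose configuration.
estimate_safe_containers() {
  total_ram_mb=$1
  overhead_mb=700
  per_container_mb=400
  echo $(( (total_ram_mb - overhead_mb) / per_container_mb ))
}

estimate_safe_containers 4096   # Medium tier (4GB): prints 8
```

The result lands at the low end of each tier's safe range because it uses the hard memory limit rather than typical usage.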
Implementation
Docker Compose Configuration
version: '3.8'

services:
  # Organization 1 - Acme Corp
  sasha-acme:
    image: sasha-studio:latest
    container_name: sasha-acme
    restart: unless-stopped
    ports:
      - "3001:3005"  # External:Internal
    volumes:
      - acme-workspace:/app/workspaces
      - acme-data:/app/data
      - acme-uploads:/app/uploads
      - acme-config:/app/config
      # Optional: Mount their Google Drive
      - /mnt/gdrive-acme:/app/workspaces/google-drive:ro
    environment:
      - NODE_ENV=production
      - ORG_NAME=Acme Corp
      - RUNNING_IN_DOCKER=true
      # Each container has its own secrets
      - JWT_SECRET=acme-unique-secret-key
      - SESSION_SECRET=acme-session-secret
    mem_limit: 400m
    cpus: 0.2
    cap_add:
      - SYS_ADMIN  # For FUSE mounting
    devices:
      - /dev/fuse  # For cloud storage mounting

  # Organization 2 - TechStart
  sasha-techstart:
    image: sasha-studio:latest
    container_name: sasha-techstart
    restart: unless-stopped
    ports:
      - "3002:3005"
    volumes:
      - techstart-workspace:/app/workspaces
      - techstart-data:/app/data
      - techstart-uploads:/app/uploads
      - techstart-config:/app/config
      # Their SharePoint mount
      - /mnt/sharepoint-techstart:/app/workspaces/sharepoint:ro
    environment:
      - NODE_ENV=production
      - ORG_NAME=TechStart
      - RUNNING_IN_DOCKER=true
      - JWT_SECRET=techstart-unique-secret
      - SESSION_SECRET=techstart-session-secret
    mem_limit: 400m
    cpus: 0.2
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse

  # Add more organizations as needed...

volumes:
  # Acme volumes
  acme-workspace:
  acme-data:
  acme-uploads:
  acme-config:
  # TechStart volumes
  techstart-workspace:
  techstart-data:
  techstart-uploads:
  techstart-config:
Docker Volume Isolation Explained
How Each Container Has Its Own Volumes
Docker volumes provide complete data isolation between containers. Each organization's container has its own set of named volumes that:
- Are physically separated on the host filesystem
- Are independently managed by Docker
- Persist independently across container restarts
- Cannot be accessed by other containers
Volume Structure Per Container
Each container has 4 dedicated volumes:
# For organization "acme"
acme-workspace: # /app/workspaces - Claude CLI projects, documents
acme-data: # /app/data - SQLite database, application data
acme-uploads: # /app/uploads - User uploaded files
acme-config: # /app/config - API keys, configuration files
Physical Storage Location
Docker stores these volumes on the host filesystem:
# On the Sliplane server
/var/lib/docker/volumes/
├── acme-workspace/_data/        # Acme's workspace files
├── acme-data/_data/             # Acme's database
├── acme-uploads/_data/          # Acme's uploads
├── acme-config/_data/           # Acme's configuration
├── techstart-workspace/_data/   # TechStart's workspace (completely separate)
├── techstart-data/_data/        # TechStart's database (isolated)
├── techstart-uploads/_data/     # TechStart's uploads (independent)
└── techstart-config/_data/      # TechStart's configuration (private)
Why This Matters
- Data Privacy: Acme's data is physically separated from TechStart's data
- No Cross-Contamination: Container crashes or issues don't affect other organizations' data
- Independent Backups: Each organization's volumes can be backed up separately
- Easy Migration: Move one organization without affecting others
- Clear Billing: Storage usage is clearly attributable to each organization
Volume Lifecycle Management
# Create volumes for new organization
docker volume create buildco-workspace
docker volume create buildco-data
docker volume create buildco-uploads
docker volume create buildco-config
# List volumes for specific organization
docker volume ls | grep buildco
# Backup specific organization's data
docker run --rm \
-v buildco-data:/source \
-v /backups:/backup \
alpine tar czf /backup/buildco-data-$(date +%Y%m%d).tar.gz /source
# Remove organization (preserves volumes by default)
docker-compose stop sasha-buildco
docker-compose rm sasha-buildco
# Volumes still exist and contain all data!
# Completely remove organization including data
docker volume rm buildco-workspace buildco-data buildco-uploads buildco-config
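Since the four `docker volume create` calls above follow a fixed naming pattern, they can be generated from the organization name. A small helper sketch (the `xargs` line is left commented out so the naming logic can be checked without a Docker daemon):

```shell
# Emit the four canonical volume names for an organization
org_volume_names() {
  org=$1
  for suffix in workspace data uploads config; do
    echo "${org}-${suffix}"
  done
}

# Create them all in one go:
# org_volume_names buildco | xargs -n1 docker volume create
```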
Volume Independence Example
# Even if containers run on same server, volumes are isolated:
services:
  sasha-acme:
    volumes:
      - acme-data:/app/data  # Only sasha-acme can access this
  sasha-techstart:
    volumes:
      - techstart-data:/app/data  # Only sasha-techstart can access this
# Attempting cross-access would fail:
# sasha-acme CANNOT access techstart-data
# sasha-techstart CANNOT access acme-data
Storage Considerations
- Typical Usage: 2-5GB per organization
- Growth: Depends on document uploads and Claude CLI usage
- Monitoring: docker system df -v shows volume usage per organization
- Cleanup: Old volumes persist until explicitly removed
Management Scripts
Container Management (manage-containers.sh)
#!/bin/bash
# Start all containers
start_all() {
docker-compose up -d
echo "✅ All containers started"
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
}
# Stop specific organization
stop_org() {
ORG=$1
docker stop sasha-${ORG}
echo "✅ Stopped container for ${ORG}"
}
# Restart specific organization
restart_org() {
ORG=$1
docker restart sasha-${ORG}
echo "✅ Restarted container for ${ORG}"
}
# View logs for organization
logs_org() {
ORG=$1
docker logs -f sasha-${ORG}
}
# Backup organization data
backup_org() {
ORG=$1
BACKUP_DIR="/backups/${ORG}-$(date +%Y%m%d)"
docker run --rm \
-v ${ORG}-data:/data \
-v ${BACKUP_DIR}:/backup \
alpine tar czf /backup/data.tar.gz /data
echo "✅ Backed up ${ORG} to ${BACKUP_DIR}"
}
# Monitor all containers
monitor() {
watch -n 2 'docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"'
}
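manage-containers.sh above only defines functions, so it needs a small dispatcher at the end of the file to be callable from the command line. A minimal sketch (argument validation kept deliberately simple):

```shell
# Route the first CLI argument to the matching function,
# e.g. ./manage-containers.sh stop_org acme
dispatch() {
  cmd=$1; shift
  case "$cmd" in
    start_all|monitor) "$cmd" ;;
    stop_org|restart_org|logs_org|backup_org) "$cmd" "$1" ;;
    *) echo "Usage: manage-containers.sh <command> [org]" >&2; return 1 ;;
  esac
}

# Last line of the script:
# dispatch "$@"
```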
Add New Organization (add-org.sh)
#!/bin/bash
ORG_NAME=$1
PORT=$2
if [ -z "$ORG_NAME" ] || [ -z "$PORT" ]; then
echo "Usage: ./add-org.sh <org-name> <port>"
exit 1
fi
# Generate unique secrets
JWT_SECRET=$(openssl rand -base64 32)
SESSION_SECRET=$(openssl rand -base64 32)
# Write a per-organization compose file. (Appending to docker-compose.yml
# would place the new service block after the top-level volumes: key,
# producing invalid YAML.)
cat > docker-compose.${ORG_NAME}.yml << EOF
services:
  sasha-${ORG_NAME}:
    image: sasha-studio:latest
    container_name: sasha-${ORG_NAME}
    restart: unless-stopped
    ports:
      - "${PORT}:3005"
    volumes:
      - ${ORG_NAME}-workspace:/app/workspaces
      - ${ORG_NAME}-data:/app/data
      - ${ORG_NAME}-uploads:/app/uploads
      - ${ORG_NAME}-config:/app/config
    environment:
      - NODE_ENV=production
      - ORG_NAME=${ORG_NAME}
      - RUNNING_IN_DOCKER=true
      - JWT_SECRET=${JWT_SECRET}
      - SESSION_SECRET=${SESSION_SECRET}
    mem_limit: 400m
    cpus: 0.2
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
volumes:
  ${ORG_NAME}-workspace:
  ${ORG_NAME}-data:
  ${ORG_NAME}-uploads:
  ${ORG_NAME}-config:
EOF
echo "✅ Added ${ORG_NAME} on port ${PORT}"
echo "Run 'docker-compose -f docker-compose.yml -f docker-compose.${ORG_NAME}.yml up -d sasha-${ORG_NAME}' to start"
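add-org.sh assumes the chosen port is free. A hedged guard that could be added near the top of the script (the docker-compose.yml filename and the `:3005` internal port are the conventions used in this document):

```shell
# Return success if the external port is already mapped in a compose file
port_in_use() {
  port=$1
  compose_file=${2:-docker-compose.yml}
  grep -q "\"${port}:3005\"" "$compose_file" 2>/dev/null
}

# Near the top of add-org.sh:
# if port_in_use "$PORT"; then echo "Port $PORT already assigned"; exit 1; fi
```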
Cloud Storage Integration
Each Container's Independent Cloud Storage
Every container can mount its own cloud storage independently:
# In docker-compose.yml for each container
volumes:
  # Their Google Drive
  - /mnt/gdrive-${ORG}:/app/workspaces/google-drive
  # Their SharePoint
  - /mnt/sharepoint-${ORG}:/app/workspaces/sharepoint
  # Their S3 bucket
  - /mnt/s3-${ORG}:/app/workspaces/s3
Cloud Mount Setup Per Organization
# Mount Google Drive for Acme
rclone mount gdrive-acme: /mnt/gdrive-acme \
--daemon \
--allow-other \
--vfs-cache-mode writes
# Mount SharePoint for TechStart
rclone mount sharepoint-techstart: /mnt/sharepoint-techstart \
--daemon \
--allow-other \
--vfs-cache-mode writes
# Mount S3 for Enterprise Corp
rclone mount s3-enterprise: /mnt/s3-enterprise \
--daemon \
--allow-other \
--vfs-cache-mode writes
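If a container starts before its rclone mount is up, the mount path appears inside the container as an empty directory. A hedged pre-start check, assuming the `mountpoint` utility from util-linux is available on the host:

```shell
# Verify a FUSE mount is live before (re)starting the org's container
mount_ready() {
  mountpoint -q "$1"
}

# Example: only start Acme once its Google Drive mount is up
# mount_ready /mnt/gdrive-acme && docker-compose up -d sasha-acme
```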
Benefits of Independent Cloud Storage
- Data Isolation: No cross-organization data access
- Independent Auth: Each org uses their own cloud credentials
- Custom Configurations: Different sync intervals, cache settings
- Compliance: Clear data boundaries for regulatory requirements
Scaling Strategy
Phase 1: Startup (1-10 organizations)
Server: Medium tier (€24/mo)
Management: Manual docker-compose
Access: Direct ports (3001-3010)
# Simple deployment
docker-compose up -d
# Organizations access via:
# - acme.sliplane.app:3001
# - techstart.sliplane.app:3002
# - startup3.sliplane.app:3003
Phase 2: Growth (10-25 organizations)
Server: Large tier (€44/mo)
Management: Script-generated docker-compose
Access: Still direct ports (3001-3025)
# Generate compose file for 25 orgs
./generate-compose.sh 25
docker-compose up -d
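generate-compose.sh is referenced above but not shown in this document; its core is sequential port assignment. A sketch of that piece (the `orgN` naming is a placeholder, and a real script would also emit the service and volume stanzas):

```shell
# Map organization index -> external port, starting at 3001
org_port() {
  echo $(( 3000 + $1 ))
}

# Print the port plan for N organizations
port_plan() {
  for i in $(seq 1 "$1"); do
    echo "org${i} -> $(org_port "$i"):3005"
  done
}
```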
Phase 3: Scale (25+ organizations)
Servers: Multiple Large/X-Large tiers
Management: Orchestration platform
Access: Multiple servers
Server 1 (Large): Organizations 1-25
Server 2 (Large): Organizations 26-50
Server 3 (X-Large): Organizations 51-100
Phase 4: Enterprise (100+ organizations)
Infrastructure: Kubernetes cluster
Management: Helm charts
Access: Load balanced with ingress
Benefits of This Architecture
Complete Isolation
Security: No data leakage between organizations
Compliance: Clear boundaries for GDPR/HIPAA
Customization: Each org can have different configurations
Fault Isolation: One crash doesn't affect others
Operational Simplicity
No Nginx Complexity: Direct port access
No Central Auth: Each container manages its own users
Easy Debugging: Issues isolated to specific container
Simple Backups: Backup individual organizations
Flexibility
Independent Upgrades: Update one org without affecting others
Custom Resources: Allocate more resources to specific orgs
Independent Cloud Storage: Each org's own drives
Custom Branding: Per-organization customization
Monitoring
Simple Monitoring with Docker Stats
# Real-time resource usage
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
# Check specific organization
docker stats sasha-acme
# Health checks
for container in $(docker ps --format '{{.Names}}' | grep sasha); do
  echo -n "$container: "
  docker exec "$container" curl -sf http://localhost:3005/api/health > /dev/null && echo "OK" || echo "UNHEALTHY"
done
Optional Advanced Monitoring
For larger deployments, consider:
- Prometheus + Grafana (separate container)
- ELK stack for log aggregation
- Uptime monitoring per organization
Security Considerations
Container Isolation
- Network: Containers on separate Docker networks cannot communicate with each other (note: services in a single compose file share a default network, so define per-organization networks for strict isolation)
- Filesystem: Complete filesystem isolation
- Process: Process namespace isolation
- Resources: cgroup limits prevent resource hogging
Data Protection
- Encryption: Each container can have encrypted volumes
- Backups: Independent backup schedules per org
- Access Control: Each container's own authentication
- Audit Logs: Separate logs per organization
Cloud Storage Security
- Credentials: Each org's cloud credentials stored separately
- Mount Permissions: Read-only mounts where appropriate
- Token Rotation: Independent token refresh per organization
Organization Access Documentation
Port Assignment Table
| Organization | Container Name | Port | URL | Status |
|---|---|---|---|---|
| Acme Corp | sasha-acme | 3001 | server.sliplane.app:3001 | Active |
| TechStart | sasha-techstart | 3002 | server.sliplane.app:3002 | Active |
| BuildCo | sasha-buildco | 3003 | server.sliplane.app:3003 | Active |
| DataInc | sasha-datainc | 3004 | server.sliplane.app:3004 | Active |
| CloudNet | sasha-cloudnet | 3005 | server.sliplane.app:3005 | Suspended |
Access Instructions for Organizations
Dear [Organization],
Your Sasha Studio instance is ready at:
URL: https://sasha.sliplane.app:[PORT]
This is your dedicated, isolated instance with:
- Your own user management
- Your own data storage
- Your own cloud drive connections
- Complete privacy and isolation
To connect your cloud storage:
1. Log in to your Sasha instance
2. Go to Settings > Tools > Cloud Storage
3. Connect your Google Drive/SharePoint/S3
4. Your files will be available immediately
For support: support@example.com
Getting Started Checklist
Initial Setup
- Choose Sliplane tier based on organization count
- Create docker-compose.yml with initial organizations
- Generate unique secrets for each container
- Deploy containers: docker-compose up -d
- Document port assignments
For Each New Organization
- Assign unique port number
- Generate JWT and session secrets
- Add to docker-compose.yml
- Create volumes
- Start container
- Provide access instructions
- Optional: Set up cloud storage mounts
Maintenance Tasks
- Weekly: Check container health
- Monthly: Review resource usage
- Quarterly: Backup all organization data
- As needed: Scale to larger tier or additional servers
Cost Analysis
Per-Organization Costs
| Organizations | Recommended Tier | Total Cost | Cost Per Org | Includes |
|---|---|---|---|---|
| 1-4 | Base | €9/mo | €2.25-9.00 | Container, storage, bandwidth |
| 5-10 | Medium | €24/mo | €2.40-4.80 | Better performance |
| 11-25 | Large | €44/mo | €1.76-4.00 | Best value |
| 26-50 | X-Large | €76/mo | €1.52-2.92 | Scale pricing |
| 51-100 | 2x Large | €88/mo | €0.88-1.73 | Multi-server |
ROI Calculation
- Traditional VPS per org: €5-10/month
- Sasha on Sliplane: €1.52-3.00/month
- Savings: 40-70% on infrastructure costs
- Plus: Simplified management, better resource utilization
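The per-organization figures in the table above can be reproduced with integer cent arithmetic:

```shell
# Cost per organization in euro cents: tier price (EUR) divided by org count
cost_per_org_cents() {
  price_eur=$1
  orgs=$2
  echo $(( price_eur * 100 / orgs ))
}

cost_per_org_cents 44 25   # Large tier at full safe capacity: prints 176 (€1.76)
```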
Conclusion
This container-per-user architecture provides:
- Complete isolation between organizations
- Simple management without nginx or central services
- Independent cloud storage per organization
- Direct access via dedicated ports
- Cost-effective scaling from 1 to 100+ organizations
The architecture prioritizes simplicity, security, and isolation over complex orchestration, making it ideal for B2B SaaS deployments where data isolation and reliability are paramount.
Remember: The best architecture is the one you can debug at 3 AM. This simple, isolated approach ensures that when something goes wrong, it's contained to one organization and easy to fix.