Last updated: Aug 12, 2025, 01:09 PM UTC

Claude Code CLI + Amazon Bedrock Customer Accounts Integration

Generated: 2025-01-12 UTC
Purpose: Technical implementation guide for integrating Claude Code CLI with customer-owned Amazon Bedrock accounts
Architecture: Each organization uses their own AWS account for complete isolation


Executive Summary

This document provides the complete technical implementation for enabling Sasha Studio customers to use their own Amazon Bedrock accounts with Claude Code CLI. This architecture ensures:

  • Complete Data Sovereignty: No customer data leaves their AWS account
  • Direct Cost Control: Customers pay AWS directly with no markup
  • Full Compliance: Leverages customer's existing AWS compliance certifications
  • Total Isolation: Each organization's AI processing is completely separated

Architecture Overview

Customer Account Model

Each Sasha Studio container connects directly to the customer's own Amazon Bedrock service:

graph TB
  subgraph "Customer's AWS Account"
    B1[Amazon Bedrock]
    IAM1[IAM Policies]
    CT1[CloudTrail Logs]
    CW1[CloudWatch Metrics]
  end
  subgraph "Sasha Studio Infrastructure"
    subgraph "Customer Containers"
      C1[Acme Corp Container<br/>Uses Acme's AWS]
      C2[TechStart Container<br/>Uses TechStart's AWS]
      C3[BuildCo Container<br/>Uses BuildCo's AWS]
    end
  end
  C1 -->|API Calls| B1
  C2 -->|API Calls| B2[TechStart's Bedrock]
  C3 -->|API Calls| B3[BuildCo's Bedrock]
  IAM1 -->|Controls| B1
  B1 -->|Logs| CT1
  B1 -->|Metrics| CW1

Data Flow

  1. User interacts with Sasha Studio UI
  2. Request sent to the Claude Code CLI in the container (a minimal handler sketch follows this list)
  3. Claude Code CLI uses customer's AWS credentials
  4. API call made directly to customer's Bedrock
  5. Response returned to user
  6. No data stored or logged by Sasha Studio
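
A minimal sketch of steps 2 through 4, assuming the container exposes a small Node handler that shells out to the Claude Code CLI in print mode (the runClaude helper and exact CLI flags are illustrative, not the production handler):

// Container-side handler: receives the user's prompt and shells out to the CLI
const { execFile } = require('node:child_process');

function runClaude(prompt, callback) {
  execFile(
    'claude',
    ['-p', prompt],                              // print (non-interactive) mode; flags may differ by CLI version
    {
      env: {
        ...process.env,
        CLAUDE_CODE_USE_BEDROCK: '1',            // route the request to the customer's Bedrock
        AWS_REGION: process.env.AWS_REGION || 'us-east-1',
      },
      timeout: 120000,
    },
    (err, stdout) => {
      if (err) return callback(err);
      callback(null, stdout.trim());             // returned straight to the user; nothing is persisted
    }
  );
}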

Technical Implementation

Container Environment Configuration

Each customer container requires specific environment variables:

# docker-compose.yml per customer
version: '3.8'

services:
  sasha-${CUSTOMER_ID}:
    image: sasha-studio:latest
    container_name: sasha-${CUSTOMER_ID}
    environment:
      # Core Sasha Configuration
      - NODE_ENV=production
      - ORG_NAME=${CUSTOMER_NAME}
      - RUNNING_IN_DOCKER=true
      
      # Claude Code Bedrock Configuration
      - CLAUDE_CODE_USE_BEDROCK=1
      - AWS_REGION=${CUSTOMER_AWS_REGION}
      - ANTHROPIC_MODEL=${CUSTOMER_MODEL_CHOICE}
      
      # Performance Tuning
      - CLAUDE_CODE_MAX_OUTPUT_TOKENS=4096
      - MAX_THINKING_TOKENS=1024
      
      # Authentication Method (enable exactly ONE option)
      # Option 1: Access Keys
      - AWS_ACCESS_KEY_ID=${CUSTOMER_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${CUSTOMER_SECRET_KEY}
      
      # Option 2: Assume Role (for cross-account; uncomment and remove Option 1)
      # - AWS_ROLE_ARN=${CUSTOMER_ROLE_ARN}
      # - AWS_EXTERNAL_ID=${UNIQUE_EXTERNAL_ID}
      
    volumes:
      - ${CUSTOMER_ID}-workspace:/app/workspaces
      - ${CUSTOMER_ID}-data:/app/data
      
      # Optional: Mount AWS config/credentials
      - ./secure/customers/${CUSTOMER_ID}/.aws:/root/.aws:ro
    
    secrets:
      - source: ${CUSTOMER_ID}_aws_credentials
        target: /run/secrets/aws_credentials

Authentication Methods

Method 1: IAM User with Access Keys

Pros: Simple setup, direct control
Cons: Requires key rotation, less secure than roles

# Customer creates IAM user with programmatic access
aws iam create-user --user-name sasha-studio-integration

# Attach policy (see IAM section below)
aws iam attach-user-policy \
  --user-name sasha-studio-integration \
  --policy-arn arn:aws:iam::ACCOUNT:policy/SashaStudioBedrockAccess

# Create access key
aws iam create-access-key --user-name sasha-studio-integration

Method 2: Cross-Account IAM Role

Pros: More secure, automatic credential rotation
Cons: More complex setup

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::SASHA_ACCOUNT:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "${UNIQUE_EXTERNAL_ID}"
        }
      }
    }
  ]
}
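
For illustration, the Sasha side could assume this role with STS and hand the resulting short-lived credentials to the customer's container. This is a hedged sketch; the helper name and the shape of the customer object are assumptions:

const { STSClient, AssumeRoleCommand } = require('@aws-sdk/client-sts');

async function assumeCustomerRole(customer) {
  const sts = new STSClient({ region: customer.region });

  const { Credentials } = await sts.send(new AssumeRoleCommand({
    RoleArn: customer.roleArn,                   // the customer's cross-account role
    RoleSessionName: `sasha-${customer.id}`,
    ExternalId: customer.externalId,             // must match sts:ExternalId in the trust policy above
    DurationSeconds: 3600,                       // refresh well before expiry
  }));

  // Temporary credentials injected into the customer's container as env vars
  return {
    AWS_ACCESS_KEY_ID: Credentials.AccessKeyId,
    AWS_SECRET_ACCESS_KEY: Credentials.SecretAccessKey,
    AWS_SESSION_TOKEN: Credentials.SessionToken,
    expiresAt: Credentials.Expiration,
  };
}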

Method 3: Bedrock API Keys (When Available)

Pros: Simpler than IAM, Bedrock-specific
Cons: Limited availability by region

# Future implementation when Bedrock supports API keys
export BEDROCK_API_KEY=${CUSTOMER_BEDROCK_KEY}
export BEDROCK_ENDPOINT=${CUSTOMER_BEDROCK_ENDPOINT}

Security Implementation

Customer IAM Policy Template

Minimal permissions required for Sasha Studio:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BedrockModelInvocation",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*:${AWS_ACCOUNT_ID}:model/anthropic.claude-3-sonnet*",
        "arn:aws:bedrock:*:${AWS_ACCOUNT_ID}:model/anthropic.claude-3-haiku*",
        "arn:aws:bedrock:*:${AWS_ACCOUNT_ID}:model/anthropic.claude-3-opus*",
        "arn:aws:bedrock:*:${AWS_ACCOUNT_ID}:model/anthropic.claude-3-5-sonnet*"
      ]
    },
    {
      "Sid": "BedrockModelDiscovery",
      "Effect": "Allow",
      "Action": [
        "bedrock:ListFoundationModels",
        "bedrock:GetFoundationModel"
      ],
      "Resource": "*"
    },
    {
      "Sid": "OptionalCloudWatchMetrics",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "cloudwatch:namespace": "SashaStudio/Bedrock"
        }
      }
    }
  ]
}
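
As an optional sanity check (not part of the required setup), the policy can be exercised with the IAM policy simulator before the first real invocation. A sketch, assuming the integration user's ARN is known:

const { IAMClient, SimulatePrincipalPolicyCommand } = require('@aws-sdk/client-iam');

async function verifyBedrockPermissions(userArn) {
  const iam = new IAMClient({});

  const { EvaluationResults } = await iam.send(new SimulatePrincipalPolicyCommand({
    PolicySourceArn: userArn,                    // e.g. the sasha-studio-integration IAM user
    ActionNames: [
      'bedrock:InvokeModel',
      'bedrock:InvokeModelWithResponseStream',
    ],
  }));

  // Every simulated action should come back "allowed"
  return EvaluationResults.every((result) => result.EvalDecision === 'allowed');
}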

Credential Storage Architecture

/secure/customers/
├── acme/
│   ├── .aws/
│   │   ├── credentials    # Encrypted at rest
│   │   └── config         # Region and output format
│   ├── bedrock-config.json
│   └── last-rotation.log
├── techstart/
│   └── ... (similar structure)

Encryption at Rest

# Encrypt customer credentials using AES-256 (PBKDF2 key derivation, OpenSSL 1.1.1+)
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -in credentials.txt \
  -out credentials.enc \
  -k "$ENCRYPTION_KEY"

# Decrypt when needed
openssl enc -aes-256-cbc -d -pbkdf2 \
  -in credentials.enc \
  -out credentials.txt \
  -k "$ENCRYPTION_KEY"

Customer Onboarding Process

Step 1: Customer Prerequisites

The customer must complete the following before onboarding (a quick verification sketch follows this list):

  • AWS account with Bedrock enabled
  • Bedrock model access approved (Claude 3 models)
  • IAM user or role created
  • Budget alerts configured (optional but recommended)
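
A hedged way to verify the model-availability prerequisite from the customer's account, assuming the AWS SDK for JavaScript is available. Note that listing models confirms regional availability; model access approval itself is granted separately in the Bedrock console:

const { BedrockClient, ListFoundationModelsCommand } = require('@aws-sdk/client-bedrock');

async function checkClaudeAvailability(region) {
  const bedrock = new BedrockClient({ region });

  const { modelSummaries } = await bedrock.send(
    new ListFoundationModelsCommand({ byProvider: 'Anthropic' })
  );

  // Filter to the Claude 3 family referenced in the IAM policy above
  const claudeModels = modelSummaries
    .map((m) => m.modelId)
    .filter((id) => id.includes('claude-3'));

  if (claudeModels.length === 0) {
    throw new Error(`No Claude 3 models listed in ${region}; check Bedrock availability and model access`);
  }
  return claudeModels;
}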

Step 2: Information Collection

# Customer onboarding form
customer_info:
  organization: "Acme Corp"
  contact_email: "admin@acme.com"
  aws_account_id: "123456789012"
  preferred_region: "us-east-1"
  model_preference: "claude-3-sonnet"
  authentication_method: "access_keys" # or "iam_role"
  expected_monthly_tokens: 10000000
  compliance_requirements:
    - HIPAA
    - SOC2

Step 3: Container Configuration

#!/bin/bash
# configure-customer-bedrock.sh
set -euo pipefail

CUSTOMER_ID=$1
AWS_REGION=$2
ACCESS_KEY=$3
SECRET_KEY=$4

# Create secure storage
mkdir -p "/secure/customers/${CUSTOMER_ID}/.aws"

# Store credentials securely
cat > "/secure/customers/${CUSTOMER_ID}/.aws/credentials" << EOF
[default]
aws_access_key_id = ${ACCESS_KEY}
aws_secret_access_key = ${SECRET_KEY}
EOF

# Set permissions
chmod 600 "/secure/customers/${CUSTOMER_ID}/.aws/credentials"

# Test connectivity
docker exec "sasha-${CUSTOMER_ID}" \
  aws bedrock list-foundation-models \
  --region "${AWS_REGION}"

echo "✅ Customer ${CUSTOMER_ID} Bedrock configured"

Step 4: Validation

#!/bin/bash
# validate-bedrock-access.sh

CUSTOMER_ID=$1

# Test model invocation
docker exec sasha-${CUSTOMER_ID} bash -c '
  echo "Testing Bedrock connectivity..."
  
  # Set environment
  export CLAUDE_CODE_USE_BEDROCK=1
  
  # Simple test prompt
  echo "Say hello" | claude-code --model claude-3-haiku
'

if [ $? -eq 0 ]; then
  echo "βœ… Bedrock access validated for ${CUSTOMER_ID}"
else
  echo "❌ Bedrock access failed for ${CUSTOMER_ID}"
  exit 1
fi

Monitoring and Observability

Customer-Side Monitoring

Customers can monitor their usage through AWS:

# CloudWatch metric widget for the customer's dashboard (Bedrock runtime metrics)
{
  "type": "metric",
  "properties": {
    "metrics": [
      ["AWS/Bedrock", "Invocations", {"stat": "Sum"}],
      [".", "InvocationLatency", {"stat": "Average"}],
      [".", "InputTokenCount", {"stat": "Sum"}],
      [".", "OutputTokenCount", {"stat": "Sum"}]
    ],
    "period": 300,
    "region": "us-east-1",
    "title": "Sasha Studio Bedrock Usage"
  }
}
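
For convenience, the same widget can be created programmatically. A sketch assuming the customer runs it with their own credentials; the dashboard name is an example:

const { CloudWatchClient, PutDashboardCommand } = require('@aws-sdk/client-cloudwatch');

async function createUsageDashboard(region) {
  const cw = new CloudWatchClient({ region });

  const dashboardBody = {
    widgets: [{
      type: 'metric',
      properties: {
        metrics: [
          ['AWS/Bedrock', 'Invocations', { stat: 'Sum' }],
          ['.', 'InvocationLatency', { stat: 'Average' }],
          ['.', 'InputTokenCount', { stat: 'Sum' }],
          ['.', 'OutputTokenCount', { stat: 'Sum' }],
        ],
        period: 300,
        region,
        title: 'Sasha Studio Bedrock Usage',
      },
    }],
  };

  await cw.send(new PutDashboardCommand({
    DashboardName: 'SashaStudioBedrockUsage',    // example name
    DashboardBody: JSON.stringify(dashboardBody),
  }));
}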

Our Monitoring

Track container health without accessing customer data:

// Monitor connectivity without storing prompts
async function checkBedrockHealth(customerId) {
  try {
    const start = Date.now();
    await execInContainer(customerId, 'aws bedrock list-foundation-models');
    const latency = Date.now() - start;
    
    return {
      customerId,
      status: 'healthy',
      latency,
      timestamp: new Date().toISOString()
    };
  } catch (error) {
    return {
      customerId,
      status: 'unhealthy',
      error: error.message,
      timestamp: new Date().toISOString()
    };
  }
}

Cost Analysis for Customers

Bedrock Pricing (as of 2025)

| Model             | Input (per 1K tokens) | Output (per 1K tokens) |
|-------------------|-----------------------|------------------------|
| Claude 3 Haiku    | $0.00025              | $0.00125               |
| Claude 3 Sonnet   | $0.003                | $0.015                 |
| Claude 3.5 Sonnet | $0.003                | $0.015                 |
| Claude 3 Opus     | $0.015                | $0.075                 |

Monthly Cost Estimation

function estimateMonthlyCost(usage) {
  const pricing = {
    'claude-3-haiku': { input: 0.00025, output: 0.00125 },
    'claude-3-sonnet': { input: 0.003, output: 0.015 },
    'claude-3-5-sonnet': { input: 0.003, output: 0.015 },
    'claude-3-opus': { input: 0.015, output: 0.075 }
  };
  
  const model = usage.model || 'claude-3-sonnet';
  const inputCost = (usage.inputTokens / 1000) * pricing[model].input;
  const outputCost = (usage.outputTokens / 1000) * pricing[model].output;
  
  return {
    model,
    inputTokens: usage.inputTokens,
    outputTokens: usage.outputTokens,
    inputCost,
    outputCost,
    totalCost: inputCost + outputCost,
    withContingency: (inputCost + outputCost) * 1.2
  };
}

// Example: 10M input, 2M output tokens per month
// Sonnet: $30 + $30 = $60/month
// Haiku: $2.50 + $2.50 = $5/month

Cost Optimization Strategies

  1. Model Selection: Use Haiku for simple tasks and Sonnet for complex ones (see the comparison sketch below)
  2. Prompt Caching: Enable where available to reduce repeated input tokens
  3. Reserved Capacity: Use provisioned throughput for predictable, high-volume workloads
  4. Cross-Region: Route to regions with better quota or model availability where pricing allows
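
To illustrate strategy 1, the estimateMonthlyCost() helper above can compare models for the same monthly volume:

// Compare Haiku vs Sonnet for the same monthly volume (10M input, 2M output tokens)
const usage = { inputTokens: 10000000, outputTokens: 2000000 };

const sonnet = estimateMonthlyCost({ ...usage, model: 'claude-3-sonnet' });
const haiku = estimateMonthlyCost({ ...usage, model: 'claude-3-haiku' });

console.log(`Sonnet: $${sonnet.totalCost.toFixed(2)}/month`);   // $60.00
console.log(`Haiku:  $${haiku.totalCost.toFixed(2)}/month`);    // $5.00
console.log(`Potential savings: $${(sonnet.totalCost - haiku.totalCost).toFixed(2)}/month`);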

Advanced Features

Multi-Region Support

# Support customers in different regions
environments:
  - customer: acme
    primary_region: us-east-1
    fallback_region: us-west-2
  - customer: techstart-eu
    primary_region: eu-west-1
    fallback_region: eu-central-1

Model Routing

// Route to different models based on task
function selectModel(task) {
  const routing = {
    'simple_query': 'claude-3-haiku',
    'code_generation': 'claude-3-sonnet',
    'complex_analysis': 'claude-3-opus',
    'default': process.env.DEFAULT_MODEL || 'claude-3-sonnet'
  };
  
  return routing[task.type] || routing.default;
}

Automatic Failover

// Failover to a different region if the primary fails
async function invokeWithFailover(prompt, config) {
  const regions = [config.primary_region, config.fallback_region];
  let lastError;

  for (const region of regions) {
    try {
      return await invokeBedrock(prompt, { ...config, region });
    } catch (error) {
      lastError = error;
      console.log(`Failed in ${region}, trying next...`);
    }
  }

  throw new Error(`All regions failed: ${lastError.message}`);
}

Troubleshooting Guide

Common Issues and Solutions

1. "Access Denied" Error

# Check IAM policy is attached
aws iam list-attached-user-policies --user-name sasha-studio-integration

# Verify model access in Bedrock console
aws bedrock list-foundation-models --region us-east-1

2. "Model Not Found" Error

# Ensure the customer has requested model access (use the full Bedrock model ID)
aws bedrock get-foundation-model \
  --model-identifier anthropic.claude-3-sonnet-20240229-v1:0 \
  --region us-east-1

3. Rate Limiting

// Implement exponential backoff for throttled requests
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function invokeWithRetry(prompt, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await invokeBedrock(prompt);
    } catch (error) {
      if (error.code === 'ThrottlingException' && i < maxRetries - 1) {
        await sleep(Math.pow(2, i) * 1000);   // 1s, 2s, 4s...
      } else {
        throw error;                          // rethrow non-throttling errors and the final failure
      }
    }
  }
}

4. Credential Rotation

#!/bin/bash
# rotate-credentials.sh

CUSTOMER_ID=$1
NEW_ACCESS_KEY=$2
NEW_SECRET_KEY=$3

# Backup old credentials
cp "/secure/customers/${CUSTOMER_ID}/.aws/credentials" \
   "/secure/customers/${CUSTOMER_ID}/.aws/credentials.backup"

# Update credentials
cat > "/secure/customers/${CUSTOMER_ID}/.aws/credentials" << EOF
[default]
aws_access_key_id = ${NEW_ACCESS_KEY}
aws_secret_access_key = ${NEW_SECRET_KEY}
EOF

# Test new credentials
if ./validate-bedrock-access.sh "${CUSTOMER_ID}"; then
  echo "✅ Credentials rotated successfully"
  rm "/secure/customers/${CUSTOMER_ID}/.aws/credentials.backup"
else
  echo "❌ New credentials failed, reverting"
  mv "/secure/customers/${CUSTOMER_ID}/.aws/credentials.backup" \
     "/secure/customers/${CUSTOMER_ID}/.aws/credentials"
fi

Customer Success Metrics

Usage Tracking (Customer Side)

-- Illustrative query against a usage table the customer exports from their own CloudTrail/CloudWatch data
SELECT 
  DATE(timestamp) as date,
  COUNT(*) as invocations,
  SUM(input_tokens) as total_input,
  SUM(output_tokens) as total_output,
  AVG(latency_ms) as avg_latency
FROM bedrock_usage
WHERE timestamp >= DATE_SUB(NOW(), INTERVAL 30 DAY)
GROUP BY DATE(timestamp);
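
The bedrock_usage table above is illustrative; it assumes usage data has already been exported somewhere queryable. Pulling the same daily invocation counts directly from CloudWatch might look like this sketch:

const { CloudWatchClient, GetMetricStatisticsCommand } = require('@aws-sdk/client-cloudwatch');

async function getDailyInvocations(region) {
  const cw = new CloudWatchClient({ region });
  const now = new Date();

  const { Datapoints } = await cw.send(new GetMetricStatisticsCommand({
    Namespace: 'AWS/Bedrock',
    MetricName: 'Invocations',
    StartTime: new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000),   // last 30 days
    EndTime: now,
    Period: 86400,                                                    // one datapoint per day
    Statistics: ['Sum'],
  }));

  return Datapoints
    .sort((a, b) => a.Timestamp - b.Timestamp)
    .map((d) => ({ date: d.Timestamp.toISOString().slice(0, 10), invocations: d.Sum }));
}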

Health Metrics (Our Side)

// Track connectivity without storing customer data
const healthMetrics = {
  connectivity: {
    success_rate: 0.999,
    avg_latency_ms: 250,
    last_check: new Date()
  },
  containers: {
    total: 25,
    healthy: 25,
    unhealthy: 0
  },
  alerts: []
};

Benefits Summary

For Customers

  • Complete Control: Full ownership of AI infrastructure
  • Data Privacy: No data leaves their AWS account
  • Cost Transparency: Direct AWS billing, usage visible in AWS console
  • Compliance: Inherits their AWS compliance certifications
  • Flexibility: Choose regions, models, and capacity

For Sasha Studio

  • Reduced Liability: No customer data storage
  • Simplified Compliance: Customer owns compliance burden
  • Scalability: No API rate limits to manage
  • Cost Efficiency: No markup or infrastructure costs

Implementation Checklist

Phase 1: Foundation

  • Create IAM policy templates
  • Build credential encryption system
  • Develop onboarding scripts
  • Write customer documentation

Phase 2: Integration

  • Update Docker configurations
  • Implement credential injection
  • Add health monitoring
  • Create validation tests

Phase 3: Customer Pilot

  • Select pilot customers
  • Run onboarding process
  • Monitor performance
  • Gather feedback

Phase 4: Production

  • Finalize documentation
  • Create support runbooks
  • Launch to all customers
  • Continuous optimization


Next Steps: Create customer-facing documentation and onboarding scripts to operationalize this architecture.