# ECS Deployment Examples

This section contains examples of deploying containerized applications to AWS ECS Fargate using Simple Container.

## Available Examples

### Backend Service

Deploy a Node.js backend service with MongoDB integration.

Use Case: REST APIs, GraphQL services, microservices backends

Configuration:
```yaml
# .sc/stacks/backend/client.yaml
schemaVersion: 1.0
stacks:
  production:
    type: cloud-compose
    parent: myorg/infrastructure
    config:
      domain: api.mycompany.com
      size:
        cpu: 2048
        memory: 4096
      dockerComposeFile: ${git:root}/docker-compose.yaml
      uses: [mongodb-shared, redis-cache]
      runs: [backend-service]
      # Enhanced with placeholder extensions in environment variables
      env:
        MONGODB_URL: "${resource:mongodb-shared.uri}"
        REDIS_HOST: "${resource:redis-cache.host}"
        REDIS_PORT: "${resource:redis-cache.port}"
        NODE_ENV: production
        # Deployment tracking with date and git placeholders
        DEPLOYMENT_TIME: "${date:iso8601}"                    # 2024-10-24T20:46:41Z
        BUILD_VERSION: "${date:dateOnly}.${git:commit.short}" # 2024-10-24.a1b2c3d
        GIT_BRANCH: "${git:branch.clean}"                     # Clean branch name
        GIT_COMMIT: "${git:commit.short}"                     # Short commit hash
        BUILD_ID: "${git:branch}-${date:timestamp}"           # Unique build identifier
      alerts:
        slack:
          webhookUrl: ${secret:alerts-slack-webhook}
        # ECS service monitoring
        maxMemory:
          threshold: 80
          alertName: backend-max-memory
          description: "Backend memory usage exceeds 80%"
        maxCPU:
          threshold: 70
          alertName: backend-max-cpu
          description: "Backend CPU usage exceeds 70%"
        # ALB monitoring
        serverErrors:
          threshold: 10
          alertName: backend-server-errors
          description: "High 5XX error rate detected"
          periodSec: 300
        unhealthyHosts:
          threshold: 1
          alertName: backend-unhealthy-hosts
          description: "Unhealthy targets behind load balancer"
          periodSec: 300
        responseTime:
          threshold: 2.0
          alertName: backend-response-time
          description: "Response time exceeds 2 seconds"
          periodSec: 300
```
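At runtime these values arrive in the container as ordinary environment variables. A minimal sketch of reading and validating them at startup in the Node.js service (the `loadConfig` helper and its validation approach are illustrative, not part of Simple Container):

```javascript
// Variable names come from the stack config above; the validation logic is a sketch.
const required = ['MONGODB_URL', 'REDIS_HOST', 'REDIS_PORT', 'NODE_ENV'];

function loadConfig(env = process.env) {
  // Fail fast at startup instead of crashing later on an undefined connection string.
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return {
    mongodbUrl: env.MONGODB_URL,
    redis: { host: env.REDIS_HOST, port: Number(env.REDIS_PORT) },
    nodeEnv: env.NODE_ENV,
    // Build metadata injected by the ${date:...}/${git:...} placeholders is optional.
    buildVersion: env.BUILD_VERSION ?? 'unknown',
  };
}

module.exports = { loadConfig };
```

Failing fast on missing variables surfaces misconfigured stacks at deploy time, when ECS restarts the task and the error is visible in the service events.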
Docker Compose:

```yaml
# docker-compose.yaml (for local development)
version: '3.8'
services:
  backend-service:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGODB_URL: "mongodb://localhost:27017/backend_dev"
      REDIS_URL: "redis://localhost:6379"
      NODE_ENV: development
```
Features:
- MongoDB Atlas integration
- Redis caching layer
- Auto-scaling configuration
- Health checks and monitoring
- Comprehensive ALB monitoring (5XX errors, unhealthy hosts, response time)
- ECS service monitoring (CPU, memory)
- Multi-channel alerting (Slack, Discord, Telegram, Email)
- Secure environment variable injection
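The `${resource:...}`, `${date:...}`, and `${git:...}` placeholders in the configuration above are substituted by Simple Container at deploy time. Conceptually, the substitution works like the sketch below (this is an illustration of the mechanism, not the actual resolver, and the provider values are made up):

```javascript
// Conceptual sketch of deploy-time placeholder substitution.
// Matches ${namespace:key}, e.g. ${resource:mongodb-shared.uri} or ${date:iso8601}.
function resolvePlaceholders(text, providers) {
  return text.replace(/\$\{(\w+):([^}]+)\}/g, (match, ns, key) => {
    const provider = providers[ns];
    if (!provider) return match; // leave unknown namespaces untouched
    const value = provider(key);
    return value !== undefined ? value : match;
  });
}

// Example providers with illustrative values (not real lookups):
const providers = {
  date: (key) => (key === 'iso8601' ? new Date().toISOString() : undefined),
  resource: (key) => ({ 'mongodb-shared.uri': 'mongodb://cluster0/db' }[key]),
};
```

Leaving unrecognized namespaces untouched mirrors what you would want in practice: a typo in a placeholder shows up verbatim in the rendered config instead of silently disappearing.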
### Vector Database

Deploy a high-performance vector database service.

Use Case: AI/ML applications, similarity search, recommendation engines

Configuration:
```yaml
# .sc/stacks/vectordb/client.yaml
schemaVersion: 1.0
stacks:
  production:
    type: cloud-compose
    parent: myorg/infrastructure
    config:
      domain: vectordb.mycompany.com
      size:
        cpu: 4096
        memory: 8192
      dockerComposeFile: ${git:root}/docker-compose.yaml
      uses: [vector-storage, nlb-loadbalancer]
      runs: [vector-service]
      alerts:
        slack:
          webhookUrl: ${secret:alerts-slack-webhook}
        maxMemory:
          threshold: 85
          alertName: vectordb-max-memory
          description: "Vector DB memory usage exceeds 85%"
        maxCPU:
          threshold: 75
          alertName: vectordb-max-cpu
          description: "Vector DB CPU usage exceeds 75%"
```
Features:
- Network Load Balancer for high performance
- Auto-scaling based on CPU utilization
- Persistent vector storage
- High-throughput configuration
- GPU support for ML workloads
### Blockchain Service

Deploy blockchain integration services.

Use Case: Web3 applications, cryptocurrency services, smart contract interaction

Configuration:
```yaml
# .sc/stacks/blockchain/client.yaml
schemaVersion: 1.0
stacks:
  production:
    type: cloud-compose
    parent: myorg/infrastructure
    config:
      uses: [ethereum-node, postgres-db]
      domain: blockchain.mycompany.com
      size:
        cpu: 2048
        memory: 4096
      dockerComposeFile: ${git:root}/docker-compose.yaml
      runs: [blockchain-service]
      dependencies:
        - name: ethereum-shared
          owner: myproject/blockchain-infrastructure
          resource: ethereum-node-cluster
```
Features:
- Ethereum node integration
- Cross-service dependencies
- PostgreSQL for transaction storage
- Secure API endpoints
- Real-time blockchain monitoring
### Blog Platform

Deploy a multi-service blog platform.

Use Case: Content management, publishing platforms, media sites

Configuration:
```yaml
# .sc/stacks/blog/client.yaml
schemaVersion: 1.0
stacks:
  production:
    type: cloud-compose
    parent: myorg/infrastructure
    config:
      uses: [postgres-db, redis-cache, s3-media]
      domain: blog.mycompany.com
      size:
        cpu: 1024
        memory: 2048
      dockerComposeFile: ${git:root}/docker-compose.yaml
      runs: [blog-api, blog-admin]
```
Docker Compose:

```yaml
version: '3.8'
services:
  blog-api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
      S3_BUCKET: ${S3_BUCKET}
  blog-admin:
    build: ./admin
    ports:
      - "3001:3001"
    environment:
      API_URL: "https://blog.mycompany.com/api"
```
Features:
- Multi-service deployment
- Reverse proxy configuration
- Media storage with S3
- Admin interface separation
- Content delivery optimization
### Meteor.js Application

Deploy a Meteor.js full-stack application.

Use Case: Real-time applications, collaborative tools, full-stack JavaScript apps

Configuration:
```yaml
# .sc/stacks/meteor-app/client.yaml
schemaVersion: 1.0
stacks:
  production:
    type: cloud-compose
    parent: myorg/infrastructure
    config:
      uses: [mongodb-shared]
      domain: app.mycompany.com
      size:
        cpu: 1024
        memory: 2048
      dockerComposeFile: ${git:root}/docker-compose.yaml
      runs: [meteor-app]
```
Features:
- MongoDB integration
- Real-time data synchronization
- WebSocket support
- Meteor-specific optimizations
- Session affinity configuration
## Common Patterns

### Multi-Service Architecture
```yaml
stacks:
  production:
    type: cloud-compose
    parent: myorg/infrastructure
    config:
      uses: [postgres-db, redis-cache, s3-storage]
      runs: [api-service, worker-service, scheduler]
      dependencies:
        - name: billing
          owner: myproject/billing
          resource: mongo-cluster2
```
### Health Checks
```yaml
# docker-compose.yaml
services:
  api-service:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
## Deployment Commands

Deploy to staging first with the Simple Container CLI, verify the service, then promote the same stack configuration to production.
## Best Practices

- Use health checks for all services to ensure proper deployment
- Configure auto-scaling based on actual usage patterns
- Implement structured logging so log output is machine-parseable
- Use environment-specific configurations for staging and production
- Set up comprehensive monitoring and alerting for production services:
  - ALB monitoring: server errors (5XX), unhealthy hosts, and response times
  - ECS monitoring: CPU and memory utilization
  - Multi-channel alerts: Slack, Discord, Telegram, or email notifications
- Implement graceful shutdown handling in your applications
- Use secrets management for sensitive configuration values
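On task stop, ECS sends SIGTERM and, after the configured grace period, SIGKILL. A sketch of graceful shutdown handling for a Node.js service (the timeout value and the injectable `exit` parameter are illustrative choices, not requirements):

```javascript
// Graceful shutdown sketch: stop accepting new connections on SIGTERM,
// let in-flight requests finish, and enforce a hard deadline below ECS's
// kill timeout so the process exits before SIGKILL arrives.
function registerGracefulShutdown(server, { timeoutMs = 25000, exit = process.exit } = {}) {
  process.on('SIGTERM', () => {
    // close() stops new connections; the callback fires once existing ones drain.
    server.close(() => exit(0));
    // Hard deadline in case connections never drain; unref so the timer
    // alone does not keep the process alive.
    setTimeout(() => exit(1), timeoutMs).unref();
  });
}

// Usage: registerGracefulShutdown(httpServer);
```

Keeping `timeoutMs` below the ECS task's stop timeout means the service always exits cleanly on its own terms instead of being force-killed mid-request.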
## ALB Monitoring Configuration

For production deployments behind an Application Load Balancer, configure monitoring like this:
```yaml
alerts:
  # Notification channels
  slack:
    webhookUrl: ${secret:alerts-slack-webhook}
  email:
    addresses:
      - "ops@company.com"
  # ALB health monitoring
  serverErrors:
    threshold: 5.0   # Alert on 5+ server errors per period
    periodSec: 300   # 5-minute evaluation period
    alertName: "prod-server-errors"
    description: "High 5XX error rate detected"
  unhealthyHosts:
    threshold: 1     # Alert when any target becomes unhealthy
    periodSec: 300
    alertName: "prod-unhealthy-hosts"
    description: "Unhealthy targets detected"
  responseTime:
    threshold: 1.5   # Alert when response time > 1.5 seconds
    periodSec: 300
    alertName: "prod-response-time"
    description: "Response time degradation"
```
Technical Details:

- Server errors use the `HTTPCode_Target_5XX_Count` CloudWatch metric with the `LoadBalancer` dimension
- Unhealthy hosts use the `UnHealthyHostCount` metric with the `LoadBalancer` and `TargetGroup` dimensions
- Response time uses the `TargetResponseTime` metric with the `LoadBalancer` dimension
- All alarms use the full AWS load balancer identifiers so metrics resolve reliably