Deploy and DevOps with Copilot
Deployment is the final step that makes your project available to the world. But it isn't just "putting it online": a professional deployment means containerization, automated CI/CD, health checks, logging, and monitoring. GitHub Copilot can help you set all of this up correctly and securely.
In this article you'll learn how to create optimized Dockerfiles, configure Docker Compose for development and production, implement complete CI/CD pipelines with GitHub Actions, and prepare your application for a production-ready environment.
📚 Series Overview
| # | Article | Focus |
|---|---|---|
| 1 | Foundations and Mindset | Setup and mindset |
| 2 | Ideation and Requirements | From idea to MVP |
| 3 | Backend Architecture | API and database |
| 4 | Frontend Structure | UI and components |
| 5 | Prompt Engineering | Prompts and MCP agents |
| 6 | Testing and Quality | Unit, integration, E2E |
| 7 | Documentation | README, API docs, ADR |
| 8 | 📍 Current → Deploy and DevOps | Docker, CI/CD |
| 9 | Evolution | Scalability and maintenance |
Pre-Deploy Checklist
Before deploying, make sure your project is production-ready.
🚀 Pre-Production Checklist
| Area | Requirement | Verified |
|---|---|---|
| Security | Secrets in environment variables (not in code) | ☐ |
| Security | HTTPS configured | ☐ |
| Security | Rate limiting enabled | ☐ |
| Security | CORS configured correctly | ☐ |
| Performance | Optimized build (minification, tree-shaking) | ☐ |
| Performance | Compression enabled (gzip/brotli) | ☐ |
| Monitoring | Health check endpoint | ☐ |
| Monitoring | Structured logging configured | ☐ |
| Monitoring | Error tracking (Sentry, etc.) | ☐ |
| Resilience | Graceful shutdown implemented | ☐ |
| Resilience | Database connection pooling | ☐ |
| Backup | Automated database backups | ☐ |
Docker: Professional Containerization
A well-written Dockerfile produces images that are small, secure, and fast to build. The key is the multi-stage build.
Dockerfile Prompt
Create a production-ready Dockerfile for my Node.js + TypeScript application.
PROJECT INFO:
- Runtime: Node.js 20
- Package manager: npm
- Build: TypeScript compilation
- Entry point: dist/server.js
- Port: 3000
REQUIREMENTS:
1. Multi-stage build (minimize final image size)
2. Non-root user for security
3. Health check command
4. Proper layer caching for faster builds
5. Only production dependencies in final image
6. Environment variables support
7. Metadata labels (maintainer, version, description)
8. .dockerignore recommendations
SECURITY:
- No secrets in image
- Minimal attack surface (alpine base)
- Read-only filesystem where possible
Complete Multi-Stage Dockerfile
# ═══════════════════════════════════════════════════════════════
# Stage 1: Dependencies (cached layer)
# ═══════════════════════════════════════════════════════════════
FROM node:20-alpine AS deps
WORKDIR /app
# Copy package files first (better cache utilization)
COPY package.json package-lock.json ./
# Install ALL dependencies (including devDependencies for build)
RUN npm ci --include=dev
# ═══════════════════════════════════════════════════════════════
# Stage 2: Builder
# ═══════════════════════════════════════════════════════════════
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build TypeScript
RUN npm run build
# Remove devDependencies after build
RUN npm prune --production
# ═══════════════════════════════════════════════════════════════
# Stage 3: Production
# ═══════════════════════════════════════════════════════════════
FROM node:20-alpine AS production
# Metadata
LABEL maintainer="your-email@example.com"
LABEL version="1.0.0"
LABEL description="TaskFlow API Server"
# Security: Create non-root user
RUN addgroup -g 1001 -S nodejs \
&& adduser -S nodejs -u 1001
WORKDIR /app
# Copy only production artifacts
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./
# Environment variables (defaults, override at runtime)
ENV NODE_ENV=production
ENV PORT=3000
# Switch to non-root user
USER nodejs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
# Start application
CMD ["node", "dist/server.js"]
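A quick local sanity check of the image (the tag `taskflow-api` is just an example):

```shell
# Build the image and check its size
docker build -t taskflow-api .
docker images taskflow-api

# Run it, overriding the default environment at runtime
docker run --rm -p 3000:3000 -e PORT=3000 taskflow-api

# Confirm the image is configured to run as the non-root user
docker inspect --format '{{.Config.User}}' taskflow-api
```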
.dockerignore
# Dependencies
node_modules
npm-debug.log
# Build output (built inside container)
dist
build
# Development files
.git
.gitignore
.env
.env.*
!.env.example
# IDE
.vscode
.idea
*.swp
*.swo
# Documentation
README.md
docs
*.md
# Tests
test
tests
__tests__
coverage
.nyc_output
*.test.ts
*.spec.ts
jest.config.js
# CI/CD
.github
.gitlab-ci.yml
Jenkinsfile
# Docker
Dockerfile*
docker-compose*
.dockerignore
# OS files
.DS_Store
Thumbs.db
# Logs
logs
*.log
# Temporary files
tmp
temp
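Note that the `.dockerignore` above deliberately keeps `.env.example` in the build context. A sketch of such a template (placeholder values only; the variable names match the compose files in this article, so adjust them to your setup):

```
# .env.example — copy to .env and fill in real values (never commit .env)
NODE_ENV=production
PORT=3000
DATABASE_URL=postgresql://user:password@db:5432/taskflow
REDIS_URL=redis://redis:6379
JWT_SECRET=change-me
POSTGRES_USER=user
POSTGRES_PASSWORD=change-me
POSTGRES_DB=taskflow
REDIS_PASSWORD=change-me
TAG=latest
```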
Docker Compose: Local and Production Orchestration
Docker Compose lets you manage multi-container environments. Create separate files for development and production.
Docker Compose for Development
version: '3.8'
services:
# ─────────────────────────────────────────────────────────────
# Application (with hot reload)
# ─────────────────────────────────────────────────────────────
app:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
- "9229:9229" # Node.js debugger
environment:
- NODE_ENV=development
- DATABASE_URL=postgresql://postgres:postgres@db:5432/taskflow_dev
- REDIS_URL=redis://redis:6379
- JWT_SECRET=dev-secret-change-in-production
volumes:
- .:/app
- /app/node_modules # Don't override node_modules
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
command: npm run dev
# ─────────────────────────────────────────────────────────────
# PostgreSQL Database
# ─────────────────────────────────────────────────────────────
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: taskflow_dev
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d taskflow_dev"]
interval: 5s
timeout: 5s
retries: 5
# ─────────────────────────────────────────────────────────────
# Redis Cache
# ─────────────────────────────────────────────────────────────
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5
command: redis-server --appendonly yes
# ─────────────────────────────────────────────────────────────
# pgAdmin (Database GUI)
# ─────────────────────────────────────────────────────────────
pgadmin:
image: dpage/pgadmin4:latest
environment:
PGADMIN_DEFAULT_EMAIL: admin@admin.com
PGADMIN_DEFAULT_PASSWORD: admin
ports:
- "5050:80"
depends_on:
- db
profiles:
- tools # Only start with: docker-compose --profile tools up
# ─────────────────────────────────────────────────────────────
# Redis Commander (Redis GUI)
# ─────────────────────────────────────────────────────────────
redis-commander:
image: rediscommander/redis-commander:latest
environment:
- REDIS_HOSTS=local:redis:6379
ports:
- "8081:8081"
depends_on:
- redis
profiles:
- tools
volumes:
postgres_data:
redis_data:
networks:
default:
name: taskflow-network
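The dev service above builds from a `Dockerfile.dev` that isn't shown. A minimal sketch, assuming `npm run dev` starts the server with hot reload (the source is bind-mounted by the compose file, so no build step is needed here):

```dockerfile
# Dockerfile.dev — development only: full dependencies, source mounted as a volume
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# App port and Node.js debugger port
EXPOSE 3000 9229
CMD ["npm", "run", "dev"]
```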
Docker Compose for Production
version: '3.8'
services:
# ─────────────────────────────────────────────────────────────
# Application
# ─────────────────────────────────────────────────────────────
app:
image: ghcr.io/username/taskflow:${TAG:-latest}
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- DATABASE_URL=${DATABASE_URL}
- REDIS_URL=${REDIS_URL}
- JWT_SECRET=${JWT_SECRET}
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
restart: unless-stopped
deploy:
replicas: 2
resources:
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
update_config:
parallelism: 1
delay: 10s
failure_action: rollback
rollback_config:
parallelism: 1
delay: 10s
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# ─────────────────────────────────────────────────────────────
# Nginx Reverse Proxy
# ─────────────────────────────────────────────────────────────
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- ./nginx/conf.d:/etc/nginx/conf.d:ro
depends_on:
- app
restart: unless-stopped
# ─────────────────────────────────────────────────────────────
# PostgreSQL Database
# ─────────────────────────────────────────────────────────────
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
# ─────────────────────────────────────────────────────────────
# Redis Cache
# ─────────────────────────────────────────────────────────────
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
restart: unless-stopped
command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
healthcheck:
test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
interval: 10s
timeout: 5s
retries: 5
volumes:
postgres_data:
driver: local
redis_data:
driver: local
networks:
default:
name: taskflow-prod
driver: bridge
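The nginx service mounts its configuration from `./nginx/conf.d`, which isn't shown above. A minimal reverse-proxy sketch (the upstream name `app` matches the compose service; the certificate paths are assumptions):

```nginx
# nginx/conf.d/taskflow.conf
server {
    listen 80;
    server_name taskflow.dev;
    # Redirect all HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name taskflow.dev;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # Compression (see the pre-deploy checklist)
    gzip on;
    gzip_types application/json text/plain;

    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```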
CI/CD with GitHub Actions
A CI/CD pipeline automates testing, building, and deployment. With GitHub Actions, everything is configured directly in the repository.
CI/CD Pipeline Prompt
Create a complete GitHub Actions CI/CD workflow for my Node.js project.
REQUIREMENTS:
1. Trigger on push to main/develop and pull requests to main
2. Run linting and type checking
3. Run unit tests with coverage report
4. Run integration tests (with PostgreSQL service)
5. Build Docker image
6. Push to GitHub Container Registry (on main only)
7. Deploy to staging (on develop branch)
8. Deploy to production (on main branch, manual approval)
9. Cache npm dependencies between runs
10. Send Slack notification on failure
ENVIRONMENTS:
- Staging: staging.taskflow.dev
- Production: taskflow.dev
SECRETS NEEDED:
- CODECOV_TOKEN
- SLACK_WEBHOOK_URL
- SSH_PRIVATE_KEY (for deployment)
Include proper job dependencies and failure handling.
Complete GitHub Actions Workflow
name: CI/CD Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
NODE_VERSION: '20'
# Limit concurrent runs
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
# ═══════════════════════════════════════════════════════════════
# LINT AND TYPE CHECK
# ═══════════════════════════════════════════════════════════════
lint:
name: Lint & Type Check
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run ESLint
run: npm run lint
- name: Run TypeScript check
run: npm run type-check
# ═══════════════════════════════════════════════════════════════
# UNIT TESTS
# ═══════════════════════════════════════════════════════════════
unit-tests:
name: Unit Tests
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run unit tests with coverage
run: npm run test:coverage
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage/lcov.info
fail_ci_if_error: true
# ═══════════════════════════════════════════════════════════════
# INTEGRATION TESTS
# ═══════════════════════════════════════════════════════════════
integration-tests:
name: Integration Tests
runs-on: ubuntu-latest
services:
postgres:
image: postgres:16-alpine
env:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_DB: taskflow_test
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:7-alpine
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run database migrations
run: npm run db:migrate
env:
DATABASE_URL: postgresql://test:test@localhost:5432/taskflow_test
- name: Run integration tests
run: npm run test:integration
env:
DATABASE_URL: postgresql://test:test@localhost:5432/taskflow_test
REDIS_URL: redis://localhost:6379
JWT_SECRET: test-secret-for-ci
# ═══════════════════════════════════════════════════════════════
# BUILD AND PUSH DOCKER IMAGE
# ═══════════════════════════════════════════════════════════════
build:
name: Build Docker Image
runs-on: ubuntu-latest
needs: [lint, unit-tests, integration-tests]
if: github.event_name == 'push'
permissions:
contents: read
packages: write
outputs:
image-tag: ${{ steps.meta.outputs.tags }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=sha,prefix=
type=ref,event=branch
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# ═══════════════════════════════════════════════════════════════
# DEPLOY TO STAGING
# ═══════════════════════════════════════════════════════════════
deploy-staging:
name: Deploy to Staging
runs-on: ubuntu-latest
needs: [build]
if: github.ref == 'refs/heads/develop'
environment:
name: staging
url: https://staging.taskflow.dev
steps:
- name: Deploy to staging server
uses: appleboy/ssh-action@v1.0.3
with:
host: ${{ secrets.STAGING_HOST }}
username: ${{ secrets.STAGING_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
cd /opt/taskflow
docker compose -f docker-compose.staging.yml pull
docker compose -f docker-compose.staging.yml up -d
docker system prune -f
- name: Verify deployment
run: |
sleep 30
curl -f https://staging.taskflow.dev/health || exit 1
# ═══════════════════════════════════════════════════════════════
# DEPLOY TO PRODUCTION
# ═══════════════════════════════════════════════════════════════
deploy-production:
name: Deploy to Production
runs-on: ubuntu-latest
needs: [build]
if: github.ref == 'refs/heads/main'
environment:
name: production
url: https://taskflow.dev
steps:
- name: Deploy to production server
uses: appleboy/ssh-action@v1.0.3
with:
host: ${{ secrets.PRODUCTION_HOST }}
username: ${{ secrets.PRODUCTION_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
cd /opt/taskflow
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d --no-deps app
docker system prune -f
- name: Verify deployment
run: |
sleep 30
curl -f https://taskflow.dev/health || exit 1
- name: Notify on success
if: success()
uses: slackapi/slack-github-action@v1.25.0
with:
payload: |
{
"text": "🚀 Production deployment successful!",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "🚀 *Production deployment successful!*\n*Commit:* `${{ '{{' }} github.sha {{ '}}' }}`\n*By:* ${{ '{{' }} github.actor {{ '}}' }}"
}
}
]
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
# ═══════════════════════════════════════════════════════════════
# NOTIFY ON FAILURE
# ═══════════════════════════════════════════════════════════════
notify-failure:
name: Notify on Failure
runs-on: ubuntu-latest
needs: [lint, unit-tests, integration-tests, build]
if: failure()
steps:
- name: Send Slack notification
uses: slackapi/slack-github-action@v1.25.0
with:
payload: |
{
"text": "❌ CI/CD Pipeline failed!",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "❌ *CI/CD Pipeline failed!*\n*Branch:* `${{ '{{' }} github.ref_name {{ '}}' }}`\n*Commit:* `${{ '{{' }} github.sha {{ '}}' }}`\n*By:* ${{ '{{' }} github.actor {{ '}}' }}\n<${{ '{{' }} github.server_url {{ '}}' }}/${{ '{{' }} github.repository {{ '}}' }}/actions/runs/${{ '{{' }} github.run_id {{ '}}' }}|View Run>"
}
}
]
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Health Check Endpoint
A health check endpoint lets orchestrators and load balancers verify whether the application is operational.
import { Router, Request, Response } from 'express';
import { db } from '../database';
import { redis } from '../cache';
import { logger } from '../logger';
const router = Router();
interface HealthStatus {
status: 'healthy' | 'degraded' | 'unhealthy';
timestamp: string;
version: string;
uptime: number;
checks: Record<string, CheckResult>;
}
interface CheckResult {
status: 'ok' | 'error';
latency?: number;
message?: string;
}
/**
* Lightweight liveness probe
* Returns 200 if the server is running
*/
router.get('/health/live', (req: Request, res: Response) => {
res.status(200).json({ status: 'ok' });
});
/**
* Comprehensive readiness probe
* Checks all dependencies
*/
router.get('/health', async (req: Request, res: Response) => {
const health: HealthStatus = {
status: 'healthy',
timestamp: new Date().toISOString(),
version: process.env.npm_package_version || '1.0.0',
uptime: process.uptime(),
checks: {},
};
// Check Database
try {
const start = Date.now();
await db.query('SELECT 1');
health.checks.database = {
status: 'ok',
latency: Date.now() - start,
};
} catch (error) {
health.checks.database = {
status: 'error',
message: 'Database connection failed',
};
health.status = 'unhealthy';
logger.error('Health check: Database failed', { error });
}
// Check Redis
try {
const start = Date.now();
await redis.ping();
health.checks.redis = {
status: 'ok',
latency: Date.now() - start,
};
} catch (error) {
health.checks.redis = {
status: 'error',
message: 'Redis connection failed',
};
// Redis failure = degraded (app can still work without cache)
if (health.status === 'healthy') {
health.status = 'degraded';
}
logger.warn('Health check: Redis failed', { error });
}
// Check disk space (optional)
try {
const { execSync } = require('child_process');
const diskUsage = execSync("df -h / | awk 'NR==2 {print $5}'")
.toString()
.trim()
.replace('%', '');
health.checks.disk = {
status: parseInt(diskUsage) > 90 ? 'error' : 'ok',
message: `${diskUsage}% used`,
};
} catch {
// Ignore disk check errors
}
// Check memory
const memUsage = process.memoryUsage();
const heapUsedMB = Math.round(memUsage.heapUsed / 1024 / 1024);
const heapTotalMB = Math.round(memUsage.heapTotal / 1024 / 1024);
health.checks.memory = {
status: heapUsedMB / heapTotalMB > 0.9 ? 'error' : 'ok',
message: `${heapUsedMB}MB / ${heapTotalMB}MB`,
};
// Set HTTP status based on health
const statusCode = health.status === 'healthy' ? 200 :
health.status === 'degraded' ? 200 : 503;
res.status(statusCode).json(health);
});
export default router;
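With all dependencies up, a `GET /health` response from this router would look roughly like this (latency, uptime, and memory figures are illustrative):

```json
{
  "status": "healthy",
  "timestamp": "2024-01-15T10:30:00.000Z",
  "version": "1.0.0",
  "uptime": 3600.5,
  "checks": {
    "database": { "status": "ok", "latency": 4 },
    "redis": { "status": "ok", "latency": 1 },
    "disk": { "status": "ok", "message": "42% used" },
    "memory": { "status": "ok", "message": "85MB / 128MB" }
  }
}
```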
Logging in Production
Structured logging is essential for debugging and monitoring in production. Use JSON format to make logs easy to parse.
import pino from 'pino';
import crypto from 'node:crypto';
import { config } from './config';
// Create base logger
export const logger = pino({
level: config.env === 'production' ? 'info' : 'debug',
// Pretty print only in development
transport: config.env === 'development'
? {
target: 'pino-pretty',
options: {
colorize: true,
translateTime: 'HH:MM:ss',
ignore: 'pid,hostname',
},
}
: undefined,
// Structured format for production
formatters: {
level: (label) => ({ level: label }),
bindings: (bindings) => ({
pid: bindings.pid,
host: bindings.hostname,
service: 'taskflow-api',
version: process.env.npm_package_version,
}),
},
// ISO timestamp
timestamp: pino.stdTimeFunctions.isoTime,
// Redact sensitive fields
redact: {
paths: [
'req.headers.authorization',
'req.headers.cookie',
'res.headers["set-cookie"]',
'*.password',
'*.token',
'*.secret',
'*.apiKey',
],
censor: '[REDACTED]',
},
});
// Child logger with request context
export function createRequestLogger(requestId: string, userId?: string) {
return logger.child({
requestId,
userId,
});
}
// Express middleware for request logging
export function requestLogger(req, res, next) {
const start = Date.now();
const requestId = req.headers['x-request-id'] || crypto.randomUUID();
// Attach request ID for correlation
req.requestId = requestId;
res.setHeader('X-Request-Id', requestId);
// Create child logger
req.log = createRequestLogger(requestId, req.user?.id);
// Log request start
req.log.info({
msg: 'Request started',
method: req.method,
url: req.url,
userAgent: req.headers['user-agent'],
ip: req.ip,
});
// Log response on finish
res.on('finish', () => {
const duration = Date.now() - start;
const logData = {
msg: 'Request completed',
method: req.method,
url: req.url,
status: res.statusCode,
duration,
};
// Use appropriate log level based on status code
if (res.statusCode >= 500) {
req.log.error(logData);
} else if (res.statusCode >= 400) {
req.log.warn(logData);
} else {
req.log.info(logData);
}
});
next();
}
// Error logger
export function logError(error: Error, context?: Record<string, any>) {
logger.error({
msg: error.message,
error: {
name: error.name,
message: error.message,
stack: error.stack,
},
...context,
});
}
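With this configuration, each request produces one JSON object per line, which log aggregators can parse directly. A "Request completed" entry would look roughly like this (all values illustrative):

```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00.123Z",
  "pid": 1,
  "host": "web-1",
  "service": "taskflow-api",
  "version": "1.0.0",
  "requestId": "c2a7e1f0-0000-0000-0000-000000000000",
  "userId": "user_123",
  "msg": "Request completed",
  "method": "GET",
  "url": "/api/tasks",
  "status": 200,
  "duration": 12
}
```

The `requestId` field is what makes correlation work: every log line produced during one request carries the same ID, which is also returned to the client in the `X-Request-Id` header.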
Graceful Shutdown
A graceful shutdown lets the application finish in-flight requests before exiting, preventing data loss.
import { Server } from 'http';
import { logger } from './logger';
import { db } from './database';
import { redis } from './cache';
let isShuttingDown = false;
export function setupGracefulShutdown(server: Server) {
const shutdown = async (signal: string) => {
if (isShuttingDown) {
logger.warn('Shutdown already in progress, ignoring signal');
return;
}
isShuttingDown = true;
logger.info(`Received ${signal}, starting graceful shutdown...`);
// Set a timeout for graceful shutdown
const shutdownTimeout = setTimeout(() => {
logger.error('Graceful shutdown timed out, forcing exit');
process.exit(1);
}, 30000); // 30 seconds timeout
try {
// 1. Stop accepting new connections
logger.info('Closing HTTP server...');
await new Promise<void>((resolve, reject) => {
server.close((err) => {
if (err) reject(err);
else resolve();
});
});
logger.info('HTTP server closed');
// 2. Close database connections
logger.info('Closing database connections...');
await db.disconnect();
logger.info('Database connections closed');
// 3. Close Redis connections
logger.info('Closing Redis connections...');
await redis.quit();
logger.info('Redis connections closed');
// 4. Cleanup complete
clearTimeout(shutdownTimeout);
logger.info('Graceful shutdown completed');
process.exit(0);
} catch (error) {
logger.error('Error during graceful shutdown', { error });
clearTimeout(shutdownTimeout);
process.exit(1);
}
};
// Handle different signals
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
// Handle uncaught exceptions
process.on('uncaughtException', (error) => {
logger.fatal('Uncaught exception', { error });
shutdown('uncaughtException');
});
// Handle unhandled rejections
process.on('unhandledRejection', (reason, promise) => {
logger.fatal('Unhandled rejection', { reason, promise });
shutdown('unhandledRejection');
});
}
// Middleware to reject new requests during shutdown
export function shutdownMiddleware(req, res, next) {
if (isShuttingDown) {
res.status(503).json({
error: {
code: 'SERVICE_UNAVAILABLE',
message: 'Server is shutting down',
},
});
return;
}
next();
}
DevOps Best Practices
❌ Anti-Patterns
- Secrets in code
- Builds without caching
- Oversized images
- Running as root
- No health checks
- No logging
- Manual deploys
- No rollback strategy
✅ Best Practices
- Secrets in environment/vault
- Multi-stage build with caching
- Alpine base, minimal layers
- Non-root user in the container
- Health checks configured
- Structured logging (JSON)
- Automated CI/CD
- Blue-green / rolling deploys
Deploy Checklist
✅ Before Deploying to Production
- ☐ All tests pass
- ☐ Docker build works locally
- ☐ Health check responds correctly
- ☐ Secrets configured in the environment
- ☐ SSL/TLS configured
- ☐ Database backup verified
- ☐ Monitoring and alerting active
- ☐ Rollback plan documented
- ☐ Team notified of the deploy
- ☐ Changelog updated
Conclusion and Next Steps
A good DevOps setup makes deployments reliable, repeatable, and secure. With Docker, CI/CD, and proper monitoring you can deploy with confidence, knowing that any problems will be detected quickly.
Copilot can generate Docker configurations, CI/CD workflows, and monitoring code, but you must always review and test them before using them in production. The security and reliability of your system depend on your understanding of every component.
In the next and final article we'll look at how to evolve the project over time: scalability, continuous refactoring, dependency management, advanced monitoring, and long-term maintenance.
🎯 Key Points to Remember
- Docker: multi-stage build, non-root user, health check
- Compose: separate files for dev and prod
- CI/CD: automated test, build, and deploy with approval
- Health: liveness and readiness endpoints
- Logging: structured (JSON), with correlation IDs
- Shutdown: graceful, to avoid data loss
- Secrets: never in code, always in the environment
- Verify: always test before you deploy