Deployment and DevOps with Copilot
Deployment is the final step that puts your project in front of the world. But it is not just about "getting it online": a professional deployment involves containerization, automated CI/CD, health checks, logging, and monitoring. GitHub Copilot can help you set all of this up correctly and securely.
In this article we will look at how to generate an optimized Dockerfile, configure Docker Compose for development and production, implement a complete CI/CD pipeline with GitHub Actions, and prepare your application for a production environment.
📚 Series Overview
| # | Article | Focus |
|---|---|---|
| 1 | Foundations and Mindset | Setup and mindset |
| 2 | Concept and Requirements | From idea to MVP |
| 3 | Backend Architecture | API and database |
| 4 | Frontend Structure | UI and components |
| 5 | Prompt Engineering | MCP prompts and agents |
| 6 | Testing and Quality | Unit, integration, E2E |
| 7 | Documentation | README, API docs, ADRs |
| 8 | 📍 You are here → Deployment and DevOps | Docker, CI/CD |
| 9 | Evolution | Scalability and maintenance |
Pre-Deployment Checklist
Before deploying, make sure your project is production-ready.
🚀 Pre-production checklist
| Area | Requirement | Checked |
|---|---|---|
| Security | Secrets in environment variables (not in code) | ☐ |
| Security | HTTPS configured | ☐ |
| Security | Rate limiting active | ☐ |
| Security | CORS configured correctly | ☐ |
| Performance | Optimized build (minification, tree shaking) | ☐ |
| Performance | Compression enabled (gzip/brotli) | ☐ |
| Monitoring | Health check endpoint | ☐ |
| Monitoring | Structured logging configured | ☐ |
| Monitoring | Error tracking (Sentry, etc.) | ☐ |
| Resilience | Graceful shutdown implemented | ☐ |
| Resilience | Database connection pooling | ☐ |
| Backup | Automated database backups | ☐ |
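For the first security item, it helps to confirm that no credentials are sitting in the repository before anything ships. A minimal sketch using plain `grep` — the pattern list is purely illustrative, and dedicated scanners (gitleaks, trufflehog) go much further:

```shell
# Quick scan for hard-coded secrets in the working tree.
# The patterns below are illustrative examples, not an exhaustive list.
grep -rnE "(API_KEY|SECRET|PASSWORD|PRIVATE_KEY)[[:space:]]*=[[:space:]]*['\"]" \
  --exclude-dir=node_modules --exclude-dir=.git --exclude='*.example' . \
  && echo "Potential secrets found - review before deploying!" \
  || echo "No obvious hard-coded secrets."
```

Run it from the project root; anything it flags should move into environment variables or a secret manager.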
Docker: Professional Containerization
A well-written Dockerfile produces images that are small, secure, and fast to build. The key is a multi-stage build.
Prompt for the Dockerfile
Create a production-ready Dockerfile for my Node.js + TypeScript application.
PROJECT INFO:
- Runtime: Node.js 20
- Package manager: npm
- Build: TypeScript compilation
- Entry point: dist/server.js
- Port: 3000
REQUIREMENTS:
1. Multi-stage build (minimize final image size)
2. Non-root user for security
3. Health check command
4. Proper layer caching for faster builds
5. Only production dependencies in final image
6. Environment variables support
7. Metadata labels (maintainer, version, description)
8. .dockerignore recommendations
SECURITY:
- No secrets in image
- Minimal attack surface (alpine base)
- Read-only filesystem where possible
The Complete Multi-Stage Dockerfile
# ═══════════════════════════════════════════════════════════════
# Stage 1: Dependencies (cached layer)
# ═══════════════════════════════════════════════════════════════
FROM node:20-alpine AS deps
WORKDIR /app
# Copy package files first (better cache utilization)
COPY package.json package-lock.json ./
# Install ALL dependencies (including devDependencies for build)
RUN npm ci --include=dev
# ═══════════════════════════════════════════════════════════════
# Stage 2: Builder
# ═══════════════════════════════════════════════════════════════
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build TypeScript
RUN npm run build
# Remove devDependencies after build
RUN npm prune --production
# ═══════════════════════════════════════════════════════════════
# Stage 3: Production
# ═══════════════════════════════════════════════════════════════
FROM node:20-alpine AS production
# Metadata
LABEL maintainer="your-email@example.com"
LABEL version="1.0.0"
LABEL description="TaskFlow API Server"
# Security: Create non-root user
RUN addgroup -g 1001 -S nodejs \
&& adduser -S nodejs -u 1001
WORKDIR /app
# Copy only production artifacts
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./
# Environment variables (defaults, override at runtime)
ENV NODE_ENV=production
ENV PORT=3000
# Switch to non-root user
USER nodejs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
# Start application
CMD ["node", "dist/server.js"]
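Assuming the image is tagged `taskflow` (a hypothetical name), you can check locally that the multi-stage build actually produces a small image and that layer caching behaves as intended:

```shell
# Build the image (tag name is illustrative).
docker build -t taskflow:latest .

# A multi-stage Node/alpine image should come out far smaller than a
# single-stage build that ships devDependencies and the TypeScript sources.
docker images taskflow:latest --format "{{.Repository}}:{{.Tag}} {{.Size}}"

# Inspect the layers: a second build with unchanged package files should
# reuse the dependency layers from cache.
docker history taskflow:latest
```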
.dockerignore
# Dependencies
node_modules
npm-debug.log
# Build output (built inside container)
dist
build
# Development files
.git
.gitignore
.env
.env.*
!.env.example
# IDE
.vscode
.idea
*.swp
*.swo
# Documentation
README.md
docs
*.md
# Tests
test
tests
__tests__
coverage
.nyc_output
*.test.ts
*.spec.ts
jest.config.js
# CI/CD
.github
.gitlab-ci.yml
Jenkinsfile
# Docker
Dockerfile*
docker-compose*
.dockerignore
# OS files
.DS_Store
Thumbs.db
# Logs
logs
*.log
# Temporary files
tmp
temp
Docker Compose: Local and Production Orchestration
Docker Compose lets you manage multi-container environments. Create separate files for development and production.
Docker Compose for Development
version: '3.8'

services:
  # ─────────────────────────────────────────────────────────────
  # Application (with hot reload)
  # ─────────────────────────────────────────────────────────────
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
      - "9229:9229"  # Node.js debugger
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/taskflow_dev
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=dev-secret-change-in-production
    volumes:
      - .:/app
      - /app/node_modules  # Don't override node_modules
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: npm run dev

  # ─────────────────────────────────────────────────────────────
  # PostgreSQL Database
  # ─────────────────────────────────────────────────────────────
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: taskflow_dev
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d taskflow_dev"]
      interval: 5s
      timeout: 5s
      retries: 5

  # ─────────────────────────────────────────────────────────────
  # Redis Cache
  # ─────────────────────────────────────────────────────────────
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5
    command: redis-server --appendonly yes

  # ─────────────────────────────────────────────────────────────
  # pgAdmin (Database GUI)
  # ─────────────────────────────────────────────────────────────
  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - db
    profiles:
      - tools  # Only start with: docker-compose --profile tools up

  # ─────────────────────────────────────────────────────────────
  # Redis Commander (Redis GUI)
  # ─────────────────────────────────────────────────────────────
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
    profiles:
      - tools

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    name: taskflow-network
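Day-to-day commands for this development stack (service and profile names match the file above):

```shell
# Start the core stack: app + PostgreSQL + Redis.
docker compose up -d

# Also start the optional GUI tools declared under the "tools" profile
# (pgAdmin on :5050, Redis Commander on :8081).
docker compose --profile tools up -d

# Follow application logs while developing.
docker compose logs -f app

# Tear everything down; named volumes (and thus dev data) survive.
docker compose down
```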
Docker Compose for Production
version: '3.8'

services:
  # ─────────────────────────────────────────────────────────────
  # Application
  # ─────────────────────────────────────────────────────────────
  app:
    image: ghcr.io/username/taskflow:${TAG:-latest}
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      rollback_config:
        parallelism: 1
        delay: 10s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # ─────────────────────────────────────────────────────────────
  # Nginx Reverse Proxy
  # ─────────────────────────────────────────────────────────────
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - app
    restart: unless-stopped

  # ─────────────────────────────────────────────────────────────
  # PostgreSQL Database
  # ─────────────────────────────────────────────────────────────
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # ─────────────────────────────────────────────────────────────
  # Redis Cache
  # ─────────────────────────────────────────────────────────────
  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    restart: unless-stopped
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local

networks:
  default:
    name: taskflow-prod
    driver: bridge
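This file pulls all of its secrets from the environment, typically a `.env` file placed next to the compose file on the server. A sketch with placeholder values — the real file must never be committed:

```shell
# .env for docker-compose.prod.yml -- placeholder values, never commit the real file
TAG=1.0.0
DATABASE_URL=postgresql://taskflow:change-me@db:5432/taskflow
REDIS_URL=redis://:change-me@redis:6379
POSTGRES_USER=taskflow
POSTGRES_PASSWORD=change-me
POSTGRES_DB=taskflow
REDIS_PASSWORD=change-me
JWT_SECRET=generate-a-long-random-string
```

Docker Compose reads `.env` automatically for `${VAR}` interpolation; `${TAG:-latest}` in the `image:` line falls back to `latest` when `TAG` is unset.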
CI/CD with GitHub Actions
A CI/CD pipeline automates testing, building, and deployment. With GitHub Actions, everything is configured inside the repository.
Prompt for the CI/CD Pipeline
Create a complete GitHub Actions CI/CD workflow for my Node.js project.
REQUIREMENTS:
1. Trigger on push to main/develop and pull requests to main
2. Run linting and type checking
3. Run unit tests with coverage report
4. Run integration tests (with PostgreSQL service)
5. Build Docker image
6. Push to GitHub Container Registry (on main only)
7. Deploy to staging (on develop branch)
8. Deploy to production (on main branch, manual approval)
9. Cache npm dependencies between runs
10. Send Slack notification on failure
ENVIRONMENTS:
- Staging: staging.taskflow.dev
- Production: taskflow.dev
SECRETS NEEDED:
- CODECOV_TOKEN
- SLACK_WEBHOOK_URL
- SSH_PRIVATE_KEY (for deployment)
Include proper job dependencies and failure handling.
The Complete GitHub Actions Workflow
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
  NODE_VERSION: '20'

# Limit concurrent runs
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # ═══════════════════════════════════════════════════════════════
  # LINT AND TYPE CHECK
  # ═══════════════════════════════════════════════════════════════
  lint:
    name: Lint & Type Check
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint
      - name: Run TypeScript check
        run: npm run type-check

  # ═══════════════════════════════════════════════════════════════
  # UNIT TESTS
  # ═══════════════════════════════════════════════════════════════
  unit-tests:
    name: Unit Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run unit tests with coverage
        run: npm run test:coverage
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: ./coverage/lcov.info
          fail_ci_if_error: true

  # ═══════════════════════════════════════════════════════════════
  # INTEGRATION TESTS
  # ═══════════════════════════════════════════════════════════════
  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: taskflow_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run database migrations
        run: npm run db:migrate
        env:
          DATABASE_URL: postgresql://test:test@localhost:5432/taskflow_test
      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://test:test@localhost:5432/taskflow_test
          REDIS_URL: redis://localhost:6379
          JWT_SECRET: test-secret-for-ci

  # ═══════════════════════════════════════════════════════════════
  # BUILD AND PUSH DOCKER IMAGE
  # ═══════════════════════════════════════════════════════════════
  build:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: [lint, unit-tests, integration-tests]
    if: github.event_name == 'push'
    permissions:
      contents: read
      packages: write
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=
            type=ref,event=branch
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # ═══════════════════════════════════════════════════════════════
  # DEPLOY TO STAGING
  # ═══════════════════════════════════════════════════════════════
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: [build]
    if: github.ref == 'refs/heads/develop'
    environment:
      name: staging
      url: https://staging.taskflow.dev
    steps:
      - name: Deploy to staging server
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd /opt/taskflow
            docker compose -f docker-compose.staging.yml pull
            docker compose -f docker-compose.staging.yml up -d
            docker system prune -f
      - name: Verify deployment
        run: |
          sleep 30
          curl -f https://staging.taskflow.dev/health || exit 1

  # ═══════════════════════════════════════════════════════════════
  # DEPLOY TO PRODUCTION
  # ═══════════════════════════════════════════════════════════════
  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: [build]
    if: github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://taskflow.dev
    steps:
      - name: Deploy to production server
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd /opt/taskflow
            docker compose -f docker-compose.prod.yml pull
            docker compose -f docker-compose.prod.yml up -d --no-deps app
            docker system prune -f
      - name: Verify deployment
        run: |
          sleep 30
          curl -f https://taskflow.dev/health || exit 1
      - name: Notify on success
        if: success()
        uses: slackapi/slack-github-action@v1.25.0
        with:
          payload: |
            {
              "text": "🚀 Production deployment successful!",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "🚀 *Production deployment successful!*\n*Commit:* `${{ github.sha }}`\n*By:* ${{ github.actor }}"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

  # ═══════════════════════════════════════════════════════════════
  # NOTIFY ON FAILURE
  # ═══════════════════════════════════════════════════════════════
  notify-failure:
    name: Notify on Failure
    runs-on: ubuntu-latest
    needs: [lint, unit-tests, integration-tests, build]
    if: failure()
    steps:
      - name: Send Slack notification
        uses: slackapi/slack-github-action@v1.25.0
        with:
          payload: |
            {
              "text": "❌ CI/CD Pipeline failed!",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "❌ *CI/CD Pipeline failed!*\n*Branch:* `${{ github.ref_name }}`\n*Commit:* `${{ github.sha }}`\n*By:* ${{ github.actor }}\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Run>"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Health Check Endpoints
Health check endpoints let orchestrators and load balancers verify that the application is working.
import { Router, Request, Response } from 'express';
import { db } from '../database';
import { redis } from '../cache';
import { logger } from '../logger';

const router = Router();

interface HealthStatus {
  status: 'healthy' | 'degraded' | 'unhealthy';
  timestamp: string;
  version: string;
  uptime: number;
  checks: Record<string, CheckResult>;
}

interface CheckResult {
  status: 'ok' | 'error';
  latency?: number;
  message?: string;
}

/**
 * Lightweight liveness probe
 * Returns 200 if the server is running
 */
router.get('/health/live', (req: Request, res: Response) => {
  res.status(200).json({ status: 'ok' });
});

/**
 * Comprehensive readiness probe
 * Checks all dependencies
 */
router.get('/health', async (req: Request, res: Response) => {
  const health: HealthStatus = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    version: process.env.npm_package_version || '1.0.0',
    uptime: process.uptime(),
    checks: {},
  };

  // Check Database
  try {
    const start = Date.now();
    await db.query('SELECT 1');
    health.checks.database = {
      status: 'ok',
      latency: Date.now() - start,
    };
  } catch (error) {
    health.checks.database = {
      status: 'error',
      message: 'Database connection failed',
    };
    health.status = 'unhealthy';
    logger.error('Health check: Database failed', { error });
  }

  // Check Redis
  try {
    const start = Date.now();
    await redis.ping();
    health.checks.redis = {
      status: 'ok',
      latency: Date.now() - start,
    };
  } catch (error) {
    health.checks.redis = {
      status: 'error',
      message: 'Redis connection failed',
    };
    // Redis failure = degraded (app can still work without cache)
    if (health.status === 'healthy') {
      health.status = 'degraded';
    }
    logger.warn('Health check: Redis failed', { error });
  }

  // Check disk space (optional)
  try {
    const { execSync } = require('child_process');
    const diskUsage = execSync("df -h / | awk 'NR==2 {print $5}'")
      .toString()
      .trim()
      .replace('%', '');
    health.checks.disk = {
      status: parseInt(diskUsage) > 90 ? 'error' : 'ok',
      message: `${diskUsage}% used`,
    };
  } catch {
    // Ignore disk check errors
  }

  // Check memory
  const memUsage = process.memoryUsage();
  const heapUsedMB = Math.round(memUsage.heapUsed / 1024 / 1024);
  const heapTotalMB = Math.round(memUsage.heapTotal / 1024 / 1024);
  health.checks.memory = {
    status: heapUsedMB / heapTotalMB > 0.9 ? 'error' : 'ok',
    message: `${heapUsedMB}MB / ${heapTotalMB}MB`,
  };

  // Set HTTP status based on health
  const statusCode = health.status === 'healthy' ? 200 :
    health.status === 'degraded' ? 200 : 503;

  res.status(statusCode).json(health);
});

export default router;
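With the server running locally on port 3000 (as in the Dockerfile above), the two probes can be exercised with `curl`; the `jq` filter is optional and assumes `jq` is installed:

```shell
# Liveness: cheap, no dependency checks -- suitable for container restart policies.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/health/live

# Readiness: full dependency report; returns 503 when the database is down,
# 200 with status "degraded" when only Redis is unavailable.
curl -s http://localhost:3000/health | jq '.status, .checks'
```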
Logging in Production
Structured logging is essential for debugging and monitoring in production. Use the JSON format for easy analysis.
import pino from 'pino';
import { randomUUID } from 'crypto';
import { config } from './config';

// Create base logger
export const logger = pino({
  level: config.env === 'production' ? 'info' : 'debug',
  // Pretty print only in development
  transport: config.env === 'development'
    ? {
        target: 'pino-pretty',
        options: {
          colorize: true,
          translateTime: 'HH:MM:ss',
          ignore: 'pid,hostname',
        },
      }
    : undefined,
  // Structured format for production
  formatters: {
    level: (label) => ({ level: label }),
    bindings: (bindings) => ({
      pid: bindings.pid,
      host: bindings.hostname,
      service: 'taskflow-api',
      version: process.env.npm_package_version,
    }),
  },
  // ISO timestamp
  timestamp: pino.stdTimeFunctions.isoTime,
  // Redact sensitive fields
  redact: {
    paths: [
      'req.headers.authorization',
      'req.headers.cookie',
      'res.headers["set-cookie"]',
      '*.password',
      '*.token',
      '*.secret',
      '*.apiKey',
    ],
    censor: '[REDACTED]',
  },
});

// Child logger with request context
export function createRequestLogger(requestId: string, userId?: string) {
  return logger.child({
    requestId,
    userId,
  });
}

// Express middleware for request logging
export function requestLogger(req, res, next) {
  const start = Date.now();
  const requestId = req.headers['x-request-id'] || randomUUID();

  // Attach request ID for correlation
  req.requestId = requestId;
  res.setHeader('X-Request-Id', requestId);

  // Create child logger
  req.log = createRequestLogger(requestId, req.user?.id);

  // Log request start
  req.log.info({
    msg: 'Request started',
    method: req.method,
    url: req.url,
    userAgent: req.headers['user-agent'],
    ip: req.ip,
  });

  // Log response on finish
  res.on('finish', () => {
    const duration = Date.now() - start;
    const logData = {
      msg: 'Request completed',
      method: req.method,
      url: req.url,
      status: res.statusCode,
      duration,
    };

    // Use appropriate log level based on status code
    if (res.statusCode >= 500) {
      req.log.error(logData);
    } else if (res.statusCode >= 400) {
      req.log.warn(logData);
    } else {
      req.log.info(logData);
    }
  });

  next();
}

// Error logger
export function logError(error: Error, context?: Record<string, any>) {
  logger.error({
    msg: error.message,
    error: {
      name: error.name,
      message: error.message,
      stack: error.stack,
    },
    ...context,
  });
}
Graceful Shutdown
A graceful shutdown lets the application finish in-flight requests before terminating, avoiding data loss.
import { Server } from 'http';
import { logger } from './logger';
import { db } from './database';
import { redis } from './cache';

let isShuttingDown = false;

export function setupGracefulShutdown(server: Server) {
  const shutdown = async (signal: string) => {
    if (isShuttingDown) {
      logger.warn('Shutdown already in progress, ignoring signal');
      return;
    }
    isShuttingDown = true;
    logger.info(`Received ${signal}, starting graceful shutdown...`);

    // Set a timeout for graceful shutdown
    const shutdownTimeout = setTimeout(() => {
      logger.error('Graceful shutdown timed out, forcing exit');
      process.exit(1);
    }, 30000); // 30 seconds timeout

    try {
      // 1. Stop accepting new connections
      logger.info('Closing HTTP server...');
      await new Promise<void>((resolve, reject) => {
        server.close((err) => {
          if (err) reject(err);
          else resolve();
        });
      });
      logger.info('HTTP server closed');

      // 2. Close database connections
      logger.info('Closing database connections...');
      await db.disconnect();
      logger.info('Database connections closed');

      // 3. Close Redis connections
      logger.info('Closing Redis connections...');
      await redis.quit();
      logger.info('Redis connections closed');

      // 4. Cleanup complete
      clearTimeout(shutdownTimeout);
      logger.info('Graceful shutdown completed');
      process.exit(0);
    } catch (error) {
      logger.error('Error during graceful shutdown', { error });
      clearTimeout(shutdownTimeout);
      process.exit(1);
    }
  };

  // Handle different signals
  process.on('SIGTERM', () => shutdown('SIGTERM'));
  process.on('SIGINT', () => shutdown('SIGINT'));

  // Handle uncaught exceptions
  process.on('uncaughtException', (error) => {
    logger.fatal('Uncaught exception', { error });
    shutdown('uncaughtException');
  });

  // Handle unhandled rejections
  process.on('unhandledRejection', (reason, promise) => {
    logger.fatal('Unhandled rejection', { reason, promise });
    shutdown('unhandledRejection');
  });
}

// Middleware to reject new requests during shutdown
export function shutdownMiddleware(req, res, next) {
  if (isShuttingDown) {
    res.status(503).json({
      error: {
        code: 'SERVICE_UNAVAILABLE',
        message: 'Server is shutting down',
      },
    });
    return;
  }
  next();
}
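Docker delivers SIGTERM on `docker stop` and escalates to SIGKILL after a grace period (10 seconds by default), so the grace period must exceed the handler's 30-second in-process timeout or the process will be killed mid-shutdown. The container name below is illustrative:

```shell
# Allow the 30 s in-process shutdown timeout to run before Docker sends SIGKILL.
docker stop --time 35 taskflow-app

# The Docker Compose equivalent is a per-service setting:
#   stop_grace_period: 35s
```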
DevOps Best Practices
❌ Anti-patterns
- Secrets in code
- Builds without caching
- Oversized images
- Running as root
- No health checks
- No logging
- Manual deployments
- No rollback strategy
✅ Best practices
- Secrets in the environment/vault
- Multi-stage builds with caching
- Alpine base, minimal layers
- Non-root user in containers
- Health checks configured
- Structured logging (JSON)
- Automated CI/CD
- Blue-green/rolling deployments
Deployment Checklist
✅ Before deploying to production
- ☐ All tests passing
- ☐ Docker build works locally
- ☐ Health check responds correctly
- ☐ Secrets configured in the environment
- ☐ SSL/TLS configured
- ☐ Database backups verified
- ☐ Monitoring and alerting active
- ☐ Rollback plan documented
- ☐ Team notified of the deployment
- ☐ Changelog updated
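Since the CI pipeline tags every image with the commit SHA, a documented rollback plan with this setup can be as simple as re-deploying the last known-good tag (the tag value and domain below are illustrative):

```shell
# Roll back to a previously pushed image tag on the production host.
export TAG=1.0.0   # last known-good version
docker compose -f docker-compose.prod.yml pull app
docker compose -f docker-compose.prod.yml up -d --no-deps app

# Confirm the rolled-back version is healthy before closing the incident.
curl -f https://taskflow.dev/health
```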
Conclusion and Next Steps
A good DevOps setup makes deployments reliable, repeatable, and safe. With Docker, CI/CD, and proper monitoring, you can deploy with confidence and identify any problem quickly.
Copilot can generate Docker configurations, CI/CD workflows, and monitoring code, but you must always review and test them before using them in production. The security and reliability of your system depend on your understanding of each component.
In the next and final article we will look at how a project evolves over time: scalability, continuous refactoring, dependency management, advanced monitoring, and long-term maintenance.
🎯 Key Takeaways
- Docker: multi-stage builds, non-root user, health checks
- Compose: separate files for dev and prod
- CI/CD: automated tests, builds, and deployments with approval
- Health: endpoints for liveness and readiness probes
- Logging: structured (JSON) with correlation IDs
- Shutdown: graceful, to avoid data loss
- Secrets: never in code, always in the environment
- Verify: always test before deploying