# Jingyun Service Center Deployment Guide

This guide provides complete deployment instructions for the Jingyun Service Center backend project, covering the local development, staging, and production environments. The project is built on Go 1.25.4 and the Kratos v2 framework, using a microservice architecture.
## 🐳 Local Development Environment

### Prerequisites

- Docker: 20.10+
- Docker Compose: 2.0+
- Go: 1.25.4+ (only needed for local compilation)
- Make: build tool

### Quick Start
```bash
# 1. Clone the project
git clone git@git.jingyun.design:jingyun/backend.git
cd backend

# 2. Start the infrastructure services
docker-compose -f deployments/docker-compose.local.yml up -d

# 3. Wait for the services to become ready (about 30 seconds)
docker-compose -f deployments/docker-compose.local.yml ps

# 4. Build and start the gateway service (example)
cd services/gateway
make all
make run
```
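Rather than sleeping for a fixed 30 seconds in step 3, a script can poll the published ports until they accept connections. A minimal sketch using bash's built-in `/dev/tcp` redirection (no extra tools required; the ports shown match the service table in this guide, adjust to your setup):

```shell
# wait_for_port HOST PORT [TIMEOUT_SECONDS]
# Polls a TCP port once per second until it accepts a connection;
# prints "up" on success or "timeout" after TIMEOUT_SECONDS attempts.
wait_for_port() {
  host="$1"; port="$2"; timeout="${3:-30}"; i=0
  while [ "$i" -lt "$timeout" ]; do
    # bash opens a TCP socket for paths under /dev/tcp; the probe
    # subshell exits nonzero if the connection is refused
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}

# Example: wait for the local PostgreSQL (5431) and Redis (6378) containers
# wait_for_port localhost 5431 && wait_for_port localhost 6378
```

Note that `/dev/tcp` is a bash feature, not POSIX sh; under other shells the probe simply fails and the function times out.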
### Service List

The local development environment includes the following services:

| Service | Port | Description | Address |
|---|---|---|---|
| PostgreSQL | 5431 | Primary database (17.5) | postgres://akita:chrishy123@localhost:5431/akita |
| Redis | 6378 | Cache database | redis://localhost:6378 |
| RabbitMQ | 5672/15672 | Message queue (3-management) | http://localhost:15672 (akita/chrishy123) |
| Consul | 8500/8600 | Service registry / config center | http://localhost:8500 |
| Gateway | 8000/9000 | Gateway service (unified entry point) | http://localhost:8000 |
| Auth | 9001 | Authentication service (login/tokens) | - |
| User | 9002 | User service (users/distribution/points) | - |
| Tenant | 9003 | Tenant service (tenants/editions/menus) | - |
| Agent | 9004 | Agent service (AI agents) | - |
| Payment | 9006 | Payment service (WeChat Pay) | - |
| Integration | 9007 | Integration service (OSS/SMS) | - |
| Cron | 9008 | Scheduled task service (points expiration) | - |
### Development Workflow

```bash
# 1. Enter the service directory
cd services/[service-name]

# 2. Initialize the development tools
make init

# 3. Generate all code
make all

# 4. Run the tests
make test

# 5. Build the service
make build

# 6. Run the service
make run
```
### Common Commands

```bash
# Show the status of all services
docker-compose -f deployments/docker-compose.local.yml ps

# Tail a service's logs
docker-compose -f deployments/docker-compose.local.yml logs -f [service-name]

# Restart a specific service
docker-compose -f deployments/docker-compose.local.yml restart [service-name]

# Stop all services
docker-compose -f deployments/docker-compose.local.yml down

# Rebuild and restart a service
docker-compose -f deployments/docker-compose.local.yml up -d --build [service-name]
```
### Data Persistence

Persistent data is stored under /Users/akita/data/:

```
/Users/akita/data/
├── postgres/   # PostgreSQL data files
├── redis/      # Redis data files
├── rabbitmq/   # RabbitMQ data files
└── consul/     # Consul data files
```
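These host directories are mapped into the containers as bind-mount volumes in the compose file. A hypothetical excerpt of `deployments/docker-compose.local.yml` showing the pattern (service names and image tags are illustrative, check the actual file):

```yaml
services:
  postgres:
    image: postgres:17.5
    ports:
      - "5431:5432"   # host port 5431 -> container port 5432
    volumes:
      - /Users/akita/data/postgres:/var/lib/postgresql/data
  redis:
    image: redis:7
    ports:
      - "6378:6379"
    volumes:
      - /Users/akita/data/redis:/data
```

Because the data lives on the host, `docker-compose down` does not destroy it; delete the directory contents if you want a clean slate.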
## ☁️ Production Deployment

### Environment Requirements

#### Hardware

| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| Memory | 8 GB | 16 GB+ |
| Storage | 100 GB SSD | 500 GB+ SSD |
| Network | 100 Mbps | 1 Gbps+ |

#### Software

- Kubernetes: 1.24+
- Docker: 20.10+
- Helm: 3.0+ (optional)
- Ingress Controller: nginx-ingress or traefik
### Kubernetes Deployment

#### 1. Prepare the Namespace

```bash
# Create the namespace
kubectl create namespace jingyun

# Set it as the default namespace
kubectl config set-context --current --namespace=jingyun
```
#### 2. Deploy the Infrastructure

```bash
# Deploy PostgreSQL
kubectl apply -f k8s/postgres/

# Deploy Redis
kubectl apply -f k8s/redis/

# Deploy RabbitMQ
kubectl apply -f k8s/rabbitmq/

# Deploy Consul
kubectl apply -f k8s/consul/

# Wait for all Pods to become ready
kubectl wait --for=condition=ready pod -l app=jingyun-infrastructure --timeout=300s
```
#### 3. Create Configuration and Secrets

```bash
# Create the ConfigMaps
kubectl apply -f k8s/configmaps/

# Create the Secrets
kubectl apply -f k8s/secrets/

# Verify the configuration
kubectl get configmaps
kubectl get secrets
```
#### 4. Deploy the Microservices

```bash
# Deploy all services at once
kubectl apply -f k8s/services/

# Or deploy them one by one
kubectl apply -f k8s/services/gateway/
kubectl apply -f k8s/services/auth/
kubectl apply -f k8s/services/user/
kubectl apply -f k8s/services/tenant/
kubectl apply -f k8s/services/agent/
kubectl apply -f k8s/services/payment/
kubectl apply -f k8s/services/integration/
kubectl apply -f k8s/services/cron/
```
#### 5. Configure the Ingress

```bash
# Deploy the Ingress
kubectl apply -f k8s/ingress/

# Verify the Ingress
kubectl get ingress
```
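The manifests under `k8s/ingress/` route external traffic to the gateway Service, which is the unified entry point. A minimal sketch of what such a manifest might look like (the hostname and ingress class are assumptions, the real manifest is in the repo):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jingyun-ingress
  namespace: jingyun
spec:
  ingressClassName: nginx          # or traefik, per the software requirements
  rules:
    - host: api.jingyun.example.com   # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 8000     # the gateway's HTTP port
```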
#### 6. Verify the Deployment

```bash
# Check all Pod statuses
kubectl get pods

# Check the Services
kubectl get services

# Check the Ingress
kubectl get ingress

# Tail a service's logs
kubectl logs -f deployment/gateway
```
### Helm Deployment (Optional)

```bash
# Add the Helm repository
helm repo add jingyun https://charts.jingyun.team
helm repo update

# Install the infrastructure
helm install infra jingyun/infrastructure \
  --namespace jingyun \
  --create-namespace

# Install the application services
helm install backend jingyun/backend \
  --namespace jingyun \
  --set gateway.replicas=3 \
  --set auth.replicas=2 \
  --set user.replicas=2
```
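The `--set` flags above can also be collected into a values file and passed with `-f`, which is easier to review and version-control. A hypothetical `values.yaml` for the `jingyun/backend` chart (key names assumed to match the chart's defaults):

```yaml
gateway:
  replicas: 3
auth:
  replicas: 2
user:
  replicas: 2
```

Install with `helm install backend jingyun/backend --namespace jingyun -f values.yaml`.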
## 🔧 Environment Configuration

### Environment Variables

The production environment requires the following key environment variables:
```bash
# Database
DATABASE_URL="postgres://user:password@postgres:5432/jingyun?sslmode=require"

# Redis
REDIS_URL="redis://redis:6379"
REDIS_PASSWORD="your-redis-password"

# RabbitMQ
RABBITMQ_URL="amqp://user:password@rabbitmq:5672/"

# Consul
CONSUL_ADDR="consul:8500"

# JWT
JWT_SECRET="your-jwt-secret"
JWT_INTERNAL_SECRET="your-internal-secret"

# WeChat
WECHAT_APP_ID="your-wechat-app-id"
WECHAT_APP_SECRET="your-wechat-app-secret"

# WeChat Pay
WECHAT_PAY_MCH_ID="your-mch-id"
WECHAT_PAY_API_KEY="your-api-key"

# Alibaba Cloud
ALIBABA_ACCESS_KEY="your-access-key"
ALIBABA_SECRET_KEY="your-secret-key"
OSS_BUCKET="your-bucket-name"
OSS_ENDPOINT="your-endpoint"
```
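In Kubernetes these variables are typically delivered through a Secret rather than baked into the image. A sketch of the pattern (the Secret name is illustrative; real values should come from a secret manager or CI, never from the repo):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backend-env        # hypothetical name
  namespace: jingyun
type: Opaque
stringData:                # stringData lets you write plain text; the API server base64-encodes it
  DATABASE_URL: "postgres://user:password@postgres:5432/jingyun?sslmode=require"
  JWT_SECRET: "your-jwt-secret"
---
# Reference it from a Deployment's container spec:
# containers:
#   - name: gateway
#     envFrom:
#       - secretRef:
#           name: backend-env
```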
### Consul Configuration

Store per-service configuration in the Consul KV store:

```bash
# Configure the gateway service
consul kv put kratos/gateway.yaml '
server:
  http:
    addr: 0.0.0.0:8000
    timeout: 30s
  grpc:
    addr: 0.0.0.0:9000
    timeout: 30s
data:
  database:
    driver: postgres
    source: ${DATABASE_URL}
  redis:
    addr: ${REDIS_URL}
    password: ${REDIS_PASSWORD}
    db: 0
'
```
```bash
# Configure the auth service
consul kv put kratos/auth.yaml '
server:
  http:
    addr: 0.0.0.0:9000
    timeout: 30s
  grpc:
    addr: 0.0.0.0:9001
    timeout: 30s
data:
  database:
    driver: postgres
    source: ${DATABASE_URL}
  redis:
    addr: ${REDIS_URL}
    password: ${REDIS_PASSWORD}
    db: 1
'
```
## 📊 Monitoring

### Prometheus Configuration

```yaml
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'jingyun-gateway'
    static_configs:
      - targets: ['gateway:8000']
    metrics_path: /metrics
    scrape_interval: 10s

  - job_name: 'jingyun-auth'
    static_configs:
      - targets: ['auth:9000']
    metrics_path: /metrics
    scrape_interval: 10s

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
```
### Grafana Dashboards

```bash
# Import the predefined dashboards
kubectl apply -f k8s/monitoring/grafana-dashboards.yaml

# Access Grafana
kubectl port-forward svc/grafana 3000:3000
# http://localhost:3000 (admin/admin)
```
## 🔍 Health Checks

### Service Health Checks

Every service exposes health check endpoints:

```bash
# HTTP health check
curl http://localhost:8000/health

# gRPC health check
grpcurl -plaintext localhost:9000 grpc.health.v1.Health/Check
```
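When scripting a deployment, it helps to poll the HTTP endpoint until the service answers with a 2xx status rather than checking once. A small sketch around `curl` (the `/health` path matches the endpoint above; the retry count is arbitrary):

```shell
# check_health URL [RETRIES]
# Polls an HTTP health endpoint once per second; prints "healthy" as soon
# as it returns a 2xx status, or "unhealthy" after RETRIES failed attempts.
check_health() {
  url="$1"; retries="${2:-10}"; i=0
  while [ "$i" -lt "$retries" ]; do
    # -f makes curl exit nonzero on HTTP errors (4xx/5xx)
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unhealthy"
  return 1
}

# Example: check_health http://localhost:8000/health 30
```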
### Kubernetes Health Checks

```yaml
# deployment.yaml example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: jingyun/gateway:latest
          ports:
            - containerPort: 8000
            - containerPort: 9000
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
```
## 🚀 CI/CD Integration

### GitHub Actions Example

```yaml
# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: 1.25.4
      - run: make test
      - run: make lint

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker images
        run: |
          docker build -f services/gateway/Dockerfile -t jingyun/gateway:${{ github.sha }} .
          docker push jingyun/gateway:${{ github.sha }}

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/gateway gateway=jingyun/gateway:${{ github.sha }}
          kubectl rollout status deployment/gateway
```
## 🔧 Failure Recovery

### Automatic Restart Policy

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      # restartPolicy is a pod-level field; Always is the only value a Deployment allows
      restartPolicy: Always
      containers:
        - name: gateway
          image: jingyun/gateway:latest
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
### Backup Strategy

```bash
# Database backup (no -t flag: a TTY would corrupt the redirected dump)
kubectl exec postgres-0 -- pg_dump -U akita akita > backup.sql

# Redis backup
kubectl exec redis-0 -- redis-cli BGSAVE
kubectl cp redis-0:/data/dump.rdb ./redis-backup.rdb

# Consul backup
consul snapshot save backup.snap
```
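For scheduled backups it is useful to timestamp the output files so successive runs don't overwrite each other. A tiny helper sketch around the commands above (the naming scheme is an assumption, not an existing convention in this project):

```shell
# backup_name PREFIX EXT
# Builds a timestamped backup file name, e.g. "postgres-20250101-030000.sql"
backup_name() {
  echo "${1}-$(date +%Y%m%d-%H%M%S).${2}"
}

# Usage with the backup commands above (illustrative):
# kubectl exec postgres-0 -- pg_dump -U akita akita > "$(backup_name postgres sql)"
# consul snapshot save "$(backup_name consul snap)"
```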
📖 Related documentation: