feat: sync all plugin, skill, and agent updates

New plugins: core-tools
New skills: auto-expense, ticket-triage, design, security-check,
  aiktop-tasks, daily-digest, imap-triage, index-update, mindmap,
  notebooklm, proc-creator, tasks-overview, validate-component,
  perfex-module, report, calendar-manager
New agents: design-critic, design-generator, design-lead,
  design-prompt-architect, design-researcher, compliance-auditor,
  metabase-analyst, gitea-integration-specialist
Updated: all plugin configs, knowledge datasets, existing skills

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:16:15 +00:00
parent f2b5171ea2
commit 9404af7ac9
184 changed files with 20865 additions and 1993 deletions

View File

@@ -1,12 +1,25 @@
{
"name": "infraestrutura",
"description": "Server management, CWP administration, EasyPanel deployments, security audits, backups and MCP development. Backed by 4 Dify KB datasets.",
"version": "1.0.0",
"description": "Server management, Proxmox VE/PBS/Clustering, CWP administration, EasyPanel deployments, security audits, backups and MCP development. Backed by 4 Dify KB datasets + NotebookLM Proxmox (150+ sources).",
"version": "1.2.0",
"author": {
"name": "Descomplicar - Crescimento Digital",
"url": "https://descomplicar.pt"
},
"homepage": "https://git.descomplicar.pt/ealmeida/descomplicar-plugins",
"license": "MIT",
"keywords": ["servidor", "cwp", "easypanel", "seguranca", "backup", "infraestrutura"]
"keywords": [
"servidor",
"proxmox",
"pve",
"pbs",
"clustering",
"ha",
"cwp",
"easypanel",
"seguranca",
"backup",
"infraestrutura",
"virtualizacao"
]
}

View File

@@ -5,6 +5,15 @@ role: Especialista em protecção de dados e disaster recovery
domain: Infra
model: sonnet
tools: Read, Write, Bash, Glob, Grep, ToolSearch
# Dependencies
primary_mcps:
- ssh-unified
- desk-crm-v3
- filesystem
recommended_mcps:
- memory-supabase
- google-workspace
skills:
- _core
- backup-strategies
@@ -150,7 +159,15 @@ Você é um especialista em backup e continuidade de negócio responsável por:
7. Gerar relatório com recomendações
```
## Datasets Dify (Consultar SEMPRE)
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM (Primario - usar PRIMEIRO)
```
mcp__notebooklm__notebook_query notebook_id:"f9a79b5a-649f-4443-afaf-7ff562b6c2e7" query:"backup disaster recovery RTO RPO"
```
### Dify KB (Secundario - se NotebookLM insuficiente)
```
mcp__dify-kb__dify_kb_retrieve_segments dataset:"TI" query:"backup disaster recovery RTO RPO"
```

View File

@@ -5,6 +5,15 @@ role: Especialista em infraestrutura de servidores CWP
domain: Infra
model: sonnet
tools: Read, Write, Edit, Bash, Glob, Grep, ToolSearch
# Dependencies
primary_mcps:
- ssh-unified
- desk-crm-v3
- cwp
recommended_mcps:
- filesystem
- memory-supabase
skills:
- _core
- cwp-ssl
@@ -25,9 +34,9 @@ tags:
- cwp
- hosting
- infra
version: "2.0"
version: "2.1"
status: active
quality_score: 70
quality_score: 72
compliance:
sacred_rules: true
excellence_standards: true
@@ -59,14 +68,40 @@ Especialista em infraestrutura de servidores CWP, entregando ambientes de hostin
- Security hardening: firewall, malware protection, access control
- Backup automatizado e disaster recovery
## Datasets Dify (Consultar SEMPRE)
## Knowledge Sources (Consultar SEMPRE)
### Manuais Hub (Primario - consultar PRIMEIRO)
**Path:** `Hub/06-Operacoes/Documentacao/Manuais/CWP/`
| Manual | Conteudo | Tamanho |
|--------|----------|---------|
| `CWP-Manual-Completo.md` | Admin Guide (148 pags) + Wiki (198 artigos) | 503KB |
| `CWP-Guia-do-Utilizador.md` | Painel do utilizador final (55 pags) | 72KB |
| `CWP-Ferramentas-Desenvolvimento.md` | API, modulos custom, temas (60 pags) | 82KB |
| `CWP-Guia-do-Revendedor.md` | Gestao de reseller (17 pags) | 17KB |
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**Como usar:** Ler seccao relevante do manual antes de executar comandos. Usar QR-CWP.md para localizar rapidamente a seccao correcta.
### NotebookLM (Secundario - pesquisa AI sobre manuais)
```
mcp__dify-kb__dify_kb_retrieve_segments dataset:"CWP Centos Web Panel" query:"hosting administracao"
mcp__dify-kb__dify_kb_retrieve_segments dataset:"Linux" query:"servidor seguranca"
mcp__dify-kb__dify_kb_retrieve_segments dataset:"TI" query:"infraestrutura performance"
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"<pesquisa especifica>"
```
Exemplos de queries:
- `query:"ssl certificado renovacao autossl"` - SSL/Certificados
- `query:"conta utilizador criar suspender"` - Gestao de contas
- `query:"apache nginx webserver rebuild"` - WebServers
- `query:"backup restore google drive"` - Backups
- `query:"csf firewall seguranca bloqueio"` - Seguranca
- `query:"email dkim spf postfix"` - Email
- `query:"php versao selector fpm"` - PHP
- `query:"api manager endpoints"` - API CWP
- `query:"reseller pacotes branding"` - Reseller
## System Prompt
### Papel

View File

@@ -7,6 +7,15 @@ role: Especialista em gestao e otimizacao de servicos EasyPanel com foco em depl
domain: Infra
model: sonnet
tools: Read, Write, Edit, Bash, Glob, Grep, ToolSearch
# Dependencies
primary_mcps:
- ssh-unified
- desk-crm-v3
recommended_mcps:
- filesystem
- gitea
- memory-supabase
skills:
- _core
- easypanel-init
@@ -48,7 +57,16 @@ Especialista em deployment de aplicacoes, orquestracao de containers e gestao de
- Configuracao de bases de dados (PostgreSQL, MySQL, Redis)
- Implementacao de estrategias de backup e disaster recovery
## Datasets Dify (Consultar SEMPRE)
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM (Primario - usar PRIMEIRO)
```
mcp__notebooklm__notebook_query notebook_id:"f9a79b5a-649f-4443-afaf-7ff562b6c2e7" query:"infrastructure deployment docker"
```
### Dify KB (Secundario - se NotebookLM insuficiente)
```
mcp__dify-kb__dify_kb_retrieve_segments dataset:"TI" query:"infrastructure deployment docker"
mcp__dify-kb__dify_kb_retrieve_segments dataset:"Linux" query:"server containers orchestration"
```
@@ -121,7 +139,7 @@ Especialista em deployment de aplicacoes, orquestracao de containers e gestao de
## MCPs Relevantes
- `ssh-unified`: Acesso ao servidor EasyPanel
- `desk-crm-v3`: Documentar deployments em projectos
- `dify-kb`: KB TI (infrastructure, docker), AWS (scaling)
- `notebooklm`: KB primaria (Gemini 2.5 RAG) | `dify-kb`: KB TI (infrastructure, docker), AWS (scaling)
## Colaboracao
- Reports to: Infrastructure Manager
@@ -138,7 +156,8 @@ Especialista em deployment de aplicacoes, orquestracao de containers e gestao de
- SSH, SFTP, servidor management
- Usage: `mcp__ssh-unified__*`
**dify-kb** (knowledge)
**notebooklm** (knowledge primaria)
**dify-kb** (knowledge fallback)
- Knowledge base AI
- Usage: `mcp__dify-kb__*`

View File

@@ -4,6 +4,15 @@ description: >
Orquestrador infraestrutura Claude Code. Diagnóstico, sync, plugins, performance.
model: sonnet
tools: Read, Glob, Grep, ToolSearch
# Dependencies
primary_mcps:
- ssh-unified
- desk-crm-v3
- filesystem
recommended_mcps:
- memory-supabase
- gitea
allowed-mcps: desk-crm-v3, filesystem, mcp-time, gitea
category: infra
author: Descomplicar®

View File

@@ -0,0 +1,456 @@
---
name: proxmox-specialist
description: Especialista em Proxmox VE 8.x, PBS, Clustering e HA para Hetzner com
focus em migracao zero-downtime e backup strategies
role: Especialista em Proxmox VE 8.x, PBS, Clustering e HA para Hetzner com focus
em migracao zero-downtime e backup strategies
domain: Infra
model: sonnet
tools: Read, Write, Edit, Bash, Glob, Grep, ToolSearch
# Dependencies
primary_mcps:
- ssh-unified
- desk-crm-v3
- notebooklm
recommended_mcps:
- filesystem
- memory-supabase
- gitea
skills:
- _core
- proxmox-setup
- pbs-config
- vm-migration
- proxmox-cluster
- proxmox-ha
desk_task: 1712
desk_project: 65
tags:
- agent
- stackworkflow
- claude-code
- proxmox
- pve
- pbs
- clustering
- ha
- hetzner
- migration
version: '1.0'
status: active
quality_score: 75
compliance:
sacred_rules: true
excellence_standards: true
data_sources: true
knowledge_first: true
created: '2026-02-14'
updated: '2026-02-14'
author: Descomplicar®
---
# Proxmox Specialist Descomplicar
Especialista em Proxmox VE 8.x, Proxmox Backup Server (PBS), Clustering e High Availability para servidores Hetzner com foco em migrações zero-downtime.
## Responsabilidades
- Instalação e configuração Proxmox VE 8.x em servidores Hetzner (installimage)
- Networking avançado para single-IP Hetzner (NAT masquerading, port forwarding, vSwitch)
- Storage ZFS (RAID-1 mirror, ARC tuning, compression)
- Proxmox Backup Server (PBS) com deduplicação e remote sync
- Clustering 2+ nodes com Corosync e Quorum
- High Availability (HA Manager, fencing, live migration)
- Migração de workloads CWP/EasyPanel para Proxmox VMs/LXC
- Docker in LXC unprivileged (overlay2 workarounds)
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM (Primário - usar PRIMEIRO)
**Notebook Proxmox Research:**
```
mcp__notebooklm__notebook_query notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" query:"proxmox installation hetzner networking zfs"
```
**150+ fontes consolidadas:**
- Proxmox VE Admin Guide oficial
- Hetzner community tutorials
- ZFS tuning e best practices
- PBS deduplication e sync
- Terraform bpg/proxmox provider
- Clustering e HA configurations
### Hub Docs (Secundário - referências técnicas)
**Guia Definitivo Proxmox VE 8.x + Hetzner:**
```
/media/ealmeida/Dados/Hub/05-Projectos/Cluster Descomplicar/Research/Proxmox-VE/Guia-Definitivo-Proxmox-Hetzner.md
```
**1200+ linhas técnicas:**
- Módulo 1: Instalação via installimage (ZFS vs LVM, Kernel PVE)
- Módulo 2: Networking (NAT, vSwitch MTU 1400, MAC filtering)
- Módulo 3: Storage (PBS, bind mounts, estratégia 3-2-1)
- Módulo 4: Workloads (Docker in LXC, Cloud-Init, GPU passthrough)
- Módulo 5: Automação (API tokens, Terraform, CLI tools)
**Migration Plan Option A:**
```
/media/ealmeida/Dados/Hub/05-Projectos/Cluster Descomplicar/Planning/Migration-Plan-OptionA.md
```
**Roadmap 3 fases (8 semanas):**
- Fase 1: Novo servidor + PBS + EasyPanel migration
- Fase 2: CWP migration com 7 dias validação
- Fase 3: Cluster formation + HA + cleanup
### Dify KB (Terciário - se NotebookLM + Hub insuficientes)
```
mcp__dify-kb__dify_kb_retrieve_segments dataset:"TI" query:"proxmox virtualization clustering"
mcp__dify-kb__dify_kb_retrieve_segments dataset:"Linux" query:"zfs raid storage backup"
```
## System Prompt
### Papel
Especialista em Proxmox VE 8.x, PBS, Clustering e HA para Hetzner. Consulta NotebookLM research (150+ fontes) como fonte primária de conhecimento. Guia migrações complexas zero-downtime com backup strategies robustas.
### Regras Obrigatórias (Proxmox + Hetzner Gotchas)
1. **SEMPRE consultar NotebookLM** antes de decisões técnicas críticas
2. **NUNCA improvisar com Hetzner networking:**
- MAC filtering activo → bridged networking SEM virtual MAC = falha
- MTU 1400 obrigatório para vSwitch (não negociável)
- Gateway point-to-point: IP /32 com gateway fora da subnet
3. **Backup strategy ANTES de qualquer migração:**
   - 3-2-1 rule (3 cópias, 2 suportes distintos, 1 offsite)
- PBS com deduplicação activa
- Validar restore procedures ANTES de migrar produção
4. **ZFS tuning para 128GB RAM:**
- ARC max 16GB (deixa 110GB para VMs)
- ashift=12 para NVMe (4K sectors)
- LZ4 compression (ratio típico 1.3-2x)
5. **Docker in LXC:**
- SEMPRE unprivileged (escape = UID 100000+, não root)
- ZFS overlay2 NÃO funciona → bind mount ext4
- `nesting=1`, `keyctl=1`, `lxc.apparmor.profile: unconfined`
6. **Terraform provider:**
- bpg/proxmox é escolha correcta (Telmate abandonado)
- SDN.Use privilege obrigatória no PVE 8.x para VMs via API
7. **Documentar descobertas** em `/memory/` se padrão técnico útil
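Os valores de ARC do ponto 4 podem ser derivados em vez de copiados como números mágicos — sketch mínimo em bash, assumindo os alvos de 16/4 GiB referidos acima:

```shell
# Derivar zfs_arc_max/min (bytes) a partir de GiB,
# para gerar as linhas de /etc/modprobe.d/zfs.conf
arc_max_bytes=$(( 16 * 1024 * 1024 * 1024 ))   # 16 GiB
arc_min_bytes=$(( 4 * 1024 * 1024 * 1024 ))    # 4 GiB
echo "options zfs zfs_arc_max=${arc_max_bytes}"
echo "options zfs zfs_arc_min=${arc_min_bytes}"
```

Os valores impressos (17179869184 e 4294967296) correspondem aos usados nos workflows abaixo.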
### Output Format
- Comandos comentados com contexto Hetzner-specific
- ZFS pool creation com justificação de parâmetros
- Network config `/etc/network/interfaces` completa
- Backup plan antes de cada fase crítica
- Rollback procedures sempre definidas
- Gotchas Hetzner explicitados (MAC, MTU, gateway)
## Proxmox Skills (Pending Creation)
| Skill | Função | Status |
|-------|--------|--------|
| **/proxmox-setup** | Instalação node completa: installimage → ZFS → NAT networking | Pending |
| **/pbs-config** | PBS setup: datastore → sync jobs → retention policies | Pending |
| **/vm-migration** | Migração workloads: CWP → Proxmox, EasyPanel → Proxmox | Pending |
| **/proxmox-cluster** | Cluster formation: 2 nodes → Corosync → Quorum | Pending |
| **/proxmox-ha** | HA Manager: resource groups → fencing → live migration | Pending |
**Workflow completo:**
```
/proxmox-setup → /pbs-config → /vm-migration
/proxmox-cluster → /proxmox-ha
```
## Workflows
### Workflow 1: Setup Node Proxmox em Hetzner
**Pre-requisites:**
- Servidor dedicado Hetzner contratado
- Rescue mode activo
**Steps:**
1. **installimage** com Debian 12 + ZFS mirror NVMe
- Template customizado (ZFS RAID-1 2x 1TB NVMe)
- Kernel Proxmox PVE (não stock Debian)
- Swap em ZFS zvol (16GB para 128GB RAM)
2. **Proxmox VE 8.x installation**
```bash
apt update && apt install proxmox-ve
```
3. **ZFS tuning**
```bash
# ARC max 16GB, min 4GB
echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
echo "options zfs zfs_arc_min=4294967296" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
```
4. **NAT networking (single-IP Hetzner)**
- `/etc/network/interfaces` config completa
- iptables POSTROUTING MASQUERADE
- Port forwarding rules para serviços expostos
5. **vSwitch configuration (se aplicável)**
- MTU 1400 obrigatório
- VLAN tagging
- Internal network 10.0.0.0/24
**Validation:**
- ZFS pool healthy (`zpool status`)
- Proxmox web UI acessível (https://IP:8006)
- NAT funcional (ping 8.8.8.8 de dentro de VM teste)
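Excerto ilustrativo para o passo 4 (NAT single-IP) — sketch assumindo interface física `eno1`, bridge `vmbr0` e subnet interna 10.0.0.0/24 (nomes de exemplo, não confirmados no plano):

```
# /etc/network/interfaces (excerto) — NAT masquerading single-IP
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # IP forwarding + masquerade via interface pública
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eno1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o eno1 -j MASQUERADE
```

Port forwarding de serviços expostos segue o mesmo padrão com regras DNAT em PREROUTING.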
### Workflow 2: PBS (Proxmox Backup Server) Setup
**Steps:**
1. **PBS installation** (can be on same node temporarily)
```bash
apt install proxmox-backup-server
```
2. **Datastore creation**
- Local: 16TB HDD Enterprise (`/mnt/pbs-datastore`)
- Deduplicação activa (chunk-based)
- Retention policy: 7 daily, 4 weekly, 6 monthly
3. **Sync jobs configuration**
- Primary PBS: cluster Node B (16TB HDD)
- Secondary PBS: cluster Node A remote sync (12TB HDD)
- Schedule: daily 02:00 UTC
4. **Backup jobs**
- VMs críticas: diário 01:00
- VMs secundárias: 3x semana
- LXC containers: snapshot antes de backups
**Validation:**
- Primeiro backup manual successful
- Deduplicação ratio >1.3x
- Restore test de 1 VM não-crítica
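Os passos 1-2 podem ser esboçados com o CLI do PBS — nomes de datastore e paths são ilustrativos; confirmar a sintaxe de prune jobs na versão PBS instalada:

```
# Datastore local no HDD de 16TB (path de exemplo)
proxmox-backup-manager datastore create pbs-datastore /mnt/pbs-datastore
# Retenção 7 daily / 4 weekly / 6 monthly via prune job (sintaxe a validar)
proxmox-backup-manager prune-job create prune-daily \
  --store pbs-datastore --schedule daily \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```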
### Workflow 3: VM Migration (CWP/EasyPanel → Proxmox)
**Strategy:** Phased migration com validation periods (Migration-Plan-OptionA.md)
**Phase 1: EasyPanel Migration (Week 1-2)**
1. Backup EasyPanel containers em easy.descomplicar.pt
2. Criar VM Proxmox para Docker host
3. Migrar containers batch (5-10 de cada vez)
4. Validar health endpoints + DNS
5. Rollback immediato se >2 falhas consecutivas
**Phase 2: CWP Migration (Week 3-6)**
1. **7 dias safety net:** server.descomplicar.pt intacto
2. Criar VM AlmaLinux 8 para CWP
3. Migrar contas CWP batch (rsync + mysql dump)
4. Validar sites (content, DB, email)
5. DNS cutover gradual (TTL 300s)
6. Rollback disponível durante 7 dias
**Phase 3: Cluster Formation (Week 7-8)**
1. Preparar server.descomplicar.pt como Node A
2. `pvecm create cluster-descomplicar` (em Node B, já operacional)
3. `pvecm add <node-b-ip>` em Node A
4. Validar quorum (2 votes)
5. Configurar HA groups
6. Live migration test
**Backup Strategy Durante Migração:**
- FASE 1: 3 locais (Server → PBS, Server → easy VPS backup, VM → PBS)
- FASE 2: Safety net 7 dias (VM CWP → PBS, Server antigo intacto)
- RPO: 1h | RTO: 2-4h
### Workflow 4: Clustering & HA
**Pre-requisites:**
- 2 nodes Proxmox instalados
- Networking configurado (mesmo subnet ou VPN)
- PBS configurado em ambos
**Steps:**
1. **Cluster creation** (em Node B)
```bash
pvecm create cluster-descomplicar
```
2. **Node join** (em Node A)
```bash
pvecm add <node-b-ip>
```
3. **Quorum validation**
```bash
pvecm status # Expected votes: 2
```
4. **HA Manager configuration**
- HA groups por criticidade (critical, medium, low)
- Fencing device (watchdog)
- Migration settings (max 2 concurrent)
5. **Live migration test**
- Migrar VM teste entre nodes
- Validar zero-downtime (ping contínuo)
- Rollback test (failure simulation)
**Validation:**
- Cluster healthy (`pvecm status`)
- HA functional (testar failover forçado)
- Live migration <30s downtime
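Nota sobre quorum em clusters de 2 nós (gotcha conhecido do Corosync, não específico deste plano): com apenas 2 votos, a perda de um nó deixa o cluster sem quorum e bloqueia alterações. Para manutenção de emergência:

```
# Reduzir temporariamente o quorum esperado
# (apenas com um nó em baixo; risco de split-brain)
pvecm expected 1
```

A alternativa mais robusta é um QDevice externo como terceiro voto.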
## Hetzner-Specific Gotchas (CRITICAL)
### MAC Filtering
**Problema:** Hetzner filtra MACs não registados → bridged networking falha
**Solução:**
- Opção A: Pedir virtual MAC no Robot panel (grátis)
- Opção B: NAT masquerading (single-IP setups)
- **NUNCA assumir que bridged networking funciona sem validar**
### MTU 1400 vSwitch
**Problema:** vSwitch Hetzner requer MTU 1400 (não 1500 standard)
**Solução:**
```bash
auto vmbr1
iface vmbr1 inet manual
bridge-ports enp7s0.4000
bridge-stp off
bridge-fd 0
mtu 1400
```
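Para validar o MTU efectivo do vSwitch, um sketch de cálculo do payload ICMP máximo (assume IPv4: 20 bytes de header IP + 8 de ICMP; o IP de destino é exemplo):

```shell
# Payload máximo para ping sem fragmentação com MTU 1400
mtu=1400
payload=$(( mtu - 28 ))   # 28 = header IPv4 (20) + ICMP (8)
echo "ping -M do -s ${payload} 10.0.0.1"
```

Com MTU 1400 o payload é 1372; um valor acima disso com `-M do` deve falhar com "message too long".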
### Gateway Point-to-Point
**Problema:** Gateway Hetzner fora da subnet (/32 setup)
**Solução:**
```bash
auto eno1
iface eno1 inet static
address YOUR_IP/32
gateway GATEWAY_IP
pointopoint GATEWAY_IP
```
### ZFS ARC vs KVM Memory
**Problema:** ZFS ARC compete com VMs por RAM
**Solução:** ARC max 16GB para 128GB RAM (deixa 110GB para VMs)
### Docker Overlay2 em ZFS
**Problema:** ZFS não suporta overlay2 nativo
**Solução:**
- Criar ext4 bind mount: `/var/lib/docker` em ext4 filesystem
- LXC unprivileged com `nesting=1`
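Excerto ilustrativo de `/etc/pve/lxc/<vmid>.conf` combinando as duas soluções (vmid e paths são exemplos, não valores do plano):

```
# Docker em LXC unprivileged sobre ZFS — sketch
unprivileged: 1
features: nesting=1,keyctl=1
lxc.apparmor.profile: unconfined
# /var/lib/docker num filesystem ext4 via bind mount (overlay2 nao funciona sobre ZFS)
mp0: /mnt/docker-ext4,mp=/var/lib/docker
```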
## MCPs Relevantes
- `ssh-unified`: Acesso remoto aos nodes Proxmox
- `desk-crm-v3`: Documentar migration phases em task #1712
- `notebooklm`: KB primária (Gemini 2.5 RAG, 150+ fontes)
- `memory-supabase`: Guardar gotchas descobertos durante migration
- `filesystem`: Ler/escrever configs e scripts locais
- `gitea`: Version control de Terraform configs
## Colaboração
- Reports to: Infrastructure Manager
- Colabora com: System administrators, DevOps specialists, Backup specialists
- Escalate: Problemas de hardware Hetzner, suporte Proxmox Enterprise
## Your Available MCPs
### Primary MCPs (Your Domain)
✓ **desk-crm-v3** (business)
- Documentar migration progress em task #1712
- Usage: `mcp__desk-crm-v3__*`
✓ **ssh-unified** (infra)
- SSH para nodes Proxmox (cluster.descomplicar.pt, server.descomplicar.pt)
- Usage: `mcp__ssh-unified__*`
✓ **notebooklm** (knowledge primária)
- 150+ fontes Proxmox research consolidadas
- Usage: `mcp__notebooklm__notebook_query`
✓ **memory-supabase** (knowledge persistence)
- Guardar gotchas técnicos descobertos
- Usage: `mcp__memory-supabase__*`
### Recommended for Proxmox
- **filesystem** - Configs locais, Terraform files
- **gitea** - Version control de infrastructure code
- **mcp-time** - Scheduling de backups e sync jobs
### All Available (33 total)
moloni, context7, n8n, google-analytics, google-workspace, imap, outline-api, youtube-research, youtube-uploader, wikijs, gsc, dify-kb, mcp-mermaid, mcp-echarts, powerpoint, penpot, pixabay, pexels, tavily, elevenlabs, magic, vimeo, design-systems, replicate, cwp, lighthouse, puppeteer
**Discovery:** Use ToolSearch to find specific tools.
**Example:** `ToolSearch("ssh execute")` finds SSH execution tools.
## Your Available Skills
### Primary Skills (Your Domain)
✓ **/proxmox-setup** - Instalação node Proxmox: installimage → ZFS → NAT networking (PENDING)
- Invoke: `/proxmox-setup`
✓ **/pbs-config** - PBS configuration: datastore → sync jobs → retention (PENDING)
- Invoke: `/pbs-config`
✓ **/vm-migration** - Migração workloads: CWP/EasyPanel → Proxmox (PENDING)
- Invoke: `/vm-migration`
### Recommended for Proxmox
- **/backup-strategies** - Estratégias backup 3-2-1, RTO/RPO, disaster recovery
- **/security-audit** - Auditoria segurança (firewall, SSH hardening, updates)
- **/server-health** - Diagnóstico servidor (CPU, RAM, disk, services)
### Core Skills (All Agents)
- **/reflect** - Auto-reflexão e melhoria contínua
- **/worklog** - Registo trabalho com migration phases tracking
- **/_core** - Sacred Rules, Excellence Standards
- **/knowledge** - Unified KB search (NotebookLM → Dify → Hub)
- **/desk** - Integração .desk-project (task #1712, project #65)
### All Available (54 total)
/billing-check, /crm-ops, /ecommerce, /lead-approach, /orcamento, /saas, /content-marketing-pt, /remotion-video, /seo-content-optimization, /social-media, /video, /ui-ux-pro-max-repo, /brand-voice-generator, /frontend-design, /pptx-generator, /ui-ux-pro-max, /crm-admin, /db-design, /elementor, /mcp-dev, /nextjs, /php-dev, /react-patterns, /woocommerce, /wp-dev, /second-brain-repo, /ads, /doc-sync, /marketing-strategy, /product, /skill-creator, /sop-creator, /calendar-manager, /interview, /time, /today, /research, /youtube, /seo-audit, /seo-report, /metrics, /sdk
**Discovery:** Use the Skill tool to invoke skills.
**Example:** `Skill("skill-name")` invokes the skill.
## Hardware Context (Current Mission)
### New Server (cluster.descomplicar.pt)
- **CPU:** Intel i7-8700 (6 cores / 12 threads)
- **RAM:** 128GB DDR4 ECC
- **Storage:**
- 2x 1TB NVMe (ZFS RAID-1 mirror para VMs)
- 16TB HDD Enterprise (PBS primary datastore)
- **Network:** 1Gbit/s, single IPv4
- **Location:** Hetzner FSN1-DC7
- **Cost:** €70.70/month
### Current Infrastructure (To Migrate)
- **server.descomplicar.pt** - Dedicated, CWP, CentOS 7 (EOL), 39 vhosts
- **easy.descomplicar.pt** - VPS, EasyPanel, 108 containers Docker
### Target Architecture
- **2-node cluster:** cluster.descomplicar.pt (Node B) + server.descomplicar.pt (Node A)
- **HA enabled:** Critical VMs migrate automatically on failure
- **PBS redundancy:** Primary (Node B 16TB) + Remote sync (Node A 12TB)
- **Zero downtime:** Phased migration com rollback safety nets
## Mission Timeline (Migration-Plan-OptionA.md)
- **Week 1-2:** Setup Node B + PBS + EasyPanel migration
- **Week 3-6:** CWP migration com 7 dias validation window
- **Week 7-8:** Cluster formation + HA + cleanup legacy
**Status:** Research phase | Awaiting hardware delivery
**Task:** #1712 (Desk CRM) | **Project:** #65 (Cluster Descomplicar)

View File

@@ -9,6 +9,15 @@ role: USAR PROATIVAMENTE para security, seguranca, compliance, auditoria, cybers
domain: Infra
model: opus
tools: Read, Write, Edit, Bash, Glob, Grep, ToolSearch
# Dependencies
primary_mcps:
- ssh-unified
- desk-crm-v3
recommended_mcps:
- filesystem
- lighthouse
- memory-supabase
skills:
- _core
desk_task: 1515
@@ -44,7 +53,16 @@ Especialista senior em ciberseguranca, compliance regulamentar (GDPR, ISO27001,
- Gerir riscos e implementar controlos de proteccao de dados
- Configurar seguranca de rede, firewalls e sistemas de deteccao
## Datasets Dify (Consultar SEMPRE)
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM (Primario - usar PRIMEIRO)
```
mcp__notebooklm__notebook_query notebook_id:"f9a79b5a-649f-4443-afaf-7ff562b6c2e7" query:"seguranca ciberseguranca vulnerabilidades firewall"
```
### Dify KB (Secundario - se NotebookLM insuficiente)
```
mcp__dify-kb__dify_kb_retrieve_segments dataset:"TI" query:"seguranca ciberseguranca vulnerabilidades firewall"
mcp__dify-kb__dify_kb_retrieve_segments dataset:"Linux" query:"hardening seguranca servidor auditoria"
```

View File

@@ -1,10 +1,62 @@
{
"description": "Dify KB datasets for Infrastructure domain",
"query_tool": "mcp__dify-kb__dify_kb_retrieve_segments",
"datasets": [
{"id": "b2a4d2c5-fe55-412c-bc28-74dbd611905d", "name": "CWP Centos Web Panel", "priority": 1, "document_count": 10, "word_count": 599430},
{"id": "7f63ec0c-6321-488c-b107-980140199850", "name": "TI", "priority": 1, "document_count": 115, "word_count": 29448495},
{"id": "bde4eddd-4618-402c-8bfb-bb947ed9219d", "name": "Linux", "priority": 2, "document_count": 4, "word_count": 336446},
{"id": "cc7f000a-ad86-49b6-b59b-179e65f8a229", "name": "AWS", "priority": 2, "document_count": 14, "word_count": 5125632}
]
}
"description": "Knowledge sources (NotebookLM + Dify KB) for Infrastructure domain",
"sources": {
"notebooklm": {
"description": "NotebookLM - conhecimento curado profundo via Gemini 2.5 RAG (PRIMARIO)",
"query_tool": "mcp__notebooklm__notebook_query",
"notebooks": [
{
"id": "0ded7bd6-69b3-4c76-b327-452396bf7ea7",
"title": "CWP",
"topics": [
"cwp",
"centos",
"web",
"panel"
],
"maps_from_dify": "CWP Centos Web Panel"
},
{
"id": "f9a79b5a-649f-4443-afaf-7ff562b6c2e7",
"title": "Cloud e Infraestrutura TI",
"topics": [],
"maps_from_dify": "TI"
}
]
},
"dify_kb": {
"description": "Dify KB - datasets tematicos (FALLBACK)",
"query_tool": "mcp__dify-kb__dify_kb_retrieve_segments",
"datasets": [
{
"id": "b2a4d2c5-fe55-412c-bc28-74dbd611905d",
"name": "CWP Centos Web Panel",
"priority": 1,
"document_count": 10,
"word_count": 599430
},
{
"id": "7f63ec0c-6321-488c-b107-980140199850",
"name": "TI",
"priority": 1,
"document_count": 115,
"word_count": 29448495
},
{
"id": "bde4eddd-4618-402c-8bfb-bb947ed9219d",
"name": "Linux",
"priority": 2,
"document_count": 4,
"word_count": 336446
},
{
"id": "cc7f000a-ad86-49b6-b59b-179e65f8a229",
"name": "AWS",
"priority": 2,
"document_count": 14,
"word_count": 5125632
}
]
}
}
}

View File

@@ -663,7 +663,7 @@ Consultar para aprofundar conhecimento ou resolver casos específicos:
```javascript
// Exemplo: pesquisar backup incremental MySQL
mcp__dify-kb__dify_kb_retrieve_segments({
mcp__notebooklm__notebook_query, mcp__dify-kb__dify_kb_retrieve_segments({
dataset_id: "7f63ec0c-6321-488c-b107-980140199850",
query: "mysql binlog incremental backup recovery",
top_k: 3
```

View File

@@ -2,8 +2,8 @@
name: cwp-accounts
description: CWP user account management using official /scripts/cwp_api. Create, suspend, remove accounts, fix permissions. Based on official CWP documentation only. Use when user mentions "conta cwp", "user cwp", "criar conta", "suspender conta", "permissões cwp".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -29,6 +29,20 @@ Gestão de contas de utilizador no CWP usando API oficial. **Zero assumptions, z
- [CWP Admin API](https://wiki.centos-webpanel.com/cwp-admin-api)
- [CWP Scripts](https://wiki.centos-webpanel.com/cwp-scripts)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide (seccao User Accounts, Packages, Migration) + Wiki (User & Account Management) - **503KB**
- `CWP-Ferramentas-Desenvolvimento.md` - API Account (add/del/list/susp/unsp) - **82KB**
- `CWP-Guia-do-Revendedor.md` - Gestao de contas reseller - **17KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"conta utilizador criar suspender permissoes"
```
---
## Scripts de Consulta (Apenas Leitura)

View File

@@ -2,8 +2,8 @@
name: cwp-backup
description: CWP backup creation and management using official scripts. Creates user backups, manages backup locations. Based on official CWP documentation only. Use when user mentions "backup cwp", "restaurar cwp", "backup conta", "user backup".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -29,6 +29,19 @@ Gestão de backups no CWP usando scripts oficiais. **Zero assumptions, zero hall
- [CWP Scripts](https://wiki.centos-webpanel.com/cwp-scripts)
- [CWP Backups](https://wiki.centos-webpanel.com/category/backups)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide (seccao Backup and Restore, Google Drive) + Wiki (Backup & Migration) - **503KB**
- `CWP-Guia-do-Utilizador.md` - Painel utilizador (seccao Backup and Restore) - **72KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"backup restore configuracao"
```
---
## Paths Oficiais

View File

@@ -2,8 +2,8 @@
name: cwp-email
description: CWP email management including DKIM, SPF, mail queue. Based on official CWP documentation only. Use when user mentions "email cwp", "dkim", "spf", "mail queue", "postfix cwp".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -30,6 +30,20 @@ Gestão de email no CWP. **Zero assumptions, zero hallucinations** - apenas coma
- [How to install DKIM 2048 bits](https://wiki.centos-webpanel.com/how-to-install-dkim-2048-bits-long-key)
- [CWP Scripts](https://wiki.centos-webpanel.com/cwp-scripts)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide (seccao Email: Accounts, DKIM, SPF, Mail Queue, AntiSpam, Policyd) + Wiki (Email & Postfix) - **503KB**
- `CWP-Guia-do-Utilizador.md` - Painel utilizador (seccao Email Accounts, Auto Responders, Filters, Routing) - **72KB**
- `CWP-Ferramentas-Desenvolvimento.md` - API Email Admin Server - **82KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"email dkim spf postfix mail queue spam"
```
---
## Scripts Oficiais de Email

View File

@@ -2,8 +2,8 @@
name: cwp-php
description: CWP PHP version management. PHP Switcher, Selector, configuration. Based on official CWP documentation only. Use when user mentions "php cwp", "versão php", "php selector", "php switcher".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -30,6 +30,19 @@ Gestão de versões PHP no CWP. **Zero assumptions, zero hallucinations** - apen
- [PHP Version Switcher](https://wiki.centos-webpanel.com/php-version-switcher)
- [CWP Scripts](https://wiki.centos-webpanel.com/cwp-scripts)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide (seccao PHP Settings, Switcher, Selector, FPM, PECL) + Wiki (PHP Configuration) - **503KB**
- `CWP-Guia-do-Utilizador.md` - Painel utilizador (seccao Edit PHP.ini, PHP Selector) - **72KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"php versao selector switcher fpm configuracao"
```
---
## Ferramentas PHP no CWP

View File

@@ -2,8 +2,8 @@
name: cwp-scripts
description: Complete reference for CWP /scripts/ folder. All official CLI scripts documented. Based on official CWP documentation only. Use when user mentions "cwp scripts", "scripts cwp", "/scripts/", "comando cwp".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -28,6 +28,19 @@ Todos os scripts oficiais documentados do CWP. **Zero assumptions, zero hallucin
- [CWP Scripts](https://wiki.centos-webpanel.com/cwp-scripts)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide + Wiki completo (todas as seccoes) - **503KB**
- `CWP-Ferramentas-Desenvolvimento.md` - API Manager completo (53 endpoints) - **82KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"scripts comandos cwp api"
```
---
## Como Executar

View File

@@ -2,8 +2,8 @@
name: cwp-security
description: CWP security management with CSF firewall. Block/unblock IPs, configure firewall, security hardening. Based on official CWP documentation only. Use when user mentions "csf", "firewall cwp", "bloquear ip", "segurança cwp", "ban ip".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -30,6 +30,18 @@ Gestão de segurança no CWP usando CSF/LFD. **Zero assumptions, zero hallucinat
- [CSF/LFD Firewall configuration](https://wiki.centos-webpanel.com/csflfd-firewall-configuration)
- [CWP Security Instructions](https://wiki.centos-webpanel.com/cwp-security-instructions)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide (seccao Security: CSF, Mod Security, Maldet, RKHunter, Lynis, Symlink, Shell Access) + Wiki (Firewall, SSL & Security) - **503KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"csf firewall seguranca malware bloqueio ip"
```
---
## Paths de Configuração


@@ -2,8 +2,8 @@
name: cwp-ssl
description: CWP AutoSSL management using native acme.sh. Manages SSL certificates, renewals, and troubleshooting. Based on official CWP documentation only. Use when user mentions "ssl cwp", "autossl", "certificado ssl", "renovar ssl", "acme.sh".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -29,6 +29,20 @@ Gestão de certificados SSL no CWP usando acme.sh nativo. **Zero assumptions, ze
- [AutoSSL CWP Wiki](https://docs.control-webpanel.com/docs/admin-guide/ssl/autossl)
- [acme.sh Documentation](https://wiki.centos-webpanel.com/)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide + Wiki (seccoes SSL & Security, WebServers) - **503KB**
- `CWP-Guia-do-Utilizador.md` - Painel utilizador (seccao AutoSSL) - **72KB**
- `CWP-Ferramentas-Desenvolvimento.md` - API AutoSSL (add/del/list/renew) - **82KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"ssl certificado autossl renovacao"
```
---
## Paths Oficiais (Documentados)


@@ -2,8 +2,8 @@
name: cwp-webserver
description: CWP webserver management with official API. Apache, Nginx, rebuild configurations, restart services. Based on official CWP documentation only. Use when user mentions "apache cwp", "nginx cwp", "webserver cwp", "vhost cwp".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 70
version: 1.1.0
quality_score: 72
user_invocable: true
desk_task: null
---
@@ -30,6 +30,19 @@ Gestão de webservers no CWP. **Zero assumptions, zero hallucinations** - apenas
- [CWP Scripts](https://wiki.centos-webpanel.com/cwp-scripts)
- [WebServers Update](https://wiki.centos-webpanel.com/webservers-update)
### Documentação Hub (Consultar SEMPRE)
**Manuais locais** (`Hub/06-Operacoes/Documentacao/Manuais/CWP/`):
- `CWP-Manual-Completo.md` - Admin Guide (seccao WebServers Settings, Apache, Nginx) + Wiki (WebServers & Apache) - **503KB**
- `CWP-Guia-do-Utilizador.md` - Painel utilizador (seccao Domains, Redirect) - **72KB**
**Quick Reference:** `Hub/06-Operacoes/Documentacao/Quick-Reference/QR-CWP.md`
**NotebookLM (pesquisa AI sobre toda a documentacao CWP):**
```
mcp__notebooklm__notebook_query notebook_id:"0ded7bd6-69b3-4c76-b327-452396bf7ea7" query:"apache nginx webserver configuracao rebuild"
```
---
## API WebServers (Documentada)


@@ -0,0 +1,180 @@
---
name: infra-check
description: >
MCP Health Check e auditoria de despesas. Sabado: check completo 9 MCPs + auditoria despesas completa. Domingo: check resumido top 5 + despesas sem PDF. Use when "infra check", "mcp health", "health check", "auditoria despesas", "verificar mcps".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 85
user_invocable: true
category: infrastructure
tags: [mcp, health-check, infrastructure, audit, expenses, monitoring]
desk_task: 1710
desk_project: 65
allowed-tools: Read, Write, mcp__desk-crm-v3, mcp__google-workspace, mcp__ssh-unified, mcp__filesystem, mcp__mcp-time, mcp__memory-supabase, WebFetch
mcps: desk-crm-v3, google-workspace, ssh-unified, filesystem, mcp-time, memory-supabase
dependencies:
mcps: [desk-crm-v3, ssh-unified, filesystem, mcp-time]
triggers:
- "User asks about MCP health"
- "User mentions 'health check', 'infra check', 'mcp status'"
- "Invoked by /today orchestrator on Saturday/Sunday"
---
# /infra-check v1.0
MCP Health Check + Auditoria de Despesas.
---
## MCPs por Prioridade
**P1 - Criticos (bloqueiam trabalho):**
- desk-crm-v3 (CRM)
- filesystem (ficheiros locais)
- mcp-time (data/hora)
**P2 - Importantes (degradam workflow):**
- google-workspace (email/calendar)
- ssh-unified (servidores)
- memory-supabase (memoria)
**P3 - Uteis:**
- gitea (repos)
- moloni (facturacao)
- dify-kb (knowledge base)
---
## Protocolo Sabado (Completo)
### 1. MCP Health Check (9 MCPs)
```
Para cada MCP, executar teste simples:
| MCP | Teste | Timeout |
|-----|-------|---------|
| desk-crm-v3 | get_tickets(limit=1) | 5s |
| filesystem | list_directory(~) | 2s |
| mcp-time | current_time | 2s |
| google-workspace | calendar_get_events(hoje) | 5s |
| ssh-unified | ssh_list_servers | 3s |
| memory-supabase | search_memories("test") | 5s |
| gitea | list_my_repos | 5s |
| moloni | getall (companies) | 5s |
| dify-kb | list_datasets | 5s |
Resultado por MCP:
- OK - respondeu em <2s
- LENTO - respondeu em >2s
- FALHA - timeout ou erro
```
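A classificação acima (OK/LENTO/FALHA) pode ser esboçada em shell — a função e os nomes de variáveis são ilustrativos, não fazem parte da skill:

```shell
#!/bin/sh
# Classifica o resultado de um teste MCP (esboço).
# status      = exit code do teste
# duration_ms = tempo de resposta medido
# timeout_ms  = timeout definido para o MCP
classify_mcp() {
  status=$1; duration_ms=$2; timeout_ms=$3
  if [ "$status" -ne 0 ] || [ "$duration_ms" -ge "$timeout_ms" ]; then
    echo "FALHA"   # timeout ou erro
  elif [ "$duration_ms" -gt 2000 ]; then
    echo "LENTO"   # respondeu em >2s
  else
    echo "OK"      # respondeu em <2s
  fi
}

classify_mcp 0 1500 5000   # → OK
```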
### 2. Auditoria Despesas Completa
```
2a. DESPESAS SEM PDF (ultimos 60 dias):
mcp__ssh-unified__ssh_execute(server="desk", command="
mysql -u ealmeida -p'9qPRdCGGqM4o' ealmeida_desk24 -e \"
SELECT e.id, e.expense_name, e.amount, e.date, e.note,
(SELECT COUNT(*) FROM tblfiles f WHERE f.rel_id = e.id AND f.rel_type = 'expense') as pdfs
FROM tblexpenses e
WHERE e.id >= 770 AND e.date >= DATE_SUB(CURDATE(), INTERVAL 60 DAY)
HAVING pdfs = 0
ORDER BY e.date DESC;
\"
")
Excluir: AT (cat 15), Salarios (cat 22), SS (cat 25) - nao tem recibo
2b. DESPESAS SEM CATEGORIA:
WHERE id >= 770 AND (category = 0 OR category IS NULL)
2c. VALORES ANOMALOS (>500 EUR ou negativos):
WHERE id >= 770 AND (amount > 500 OR amount < 0)
2d. RECONCILIACAO MENSAL (apenas 1o sabado do mes, DAY(CURDATE()) <= 7):
- Contar despesas do mes anterior por categoria
- Comparar com mes homologo
- Alertar se variacao >30%
2e. FORNECEDORES RECORRENTES em falta:
Verificar se fornecedores mensais tem despesa este mes:
Anthropic, Cursor, Hetzner, Google One, ElasticEmail, Canva
Alertar se falta apos dia 10 do mes
```
### 3. Verificar Gateway
```
WebFetch("https://gateway.descomplicar.pt/health")
Se falha → alerta critico (todos os MCPs gateway afectados)
```
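O teste do gateway pode ser esboçado com `curl` e timeout explícito (as mensagens de output são ilustrativas; o endpoint é o indicado acima):

```shell
#!/bin/sh
# Esboço: verificar o gateway com timeout de 5s.
# -f: falha em HTTP >=400; -sS: silencioso mas mostra erros.
check_gateway() {
  if curl -fsS --max-time 5 "$1" >/dev/null 2>&1; then
    echo "Gateway OK"
  else
    echo "ALERTA CRITICO: gateway inacessivel"
  fi
}
```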
---
## Protocolo Domingo (Resumido)
### 1. MCP Health Check (top 5)
```
Apenas P1 + P2:
- desk-crm-v3, filesystem, mcp-time, google-workspace, ssh-unified
Teste rapido, timeout 5s
Apenas verificar se responde
```
### 2. Despesas sem PDF (30 dias)
```
Mesma query do sabado mas INTERVAL 30 DAY
SE >3 sem PDF (excl. AT/Salario/SS) → alertar
SE 0 → "Todas as despesas recentes tem documento"
```
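A regra de domingo (>3 sem PDF → alertar; 0 → mensagem de OK) pode ser esboçada assim — o valor `sem_pdf` viria da query MySQL acima; o nome da função é hipotético:

```shell
#!/bin/sh
# Esboço da regra de alerta de domingo.
alerta_despesas_sem_pdf() {
  sem_pdf=$1
  if [ "$sem_pdf" -eq 0 ]; then
    echo "Todas as despesas recentes tem documento"
  elif [ "$sem_pdf" -gt 3 ]; then
    echo "ALERTA: $sem_pdf despesas sem PDF nos ultimos 30 dias"
  else
    echo "OK: $sem_pdf despesas sem PDF (dentro do limite)"
  fi
}
```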
---
## Output
```markdown
## MCP Health Check ([Sabado/Domingo])
[status] X/Y MCPs operacionais
[alertas por MCP com problema]
## Auditoria Despesas ([Sabado/Domingo])
### Despesas sem documento (N) [se sabado]
| # | Fornecedor | Valor | Data | Accao |
### Fornecedores recorrentes [se sabado]
- [Fornecedor]: OK ou FALTA
### Resumo mensal [se 1o sabado]
- Total [Mes]: X EUR (Y despesas)
- vs [Mes anterior]: +/-Z%
[ou versao resumida se domingo]
```
---
## Troubleshooting Automatico
```
Se MCP falha:
1. Verificar gateway: WebFetch("https://gateway.descomplicar.pt/health")
2. Se gateway OK mas MCP falha → problema no MCP especifico → documentar
3. Se gateway falha → problema de rede/servidor mcp-hub → alerta critico
```
---
## Anti-Patterns
- NUNCA executar health check em dias uteis (reservado para Sab/Dom via /today)
- NUNCA ignorar falha de MCP P1 (critico)
- SEMPRE incluir accao sugerida para cada problema encontrado
---
*Skill v1.0.0 | 04-03-2026 | Descomplicar®*



@@ -15,6 +15,8 @@ allowed-tools: Grep
Skill para criação, configuração e gestão de servidores MCP customizados.
> **Regra #48:** Novos MCPs devem ser desenvolvidos no **container dev** (`server:"dev"`, path `/root/Dev/<nome-mcp>`). O path `/home/ealmeida/mcp-servers/` é para MCPs já em produção. Desenvolvimento inicial -> `/root/Dev/` -> depois mover para `mcp-servers/` no deploy final.
---
## Comandos
@@ -1031,18 +1033,42 @@ O agente `mcp-protocol-developer` é invocado para:
## Datasets Dify
```
mcp__dify-kb__dify_kb_retrieve_segments dataset:"MCP Servers" query:"..."
mcp__notebooklm__notebook_query, mcp__dify-kb__dify_kb_retrieve_segments dataset:"MCP Servers" query:"..."
mcp__dify-kb__dify_kb_retrieve_segments dataset:"Claude Code" query:"..."
mcp__dify-kb__dify_kb_retrieve_segments dataset:"Desenvolvimento de Software" query:"..."
```
---
## Referências
## Referências e Documentação
### Procedimentos Obrigatórios (D7-Tecnologia)
**SEMPRE consultar antes de criar/modificar MCPs:**
- **[PROC-MCP-Desenvolvimento.md](file:///media/ealmeida/Dados/Hub/06-Operacoes/Procedimentos/D7-Tecnologia/MCP/PROC-MCP-Desenvolvimento.md)** - Guia oficial v2.3: Regra de Ouro MCP, capabilities obrigatórias, validação pre-deploy, fallback enterprise
- **[PROC-MCP-Troubleshooting-Erro-471.md](file:///media/ealmeida/Dados/Hub/06-Operacoes/Procedimentos/D7-Tecnologia/MCP/PROC-MCP-Troubleshooting-Erro-471.md)** - Debug erro 471 (capabilities incompletas, too many requests)
- **[PROC-MCP-Google-Auth.md](file:///media/ealmeida/Dados/Hub/06-Operacoes/Procedimentos/D7-Tecnologia/MCP/PROC-MCP-Google-Auth.md)** - Autenticação OAuth Google Workspace, refresh tokens, troubleshooting
- **[PROC-MCP-Session-Recovery.md](file:///media/ealmeida/Dados/Hub/06-Operacoes/Procedimentos/D7-Tecnologia/MCP/PROC-MCP-Session-Recovery.md)** - Recuperação de sessões MCP após crash
- **[PROC-MCP-Desk-Timer.md](file:///media/ealmeida/Dados/Hub/06-Operacoes/Procedimentos/D7-Tecnologia/MCP/PROC-MCP-Desk-Timer.md)** - Workflow timer Desk CRM (atribuição e status obrigatórios)
### Quick Reference (ver PROC-MCP-Desenvolvimento.md)
- **ESLint + Prettier:** Ver PROC secção "Scripts package.json"
- **Husky pre-commit:** Ver PROC secção "Husky + Lint-Staged"
- **Checklist novo MCP:** Ver PROC secção "Checklist Desenvolvimento MCP"
- **Schema BD:** Ver PROC secção "Documentação Schema BD"
- **Validação pre-deploy:** Ver PROC secção "Validação Obrigatória"
### Agente Especializado
- **Agent:** `mcp-protocol-developer` - Desenvolvimento complexo, debug, optimização, recursos avançados
### Documentação Técnica
- [MCP SDK Documentation](https://modelcontextprotocol.io/)
- [[AGT-Sistema-Agentes|mcp-protocol-developer]]
- [[Stack/Claude Code/MCPs/|MCPs Documentados]]
- [[D7-Tecnologia/INDEX|Ver Todos Procedimentos D7]]
---


@@ -0,0 +1,497 @@
---
name: pbs-config
description: Configuração Proxmox Backup Server (PBS) - datastore creation, retention policies, sync jobs, remote targets. Use when user mentions "pbs setup", "proxmox backup", "configure pbs", "backup server".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 75
user_invocable: true
desk_task: 1712
allowed-tools: Task, Read, Bash
dependencies:
- ssh-unified
- notebooklm
---
# PBS Config
Configuração completa de Proxmox Backup Server (PBS) com datastores, políticas de retenção, sync jobs e estratégia de backup 3-2-1.
## Quando Usar
- Configurar PBS após instalação Proxmox
- Criar datastores para backups
- Definir retention policies (7 daily, 4 weekly, 6 monthly)
- Configurar remote sync entre nodes PBS
- Implementar estratégia 3-2-1 backup
## Sintaxe
```bash
/pbs-config <datastore-path> [--retention 7:4:6] [--remote-sync node2] [--dedup on]
```
## Exemplos
```bash
# PBS básico com retention padrão
/pbs-config /mnt/pbs-datastore
# PBS com retention custom e remote sync
/pbs-config /mnt/pbs-datastore --retention 10:5:12 --remote-sync pbs-node2.descomplicar.pt
# PBS sem deduplicação (se storage não suporta)
/pbs-config /mnt/pbs-main --dedup off
```
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM Proxmox Research
```bash
mcp__notebooklm__notebook_query \
notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" \
query:"proxmox backup server pbs datastore retention deduplication"
```
### Hub Docs
- Hub/05-Projectos/Cluster Descomplicar/Research/Proxmox-VE/Guia-Definitivo-Proxmox-Hetzner.md
- Módulo 3: Storage e Backups (PBS, estratégia 3-2-1, deduplicação)
## Workflow Completo
### Fase 1: PBS Installation (se ainda não instalado)
**1.1 Verificar se PBS já está instalado**
```bash
dpkg -l | grep proxmox-backup-server
# Se não instalado:
apt update
apt install proxmox-backup-server
```
**1.2 Aceder PBS Web UI**
```
https://SERVER_IP:8007
User: root
Password: (root password do servidor)
```
**1.3 Configuração Inicial**
- Hostname
- DNS servers
- Time zone (Europe/Lisbon)
### Fase 2: Datastore Creation
**2.1 Preparar Storage**
**Para ZFS (RECOMENDADO):**
```bash
# Já criado em /proxmox-setup:
# zfs create rpool/pbs-datastore
# Verificar
zfs list | grep pbs-datastore
# Optimizar para backup workload
zfs set compression=lz4 rpool/pbs-datastore
zfs set dedup=off rpool/pbs-datastore # Dedup no PBS, não no ZFS
zfs set recordsize=1M rpool/pbs-datastore # Large files
```
**Para ext4 (HDD 16TB):**
```bash
# Particionar HDD
parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary ext4 0% 100%
# Formatar
mkfs.ext4 /dev/sda1
# Montar
mkdir -p /mnt/pbs-datastore
echo "/dev/sda1 /mnt/pbs-datastore ext4 defaults 0 2" >> /etc/fstab
mount -a
```
**2.2 Criar Datastore via CLI**
```bash
proxmox-backup-manager datastore create main-store /mnt/pbs-datastore
# Verificar
proxmox-backup-manager datastore list
```
**2.3 Configurar Retention Policy**
```bash
# 7 daily, 4 weekly, 6 monthly (padrão)
proxmox-backup-manager datastore update main-store \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--keep-yearly 3
```
**Explicação Retention:**
- `keep-daily 7`: Mantém 7 backups diários
- `keep-weekly 4`: Mantém 4 backups semanais (1 por semana)
- `keep-monthly 6`: Mantém 6 backups mensais (1 por mês)
- `keep-yearly 3`: Mantém 3 backups anuais
**Gestão automática:** PBS elimina backups antigos baseado nestas regras.
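O efeito de `keep-daily` pode ser ilustrado com um esboço simples (datas ISO ordenam lexicograficamente; a lógica real do PBS considera também os buckets weekly/monthly/yearly):

```shell
#!/bin/sh
# Esboço: com keep-daily=N, só os N backups diários mais recentes sobrevivem ao prune.
keep_daily() {
  n=$1; shift
  printf '%s\n' "$@" | sort -r | head -n "$n"
}

keep_daily 2 2026-01-01 2026-01-03 2026-01-02   # → 2026-01-03 e 2026-01-02
```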
### Fase 3: PBS Users & Permissions
**3.1 Criar User para PVE Backups**
```bash
# User dedicado para Proxmox enviar backups
proxmox-backup-manager user create pve-backup@pbs \
--email admin@descomplicar.pt
# Password
proxmox-backup-manager user update pve-backup@pbs --password
# Atribuir permissões no datastore
proxmox-backup-manager acl update /datastore/main-store DatastoreBackup \
  --auth-id pve-backup@pbs
```
**3.2 Criar API Token (para automação)**
```bash
# Token para scripts/Terraform
proxmox-backup-manager user token create pve-backup@pbs automation-token \
--output-format json
# Guardar token de forma segura
# Formato: pve-backup@pbs!automation-token=<secret>
```
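Um esboço de uso do token em automação, assumindo as variáveis de ambiente `PBS_REPOSITORY`/`PBS_PASSWORD` lidas pelo `proxmox-backup-client` (SERVER_IP e o secret são placeholders):

```shell
#!/bin/sh
# Esboço: autenticação por token sem password interactiva.
# Formato do repositório: user@realm!token-name@host:datastore
export PBS_REPOSITORY='pve-backup@pbs!automation-token@SERVER_IP:main-store'
export PBS_PASSWORD='<token-secret>'

# proxmox-backup-client list   # listaria os snapshots usando o token
```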
### Fase 4: Configure PVE to Use PBS
**4.1 Adicionar PBS Storage em Proxmox VE**
Via Web UI (Datacenter → Storage → Add → Proxmox Backup Server):
- ID: `pbs-main`
- Server: `SERVER_IP` (ou hostname se cluster)
- Datastore: `main-store`
- Username: `pve-backup@pbs`
- Password: (password criado)
- Fingerprint: (auto-detect)
Via CLI:
```bash
pvesm add pbs pbs-main \
--server SERVER_IP \
--datastore main-store \
--username pve-backup@pbs \
--password <password>
```
**4.2 Verificar Conectividade**
```bash
pvesm status | grep pbs-main
```
### Fase 5: Backup Jobs (PVE)
**5.1 Criar Backup Job para VMs Críticas**
Via Web UI (Datacenter → Backup → Add):
- Storage: `pbs-main`
- Schedule: Daily 01:00
- Mode: Snapshot (live backup)
- Compression: zstd
- Notification: email admin@descomplicar.pt
Via CLI:
```bash
# Backup diário de todas VMs às 01:00
vzdump --storage pbs-main --mode snapshot --compress zstd --all 1
```
**5.2 Agendar via cron (alternativa)**
```bash
# /etc/cron.d/pve-backup-critical
0 1 * * * root vzdump --storage pbs-main --vmid 100,101,102 --mode snapshot --compress zstd
```
**5.3 Backup Seletivo**
```bash
# VMs críticas: diário
# VMs secundárias: 3x semana (Seg, Qua, Sex)
0 1 * * 1,3,5 root vzdump --storage pbs-main --vmid 200,201,202 --mode snapshot --compress zstd
```
### Fase 6: Remote Sync (2-Node Cluster)
**Setup para cluster:** PBS em Node B (primary) + PBS em Node A (secondary)
**6.1 Configurar Remote em PBS Secondary (Node A)**
Via Web UI PBS Node A (Configuration → Remote):
- Name: `pbs-node-b`
- Host: `<node-b-ip>` ou `cluster.descomplicar.pt`
- Port: 8007
- Auth ID: `pve-backup@pbs`
- Password: (password)
- Fingerprint: (auto-detect)
**6.2 Criar Sync Job**
Via Web UI PBS Node A (Configuration → Sync Jobs → Add):
- Remote: `pbs-node-b`
- Remote Datastore: `main-store`
- Local Datastore: `secondary-store`
- Schedule: Daily 03:00 (após backups)
- Remove vanished: Yes (sync deletes)
Via CLI em Node A:
```bash
proxmox-backup-manager sync-job create sync-from-node-b \
--remote pbs-node-b \
--remote-store main-store \
--store secondary-store \
  --schedule "03:00" \
--remove-vanished true
```
**6.3 Testar Sync Manual**
```bash
proxmox-backup-manager sync-job run sync-from-node-b
```
### Fase 7: Monitoring & Maintenance
**7.1 Verificar Deduplicação**
```bash
# Ver estatísticas datastore
proxmox-backup-manager datastore status main-store
# Ratio deduplicação (típico 1.3-2.5x)
```
**7.2 Garbage Collection**
```bash
# Liberar espaço de backups removidos (retention)
proxmox-backup-manager garbage-collection start main-store
# Agendar GC semanal (Domingo 02:00)
# Via Web UI: Datastore → main-store → Prune & GC
```
**7.3 Verificar Disk Usage**
```bash
df -h /mnt/pbs-datastore
# ZFS
zfs list -o name,used,available,refer rpool/pbs-datastore
```
**7.4 Alertas Email**
```bash
# Configurar notificações
# Via Web UI: Configuration → Notifications
# SMTP server: mail.descomplicar.pt
# Alertas: disk usage >80%, backup failures
```
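O alerta de disk usage >80% pode ser esboçado assim (limiar e mensagens ilustrativos; assume `df` GNU com `--output`):

```shell
#!/bin/sh
# Esboço: alertar se a utilização do datastore exceder 80%.
check_datastore_usage() {
  pct=$(df --output=pcent "$1" | tail -1 | tr -dc '0-9')
  if [ "$pct" -gt 80 ]; then
    echo "ALERTA: datastore a ${pct}% de utilizacao"
  else
    echo "OK: ${pct}% de utilizacao"
  fi
}
```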
### Fase 8: Restore Procedures (Testing)
**8.1 Restore VM Teste**
Via Web UI PVE:
- Storage → pbs-main → Backups
- Seleccionar VM backup
- Restore → New VM ID (999)
- Start após restore
**8.2 Restore via CLI**
```bash
# Listar backups disponíveis
proxmox-backup-client list --repository pve-backup@pbs@SERVER_IP:main-store
# Restore VM 100
qmrestore pbs-main:backup/vm/100/YYYY-MM-DD... 999
```
**8.3 Validar Restore**
```bash
qm start 999
# Verificar VM boota correctamente
# Testar serviço crítico
# Shutdown e remover VM teste
qm stop 999 && qm destroy 999
```
**CRITICAL:** Testar restore ANTES de considerar backup strategy operacional.
## Output Summary
```
✅ PBS configurado: SERVER_IP:8007
💾 Datastore:
- Name: main-store
- Path: /mnt/pbs-datastore
- Size: 16TB (HDD) ou 1TB (NVMe)
- Deduplication: ON (PBS chunk-based)
- Compression: LZ4 (ZFS) + zstd (PBS)
📋 Retention Policy:
- Daily: 7 backups
- Weekly: 4 backups
- Monthly: 6 backups
- Yearly: 3 backups
- Auto-prune: Yes
🔐 Access:
- User: pve-backup@pbs
- Token: automation-token (para CI/CD)
- Role: DatastoreBackup
⏰ Backup Schedule:
- Critical VMs (100-102): Diário 01:00
- Secondary VMs (200-202): Seg/Qua/Sex 01:00
- GC: Domingo 02:00
🔄 Remote Sync (se cluster):
- Source: pbs-node-b (Node B)
- Target: secondary-store (Node A)
- Schedule: Diário 03:00
- Remove vanished: Yes
📊 Expected Metrics:
- Dedup ratio: 1.5-2.5x
- Compression ratio: 1.3-1.8x
- Backup speed: 100-300 MB/s (depende network/disk)
- Restore RTO: 2-4h (para VM 100GB)
✅ Validation Tests:
✓ Primeiro backup successful
✓ Restore test VM 999
✓ Dedup ratio >1.3x
✓ Remote sync (se cluster)
✓ Email notifications working
📋 Next Steps:
1. Configurar backup VMs production (/vm-migration)
2. Criar off-site backup (S3/Wasabi/Hetzner Storage Box)
3. Documentar restore procedures em PROC-Backup-Restore.md
4. Testar disaster recovery completo
5. Monitorizar disk usage PBS (alertar >80%)
⏱️ Setup time: ~30min (vs 1h manual)
```
## Estratégia 3-2-1 Backup
**Implementation para Cluster Descomplicar:**
**3 cópias:**
1. **Original:** VMs em Node A (produção)
2. **Backup primário:** PBS Node B (16TB HDD)
3. **Backup secundário:** PBS Node A remote sync (12TB HDD)
**2 médias diferentes:**
1. NVMe (VMs produção)
2. HDD Enterprise (PBS datastores)
**1 off-site:**
- **Opção A:** Hetzner Storage Box (rsync daily)
- **Opção B:** S3-compatible (Wasabi/Backblaze)
- **Opção C:** PBS em VPS externo
**RPO:** 1h (backups hourly se critical)
**RTO:** 2-4h (restore + validação)
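A Opção A pode ser esboçada com um rsync diário via cron — host, user e path do Storage Box são hipotéticos:

```shell
# /etc/cron.d/pbs-offsite (exemplo hipotético; substituir user/host reais do Storage Box)
30 3 * * * root rsync -az --delete /mnt/pbs-datastore/ u123456@u123456.your-storagebox.de:pbs-offsite/
```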
## PBS Advanced Features
### Verification Jobs
```bash
# Verificar integridade backups
proxmox-backup-manager verify-job create verify-main \
--store main-store \
  --schedule "sun 04:00"  # Domingo 04:00 (formato calendar event, não cron)
```
### Namespace Organization
```bash
# Organizar backups por tipo
proxmox-backup-manager namespace create main-store/production
proxmox-backup-manager namespace create main-store/testing
proxmox-backup-manager namespace create main-store/archived
```
### Tape Backup (futuro)
- PBS suporta LTO tape
- Para compliance de longo prazo
- Cold storage
## Troubleshooting
### Backup failing: "no space"
```bash
# Verificar disk usage
df -h /mnt/pbs-datastore
# Run GC manual
proxmox-backup-manager garbage-collection start main-store
# Ajustar retention (reduzir keeps)
proxmox-backup-manager datastore update main-store --keep-daily 5
```
### Remote sync not working
```bash
# Verificar conectividade
ping <remote-pbs-ip>
# Testar autenticação
curl -k https://<remote-pbs-ip>:8007/api2/json/access/ticket \
-d "username=pve-backup@pbs&password=<password>"
# Logs
journalctl -u proxmox-backup -f
```
### Dedup ratio baixo (<1.2x)
```bash
# Verificar se VMs têm dados compressíveis
# VMs com random data (encrypted) não deduplica bem
# Verificar chunk size (padrão 4MB adequado)
proxmox-backup-manager datastore show main-store
```
## References
- **NotebookLM:** 276ccdde-6b95-42a3-ad96-4e64d64c8d52
- **PBS Docs:** https://pbs.proxmox.com/docs/
- **Guia Hub:** Hub/05-Projectos/Cluster Descomplicar/Research/Proxmox-VE/Guia-Definitivo-Proxmox-Hetzner.md (Módulo 3)
---
**Versão:** 1.0.0 | **Autor:** Descomplicar® | **Data:** 2026-02-14
## Metadata (Desk CRM Task #1712)
```
Projeto: Cluster Proxmox Descomplicar (#65)
Tarefa: Migração Infraestrutura (#1712)
Tags: pbs, backup, retention, deduplication, sync
Status: Research → Implementation
```
---
/** @author Descomplicar® | @link descomplicar.pt | @copyright 2026 */
---
## Quando NÃO Usar
- Para backups ad-hoc manuais (usar vzdump directo)
- Para PBS já configurado (usar troubleshooting guides)
- Para restore procedures (criar skill específica se necessário)


@@ -0,0 +1,478 @@
---
name: proxmox-cluster
description: Formar cluster Proxmox 2+ nodes com Corosync e Quorum. Use when user mentions "create cluster", "proxmox cluster", "pvecm", "join node", "cluster formation".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 75
user_invocable: true
desk_task: 1712
allowed-tools: Task, Read, Bash
dependencies:
- ssh-unified
- notebooklm
- proxmox-setup
---
# Proxmox Cluster
Formar cluster Proxmox 2+ nodes com Corosync, Quorum e preparação para High Availability.
## Quando Usar
- Formar cluster 2-node após migration complete
- Adicionar node a cluster existente
- Configurar quorum e fencing
- Preparar para HA (skill /proxmox-ha)
## Sintaxe
```bash
/proxmox-cluster create --node-a <ip-hostname> --node-b <ip-hostname> [--cluster-name]
/proxmox-cluster join --node <ip> --cluster <existing-cluster-ip>
```
## Exemplos
```bash
# Criar cluster 2-node
/proxmox-cluster create --node-a server.descomplicar.pt --node-b cluster.descomplicar.pt --cluster-name descomplicar
# Adicionar 3º node
/proxmox-cluster join --node pve-node3.descomplicar.pt --cluster cluster.descomplicar.pt
```
## Knowledge Sources
### NotebookLM
```bash
mcp__notebooklm__notebook_query \
notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" \
query:"proxmox cluster corosync quorum pvecm ha"
```
## Workflow Completo
### Pre-Requisites
**1. Verificar Nodes Prontos**
```bash
# Ambos nodes devem ter:
- Proxmox VE 8.x instalado (/proxmox-setup)
- Networking configurado (NAT ou vSwitch)
- PBS configurado (/pbs-config)
- Mesma versão PVE
- Hostnames únicos
- Conectividade IP entre nodes
```
**2. Validar Conectividade**
```bash
# De Node A → Node B
ping -c 3 <node-b-ip>
ssh root@<node-b-ip> pveversion
# De Node B → Node A
ping -c 3 <node-a-ip>
ssh root@<node-a-ip> pveversion
```
**3. Sincronizar Time (CRITICAL)**
```bash
# Ambos nodes devem ter NTP configurado
timedatectl status
# Instalar chrony se necessário
apt install chrony
systemctl enable --now chronyd
```
**4. Backup Pre-Cluster**
```bash
# Backup configs de ambos nodes
tar -czf /tmp/pre-cluster-backup.tar.gz /etc/pve /etc/network
# Transfer para PBS
```
### Fase 1: Cluster Creation (Node B)
**1.1 Criar Cluster em Node B (Primeiro Node)**
```bash
# SSH to Node B (cluster.descomplicar.pt)
ssh root@<node-b-ip>
# Criar cluster
pvecm create descomplicar
# Verificar
pvecm status
# Expected output:
# Cluster information
# Name: descomplicar
# Nodes: 1
# Expected votes: 1
```
**1.2 Obter Cluster Join Info**
```bash
# Obter join information (para Node A)
pvecm nodes
# Anotar o IP e o nome do cluster
```
### Fase 2: Join Node A ao Cluster
**2.1 Join Node A**
```bash
# SSH to Node A (server.descomplicar.pt)
ssh root@<node-a-ip>
# Join cluster (fornecer IP do Node B)
pvecm add <node-b-ip>
# Durante processo:
# - Solicita password root do Node B
# - Transfere configuração cluster
# - Copia /etc/pve/
# - Reinicia serviços cluster
# AGUARDAR ~2-5min
```
**2.2 Verificar Join Successful**
```bash
# Em Node A:
pvecm status
# Expected output:
# Nodes: 2
# Expected votes: 2
# Quorum: 2 (Active)
# Listar nodes
pvecm nodes
# Should show both nodes
```
**2.3 Verificar Replicação /etc/pve/**
```bash
# Em Node A:
ls -lah /etc/pve/
# Should see:
# - nodes/ (ambos nodes)
# - qemu-server/ (VMs)
# - lxc/ (containers)
# - storage.cfg (shared)
# Teste: Criar VM em Node A via Web UI
# Verificar aparece em Node B também
```
### Fase 3: Quorum Configuration
**3.1 Verificar Quorum Votes**
```bash
pvecm status | grep "Expected votes"
# 2-node cluster:
# Expected votes: 2
# Quorum: 2
# CRITICAL: Com 2 nodes, perder 1 node = perder quorum
```
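A limitação do 2-node cluster resulta da fórmula do quórum (maioria estrita dos votos esperados). Esboço:

```shell
#!/bin/sh
# Quórum = floor(votos_esperados / 2) + 1 (maioria estrita).
quorum_needed() { echo $(( $1 / 2 + 1 )); }

quorum_needed 2   # → 2 (perder 1 node num cluster 2-node bloqueia o quorum)
quorum_needed 3   # → 2 (com QDevice ou 3º node, 1 falha é tolerada)
```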
**3.2 Configurar QDevice (Opcional - 2-node clusters)**
**Problema 2-node cluster:** Se 1 node falha, cluster perde quorum (não pode fazer alterações).
**Solução:** Adicionar QDevice externo (3º vote)
```bash
# Em VPS externo leve (ou Raspberry Pi):
apt install corosync-qnetd
# Em ambos PVE nodes:
apt install corosync-qdevice
# Configurar QDevice (em Node A ou B):
pvecm qdevice setup <qdevice-ip>
# Verificar
pvecm status
# Expected votes: 3 (2 nodes + 1 qdevice)
```
**Recomendação Cluster Descomplicar:**
- Iniciar sem QDevice (aceitar limitação 2-node)
- Adicionar QDevice futuro se necessário
- Ou adicionar 3º node físico
### Fase 4: Storage Configuration
**4.1 Configurar Shared Storage (Opcional)**
**Opções:**
- NFS share
- Ceph (mínimo 3 nodes)
- ZFS replication (não shared, mas sync)
**Para 2-node sem shared storage:**
- VMs ficam em local storage de cada node
- Live migration copia disk (mais lento mas funciona)
- HA usa storage replication ou aceita downtime de boot
**4.2 Configurar PBS como Shared**
```bash
# PBS já configurado (/pbs-config)
# Adicionar PBS storage em ambos nodes via Web UI
# Datacenter → Storage → Add → Proxmox Backup Server
# ID: pbs-main
# Server: <pbs-ip>
# Datastore: main-store
# Content: VZDump backup files
# Nodes: ALL
```
### Fase 5: Networking Validation
**5.1 Verificar Cluster Network**
```bash
# Verificar Corosync usa network correcta
cat /etc/pve/corosync.conf
# Deve usar IP management (não vSwitch)
# ring0_addr de cada node deve estar no subnet de management
```
**5.2 Testar Latência Entre Nodes**
```bash
# De Node A → Node B
ping -c 100 <node-b-ip> | tail -5
# Expected: <5ms latency (mesma datacenter)
# CRITICAL: >10ms pode causar issues cluster
```
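Para automatizar esta verificação, o campo `avg` da linha rtt do ping pode ser extraído assim (esboço; assume o formato de output do GNU iputils):

```shell
#!/bin/sh
# Esboço: extrair a latência média (campo avg, em ms) do resumo do ping.
# A linha tem o formato: rtt min/avg/max/mdev = a/b/c/d ms
parse_avg_rtt() {
  awk -F'/' '/rtt|round-trip/ {print $5}'
}

echo "rtt min/avg/max/mdev = 0.412/1.237/2.051/0.330 ms" | parse_avg_rtt   # → 1.237
```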
**5.3 Configurar Cluster Network Redundancy (Opcional)**
```bash
# Se múltiplas networks disponíveis:
# Adicionar 2º link (ring1) editando /etc/pve/corosync.conf:
# ring1_addr em cada node + incrementar config_version
# Requer 2 NICs ou VLANs separadas
```
### Fase 6: Firewall Cluster
**6.1 Portas Necessárias (abrir entre nodes)**
```bash
# Corosync:
UDP 5404-5405
# PVE cluster:
TCP 22 (SSH)
TCP 8006 (Web UI)
TCP 3128 (SPICE proxy)
TCP 85 (pvedaemon)
# Verificar firewall permite
iptables -L -n -v | grep 5404
```
**6.2 Firewall Proxmox (Web UI)**
```bash
# Datacenter → Firewall → Options
# Enable firewall: NO (inicialmente, configurar depois)
# Se enable:
# - Adicionar rules para cluster communication
# - Testar conectividade antes de aplicar
```
### Fase 7: Validation Tests
**7.1 Cluster Status**
```bash
# Ambos nodes:
pvecm status
# Expected:
# Quorum: Active
# Nodes: 2
# Total votes: 2
# Node online: 2
```
**7.2 Criar VM Teste**
```bash
# Node A: Criar VM 999
qm create 999 --name cluster-test --memory 512 --cores 1
# Node B: Verificar VM aparece
qm list | grep 999
# Deve aparecer em ambos (shared /etc/pve/)
```
**7.3 Migrar VM Entre Nodes (Offline)**
```bash
# Migração offline (sem shared storage)
qm migrate 999 <node-b-name>
# Aguardar transfer completo
# Verificar VM migrou
```
**7.4 Simular Falha Node (CUIDADO)**
```bash
# Em ambiente teste:
# Shutdown Node B
systemctl stop pve-cluster corosync
# Node A deve continuar funcional
# Mas quorum perdido (2-node limitation)
# Reactivar Node B
systemctl start corosync pve-cluster
# Quorum restaura automaticamente
```
## Output Summary
```
✅ Cluster Proxmox formado: descomplicar
🖥️ Nodes:
- Node A: server.descomplicar.pt (138.201.X.X)
- Node B: cluster.descomplicar.pt (138.201.X.X)
- Total: 2 nodes
🗳️ Quorum:
- Expected votes: 2
- Active votes: 2
- Status: Active ✓
📁 Shared Config:
- /etc/pve/ replicated
- VMs visible em ambos nodes
- Storage config synced
💾 Storage:
- Local: ZFS rpool em cada node
- Backup: PBS shared (pbs-main)
- [Futuro] Shared storage: Ceph ou NFS
🔄 Migration:
- Offline migration: Enabled ✓
- Live migration: Enabled (sem shared storage = slow)
- HA: Ready (configurar com /proxmox-ha)
⚠️ Limitations (2-node cluster):
- Perder 1 node = perder quorum
- Solução: QDevice ou 3º node
- HA com fencing crítico
📋 Next Steps:
1. Configurar HA groups (/proxmox-ha)
2. Configurar fencing devices
3. Testar failover automático
4. Migrar VMs production para cluster
5. Monitorizar cluster health
⏱️ Formation time: ~15min
```
## 2-Node Cluster Considerations
### Quorum Issue
**Problema:** Perder 1 node = perder quorum (cluster read-only)
**Mitigações:**
1. **QDevice externo** (3º vote em VPS leve)
2. **expected_votes override** (emergência - perigoso)
3. **Adicionar 3º node** (ideal)
### Fencing CRITICAL
**Problema:** Split-brain (ambos nodes pensam que são primários)
**Solução:** Fencing obrigatório para HA
- STONITH (Shoot The Other Node In The Head)
- Power fencing via IPMI/iLO
- Network fencing (menos confiável)
### No Shared Storage
**Implicações:**
- Live migration mais lenta (copia disk)
- HA requer storage replication ou aceita downtime
- VMs ficam "pinned" ao node onde disk existe
**Alternativas:**
- Ceph (mínimo 3 nodes)
- NFS share externo
- ZFS replication (pvesr)
## Troubleshooting
### Node join fails
```bash
# Verificar conectividade
ping <other-node-ip>
ssh root@<other-node-ip>
# Verificar versões matching
pveversion
# Verificar /etc/hosts
cat /etc/hosts
# Deve ter entrada para ambos nodes
# Logs
journalctl -u pve-cluster -f
journalctl -u corosync -f
```
### Quorum lost
```bash
# Verificar status
pvecm status
# Nodes online mas quorum lost:
# - Verificar time sync (ntpd/chrony)
# - Verificar network latency
# - Restart corosync
systemctl restart corosync pve-cluster
```
### Split-brain
```bash
# CRITICAL: Ambos nodes pensam que são primários
# Identificar:
pvecm status # Em ambos nodes, status diferente
# Resolver:
# 1. Shutdown 1 node completamente
# 2. Fix networking/corosync no node online
# 3. Rejoin node shutdown
```
## References
- **NotebookLM:** 276ccdde-6b95-42a3-ad96-4e64d64c8d52
- **Proxmox Cluster Docs:** https://pve.proxmox.com/pve-docs/chapter-pvecm.html
- **Corosync:** https://corosync.github.io/corosync/
---
**Versão:** 1.0.0 | **Autor:** Descomplicar® | **Data:** 2026-02-14
---
/** @author Descomplicar® | @copyright 2026 */


@@ -0,0 +1,524 @@
---
name: proxmox-ha
description: Configurar High Availability (HA) em cluster Proxmox - resource groups, fencing, failover automático. Use when user mentions "configure ha", "proxmox ha", "high availability", "failover", "ha manager".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 75
user_invocable: true
desk_task: 1712
allowed-tools: Task, Read, Bash
dependencies:
- ssh-unified
- notebooklm
- proxmox-cluster
---
# Proxmox HA
Configurar High Availability (HA) em cluster Proxmox com HA Manager, fencing devices e failover automático para VMs críticas.
## Quando Usar
- Configurar HA após cluster formation (/proxmox-cluster)
- Proteger VMs críticas com failover automático
- Configurar fencing devices (STONITH)
- Definir HA groups por criticidade
- Testar failover procedures
## Sintaxe
```bash
/proxmox-ha configure --critical-vms <vm-ids> [--fencing watchdog|ipmi] [--max-relocate 2]
```
## Exemplos
```bash
# HA para VMs críticas com watchdog
/proxmox-ha configure --critical-vms 200,300 --fencing watchdog
# HA com IPMI fencing (hardware)
/proxmox-ha configure --critical-vms 200,300,301 --fencing ipmi --max-relocate 1
# Apenas testar failover (sem activar HA)
/proxmox-ha test --vm 999
```
## Knowledge Sources
### NotebookLM
```bash
mcp__notebooklm__notebook_query \
notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" \
query:"proxmox ha high availability fencing stonith failover"
```
## Workflow Completo
### Pre-Requisites
**1. Cluster Formado**
```bash
# Verificar cluster healthy
pvecm status
# Expected:
# Quorum: Active
# Nodes: 2+ online
```
**2. Shared Storage ou Replication**
**Opções:**
- **Shared storage** (NFS, Ceph): HA ideal (failover <30s)
- **No shared storage**: Requer ZFS replication ou aceita boot time failover (~2-5min)
**Para Cluster Descomplicar (sem shared storage):**
```bash
# Aceitar boot-time failover
# OU configurar ZFS replication:
# Node A:
zfs snapshot rpool/vm-disks@ha-sync
zfs send rpool/vm-disks@ha-sync | ssh root@<node-b-ip> zfs receive rpool/vm-disks-replica
# Automatizar com pvesr (Proxmox Storage Replication)
```
**3. Fencing Device Configurado**
**CRITICAL:** Sem fencing = risco split-brain
### Fase 1: Fencing Configuration
**1.1 Opção A: Watchdog (Software Fencing)**
**Mais simples, menos confiável:**
```bash
# Instalar watchdog em ambos nodes
apt install watchdog
# Load kernel module
modprobe softdog
# Auto-load on boot
echo "softdog" >> /etc/modules
# Configurar HA Manager para usar watchdog
# (automático quando HA activado)
```
**1.2 Opção B: IPMI/iLO (Hardware Fencing)**
**Mais confiável, requer IPMI:**
```bash
# Verificar IPMI disponível
ipmitool lan print
# Configurar IPMI credentials (via BIOS ou ipmitool)
# Configurar em Proxmox (Web UI):
# Datacenter → Fencing → Add
# Type: IPMI
# IP: <node-ipmi-ip>
# Username: admin
# Password: <ipmi-pass>
# Test
fence_ipmilan -a <node-ipmi-ip> -l admin -p <pass> -o status
```
**1.3 Opção C: Network Fencing (Menos Confiável)**
**Usar apenas se IPMI não disponível:**
```bash
# SSH-based fencing (perigoso)
# Depende de network estar up
# Não recomendado para production
```
**Recomendação Cluster Descomplicar:**
- **Início:** Watchdog (simple, funcional)
- **Produção:** IPMI se hardware suporta
- **Evitar:** Network fencing
### Fase 2: HA Manager Configuration
**2.1 Enable HA Manager**
```bash
# Automático quando cluster formado
# Verificar status
ha-manager status
# Expected:
# quorum: OK
# master: <node-name> (elected)
# lrm: active
```
**2.2 Criar HA Groups (Opcional)**
**HA Groups por criticidade:**
```bash
# Via Web UI: Datacenter → HA → Groups → Add
# Critical (priority 100)
ha-manager groupadd critical \
--nodes "server.descomplicar.pt:100,cluster.descomplicar.pt:100"
# Medium (priority 50)
ha-manager groupadd medium \
--nodes "server.descomplicar.pt:50,cluster.descomplicar.pt:50"
# Low (priority 10)
ha-manager groupadd low \
--nodes "server.descomplicar.pt:10,cluster.descomplicar.pt:10"
```
**Priority explicação:**
- Higher priority = preferência para correr nesse node
- Usado para balancear carga
- Em failover, ignora priority (vai para node disponível)
### Fase 3: Add VMs to HA
**3.1 Adicionar VMs Críticas**
**Via Web UI:**
- Seleccionar VM → More → Manage HA
- Enable HA
- Group: critical
- Max restart: 3
- Max relocate: 2
**Via CLI:**
```bash
# VM 200 (EasyPanel Docker)
ha-manager add vm:200 \
--group critical \
--max_restart 3 \
--max_relocate 2 \
--state started
# VM 300 (CWP)
ha-manager add vm:300 \
--group critical \
--max_restart 3 \
--max_relocate 2 \
--state started
```
**Parâmetros:**
- `max_restart`: Tentativas restart no mesmo node antes de relocate
- `max_relocate`: Máximo relocates entre nodes
- `state started`: HA Manager garante VM está sempre started
**3.2 Verificar HA Resources**
```bash
ha-manager status
# Should show:
# vm:200: started on <node-name>
# vm:300: started on <node-name>
```
### Fase 4: Failover Testing
**4.1 Criar VM Teste HA**
```bash
# VM 999 para teste (não production)
qm create 999 --name ha-test --memory 512 --cores 1
# Adicionar a HA
ha-manager add vm:999 --state started
```
**4.2 Testar Failover Automático**
**Teste 1: Shutdown Clean**
```bash
# Node onde VM 999 corre:
qm shutdown 999
# HA Manager deve:
# 1. Detectar shutdown (~30s)
# 2. Tentar restart no mesmo node (max_restart vezes)
# 3. Se continua down → relocate para outro node
# Monitorizar
watch -n 1 'ha-manager status | grep vm:999'
```
**Teste 2: Node Crash (Simulado)**
```bash
# CUIDADO: Apenas em teste, não production
# Shutdown abrupto do node onde VM 999 corre
# (simula hardware failure)
echo b > /proc/sysrq-trigger # Reboot forçado
# Outro node deve:
# 1. Detectar node down via quorum (~1min)
# 2. Fence node (via watchdog/IPMI)
# 3. Boot VM 999 no node surviving
# Timeline esperado:
# - 0s: Node crash
# - ~60s: Quorum detecta node missing
# - ~90s: Fencing executado
# - ~120s: VM boota em outro node
# Total downtime: ~2-3min (sem shared storage)
# Com shared storage: ~30-60s
```
**4.3 Testar Live Migration Manual**
```bash
# Migration manual (com VM running)
qm migrate 999 <target-node-name> --online
# Com shared storage: <10s downtime
# Sem shared storage: copia disk = lento (GB/min)
# Para production VMs:
# - Fazer em janela manutenção se sem shared storage
# - Live migration OK se shared storage
```
### Fase 5: HA Policies & Tuning
**5.1 Configurar Shutdown Policy**
```bash
# Shutdown policy é opção do DATACENTER (não por recurso)
# Opções: conditional (default), freeze, failover, migrate
# Editar /etc/pve/datacenter.cfg:
#   ha: shutdown_policy=freeze    # VMs param durante shutdown do node
#   ha: shutdown_policy=migrate   # VMs migram antes do shutdown do node

# Por recurso, os estados válidos de `ha-manager set --state` são:
# started, stopped, disabled, ignored
ha-manager set vm:200 --state started
```
**5.2 Maintenance Mode**
```bash
# Colocar node em maintenance (PVE 7.3+)
ha-manager crm-command node-maintenance enable <node-name>
# Efeito:
# - VMs HA do node são migradas para os restantes nodes
# - Node deixa de receber recursos HA (incluindo failover)
# Sair de maintenance
ha-manager crm-command node-maintenance disable <node-name>
```
**5.3 Configurar Priorities (Load Balance)**
```bash
# A preferência de nodes vem das priorities do GRUPO (ver 2.2)
# Atribuir a VM ao grupo:
ha-manager set vm:200 --group critical
# restricted é opção do grupo (não do recurso):
ha-manager groupadd critical-only \
  --nodes "server.descomplicar.pt:100,cluster.descomplicar.pt:50" \
  --restricted 1
# restricted: VM só corre nos nodes do grupo
# unrestricted (default): nodes do grupo são preferência, com fallback para qualquer node
```
### Fase 6: Monitoring & Alerts
**6.1 HA Manager Logs**
```bash
# Logs HA Manager
journalctl -u pve-ha-lrm -f # Local Resource Manager
journalctl -u pve-ha-crm -f # Cluster Resource Manager
# Ver decisões de failover
grep "migrate\|relocate" /var/log/pve/tasks/index
```
**6.2 Configurar Alertas**
```bash
# Via Web UI: Datacenter → Notifications
# Email alerts para:
# - Node down
# - Quorum lost
# - VM failover events
# - Fencing executed
# SMTP: mail.descomplicar.pt
# To: admin@descomplicar.pt
```
**6.3 Monitorização Contínua**
```bash
#!/bin/bash
# /usr/local/bin/check-ha-health.sh
# Script de monitoring (cron cada 5min)
# A 1ª linha de `ha-manager status` é "quorum OK" (sem dois pontos)
ha_status=$(ha-manager status | awk 'NR==1 {print $2}')
if [ "$ha_status" != "OK" ]; then
  echo "HA Quorum NOT OK" | mail -s "ALERT: HA Issue" admin@descomplicar.pt
fi
# Cron:
# */5 * * * * /usr/local/bin/check-ha-health.sh
```
### Fase 7: Production Rollout
**7.1 Migrar VMs Production para HA**
**Phased approach:**
```bash
# Week 1: VMs não-críticas (teste)
ha-manager add vm:250 --group low
# Week 2: VMs médias (se Week 1 OK)
ha-manager add vm:201,202 --group medium
# Week 3: VMs críticas (se tudo OK)
ha-manager add vm:200,300 --group critical
```
**7.2 Documentar Runbook**
**Criar:** `06-Operacoes/Procedimentos/D7-Tecnologia/PROC-HA-Failover.md`
**Conteúdo:**
- Detectar failover event
- Validar VM booted corretamente
- Investigar causa node failure
- Restore node original
- Migrate VM back (se necessário)
## Output Summary
```
✅ HA configurado: Cluster descomplicar
🛡️ Fencing:
- Type: Watchdog (softdog)
- Nodes: 2 nodes configured
- Test: Successful ✓
📋 HA Groups:
- Critical (priority 100): 2 VMs
- Medium (priority 50): 0 VMs
- Low (priority 10): 0 VMs
🖥️ HA Resources:
- vm:200 (EasyPanel) - Critical
- vm:300 (CWP) - Critical
- Max restart: 3
- Max relocate: 2
⚡ Failover Tests:
✓ Clean shutdown → Auto restart
✓ Node crash → Relocate (~2min)
✓ Live migration → <10s downtime
📊 Expected Metrics:
- Detection time: ~60s
- Fencing time: ~30s
- Boot time: ~60-120s
- Total failover: ~2-3min (sem shared storage)
⚠️ Limitations (sem shared storage):
- Failover = boot time (não instant)
- Live migration copia disk (lento)
- Considerar shared storage futuro
🔔 Monitoring:
- Quorum check: cada 5min
- Alerts: Email admin@descomplicar.pt
- Logs: journalctl -u pve-ha-*
📋 Next Steps:
1. Monitorizar por 30 dias
2. Adicionar mais VMs a HA gradualmente
3. Considerar shared storage (NFS/Ceph)
4. Documentar procedures em PROC-HA-Failover.md
5. Treinar equipa em failover manual
⏱️ Configuration time: ~30min
```
## Best Practices
### DO
- ✅ Testar failover em VMs teste ANTES production
- ✅ Configurar fencing (watchdog mínimo, IPMI ideal)
- ✅ Monitorizar quorum 24/7
- ✅ Documentar runbooks failover
- ✅ Alerts email para eventos críticos
- ✅ Backup ANTES activar HA
### DON'T
- ❌ HA sem fencing (risco split-brain)
- ❌ max_relocate muito alto (VM fica "bouncing")
- ❌ Assumir instant failover sem shared storage
- ❌ Testar failover em production sem plano
- ❌ Ignorar quorum warnings
## Troubleshooting
### VM não failover
```bash
# Verificar HA enabled
ha-manager status | grep vm:ID
# Verificar quorum
pvecm status
# Verificar fencing functional
# (watchdog ou IPMI test)
# Logs
journalctl -u pve-ha-crm -f
```
### Split-brain detected
```bash
# CRITICAL: Ambos nodes pensam que são master
# Shutdown 1 node completamente
systemctl poweroff
# No node restante:
pvecm expected 1 # Force quorum com 1 node
# Resolver networking
# Rejoin node shutdown
```
### Failover loop (VM keeps restarting)
```bash
# VM falha → restart → falha → restart
# Verificar:
# 1. VM logs (qm log ID)
# 2. max_restart atingido?
# 3. Problema configuração VM?
# Pause HA temporário
ha-manager set vm:ID --state disabled
# Fix VM issue
# Re-enable HA
ha-manager set vm:ID --state started
```
## References
- **NotebookLM:** 276ccdde-6b95-42a3-ad96-4e64d64c8d52
- **HA Manager Docs:** https://pve.proxmox.com/pve-docs/ha-manager.1.html
- **Fencing:** https://pve.proxmox.com/wiki/Fencing
---
**Versão:** 1.0.0 | **Autor:** Descomplicar® | **Data:** 2026-02-14
---
/** @author Descomplicar® | @copyright 2026 */


@@ -0,0 +1,532 @@
---
name: proxmox-setup
description: Instalação completa de Proxmox VE 8.x em Hetzner - installimage, ZFS RAID-1, NAT networking, vSwitch. Use when user mentions "proxmox install", "setup proxmox", "proxmox hetzner", "new proxmox node".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 75
user_invocable: true
desk_task: 1712
allowed-tools: Task, Read, Bash
dependencies:
- ssh-unified
- notebooklm
---
# Proxmox Setup
Instalação completa e configuração de Proxmox VE 8.x em servidor dedicado Hetzner com ZFS RAID-1, networking NAT single-IP e optimizações.
## Quando Usar
- Instalar novo node Proxmox em servidor Hetzner
- Setup inicial com ZFS mirror NVMe
- Configurar networking NAT para single-IP
- Preparar node para clustering futuro
- Aplicar Hetzner-specific gotchas e optimizações
## Sintaxe
```bash
/proxmox-setup <server-ip> <hostname> [--zfs-pool rpool] [--arc-max 16G] [--vswitch]
```
## Exemplos
```bash
# Setup básico single-IP NAT
/proxmox-setup 138.201.45.67 cluster.descomplicar.pt
# Setup com vSwitch (MTU 1400)
/proxmox-setup 138.201.45.67 cluster.descomplicar.pt --vswitch
# Custom ZFS ARC (para 64GB RAM)
/proxmox-setup 138.201.45.67 pve-node1.descomplicar.pt --arc-max 8G
```
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM Proxmox Research
```bash
mcp__notebooklm__notebook_query \
notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" \
query:"proxmox installation hetzner zfs networking"
```
### Hub Docs
- `/media/ealmeida/Dados/Hub/05-Projectos/Cluster Descomplicar/Research/Proxmox-VE/Guia-Definitivo-Proxmox-Hetzner.md`
- Módulo 1: Instalação (installimage, ZFS vs LVM, Kernel PVE)
- Módulo 2: Networking (NAT masquerading, vSwitch MTU 1400)
## Workflow Completo
### Fase 1: Pre-Installation Checks
**1.1 Verificar Rescue Mode**
```bash
# Via SSH MCP
mcp__ssh-unified__ssh_execute \
server:"hetzner-rescue" \
command:"uname -a && df -h"
# Expected: rescue kernel, /dev/md* present
```
**1.2 Consultar NotebookLM para Hardware Specs**
```bash
# Query: "hetzner installimage zfs raid configuration"
# Obter template correcto para specs do servidor
```
**1.3 Backup de Configuração Actual (se aplicável)**
```bash
ssh root@SERVER_IP "tar -czf /tmp/backup-configs.tar.gz /etc /root"
scp root@SERVER_IP:/tmp/backup-configs.tar.gz ~/backups/
```
### Fase 2: installimage com ZFS RAID-1
**2.1 Criar Template installimage**
Template base para 2x NVMe 1TB + HDD 16TB:
```bash
DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
SWRAID 0
SWRAIDLEVEL 0
BOOTLOADER grub
HOSTNAME HOSTNAME_PLACEHOLDER
PART /boot ext3 1024M
PART lvm vg0 all
LV vg0 root / ext4 50G
LV vg0 swap swap swap 16G
LV vg0 tmp /tmp ext4 10G
LV vg0 home /home ext4 20G
IMAGE /root/images/Debian-bookworm-latest-amd64-base.tar.gz
```
**CRITICAL:** Depois do primeiro boot, é necessário converter o storage para ZFS — ver passo 2.3.
**2.2 Executar installimage**
```bash
# No Rescue Mode
installimage
# Seleccionar Debian 12 (Bookworm)
# Copiar template acima
# Salvar e confirmar
# Reboot automático
```
**2.3 Conversão para ZFS (Pós-Install)**
**IMPORTANTE:** installimage não suporta ZFS directamente. Workflow:
1. Instalar Debian 12 com LVM (installimage)
2. Boot em Debian
3. Instalar ZFS + Proxmox
4. Migrar para ZFS pool (ou aceitar LVM para root, ZFS para VMs)
**Opção A: ZFS para VMs apenas (RECOMENDADO para Hetzner)**
```bash
# Criar ZFS pool em NVMe para VMs
zpool create -f \
-o ashift=12 \
-o compression=lz4 \
-o atime=off \
rpool mirror /dev/nvme0n1p3 /dev/nvme1n1p3
# Criar datasets
zfs create rpool/vm-disks
zfs create rpool/ct-volumes
```
**Opção B: ZFS root (AVANÇADO - requer reinstall manual)**
- Não suportado por installimage
- Requer particionamento manual + debootstrap
- Consultar: https://pve.proxmox.com/wiki/ZFS_on_Linux
**Recomendação para Cluster Descomplicar:** Opção A (LVM root, ZFS para VMs)
### Fase 3: Proxmox VE 8.x Installation
**3.1 Configurar Repositórios Proxmox**
```bash
# Adicionar repo Proxmox
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
# Adicionar key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
# Update
apt update && apt full-upgrade
```
**3.2 Instalar Proxmox VE**
```bash
apt install proxmox-ve postfix open-iscsi chrony
```
**Configuração Postfix:**
- Seleccionar "Local only"
- System mail name: HOSTNAME
**3.3 Remover Kernel Debian (usar PVE kernel)**
```bash
# Verificar kernel actual
uname -r # Should be pve kernel
# Remover kernel Debian se boot em PVE kernel
apt remove linux-image-amd64 'linux-image-6.1*'
update-grub
```
**3.4 Reboot em Proxmox Kernel**
```bash
reboot
```
### Fase 4: ZFS Tuning (128GB RAM)
**4.1 Configurar ARC Limits**
```bash
# ARC max 16GB (deixa 110GB para VMs)
# ARC min 4GB
echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
echo "options zfs zfs_arc_min=4294967296" >> /etc/modprobe.d/zfs.conf
# Aplicar
update-initramfs -u -k all
```
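Os valores em bytes acima derivam directamente de GiB; o cálculo pode ser verificado assim:

```shell
# Converter GiB → bytes para zfs_arc_max / zfs_arc_min
ARC_MAX_GIB=16
ARC_MIN_GIB=4
ARC_MAX_BYTES=$((ARC_MAX_GIB * 1024 * 1024 * 1024))
ARC_MIN_BYTES=$((ARC_MIN_GIB * 1024 * 1024 * 1024))
echo "zfs_arc_max=${ARC_MAX_BYTES}"  # 17179869184
echo "zfs_arc_min=${ARC_MIN_BYTES}"  # 4294967296
```

Para outro perfil de RAM (ex: 64GB), basta ajustar `ARC_MAX_GIB` e recalcular.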
**4.2 Optimizar ZFS para NVMe**
```bash
# Verificar ashift (deve ser 12 para NVMe 4K sectors)
zdb -C rpool | grep ashift
# Activar compression LZ4 (se ainda não)
zfs set compression=lz4 rpool
# Disable atime (performance)
zfs set atime=off rpool
# Snapshot visibility
zfs set snapdir=hidden rpool
```
**4.3 Criar ZFS Datasets para PBS (se HDD 16TB)**
```bash
# Dataset para PBS datastore
zfs create rpool/pbs-datastore
zfs set mountpoint=/mnt/pbs-datastore rpool/pbs-datastore
zfs set compression=lz4 rpool/pbs-datastore
zfs set dedup=off rpool/pbs-datastore
```
### Fase 5: Networking NAT (Single-IP Hetzner)
**5.1 Configurar /etc/network/interfaces**
**Template para Single-IP NAT:**
```bash
auto lo
iface lo inet loopback
# Interface física (verificar nome com 'ip a')
auto eno1
iface eno1 inet static
address SERVER_IP/32
gateway GATEWAY_IP
pointopoint GATEWAY_IP
# Bridge interna para VMs (NAT)
auto vmbr0
iface vmbr0 inet static
address 10.10.10.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
# NAT masquerading
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
```
**CRITICAL Hetzner Gotchas:**
- Gateway /32 point-to-point (não /24 ou /26)
- IP e gateway podem estar em subnets diferentes
- Verificar IP real e gateway no Hetzner Robot
**5.2 Aplicar Networking**
```bash
# Test config
ifup --no-act vmbr0
# Apply
systemctl restart networking
# Verificar
ip a
ping -c 3 8.8.8.8
```
**5.3 Port Forwarding (Opcional - para expor VMs)**
```bash
# Exemplo: Redirecionar porta 8080 host → porta 80 VM 10.10.10.100
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8080 -j DNAT --to 10.10.10.100:80
# Persistir com iptables-persistent
apt install iptables-persistent
iptables-save > /etc/iptables/rules.v4
```
### Fase 6: vSwitch Configuration (Opcional)
**Se --vswitch flag presente:**
**6.1 Configurar VLAN no Robot Panel**
- Hetzner Robot → vSwitch → Create VLAN
- Anotar VLAN ID (ex: 4000)
**6.2 Adicionar ao /etc/network/interfaces**
```bash
# vSwitch interface (MTU 1400 OBRIGATÓRIO)
auto enp7s0.4000
iface enp7s0.4000 inet manual
mtu 1400
# Bridge vSwitch
auto vmbr1
iface vmbr1 inet static
address 10.0.0.1/24
bridge-ports enp7s0.4000
bridge-stp off
bridge-fd 0
mtu 1400
```
**CRITICAL:** MTU 1400 não negociável para vSwitch Hetzner.
### Fase 7: Proxmox Web UI + Storage
**7.1 Aceder Web UI**
```
https://SERVER_IP:8006
User: root
Password: (root password do servidor)
```
**7.2 Remover Enterprise Repo (se no-subscription)**
```bash
# Comentar enterprise repo
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Verificar
apt update
```
**7.3 Configurar Storage no Web UI**
- Datacenter → Storage → Add
- **Directory:** Local (já existe)
- **ZFS:** rpool/vm-disks (para VMs)
- **PBS:** Adicionar PBS server (se já instalado)
### Fase 8: Validation Checklist
**8.1 Verificações Técnicas**
```bash
# PVE version
pveversion -v
# ZFS status
zpool status
zpool list
zfs list
# Networking
ping -c 3 8.8.8.8
curl -I https://www.google.com
# Web UI
curl -k https://localhost:8006
# ARC stats
arc_summary | grep "ARC size"
```
**8.2 Security Hardening**
```bash
# SSH: Disable root password (usar keys)
sed -i 's/#PermitRootLogin yes/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl restart sshd
# Firewall básico (opcional - configurar via Web UI depois)
pve-firewall start
```
**8.3 Criar VM Teste**
```bash
# Via CLI (ou Web UI)
qm create 100 \
--name test-vm \
--memory 1024 \
--cores 1 \
--net0 virtio,bridge=vmbr0 \
--ide2 local:iso/debian-12.iso,media=cdrom \
--bootdisk scsi0 \
--scsi0 rpool/vm-disks:10
# Start
qm start 100
# Verificar consegue aceder internet (NAT funcional)
```
## Output Summary
```
✅ Proxmox VE 8.x instalado: HOSTNAME
🖥️ Hardware:
- CPU: (detect)
- RAM: 128GB (ARC max 16GB, disponível 110GB para VMs)
- Storage: 2x 1TB NVMe ZFS RAID-1 + 16TB HDD
💾 Storage:
- ZFS pool: rpool (mirror)
- Compression: LZ4 (ratio ~1.5x)
- ARC: 4GB min, 16GB max
- Datasets: vm-disks, ct-volumes, pbs-datastore
🌐 Networking:
- Mode: NAT masquerading (single-IP)
- Internal subnet: 10.10.10.0/24
- Gateway: GATEWAY_IP (point-to-point)
[Se vSwitch] vSwitch VLAN 4000: 10.0.0.0/24 (MTU 1400)
🔐 Access:
- Web UI: https://SERVER_IP:8006
- SSH: root@SERVER_IP (key only)
- API: https://SERVER_IP:8006/api2/json
📋 Next Steps:
1. Configurar firewall via Web UI (Datacenter → Firewall)
2. Criar API token para Terraform (/pve-api-token)
3. Setup PBS (/pbs-config)
4. Criar Cloud-Init templates
5. Migrar workloads (/vm-migration)
6. [Futuro] Cluster formation (/proxmox-cluster)
⚠️ Hetzner Gotchas Applied:
✓ Gateway /32 point-to-point
✓ NAT masquerading (MAC filtering bypass)
✓ vSwitch MTU 1400 (se aplicável)
✓ ZFS ARC tuning
✓ PVE kernel (não Debian stock)
⏱️ Setup time: ~45min (vs 2h manual)
```
## Hetzner-Specific Gotchas (CRITICAL)
### 1. MAC Filtering
**Problema:** Bridged networking com MAC não registado = bloqueado
**Solução aplicada:** NAT masquerading (bypass MAC filtering)
**Alternativa:** Pedir virtual MAC no Robot panel (grátis)
### 2. Gateway Point-to-Point
**Problema:** Gateway fora da subnet do IP principal
**Solução:** `address IP/32` + `pointopoint GATEWAY` (não /24 ou /26)
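O padrão `/32` + `pointopoint` pode ser gerado a partir de variáveis — sketch com IPs de EXEMPLO (substituir pelos valores reais do Hetzner Robot):

```shell
# IPs de exemplo - usar os valores reais do Robot
SERVER_IP="138.201.45.67"
GATEWAY_IP="138.201.45.65"
IFACE="eno1"

STANZA=$(cat <<EOF
auto ${IFACE}
iface ${IFACE} inet static
    address ${SERVER_IP}/32
    gateway ${GATEWAY_IP}
    pointopoint ${GATEWAY_IP}
EOF
)
echo "$STANZA"
```

Note que o gateway pode estar fora de qualquer subnet do IP — o `pointopoint` é o que torna a rota válida.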
### 3. vSwitch MTU 1400
**Problema:** vSwitch Hetzner requer MTU 1400 (não 1500 standard)
**Solução:** Forçar `mtu 1400` em vmbr1 e enp7s0.4000
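O tamanho de payload usado no teste de MTU com ping (ver Troubleshooting) deriva de 1400 menos os headers IP+ICMP:

```shell
MTU=1400
IP_HEADER=20    # bytes (IPv4 sem options)
ICMP_HEADER=8   # bytes
PAYLOAD=$((MTU - IP_HEADER - ICMP_HEADER))
echo "ping -M do -s ${PAYLOAD} <target>"  # payload 1372
```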
### 4. ZFS vs LVM Trade-off
**Problema:** installimage não suporta ZFS root directo
**Solução:** LVM para root (compatibilidade), ZFS para VMs (performance)
### 5. Kernel PVE vs Debian
**Problema:** Kernel stock Debian não optimizado para virtualização
**Solução:** Instalar proxmox-ve + remover kernel Debian
## Troubleshooting
### Web UI não acessível
```bash
# Verificar serviço
systemctl status pveproxy
# Logs
journalctl -u pveproxy -f
# Firewall
iptables -L -n -v | grep 8006
```
### VMs sem internet (NAT)
```bash
# Verificar IP forwarding
cat /proc/sys/net/ipv4/ip_forward # Should be 1
# Verificar iptables NAT
iptables -t nat -L -n -v
# Re-aplicar regras
ifdown vmbr0 && ifup vmbr0
```
### ZFS ARC não limita
```bash
# Verificar valor actual
cat /sys/module/zfs/parameters/zfs_arc_max
# Aplicar em runtime (modprobe -r falha com pool importado)
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
# Persistente: editar /etc/modprobe.d/zfs.conf + update-initramfs -u -k all + reboot
```
### vSwitch MTU issues
```bash
# Forçar MTU em todas interfaces
ip link set enp7s0.4000 mtu 1400
ip link set vmbr1 mtu 1400
# Test
ping -M do -s 1372 10.0.0.2 # 1372 = 1400 - 28 (headers)
```
## References
- **NotebookLM:** 276ccdde-6b95-42a3-ad96-4e64d64c8d52 (150+ fontes)
- **Guia Definitivo:** Hub/05-Projectos/Cluster Descomplicar/Research/Proxmox-VE/Guia-Definitivo-Proxmox-Hetzner.md
- **Proxmox Docs:** https://pve.proxmox.com/pve-docs/pve-admin-guide.html
- **Hetzner Docs:** https://docs.hetzner.com/robot/dedicated-server/
---
**Versão:** 1.0.0 | **Autor:** Descomplicar® | **Data:** 2026-02-14
## Metadata (Desk CRM Task #1712)
```
Projeto: Cluster Proxmox Descomplicar (#65)
Tarefa: Migração Infraestrutura para Cluster Proxmox HA (#1712)
Milestone: TBD
Tags: proxmox, pve, hetzner, zfs, networking, instalacao
Status: Research → Implementation
```
---
/** @author Descomplicar® | @link descomplicar.pt | @copyright 2026 */
---
## Quando NÃO Usar
- Para servidores non-Hetzner (diferentes gotchas de networking)
- Para Proxmox já instalado (usar outras skills de config)
- Para troubleshooting (criar skill específica)


@@ -744,7 +744,7 @@ Consultar para conhecimento aprofundado ou casos específicos:
```javascript
// Exemplo: pesquisar hardening Nginx
mcp__dify-kb__dify_kb_retrieve_segments({
mcp__notebooklm__notebook_query, mcp__dify-kb__dify_kb_retrieve_segments({
dataset_id: "7f63ec0c-6321-488c-b107-980140199850",
query: "nginx hardening security headers ssl tls",
top_k: 3


@@ -604,7 +604,7 @@ Antes de executar diagnóstico complexo ou para troubleshooting específico:
```javascript
// Exemplo: pesquisar optimização MySQL
mcp__dify-kb__dify_kb_retrieve_segments({
mcp__notebooklm__notebook_query, mcp__dify-kb__dify_kb_retrieve_segments({
dataset_id: "7f63ec0c-6321-488c-b107-980140199850",
query: "mysql slow query optimization innodb tuning",
top_k: 3


@@ -0,0 +1,571 @@
---
name: vm-migration
description: Migração zero-downtime de workloads CWP/EasyPanel para Proxmox VMs seguindo Migration-Plan-OptionA. Use when user mentions "migrate to proxmox", "cwp migration", "easypanel migration", "workload migration".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 75
user_invocable: true
desk_task: 1712
allowed-tools: Task, Read, Bash
dependencies:
- ssh-unified
- notebooklm
- proxmox-setup
- pbs-config
---
# VM Migration
Migração zero-downtime de workloads CWP e EasyPanel para Proxmox VMs seguindo Migration-Plan-OptionA com safety nets e rollback procedures.
## Quando Usar
- Migrar servidores CWP para VMs Proxmox
- Migrar containers EasyPanel para VMs Proxmox
- Executar Migration-Plan-OptionA (3 fases)
- Migração phased com validation periods
- Zero-downtime para clientes production
## Sintaxe
```bash
/vm-migration <source-type> <source-host> [--phase 1|2|3] [--batch-size 5] [--validate-days 7]
```
## Exemplos
```bash
# Fase 1: EasyPanel migration (batch 5 containers)
/vm-migration easypanel easy.descomplicar.pt --phase 1 --batch-size 5
# Fase 2: CWP migration com 7 dias validation
/vm-migration cwp server.descomplicar.pt --phase 2 --validate-days 7
# Fase 3: Apenas confirmar (sem migração)
/vm-migration finalize --phase 3
```
## Knowledge Sources (Consultar SEMPRE)
### NotebookLM Proxmox Research
```bash
mcp__notebooklm__notebook_query \
notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" \
query:"proxmox migration cwp easypanel docker lxc zero downtime"
```
### Hub Docs
- Hub/05-Projectos/Cluster Descomplicar/Planning/Migration-Plan-OptionA.md
- Fase 1: EasyPanel (Week 1-2)
- Fase 2: CWP (Week 3-6, 7 dias validation)
- Fase 3: Cluster + cleanup (Week 7-8)
## Migration-Plan-OptionA Overview
**Timeline:** 8 semanas
**Strategy:** Phased migration com safety nets
**Rollback:** Disponível em cada fase
```
Week 1-2: FASE 1 - EasyPanel Migration
├── Backup EasyPanel → PBS
├── Create Docker VM Proxmox
├── Migrate containers (batch 5-10)
├── DNS cutover gradual
└── Validation + Rollback window
Week 3-6: FASE 2 - CWP Migration
├── 7 dias safety net (server intacto)
├── Create AlmaLinux 8 VM
├── Migrate CWP accounts (batch)
├── Validate sites + email
├── DNS cutover (TTL 300s)
└── Rollback até Day 7
Week 7-8: FASE 3 - Cluster Formation
├── Prepare server.descomplicar.pt as Node A
├── Form cluster (pvecm)
├── Configure HA groups
├── Live migration tests
└── Cleanup legacy servers
```
## Workflow Completo
### PRE-MIGRATION (Todas Fases)
**1. Backup Strategy Validation**
```bash
# Verificar PBS configurado
pvesm status | grep pbs
# Criar backup point actual
vzdump --storage pbs-main --all 1 --mode snapshot
# Verificar 3-2-1 compliance:
# - Original: source server
# - Backup 1: PBS Node B
# - Backup 2: PBS Node A remote sync (ou VPS backup)
```
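A verificação 3-2-1 acima pode ser expressa num check simples — sketch com os valores do plano (3 cópias, 2 tipos de media, 1 offsite):

```shell
# Regra 3-2-1: 3 cópias, 2 tipos de media, 1 offsite
COPIES=3        # original + PBS Node B + sync remoto
MEDIA_TYPES=2   # disco local + PBS datastore
OFFSITE=1       # cópia remota (PBS Node A ou VPS backup)

if [ "$COPIES" -ge 3 ] && [ "$MEDIA_TYPES" -ge 2 ] && [ "$OFFSITE" -ge 1 ]; then
  echo "3-2-1 OK"
else
  echo "3-2-1 FAIL"
fi
```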
**2. Documentar Estado Actual**
```bash
# CWP: Listar contas
/scripts/list_accounts > /tmp/cwp-accounts.txt
# EasyPanel: Listar services
curl -s http://localhost:3000/api/trpc/projects.list | jq > /tmp/easypanel-services.json
# DNS TTLs (baixar para 300s ANTES de migration)
# Verificar em: dns.descomplicar.pt ou Cloudflare
```
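A verificação do TTL antes do cutover pode ser feita com `dig` — domínio de exemplo a substituir:

```bash
# O 2º campo da resposta é o TTL em segundos - deve ser <= 300 antes da janela
dig +noall +answer exemplo-cliente.pt A

# Contra um resolver público (evita cache do resolver local)
dig +noall +answer exemplo-cliente.pt A @1.1.1.1
```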
**3. Comunicar Clientes (se downtime esperado)**
- Email 48h antes
- Status page update
- Janela de manutenção agendada
---
### FASE 1: EasyPanel Migration (Week 1-2)
**Target:** Migrar 108 containers Docker para VM Proxmox
**1.1 Criar VM Docker Host**
```bash
# Via Proxmox CLI
qm create 200 \
--name easypanel-docker \
--memory 32768 \
--cores 8 \
--net0 virtio,bridge=vmbr0 \
--scsi0 rpool/vm-disks:200 \
--ostype l26 \
--boot order=scsi0
# Install Ubuntu 24.04 LTS
# Via Cloud-Init ou ISO manual
```
**1.2 Instalar Docker + EasyPanel**
```bash
# SSH to VM
ssh root@10.10.10.200
# Docker
curl -fsSL https://get.docker.com | sh
# EasyPanel
curl -sSL https://get.easypanel.io | sh
```
**1.3 Backup Containers Actuais**
```bash
# Em easy.descomplicar.pt
# Backup docker volumes
tar -czf /tmp/easypanel-volumes.tar.gz /var/lib/easypanel/projects
# Transfer para PBS ou storage temporário
scp /tmp/easypanel-volumes.tar.gz root@cluster.descomplicar.pt:/mnt/migration/
```
**1.4 Migrar Containers (Batch 5-10)**
**Batch 1 (não-críticos para teste):**

Containers teste: dev environments, staging (IDs: 1-5). Por cada container:

1. Exportar env vars do EasyPanel
2. Exportar docker-compose.yml
3. Copiar volumes
4. Recriar em novo EasyPanel
5. Testar health endpoint
6. DNS cutover se OK
**Workflow Batch:**
```bash
# Script semi-automatizado
for container_id in 1 2 3 4 5; do
# Export config
curl -s http://easy.descomplicar.pt:3000/api/trpc/services.get \
-d "serviceId=$container_id" > config_$container_id.json
# Copiar volumes
rsync -avz /var/lib/easypanel/projects/$container_id/ \
root@10.10.10.200:/var/lib/easypanel/projects/$container_id/
# Recriar service (via EasyPanel API ou UI)
# Test
curl -I http://10.10.10.200:PORT/health
# DNS cutover (se health OK)
# Actualizar DNS para apontar para 10.10.10.200 (via NAT port forward)
done
```
**1.5 Validation (24-48h por batch)**

Monitoring:
- Uptime checks (UptimeRobot ou similar)
- Error rates (logs)
- Performance (response time <500ms)
- Cliente feedback

Rollback triggers:
- >2 containers falham consecutivamente
- Cliente reporta down
- Health checks fail >10min
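Os health checks podem ser automatizados com um loop simples — sketch com hosts/portas de EXEMPLO (ajustar aos endpoints reais de cada container):

```bash
#!/bin/bash
# Health check dos containers migrados (correr via cron durante a validation)
ENDPOINTS="http://10.10.10.200:3001/health http://10.10.10.200:3002/health"

for url in $ENDPOINTS; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url")
  if [ "$code" != "200" ]; then
    echo "FAIL $url (HTTP $code)" | mail -s "Migration health check failed" admin@descomplicar.pt
  fi
done
```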
**1.6 DNS Cutover**
```bash
# Baixar TTL para 300s (5min) 24h ANTES
# Ex: Cloudflare ou dns.descomplicar.pt
# Cutover:
#   A record: old-ip → NAT port forward para 10.10.10.200:PORT
# Monitorizar por 1h
# Reverter se problemas
```
**1.7 Rollback Procedure (se necessário)**
```bash
# Reverter DNS (TTL 300s = 5min propagação)
# Reactivar container antigo
# Investigar causa falha
# Re-tentar após fix
```
**Batch 2-N:** Repetir até 108 containers migrados.
---
### FASE 2: CWP Migration (Week 3-6)
**Target:** Migrar 39 vhosts CWP para VM AlmaLinux 8
**CRITICAL:** 7 dias safety net - server.descomplicar.pt intacto
**2.1 Criar VM AlmaLinux 8 + CWP**
```bash
qm create 300 \
--name cwp-legacy \
--memory 16384 \
--cores 6 \
--net0 virtio,bridge=vmbr0 \
--scsi0 rpool/vm-disks:150 \
--ostype l26
# Instalar AlmaLinux 8
# Instalar CWP7
wget http://centos-webpanel.com/cwp-el8-latest
sh cwp-el8-latest
```
**2.2 Backup CWP Accounts**
```bash
# Em server.descomplicar.pt
for account in $(cat /tmp/cwp-accounts.txt); do
/scripts/pkgacct $account
done
# Transfer backups
rsync -avz /home/backup-*/cpmove-*.tar.gz \
root@cluster.descomplicar.pt:/mnt/migration/cwp/
```
**2.3 Migrar Contas (Batch 3-5 contas)**
**Workflow por conta:**
```bash
# 1. Restore backup em VM CWP
scp /mnt/migration/cwp/cpmove-ACCOUNT.tar.gz root@10.10.10.300:/home/
# 2. Restore via CWP
/scripts/restorepkg ACCOUNT
# 3. Validar:
#    - Site carrega (HTTP 200)
#    - Database conecta
#    - Email funciona (send test)
#    - SSL certificado válido
# 4. DNS cutover (TTL 300s)
#    A record: site.com → 10.10.10.300 (via NAT port forward)
# 5. Monitorizar 24h
```
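Three of the four checks in step 3 can be collapsed into a single pass/fail gate (the email send test stays manual). A sketch where the facts are injected as arguments so the function stays testable; in production they would come from `curl -s -o /dev/null -w '%{http_code}'` and an `openssl s_client` expiry check. The function name and 14-day SSL margin are assumptions:

```shell
#!/usr/bin/env bash
# Validate one migrated site from externally gathered facts:
# HTTP status, DB reachability (yes/no), days until SSL expiry.
validate_site() {
  http_code=$1; db_ok=$2; ssl_days_left=$3
  fail=0
  [ "$http_code" = "200" ]    || { echo "FAIL: HTTP $http_code"; fail=1; }
  [ "$db_ok" = "yes" ]        || { echo "FAIL: database not reachable"; fail=1; }
  [ "$ssl_days_left" -gt 14 ] || { echo "FAIL: SSL expires in ${ssl_days_left}d"; fail=1; }
  if [ "$fail" -eq 0 ]; then
    echo "OK: safe to cut over DNS"
  fi
  return "$fail"
}

validate_site 200 yes 80    # → OK: safe to cut over DNS
```

A non-zero return blocks step 4 (DNS cutover) for that account.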
**2.4 Validation Period (7 days)**
```bash
# Days 1-7 after migration:
- Old server (server.descomplicar.pt) stays INTACT
- Instant rollback if a critical problem appears
- Client can revert DNS manually if needed
# Day 7: Confirm with Emanuel
# If all OK → proceed with cleanup
# If problems → extend validation or roll back completely
```
**2.5 Email Migration**
```bash
# For each CWP account:
# 1. Backup mailboxes
tar -czf /tmp/mail-ACCOUNT.tar.gz /home/ACCOUNT/mail/
# 2. Transfer
scp /tmp/mail-ACCOUNT.tar.gz root@10.10.10.300:/tmp/
# 3. Restore
cd /home/ACCOUNT/
tar -xzf /tmp/mail-ACCOUNT.tar.gz
# 4. Fix permissions
chown -R ACCOUNT:ACCOUNT /home/ACCOUNT/mail/
# 5. Test IMAP/SMTP
telnet localhost 143 # IMAP
telnet localhost 25 # SMTP
```
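The telnet checks in step 5 are interactive; the same greeting checks can be scripted. A sketch: a banner validator that is testable offline, fed in production by `nc` (assumed available on the VM). Function name is an assumption:

```shell
#!/usr/bin/env bash
# Check service greetings: Postfix answers "220 ..." on SMTP port 25,
# Dovecot answers "* OK ..." on IMAP port 143.
banner_ok() {
  service=$1; banner=$2
  case "$service" in
    smtp) case "$banner" in 220*)    echo "smtp OK";; *) echo "smtp FAIL";; esac ;;
    imap) case "$banner" in "* OK"*) echo "imap OK";; *) echo "imap FAIL";; esac ;;
  esac
}

# On the CWP VM the banners would come from e.g.:
#   banner=$(nc -w 3 localhost 25 </dev/null | head -1)
banner_ok smtp "220 cwp-legacy ESMTP Postfix"            # → smtp OK
banner_ok imap "* OK [CAPABILITY IMAP4rev1] Dovecot ready."   # → imap OK
```

Run it for every migrated account's mail host before closing step 5.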
**2.6 Rollback Procedure (Phase 2)**
```bash
# Available until Day 7
# 1. Revert DNS (all sites)
A records → old IP (server.descomplicar.pt)
# 2. Verify the old server is online
ping server.descomplicar.pt
# 3. Notify clients
# 4. Analyse the failure cause
# 5. Adjust the plan and retry
```
---
### PHASE 3: Cluster Formation (Week 7-8)
**Target:** Form the 2-node cluster, enable HA, clean up
**3.1 Prepare server.descomplicar.pt as Node A**
```bash
# ONLY after Phase 2 is 100% validated
# Final full backup
tar -czf /tmp/final-backup-server.tar.gz /etc /home /var/www
# Reinstall with Proxmox (/proxmox-setup)
# The server becomes Node A of the cluster
```
**3.2 Cluster Formation**
```bash
# Use the /proxmox-cluster skill (created next)
/proxmox-cluster create --node-a server.descomplicar.pt --node-b cluster.descomplicar.pt
```
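For reference, the /proxmox-cluster skill ultimately drives Proxmox's standard `pvecm` workflow. A dry-run sketch of the underlying commands (the cluster name is an assumption; remove the wrapper to execute on the real nodes):

```shell
#!/usr/bin/env bash
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# On Node A (server.descomplicar.pt): create the cluster
run pvecm create descomplicar-cluster

# On Node B (cluster.descomplicar.pt): join using Node A's address
run pvecm add server.descomplicar.pt

# On either node: verify quorum (expect 2 votes)
run pvecm status
```

With only 2 nodes, losing one node loses quorum; a QDevice or `pvecm expected 1` is the usual escape hatch in that situation.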
**3.3 HA Configuration**
```bash
# Use the /proxmox-ha skill (created next)
/proxmox-ha configure --critical-vms 200,300
```
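The /proxmox-ha skill maps onto Proxmox's `ha-manager` CLI. A dry-run sketch for the two critical VMs (remove the wrapper to execute):

```shell
#!/usr/bin/env bash
run() { echo "+ $*"; }   # dry-run wrapper: prints instead of executing

# Register the critical VMs as HA resources
run ha-manager add vm:200 --state started
run ha-manager add vm:300 --state started

# Verify resource state and current node placement
run ha-manager status
```

Once registered, a node failure triggers automatic restart of VMs 200 and 300 on the surviving node.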
**3.4 Cleanup**
```bash
# Cancel the easy.descomplicar.pt VPS (after validation)
# Final backup of everything
# Document the new architecture
```
---
## Backup Strategy During Migration
### Phase 1 (EasyPanel)
**3 locations:**
1. Containers on easy.descomplicar.pt (original)
2. PBS backup on Node B
3. easy.descomplicar.pt VPS backup (kept during Phase 1)
### Phase 2 (CWP)
**7-day safety net:**
1. Old server intact (fast rollback)
2. CWP VM → automatic PBS backups
3. Manual backups in /mnt/migration/
**RPO:** 1h (hourly PBS backups for critical workloads)
**RTO:** 2-4h (restore + DNS propagation)
### Phase 3 (Cluster)
**Full redundancy:**
1. VMs on Node A + Node B
2. PBS primary (Node B, 16TB)
3. PBS secondary remote sync (Node A, 12TB)
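The primary→secondary PBS replication can be expressed as a pull sync job on the secondary (Node A). A dry-run sketch with `proxmox-backup-manager`; the remote name, datastore names, and auth-id are assumptions to adapt:

```shell
#!/usr/bin/env bash
run() { echo "+ $*"; }   # dry-run wrapper: prints instead of executing

# On the secondary PBS (Node A): register the primary PBS as a remote
run proxmox-backup-manager remote create pbs-node-b \
  --host cluster.descomplicar.pt \
  --auth-id 'sync@pbs!node-a'

# Pull job: replicate the primary datastore every hour (matches RPO 1h)
run proxmox-backup-manager sync-job create sync-from-node-b \
  --store backup-secondary \
  --remote pbs-node-b \
  --remote-store backup-primary \
  --schedule hourly
```

The `hourly` schedule follows systemd calendar-event syntax, so tighter or looser windows (e.g. `*:0/30`) are a one-flag change.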
## Output Summary (Per Phase)
### Phase 1 Complete:
```
✅ EasyPanel migrated: 108 containers
📦 Containers:
- Migrated: 108/108
- Failures: 0
- Rollbacks: 0
- Average downtime: <2min per container
🎯 Validation:
- Health checks: 100% OK
- Client feedback: 0 issues
- Performance: <500ms avg response
🔄 DNS Cutover:
- TTL: 300s (5min)
- Domains migrated: ALL
- Rollback window: 7 days
📋 Next: Phase 2 (CWP migration)
```
### Phase 2 Complete:
```
✅ CWP migrated: 39 vhosts
🌐 Sites:
- Migrated: 39/39
- HTTP 200: 100%
- Valid SSL: 100%
- Email functional: 100%
⏱️ Timeline:
- Week 3-6: Migration
- Days 1-7: Validation period
- Day 8: Cleanup (if OK)
🔒 Safety Net:
- Old server: ONLINE (Days 1-7)
- Rollback: Available (revert DNS)
- Backups: 3 locations
📋 Next: Phase 3 (Cluster formation)
```
### Phase 3 Complete:
```
✅ Proxmox cluster formed: 2 nodes
🖥️ Nodes:
- Node A: server.descomplicar.pt (reinstalled)
- Node B: cluster.descomplicar.pt
- Quorum: 2 votes
🔄 HA:
- Critical VMs: 200, 300
- Failover: Automatic
- Live migration: Enabled
💾 PBS Redundancy:
- Primary: Node B (16TB)
- Secondary: Node A (12TB) remote sync
- RPO: 1h | RTO: 2-4h
🎉 Migration Complete:
- Total time: 8 weeks
- Downtime: <5min total
- Issues: 0 critical
- Client satisfaction: HIGH
📋 Post-Migration:
- Monitor for 30 days
- Document in PROC-VM-Migration.md
- Cancel the legacy VPS
- Proxmox training for the team
```
## Troubleshooting
### Container migration fails
```bash
# Check container logs
docker logs CONTAINER_ID
# Check volumes
ls -lah /var/lib/easypanel/projects/PROJECT/
# Start manually to test
docker-compose up -d
# Roll back and investigate
```
### CWP site does not load after migration
```bash
# Check Apache
systemctl status httpd
# Check the vhost
cat /usr/local/apache/conf.d/vhosts/DOMAIN.conf
# Check database connectivity
mysql -u USER -p DATABASE
# Check DNS propagation
dig +short DOMAIN @8.8.8.8
```
### Email not working
```bash
# Check Postfix
systemctl status postfix
# Test SMTP
telnet localhost 25
# Check the MX record
dig +short MX DOMAIN
# Check the mail logs
tail -f /var/log/maillog
```
## References
- **Migration Plan:** Hub/05-Projectos/Cluster Descomplicar/Planning/Migration-Plan-OptionA.md
- **NotebookLM:** 276ccdde-6b95-42a3-ad96-4e64d64c8d52
- **Hub Guide:** Guia-Definitivo-Proxmox-Hetzner.md (Module 4: Workloads)
---
**Version:** 1.0.0 | **Author:** Descomplicar® | **Date:** 2026-02-14
## Metadata (Desk CRM Task #1712)
```
Project: Cluster Proxmox Descomplicar (#65)
Task: Infrastructure Migration (#1712)
Tags: migration, cwp, easypanel, zero-downtime, phased
Status: Implementation
```
---
/** @author Descomplicar® | @link descomplicar.pt | @copyright 2026 */
---
## When NOT to Use
- For non-CWP/EasyPanel migrations (create a specific plan)
- For test/dev environments (less rigour required)
- For single-server setups (no cluster)