
name: proxmox-setup
description: Complete installation of Proxmox VE 8.x on Hetzner - installimage, ZFS RAID-1, NAT networking, vSwitch. Use when user mentions "proxmox install", "setup proxmox", "proxmox hetzner", "new proxmox node".
author: Descomplicar® Crescimento Digital
version: 1.0.0
quality_score: 75
user_invocable: true
desk_task: 1712
allowed-tools: Task, Read, Bash
dependencies: ssh-unified, notebooklm

Proxmox Setup

Complete installation and configuration of Proxmox VE 8.x on a Hetzner dedicated server with ZFS RAID-1, single-IP NAT networking, and tuning.

When to Use

  • Install a new Proxmox node on a Hetzner server
  • Initial setup with a ZFS NVMe mirror
  • Configure NAT networking for a single IP
  • Prepare a node for future clustering
  • Apply Hetzner-specific gotchas and optimizations

Syntax

/proxmox-setup <server-ip> <hostname> [--zfs-pool rpool] [--arc-max 16G] [--vswitch]

Examples

# Basic single-IP NAT setup
/proxmox-setup 138.201.45.67 cluster.descomplicar.pt

# Setup with vSwitch (MTU 1400)
/proxmox-setup 138.201.45.67 cluster.descomplicar.pt --vswitch

# Custom ZFS ARC (for 64GB RAM)
/proxmox-setup 138.201.45.67 pve-node1.descomplicar.pt --arc-max 8G

Knowledge Sources (ALWAYS consult)

NotebookLM Proxmox Research

mcp__notebooklm__notebook_query \
  notebook_id:"276ccdde-6b95-42a3-ad96-4e64d64c8d52" \
  query:"proxmox installation hetzner zfs networking"

Hub Docs

  • /media/ealmeida/Dados/Hub/05-Projectos/Cluster Descomplicar/Research/Proxmox-VE/Guia-Definitivo-Proxmox-Hetzner.md
  • Module 1: Installation (installimage, ZFS vs LVM, PVE kernel)
  • Module 2: Networking (NAT masquerading, vSwitch MTU 1400)

Complete Workflow

Phase 1: Pre-Installation Checks

1.1 Verify Rescue Mode

# Via SSH MCP
mcp__ssh-unified__ssh_execute \
  server:"hetzner-rescue" \
  command:"uname -a && df -h"

# Expected: rescue kernel, /dev/md* present

1.2 Query NotebookLM for Hardware Specs

# Query: "hetzner installimage zfs raid configuration"
# Get the correct template for the server's specs

1.3 Back Up the Current Configuration (if applicable)

ssh root@SERVER_IP "tar -czf /tmp/backup-configs.tar.gz /etc /root"
scp root@SERVER_IP:/tmp/backup-configs.tar.gz ~/backups/
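Before wiping the server it is worth confirming the fetched archive is actually readable; a corrupt or truncated copy fails at `tar -tzf` instead of at restore time. Demonstrated here on a scratch archive (in practice, point `tar -tzf` at the `~/backups/backup-configs.tar.gz` fetched above):

```shell
# Build a throwaway archive just to show the check
mkdir -p /tmp/demo-etc
echo "127.0.0.1 localhost" > /tmp/demo-etc/hosts
tar -czf /tmp/backup-demo.tar.gz -C /tmp demo-etc

# Listing exits non-zero on a damaged archive, so it can gate the reinstall
tar -tzf /tmp/backup-demo.tar.gz
```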

Phase 2: installimage with ZFS RAID-1

2.1 Create the installimage Template

Base template for 2x 1TB NVMe + 16TB HDD:

DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
SWRAID 0
SWRAIDLEVEL 0
BOOTLOADER grub
HOSTNAME HOSTNAME_PLACEHOLDER
PART /boot ext3 1024M
PART lvm vg0 all

LV vg0 root / ext4 50G
LV vg0 swap swap swap 16G
LV vg0 tmp /tmp ext4 10G
LV vg0 home /home ext4 20G

IMAGE /root/images/Debian-bookworm-latest-amd64-base.tar.gz

CRITICAL: After the first boot, convert to ZFS (see 2.3). Note that this template hands all remaining space to the LVM partition, so the nvme*p3 partitions used in Option A below must be carved out separately - the partition numbers there are illustrative; adjust them to the actual layout.

2.2 Run installimage

# In Rescue Mode
installimage

# Select Debian 12 (Bookworm)
# Paste the template above
# Save and confirm
# Automatic reboot
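For repeatable installs, installimage can also run unattended with `-a` (automatic mode) and `-c` (config file). A sketch; the file paths and hostname below are illustrative stand-ins:

```shell
# Stand-in for the full template saved to a file in the rescue system
printf 'HOSTNAME HOSTNAME_PLACEHOLDER\n' > /tmp/template.conf

# Fill in the hostname placeholder, then hand the result to installimage
sed 's/HOSTNAME_PLACEHOLDER/pve-node1.descomplicar.pt/' /tmp/template.conf > /tmp/pve-install.conf
cat /tmp/pve-install.conf

# Run only in Rescue Mode - this wipes the drives:
# installimage -a -c /tmp/pve-install.conf
```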

2.3 Convert to ZFS (Post-Install)

IMPORTANT: installimage does not support ZFS directly. Workflow:

  1. Install Debian 12 with LVM (installimage)
  2. Boot into Debian
  3. Install ZFS + Proxmox
  4. Migrate to a ZFS pool (or keep LVM for root, ZFS for VMs)

Option A: ZFS for VMs only (RECOMMENDED for Hetzner)

# Create a ZFS pool on the NVMe drives for VMs
# (compression/atime are filesystem properties, hence -O, not -o)
zpool create -f \
  -o ashift=12 \
  -O compression=lz4 \
  -O atime=off \
  rpool mirror /dev/nvme0n1p3 /dev/nvme1n1p3

# Create the datasets
zfs create rpool/vm-disks
zfs create rpool/ct-volumes

Option B: ZFS root (ADVANCED - requires a manual reinstall)

Recommendation for Cluster Descomplicar: Option A (LVM root, ZFS for VMs)

Phase 3: Proxmox VE 8.x Installation

3.1 Configure the Proxmox Repositories

# Add the Proxmox repo
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

# Add the repository key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# Update
apt update && apt full-upgrade

3.2 Install Proxmox VE

apt install proxmox-ve postfix open-iscsi chrony

Postfix configuration:

  • Select "Local only"
  • System mail name: HOSTNAME
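The two Postfix prompts can be preseeded so the whole install runs without dialogs; a sketch using debconf (the mail name here is an example value, not from the guide):

```shell
# Answer the postfix debconf questions up front; apt then needs no TTY
cat > /tmp/postfix-preseed.conf <<'EOF'
postfix postfix/main_mailer_type select Local only
postfix postfix/mailname string cluster.descomplicar.pt
EOF
command -v debconf-set-selections >/dev/null && debconf-set-selections /tmp/postfix-preseed.conf

# Then the install runs unattended:
# DEBIAN_FRONTEND=noninteractive apt install -y proxmox-ve postfix open-iscsi chrony
```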

3.3 Remove the Debian Kernel (use the PVE kernel)

# Check the running kernel
uname -r  # Should be pve kernel

# Remove the Debian kernel (only once booted into the PVE kernel)
apt remove linux-image-amd64 'linux-image-6.1*'
update-grub

3.4 Reboot into the Proxmox Kernel

reboot

Phase 4: ZFS Tuning (128GB RAM)

4.1 Configure ARC Limits

# ARC max 16GB (leaves ~110GB for VMs)
# ARC min 4GB
echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
echo "options zfs zfs_arc_min=4294967296" >> /etc/modprobe.d/zfs.conf

# Apply
update-initramfs -u -k all
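The modprobe values are raw bytes; deriving them from GiB keeps zfs.conf auditable, and the limits can also be pushed at runtime without a reboot:

```shell
# 16 GiB and 4 GiB expressed in bytes, as zfs.conf expects them
ARC_MAX=$((16 * 1024 ** 3))   # 17179869184
ARC_MIN=$((4 * 1024 ** 3))    # 4294967296
echo "options zfs zfs_arc_max=$ARC_MAX"
echo "options zfs zfs_arc_min=$ARC_MIN"

# Apply immediately on a running system (no reboot needed):
# echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```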

4.2 Optimize ZFS for NVMe

# Check ashift (should be 12 for 4K-sector NVMe)
zdb -C rpool | grep ashift

# Enable LZ4 compression (if not already on)
zfs set compression=lz4 rpool

# Disable atime (performance)
zfs set atime=off rpool

# Snapshot visibility
zfs set snapdir=hidden rpool

4.3 Create ZFS Datasets for PBS (if the 16TB HDD is present)

# Dataset for the PBS datastore
# (rpool is the NVMe mirror - use the HDD pool's name here if the
#  datastore should live on the 16TB HDD instead)
zfs create rpool/pbs-datastore
zfs set mountpoint=/mnt/pbs-datastore rpool/pbs-datastore
zfs set compression=lz4 rpool/pbs-datastore
zfs set dedup=off rpool/pbs-datastore

Phase 5: NAT Networking (Single-IP Hetzner)

5.1 Configure /etc/network/interfaces

Template for single-IP NAT:

auto lo
iface lo inet loopback

# Physical interface (check the name with 'ip a')
auto eno1
iface eno1 inet static
        address   SERVER_IP/32
        gateway   GATEWAY_IP
        pointopoint GATEWAY_IP

# Internal bridge for VMs (NAT)
auto vmbr0
iface vmbr0 inet static
        address  10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        # NAT masquerading
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE

CRITICAL Hetzner Gotchas:

  • Gateway is /32 point-to-point (not /24 or /26)
  • The IP and the gateway may be in different subnets
  • Check the real IP and gateway in Hetzner Robot
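A filled-in version of the physical-interface stanza, using the example server from earlier; the gateway address is hypothetical - always take the real one from Hetzner Robot:

```text
auto eno1
iface eno1 inet static
        address   138.201.45.67/32
        gateway   138.201.45.65
        pointopoint 138.201.45.65
```

Because the address is /32, traffic only reaches the gateway via the pointopoint route; a /24 mask here is exactly the misconfiguration the gotcha above warns about.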

5.2 Apply the Networking Config

# Test config
ifup --no-act vmbr0

# Apply
systemctl restart networking

# Verify
ip a
ping -c 3 8.8.8.8

5.3 Port Forwarding (Optional - to expose VMs)

# Example: forward host port 8080 → port 80 on VM 10.10.10.100
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8080 -j DNAT --to 10.10.10.100:80

# Persist with iptables-persistent
apt install iptables-persistent
iptables-save > /etc/iptables/rules.v4
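An alternative to iptables-persistent is to anchor the forward next to the NAT rules in the vmbr0 stanza, so it is re-created whenever the bridge comes up (VM address from the example above):

```text
        post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8080 -j DNAT --to 10.10.10.100:80
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 8080 -j DNAT --to 10.10.10.100:80
```

This keeps all NAT state in one file; pick either this or iptables-persistent, not both, to avoid duplicate rules.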

Phase 6: vSwitch Configuration (Optional)

If the --vswitch flag is present:

6.1 Configure the VLAN in the Robot Panel

  • Hetzner Robot → vSwitch → Create VLAN
  • Note the VLAN ID (e.g. 4000)

6.2 Add to /etc/network/interfaces

# vSwitch interface (MTU 1400 REQUIRED)
auto enp7s0.4000
iface enp7s0.4000 inet manual
        mtu 1400

# Bridge vSwitch
auto vmbr1
iface vmbr1 inet static
        address 10.0.0.1/24
        bridge-ports enp7s0.4000
        bridge-stp off
        bridge-fd 0
        mtu 1400

CRITICAL: MTU 1400 is non-negotiable for the Hetzner vSwitch.

Phase 7: Proxmox Web UI + Storage

7.1 Access the Web UI

https://SERVER_IP:8006

User: root
Password: (the server's root password)

7.2 Remove the Enterprise Repo (if using no-subscription)

# Comment out the enterprise repo
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Verify
apt update

7.3 Configure Storage in the Web UI

  • Datacenter → Storage → Add
  • Directory: local (already exists)
  • ZFS: rpool/vm-disks (for VMs)
  • PBS: add the PBS server (if already installed)

Phase 8: Validation Checklist

8.1 Technical Checks

# PVE version
pveversion -v

# ZFS status
zpool status
zpool list
zfs list

# Networking
ping -c 3 8.8.8.8
curl -I https://www.google.com

# Web UI
curl -k https://localhost:8006

# ARC stats
arc_summary | grep "ARC size"
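The checks above can be wrapped in a small pass/fail report; a sketch whose command list mirrors this section (trim to taste):

```shell
# Run each check, print OK/FAIL, and never abort the whole list
check() {
  local label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "OK   $label"
  else
    echo "FAIL $label"
  fi
}

check "internet" ping -c1 -W2 8.8.8.8
check "web ui"   curl -ks https://localhost:8006
check "zfs pool" zpool status rpool
check "pve"      pveversion
```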

8.2 Security Hardening

# SSH: disable root password login (use keys); the pattern matches either
# commented default ('yes' or 'prohibit-password')
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl restart sshd

# Basic firewall (optional - configure via the Web UI later)
pve-firewall start

8.3 Create a Test VM

# Via the CLI (or the Web UI)
qm create 100 \
  --name test-vm \
  --memory 1024 \
  --cores 1 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --bootdisk scsi0 \
  --scsi0 STORAGE_ID:10  # storage ID added in 7.3 (e.g. vm-disks), not the dataset path

# Start
qm start 100

# Verify the VM can reach the internet (NAT is working)

Output Summary

✅ Proxmox VE 8.x installed: HOSTNAME

🖥️ Hardware:
   - CPU: (detect)
   - RAM: 128GB (ARC max 16GB, ~110GB available for VMs)
   - Storage: 2x 1TB NVMe ZFS RAID-1 + 16TB HDD

💾 Storage:
   - ZFS pool: rpool (mirror)
   - Compression: LZ4 (ratio ~1.5x)
   - ARC: 4GB min, 16GB max
   - Datasets: vm-disks, ct-volumes, pbs-datastore

🌐 Networking:
   - Mode: NAT masquerading (single-IP)
   - Internal subnet: 10.10.10.0/24
   - Gateway: GATEWAY_IP (point-to-point)
   [If vSwitch] vSwitch VLAN 4000: 10.0.0.0/24 (MTU 1400)

🔐 Access:
   - Web UI: https://SERVER_IP:8006
   - SSH: root@SERVER_IP (key only)
   - API: https://SERVER_IP:8006/api2/json

📋 Next Steps:
   1. Configure the firewall via the Web UI (Datacenter → Firewall)
   2. Create an API token for Terraform (/pve-api-token)
   3. Set up PBS (/pbs-config)
   4. Create Cloud-Init templates
   5. Migrate workloads (/vm-migration)
   6. [Future] Cluster formation (/proxmox-cluster)

⚠️ Hetzner Gotchas Applied:
   ✓ Gateway /32 point-to-point
   ✓ NAT masquerading (MAC filtering bypass)
   ✓ vSwitch MTU 1400 (if applicable)
   ✓ ZFS ARC tuning
   ✓ PVE kernel (not stock Debian)

⏱️ Setup time: ~45min (vs 2h manual)

Hetzner-Specific Gotchas (CRITICAL)

1. MAC Filtering

Problem: bridged networking with an unregistered MAC = blocked
Applied solution: NAT masquerading (bypasses MAC filtering)
Alternative: request a virtual MAC in the Robot panel (free)

2. Gateway Point-to-Point

Problem: the gateway sits outside the main IP's subnet
Solution: address IP/32 + pointopoint GATEWAY (not /24 or /26)

3. vSwitch MTU 1400

Problem: the Hetzner vSwitch requires MTU 1400 (not the standard 1500)
Solution: force mtu 1400 on vmbr1 and enp7s0.4000

4. ZFS vs LVM Trade-off

Problem: installimage does not support ZFS root directly
Solution: LVM for root (compatibility), ZFS for VMs (performance)

5. Kernel PVE vs Debian

Problem: the stock Debian kernel is not optimized for virtualization
Solution: install proxmox-ve and remove the Debian kernel

Troubleshooting

Web UI not reachable

# Check the service
systemctl status pveproxy

# Logs
journalctl -u pveproxy -f

# Firewall
iptables -L -n -v | grep 8006

VMs without internet (NAT)

# Check IP forwarding
cat /proc/sys/net/ipv4/ip_forward  # Should be 1

# Check the iptables NAT rules
iptables -t nat -L -n -v

# Re-apply the rules
ifdown vmbr0 && ifup vmbr0
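IP forwarding is only set by the post-up hook when vmbr0 comes up; persisting it via sysctl.d makes it independent of interface order (the file name below is an arbitrary choice):

```shell
# Persist IP forwarding across reboots, independent of ifupdown hooks
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-proxmox-nat.conf

# Reload all sysctl.d fragments
sysctl --system
```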

ZFS ARC ignoring its limits

# Check the configs
cat /sys/module/zfs/parameters/zfs_arc_max

# Re-apply (modprobe -r fails while a pool is imported; export pools first or reboot)
modprobe -r zfs
modprobe zfs

vSwitch MTU issues

# Force MTU on all interfaces
ip link set enp7s0.4000 mtu 1400
ip link set vmbr1 mtu 1400

# Test
ping -M do -s 1372 10.0.0.2  # 1372 = 1400 - 28 (headers)


Version: 1.0.0 | Author: Descomplicar® | Date: 2026-02-14

Metadata (Desk CRM Task #1712)

Project: Cluster Proxmox Descomplicar (#65)
Task: Infrastructure Migration to Proxmox HA Cluster (#1712)
Milestone: TBD
Tags: proxmox, pve, hetzner, zfs, networking, instalacao
Status: Research → Implementation

/** @author Descomplicar® | @link descomplicar.pt | @copyright 2026 **/


When NOT to Use

  • For non-Hetzner servers (different networking gotchas)
  • When Proxmox is already installed (use the other config skills)
  • For troubleshooting (create a dedicated skill)