Initial release: DictIA v0.8.14-alpha (fork of Speakr, AGPL-3.0)

---

`deployment/docs/LOCAL-SETUP.md`

# Local Setup — DictIA

Guide to deploying DictIA locally, on an NVIDIA GPU or on CPU.

## local-gpu profile

### Prerequisites

- NVIDIA GPU with CUDA support
- [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
- Docker + Docker Compose V2
- 8 GB+ RAM (16 GB recommended)
- HuggingFace token (for diarization)

### Installing nvidia-container-toolkit

```bash
# Ubuntu/Debian
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify
docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi
```

### DictIA setup

```bash
cd dictia
bash deployment/setup.sh --profile local-gpu
```

The setup script checks that:
- nvidia-container-toolkit is installed
- the GPU is reachable from Docker
- enough RAM is available

### Model configuration

By default, WhisperX uses `large-v3`. To change it:

```bash
# Edit .env
ASR_MODEL=large-v3   # Best quality
# ASR_MODEL=medium   # Faster, decent quality
# ASR_MODEL=small    # Very fast, reduced quality
```
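
The same change can be scripted instead of opening an editor. A minimal sketch against a stand-in file (the real `.env` sits at the repo root); only the `sed` pattern matters:

```bash
# Stand-in .env so the sketch runs anywhere; point it at the real .env in practice.
printf 'ASR_MODEL=large-v3\nHF_TOKEN=hf_xxx\n' > /tmp/demo.env

# Flip the model non-interactively
sed -i 's/^ASR_MODEL=.*/ASR_MODEL=medium/' /tmp/demo.env
grep '^ASR_MODEL=' /tmp/demo.env   # prints: ASR_MODEL=medium
```

After editing the real `.env`, restart the containers so the new model is loaded.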

---

## local-cpu profile

### Prerequisites

- Docker + Docker Compose V2
- 18 GB+ RAM (WhisperX on CPU is memory-hungry)
- Patience (transcription runs at roughly 10x real time)

### Setup

```bash
cd dictia
bash deployment/setup.sh --profile local-cpu
```

### Limitations

- Slow transcription: 1 h of audio takes ~10 h
- Uses float32 (no GPU acceleration)
- Memory capped at 18 GB by default
- Recommended for: tests, small files, demos

To reduce memory use, pick a smaller model:

```bash
# Edit .env
ASR_MODEL=small   # or medium, base, tiny
```

---

## Verification

```bash
# Health check
bash deployment/tools/health-check.sh

# Quick test: open the browser
open http://localhost:8899

# Check WhisperX
curl http://localhost:9000/health
```

## Managing the containers

```bash
COMPOSE_FILE=deployment/docker/docker-compose.local-gpu.yml  # or local-cpu

# Logs
docker compose -f $COMPOSE_FILE logs -f

# Restart
docker compose -f $COMPOSE_FILE restart

# Stop
docker compose -f $COMPOSE_FILE down

# GPU usage
nvidia-smi  # (GPU profile only)
```

---

`deployment/docs/MAINTENANCE.md`

# Maintenance — DictIA

## Backup

```bash
# Full backup (data, .env, volumes, ASR stats)
bash deployment/tools/backup.sh

# Backup to a specific directory
bash deployment/tools/backup.sh /mnt/backups
```

Backups are written to `backups/` with automatic rotation (the 5 most recent are kept).

A backup contains:
- `data/` — uploads and the SQLite database
- `dot-env` — configuration file
- `asr-usage-stats.json` — GPU usage stats
- `whisperx-cache.tar.gz` — model cache (if a Docker volume)
- `manifest.json` — backup metadata

### Recommended schedule

| Frequency | Action |
|-----------|--------|
| Daily | `bash deployment/tools/backup.sh` |
| Weekly | Copy the backup to external storage |
| Monthly | Verify restoration in a test environment |

To automate with cron:

```bash
# Daily backup at 3 a.m.
0 3 * * * /opt/dictia/deployment/tools/backup.sh >> /var/log/dictia-backup.log 2>&1
```
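
Rotation itself is handled inside `backup.sh`; the keep-the-five-newest idea boils down to one pipeline. A sketch against throwaway files (the assumed behaviour, not the script's exact code):

```bash
# Demo directory with 7 fake archives standing in for backups/
mkdir -p /tmp/demo-backups && cd /tmp/demo-backups
for i in 1 2 3 4 5 6 7; do touch "dictia-2026020${i}-030000.tar.gz"; done

# Keep the 5 newest, delete the rest
ls -1t | tail -n +6 | xargs -r rm --
ls -1 | wc -l   # 5 archives remain
```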

## Restore

```bash
# List available backups
ls -la backups/

# Restore a backup
bash deployment/tools/restore.sh backups/dictia-20260211-030000.tar.gz
```

The script:
1. Validates the archive (manifest present)
2. Asks for confirmation
3. Stops the containers
4. Restores the files
5. Restarts the containers
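
Step 1 can also be reproduced by hand to vet an archive before handing it to `restore.sh`. A sketch with a throwaway archive:

```bash
# Build a throwaway archive shaped like a backup (manifest.json inside)
mkdir -p /tmp/demo-restore && echo '{}' > /tmp/demo-restore/manifest.json
tar -czf /tmp/demo-backup.tar.gz -C /tmp demo-restore

# Pre-flight: refuse archives that carry no manifest
if tar -tzf /tmp/demo-backup.tar.gz | grep -q 'manifest.json'; then
  echo "archive OK"
else
  echo "archive INVALID"
fi
```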

## Updating

```bash
# Full update (git pull + rebuild + restart)
bash deployment/tools/update.sh

# Rebuild only (no git pull)
bash deployment/tools/update.sh --no-pull

# Git pull only (no rebuild)
bash deployment/tools/update.sh --no-build
```

The script:
1. Detects the active profile automatically
2. `git pull origin dictia-branding`
3. `docker build -t innova-ai/dictia:latest .`
4. Pulls upstream WhisperX (local profiles)
5. `docker compose down && docker compose up -d`
6. Waits for the health check
7. Prunes dangling images

## Monitoring

### Health check

```bash
# Full diagnostic (human-readable)
bash deployment/tools/health-check.sh

# JSON (for alerts/scripts)
bash deployment/tools/health-check.sh --json

# Exit code only (0=ok, 1=problem)
bash deployment/tools/health-check.sh --quiet
```
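
`--quiet` is made for scripting; a minimal alerting wrapper looks like this (a `check` stub stands in for the health-check call so the sketch runs anywhere):

```bash
# Stand-in for: bash deployment/tools/health-check.sh --quiet
check() { return 1; }

if check; then
  echo "dictia healthy"
else
  echo "dictia DOWN"   # hook a mail or webhook notification here
fi
```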

### Logs

```bash
# DictIA
docker logs dictia -f --tail 100

# WhisperX (local profiles)
docker logs whisperx-asr -f --tail 100

# ASR Proxy (cloud profile)
journalctl -u asr-proxy -f
```

### GPU dashboard (cloud profile)

The GPU monitoring dashboard is available at:
- `http://localhost:9090` (local)
- `https://your-hostname.tailnet.ts.net:9443` (Tailscale)

It shows: GPU status, monthly cost, request history, fallback zones.

### Key metrics

```bash
# Disk space (transcriptions grow)
df -h /opt/dictia/data/

# Memory use (WhisperX is memory-hungry)
docker stats --no-stream

# GPU stats (cloud profile)
curl -s http://localhost:9090/stats | python3 -m json.tool
```
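
For alerting you usually want a single field rather than the whole pretty-printed document. The `/stats` schema is not documented here, so `monthly_cost_usd` below is a hypothetical stand-in; only the extraction pattern matters:

```bash
# Sample payload in place of: curl -s http://localhost:9090/stats
echo '{"monthly_cost_usd": 12.5, "requests": 42}' |
  python3 -c 'import json, sys; d = json.load(sys.stdin); print("cost this month: $%.2f" % d["monthly_cost_usd"])'
```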

## Docker maintenance

```bash
# Remove dangling images
docker image prune -f

# Clean everything (careful: removes unused volumes)
# docker system prune -a --volumes

# Check Docker disk usage
docker system df
```

---

`deployment/docs/QUICKSTART.md`

# Quickstart — DictIA

## Common prerequisites

- Docker + Docker Compose V2
- Git
- 2 GB+ RAM available

```bash
git clone https://gitea.innova-ai.ca/Innova-AI/dictia.git
cd dictia
git checkout dictia-branding
```

---

## Cloud profile (VPS + GCP GPU)

The GPU starts automatically when someone transcribes, and shuts down after 5 minutes of inactivity.

```bash
# 1. Interactive setup
bash deployment/setup.sh --profile cloud

# 2. ASR Proxy setup (GCP credentials required)
bash deployment/asr-proxy/setup.sh

# 3. Optional: Tailscale Serve for HTTPS
bash deployment/config/tailscale/setup-serve.sh
```

**Required**: GCP credentials (service account or OAuth) in `deployment/asr-proxy/gcp-credentials.json`.

---

## Local GPU profile

Local transcription on an NVIDIA GPU. The fastest option.

```bash
# Prerequisite: nvidia-container-toolkit
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

# Setup
bash deployment/setup.sh --profile local-gpu
```

**Required**: HuggingFace token for diarization (pyannote).

---

## Local CPU profile

CPU-only transcription. Slow, but fine for testing.

```bash
bash deployment/setup.sh --profile local-cpu
```

Expect ~10x real time (1 h of audio = ~10 h of processing).

---

## After installation

```bash
# Check that everything works
bash deployment/tools/health-check.sh

# Open DictIA
open http://localhost:8899
```

Log in with the admin credentials configured during setup.

## Useful commands

```bash
# Live logs
docker compose -f deployment/docker/docker-compose.<profile>.yml logs -f

# Restart
docker compose -f deployment/docker/docker-compose.<profile>.yml restart

# Update
bash deployment/tools/update.sh

# Backup
bash deployment/tools/backup.sh
```

---

`deployment/docs/TROUBLESHOOTING.md`

# Troubleshooting — DictIA

## WhisperX OOM (Out of Memory)

**Symptom**: the `whisperx-asr` container crashes or restart-loops.

**Cause**: model too large for the available RAM/VRAM.

**Solutions**:

```bash
# Use a smaller model in .env
ASR_MODEL=medium   # instead of large-v3
```

```yaml
# Raise the memory limit (local-cpu):
# edit docker-compose.local-cpu.yml
deploy:
  resources:
    limits:
      memory: 24G   # instead of 18G
```

## Diarization 403 Forbidden

**Symptom**: 403 error when transcribing with diarization.

**Cause**: HuggingFace token missing, or model terms not accepted.

**Solution**:
1. Create a token: https://huggingface.co/settings/tokens
2. Accept the terms: https://huggingface.co/pyannote/speaker-diarization-3.1
3. Add it to `.env`:
```bash
HF_TOKEN=hf_your_token
```
4. Restart: `docker compose -f deployment/docker/docker-compose.<profile>.yml restart`

## GPU not detected (local-gpu)

**Symptom**: `nvidia-smi` works but Docker cannot see the GPU.

**Solution**:
```bash
# Install nvidia-container-toolkit
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify
docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi
```

## Upload fails (large files)

**Symptom**: uploads of large files (>100 MB) fail.

**Possible causes**:
- Nginx/reverse proxy timeout
- Upload limit too low

**Solutions**:

```nginx
# If Nginx: check client_max_body_size in dictia.conf
client_max_body_size 500M;
```

With Tailscale Serve, no size limit applies on the Tailscale side.

```yaml
# The gunicorn timeout is already set to 600 s in the Dockerfile.
# For very long files, raise it in docker-compose:
environment:
  - GUNICORN_TIMEOUT=1200
```

## dictia container "unhealthy"

**Symptom**: `docker ps` shows "unhealthy" for the dictia container.

**Diagnostic**:
```bash
# Check the logs
docker logs dictia --tail 50

# Test manually
docker exec dictia python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:8899/health')"
```

**Common causes**:
- `.env` misconfigured (missing SECRET_KEY)
- Corrupted database (restore a backup)
- Port 8899 already in use

## ASR Proxy: "No GPU available"

**Symptom**: transcription fails with "No GPU available in any Canadian zone".

**Causes**:
- GCP has no GPU available (capacity exhausted)
- GCP credentials expired
- Monthly budget reached

**Diagnostic**:
```bash
# Check the proxy status
curl -s http://localhost:9090/health | python3 -m json.tool

# Check the stats (budget)
curl -s http://localhost:9090/stats | python3 -m json.tool

# Check the logs
journalctl -u asr-proxy --since "1 hour ago"
```

**Solutions**:
- Wait (GCP frees GPUs regularly)
- The proxy retries automatically after a 3-minute cooldown
- Check the dashboard: http://localhost:9090
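
The retry behaviour amounts to a loop of this shape (a sketch: `false` stands in for the GPU provisioning attempt, and the 3-minute cooldown is shrunk so it runs instantly):

```bash
attempt=0
until [ "$attempt" -ge 3 ]; do
  attempt=$((attempt + 1))
  echo "provisioning attempt $attempt"
  false && break   # stand-in for the real GPU request; break on success
  sleep 0.1        # the proxy waits ~3 minutes here
done
```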

## Docker build slow/failing

**Symptom**: `docker build` takes too long or fails.

**Solutions**:
```bash
# Limit resources on a small VPS
docker build --memory=2g --cpus=2 -t innova-ai/dictia:latest .

# Clean the Docker cache if the disk is full
docker builder prune -f
docker image prune -f
```

## Corrupted database

**Symptom**: SQLite error on startup.

**Solution**:
```bash
# Restore the latest backup
bash deployment/tools/restore.sh backups/dictia-LATEST.tar.gz

# Or recreate the database (loses data)
rm data/instance/transcriptions.db
docker compose -f deployment/docker/docker-compose.<profile>.yml restart
```

## Port 8899 already in use

```bash
# Find what is using the port
sudo lsof -i :8899
# or
sudo ss -tlnp | grep 8899
```

Stop that process, or change the port in docker-compose:

```yaml
ports:
  - "8900:8899"   # use 8900 instead
```

## An update broke everything

```bash
# Rollback: return to the previous commit
cd dictia
git log --oneline -5   # find the right commit
git checkout <commit-hash>

# Rebuild and restart
docker build -t innova-ai/dictia:latest .
docker compose -f deployment/docker/docker-compose.<profile>.yml down
docker compose -f deployment/docker/docker-compose.<profile>.yml up -d
```

## One-shot diagnostic command

```bash
# Check everything at once
bash deployment/tools/health-check.sh --json | python3 -m json.tool
```

---

`deployment/docs/VPS-SETUP.md`

# VPS setup from scratch — DictIA

Complete guide to deploying DictIA on an Ubuntu VPS.
Tested on an OVH VPS with Ubuntu 22.04/24.04.

## 1. VPS preparation

```bash
# System update
sudo apt update && sudo apt upgrade -y

# Install the essentials
sudo apt install -y curl git
```

## 2. Docker

```bash
# Install Docker (official method)
curl -fsSL https://get.docker.com | sh

# Add the user to the docker group
sudo usermod -aG docker $USER

# Log out and back in for the group to apply
exit
# (reconnect via SSH)

# Verify
docker --version
docker compose version
```

## 3. Tailscale (recommended)

Tailscale provides a mesh VPN so the VPS can be reached without exposing public ports.

```bash
# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh

# Join the tailnet
sudo tailscale up

# Verify
tailscale status
```

## 4. DictIA

```bash
# Clone the repo
cd ~
git clone https://gitea.innova-ai.ca/Innova-AI/dictia.git
cd dictia
git checkout dictia-branding

# Run the setup
bash deployment/setup.sh --profile cloud
```

The setup will:
- Generate the `.env` with your credentials
- Create the data directories
- Build the Docker image
- Start the containers

## 5. ASR Proxy (GCP GPU)

```bash
# Install the proxy
bash deployment/asr-proxy/setup.sh

# Add the GCP credentials
# Copy your credentials file to:
cp ~/gcp-credentials.json deployment/asr-proxy/gcp-credentials.json

# Start the service
sudo systemctl start asr-proxy
sudo systemctl status asr-proxy
```

## 6. Security

```bash
# Docker daemon config (log rotation)
sudo cp deployment/security/docker-daemon.json /etc/docker/daemon.json
sudo systemctl restart docker

# iptables firewall (blocks non-Tailscale traffic)
sudo bash deployment/security/iptables-rules.sh

# systemd service to apply the rules at boot
sudo cp deployment/security/docker-iptables.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable docker-iptables
```
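
The repo ships its own `docker-daemon.json`; for reference, a typical log-rotation config of this kind looks like the following (illustrative values, not necessarily the shipped file):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Without rotation, container logs grow without bound and can fill a small VPS disk.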

## 7. Tailscale Serve (HTTPS)

```bash
# Expose DictIA and the ASR dashboard over Tailscale HTTPS
bash deployment/config/tailscale/setup-serve.sh

# Verify
tailscale serve status
```

DictIA will be reachable at `https://your-hostname.tailnet.ts.net/`.

## 8. systemd service (auto-start)

```bash
# Adjust the path in the file if needed
sudo cp deployment/config/systemd/dictia.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable dictia
```

## 9. Verification

```bash
# Full health check
bash deployment/tools/health-check.sh

# Check the endpoints
curl -s http://localhost:8899/health
curl -s http://localhost:9090/health
```

## 10. First backup

```bash
bash deployment/tools/backup.sh
```

---

## Post-installation checklist

- [ ] DictIA responds on :8899
- [ ] ASR Proxy responds on :9090
- [ ] Tailscale Serve configured
- [ ] iptables: only Tailscale can connect
- [ ] Docker: log rotation configured
- [ ] systemd service enabled (auto-start at boot)
- [ ] First backup taken
- [ ] Admin credentials tested