# DictIA Quickstart
## Common prerequisites
- Docker + Docker Compose V2
- Git
- 2 GB+ of available RAM
```bash
git clone https://gitea.innova-ai.ca/Innova-AI/dictia.git
cd dictia
git checkout dictia-branding
```
---
## Cloud profile (VPS + GCP GPU)
The GPU starts automatically when a transcription is requested and shuts down after 5 minutes of inactivity.
```bash
# 1. Interactive setup
bash deployment/setup.sh --profile cloud
# 2. ASR proxy setup (GCP credentials required)
bash deployment/asr-proxy/setup.sh
# 3. Optional: Tailscale Serve for HTTPS
bash deployment/config/tailscale/setup-serve.sh
```
**Required**: GCP credentials (service account or OAuth) in `deployment/asr-proxy/gcp-credentials.json`.
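Before running the ASR proxy setup, it can save a failed run to sanity-check the credentials file first. A minimal sketch (the `check_gcp_creds` helper is illustrative, not part of the repo):

```shell
# Illustrative pre-flight check (not shipped with DictIA): verify that the
# credentials file exists and looks like a GCP key before running setup.
check_gcp_creds() {
  local f="$1"
  [ -f "$f" ] || { echo "missing: $f" >&2; return 1; }
  # Service-account keys always carry a "type" field.
  grep -q '"type"' "$f" || { echo "no \"type\" field in $f" >&2; return 1; }
  echo "ok"
}

# Usage: check_gcp_creds deployment/asr-proxy/gcp-credentials.json
```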
---
## Local GPU profile
Local transcription on an NVIDIA GPU. The fastest option.
```bash
# Prerequisite: nvidia-container-toolkit
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
# Setup
bash deployment/setup.sh --profile local-gpu
```
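To confirm the NVIDIA container runtime works before launching setup, the usual smoke test is to run `nvidia-smi` inside a CUDA container. A sketch (the function name and the CUDA image tag are examples, not part of the repo):

```shell
# Illustrative smoke test for nvidia-container-toolkit: if this prints your
# GPU table, Docker can pass the GPU through to containers and the
# local-gpu profile will work. Any recent CUDA base image tag is fine.
gpu_smoke_test() {
  docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
}

# Usage: gpu_smoke_test
```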
**Required**: a HuggingFace token for diarization (pyannote).
---
## Local CPU profile
CPU-only transcription. Slow, but functional for testing.
```bash
bash deployment/setup.sh --profile local-cpu
```
Allow about 10x real time (1 hour of audio takes roughly 10 hours to process).
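The 10x factor can be turned into a quick planning helper (illustrative shell, not part of the repo):

```shell
# Illustrative helper: estimate CPU processing time from audio duration,
# using the ~10x real-time factor quoted above.
estimate_cpu_minutes() {
  local audio_minutes="$1"
  echo $(( audio_minutes * 10 ))
}

estimate_cpu_minutes 60   # 1 h of audio -> prints 600 (about 10 h)
```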
---
## After installation
```bash
# Check that everything is running
bash deployment/tools/health-check.sh
# Open DictIA
open http://localhost:8899
```
Log in with the admin credentials configured during setup.
---
## Useful commands
```bash
# Live logs
docker compose -f deployment/docker/docker-compose.<profile>.yml logs -f
# Restart
docker compose -f deployment/docker/docker-compose.<profile>.yml restart
# Update
bash deployment/tools/update.sh
# Backup
bash deployment/tools/backup.sh
```
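Typing the full `-f` path gets old quickly; a small wrapper (hypothetical, not shipped with DictIA) can resolve the compose file from the profile name:

```shell
# Hypothetical convenience wrappers: build the compose-file path from a
# profile name (cloud, local-gpu, local-cpu) and forward the subcommand.
compose_file() {
  echo "deployment/docker/docker-compose.$1.yml"
}

dc() {
  local profile="$1"; shift
  docker compose -f "$(compose_file "$profile")" "$@"
}

# Usage: dc local-gpu logs -f
```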