# Installation & Setup

This guide walks you through setting up the Generative Creative Lab development environment from scratch.
## Prerequisites

Before starting, make sure you have the following installed:

- **Python 3.12+** — check with `python --version`
- **uv** — Python package manager (installation guide)
- **Docker and Docker Compose** — for PostgreSQL, Valkey, and the Grafana observability stack
- **Node.js and npm** — for Tailwind CSS compilation (check with `node --version`)
- **Git** — for cloning the repository
## Clone the Repository

```bash
git clone git@github.com:andrewmarconi/generative-creative-lab.git
cd generative-creative-lab
```
## Environment Configuration

Copy the example environment file and adjust as needed:

```bash
cp .env.example .env
```

The defaults work for local development. The key variables:
| Variable | Purpose | Default |
|---|---|---|
| | Django secret key | Set in example |
| | Database credentials | |
| | Database connection | |
| | Celery broker (Redis-compatible) | |
| | Optional. Enables LLM-based prompt enhancement | |
| | Optional. Enables auto-downloading LoRAs from CivitAI | |
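If you want to double-check which settings your `.env` actually defines before starting services, a small helper can list them. This is a sketch; `list_env_vars` is not part of the project:

```shell
#!/bin/sh
# Sketch: print the variable names defined in an env file, skipping
# comments and blank lines. Handy for comparing .env against .env.example.
list_env_vars() {
  grep -vE '^[[:space:]]*(#|$)' "$1" | cut -d= -f1
}

# Demo against a throwaway file shaped like a typical .env:
tmp=$(mktemp)
printf '# local overrides\nDEBUG=1\n\nDB_PORT=5435\n' > "$tmp"
list_env_vars "$tmp"
rm -f "$tmp"
```

Comparing `list_env_vars .env` with `list_env_vars .env.example` quickly shows any variables you have not carried over.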
## Quick Setup (Recommended)

The onboarding script automates the entire first-time setup in a single command:

```bash
./onboarding.sh
```

This interactive script handles everything:

- Optionally cleans existing `.venv` and Docker volumes (for fresh starts)
- Starts Docker containers (PostgreSQL 17, Valkey, Grafana/Loki stack)
- Installs Python dependencies with `uv sync`
- Installs Node.js dependencies with `npm i`
- Runs database migrations
- Imports all seed data:
  - Model presets (diffusion models, LoRAs)
  - Audience segments and personas
  - Reference data (regions, countries, languages, LLM models)
  - Prompt templates
  - Pipeline settings (default: Qwen 2.5 7B for all nodes)
  - Brands (if `data/brands.json` exists)
- Creates an admin superuser (`admin`/`admin`)
- Stops Docker containers (ready for `./start.sh`)
Once onboarding completes, start the application:

```bash
./start.sh
```

Then open:

- **Django App**: http://localhost:8000/app/ (login: `admin`/`admin`)
- **Grafana**: http://localhost:3000 (anonymous access enabled)
- **Flower** (task monitor): http://localhost:5555
## Manual Setup

If you prefer to run each step individually, or need to troubleshoot, follow the steps below.

### 1. Start Docker Containers

```bash
docker compose up -d --wait
```
This starts 5 services:

| Service | Purpose | Port |
|---|---|---|
| PostgreSQL 17 | Primary database | 5435 |
| Valkey | Celery broker (Redis-compatible) | 6379 |
| Loki | Log aggregation backend | 3100 |
| Alloy | Log collector (ships logs to Loki) | — |
| Grafana | Log search UI | 3000 |

The `--wait` flag blocks until health checks pass.
2. Install Dependencies
uv sync
npm i
uv sync installs all Python packages (including PyTorch, diffusers, etc.) into a virtual environment at .venv/. npm i installs Tailwind CSS for the admin UI.
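To sanity-check that both install steps produced their artifacts, you can look for the directories they create (a sketch; `check_dir` is a hypothetical helper, and the paths follow from the defaults above):

```shell
#!/bin/sh
# Sketch: report whether an expected install artifact exists.
check_dir() {
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

check_dir .venv          # created by `uv sync`
check_dir node_modules   # created by `npm i`
```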
### 3. Run Migrations

```bash
uv run manage.py migrate
```

### 4. Create a Superuser

```bash
uv run manage.py createsuperuser
```

Follow the prompts to set a username, email, and password.
### 5. Seed the Database

Import configuration and reference data in dependency order:

```bash
# Model and LoRA configuration
uv run manage.py import_presets

# Reference data (regions, countries, languages, LLM models)
uv run manage.py import_reference_data

# Prompt templates for the adaptation pipeline
uv run manage.py import_prompt_templates

# Audience segments and personas
uv run manage.py import_segments
uv run manage.py import_personas

# Brand data (if file exists)
uv run manage.py import_brands
```

> **Tip:** All import commands support `--dry-run` to preview changes without writing to the database.
### 6. Start the Application

The recommended way to start all services:

```bash
./start.sh
```

This script:

- Starts Docker containers (`docker compose up -d --wait`)
- Builds Tailwind CSS (`npm run tailwind:build`)
- Starts Django, Celery worker, Flower, and Tailwind watcher via Honcho

Alternatively, you can start processes individually:

```bash
# Start Docker containers
docker compose up -d

# In separate terminals:
uv run manage.py runserver                                           # Django (port 8000)
PYTHONPATH=src uv run celery -A cw worker -Q default -E --pool=solo  # Celery worker
PYTHONPATH=src uv run celery -A cw flower --port=5555                # Flower (port 5555)
```

Or use Honcho to run them all (without Docker dependency ordering):

```bash
uv run honcho start
```
## Process Architecture

The application runs as 5 concurrent processes, defined in the Procfile:

| Process | Command | Port |
|---|---|---|
| docker | | Various |
| django | | 8000 |
| worker | Celery worker on the `default` queue | — |
| flower | Celery task monitor | 5555 |
| tailwind | Tailwind CSS watcher (rebuilds on change) | — |

The Celery worker uses the solo pool (single-threaded) to prevent concurrent GPU model loading. This ensures one model is loaded at a time, keeping memory usage predictable.
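Assembled from the commands shown earlier in this guide, the Procfile likely looks roughly like this (a hypothetical sketch, not the actual file; the `tailwind:watch` script name in particular is an assumption):

```procfile
# Hypothetical Procfile sketch; the repository's Procfile is authoritative.
docker: docker compose up --wait
django: uv run manage.py runserver
worker: PYTHONPATH=src uv run celery -A cw worker -Q default -E --pool=solo
flower: PYTHONPATH=src uv run celery -A cw flower --port=5555
tailwind: npm run tailwind:watch
```

Honcho reads each `name: command` line and runs the commands concurrently, interleaving their output under the process names.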
## Verifying the Setup

Once all services are running:

- **Django App** — Navigate to http://localhost:8000/app/ and log in. You should see the Diffusion, TV Spots, Audiences, and Core sections.
- **Database** — Verify seed data loaded:
  - Diffusion > Diffusion Models should list 7+ models
  - Core > Prompt Templates should list 9 templates
  - Audiences > Regions should show geographic regions
- **Celery** — Open Flower at http://localhost:5555 and confirm the worker is online.
- **Grafana** — Open http://localhost:3000, navigate to Explore > Loki, and run the query `{job="django"}` to confirm log collection is working.
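The endpoint checks above can also be scripted. This is a sketch: `is_ok` is a hypothetical helper, and the URLs are simply the defaults listed in this guide:

```shell
#!/bin/sh
# Sketch: classify an HTTP status code as healthy (2xx/3xx) or not.
is_ok() {
  case "$1" in
    2??|3??) return 0 ;;
    *)       return 1 ;;
  esac
}

# With the stack running, probe each UI and report its status.
# curl prints 000 when it cannot connect at all.
for url in http://localhost:8000/app/ http://localhost:3000 http://localhost:5555; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if is_ok "$code"; then
    echo "up:   $url ($code)"
  else
    echo "down: $url ($code)"
  fi
done
```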
## Optional: Pre-download Models

Diffusion models are downloaded from HuggingFace on first use, which can take a while. To pre-download them:

```bash
uv run manage.py preload_models
```

This downloads all models configured in `data/presets.json` to the local HuggingFace cache.
## Next Steps

- **Your First Image** — Generate your first diffusion image
- **Your First Adaptation** — Create your first culturally adapted TV spot