This project is a full-stack collaborative coding platform inspired by the experience of solving problems on LeetCode, but extended with real-time teamwork and AI-powered assistance. It provides an environment where multiple users can simultaneously work on programming challenges, discuss ideas through an integrated chat system, run their code in a secure sandbox, and get generative AI support for hints, debugging, and explanations.
PeerPrep delivers a LeetCode-like experience enhanced with real-time teamwork and AI assistance. Everything is decomposed into Go microservices behind lightweight REST/WebSocket APIs and backed by MongoDB, PostgreSQL, Redis, and Dockerized worker nodes.
The AI service builds on google.golang.org/genai + Gemini models, with an optional fine-tuning pipeline + exporter jobs.

| Service | Default Port | What it does | Key dependencies |
|---|---|---|---|
| user | 8081 | Auth, profile management, and email notifications | PostgreSQL, Redis, SMTP |
| question | 8082 | Question bank CRUD + difficulty filtering | MongoDB |
| match | 8083 | Queueing + pairing logic, room provisioning | Redis |
| collab | 8084 | Real-time editor, chat, and sandbox orchestration | Redis, Sandbox service, Question service |
| voice | 8085 | Voice/WebRTC signaling over WebSockets | Redis |
| ai | 8086 | Gemini-backed assistant, feedback exporting/tuning | PostgreSQL, Gemini credentials |
| sandbox | 8090 | Docker-based code execution with resource limits | Docker daemon socket |
| api-gateway | 80/443 | Optional Nginx reverse proxy for deployment | All backend services |
| peerprep-frontend | 5173 (dev) | Vite SPA that consumes all APIs & sockets | Node.js 18+, browser |
| Supporting infra | – | MongoDB 7, Redis 7, PostgreSQL 15, Prometheus, Grafana | Docker volumes |
```
.
├── services/            # Go microservices + Nginx gateway
│   ├── ai/              # AI assistant & feedback jobs
│   ├── collab/          # Code collaboration sessions
│   ├── match/           # Matchmaking queue
│   ├── question/        # Question bank API
│   ├── sandbox/         # Code runner / sandbox daemon
│   ├── user/            # Auth + account management
│   ├── voice/           # Voice signaling service
│   └── api-gateway/     # Nginx config (deployment)
├── peerprep-frontend/   # React + Vite SPA
├── deploy/              # Docker Compose, seeds, monitoring config
│   ├── docker-compose.yaml
│   └── seeds/           # Mongo + Postgres seed data
├── monitoring/          # Standalone Prometheus/Grafana manifests
├── .github/workflows/   # CI pipeline definitions
├── .env.example         # Shared backend/service env vars
└── README.md
```
| Tool | Version | Notes |
|---|---|---|
| Docker Engine + Compose | ≥ 24.x / v2 | Required for full-stack local run and sandbox |
| Go | 1.22+ (collab targets 1.24, AI targets 1.23) | Install locally if running services outside Docker |
| Node.js + npm | Node 18+ | Needed for Vite dev server and frontend builds |
| Make / Bash | Optional | Helpful for scripting; Windows users can use WSL2 or provided .bat/.ps1 helpers |
```bash
cp .env.example .env
```

- Place the Gemini service-account key at `secrets/service-account-key.json` and set `GOOGLE_APPLICATION_CREDENTIALS=/secrets/service-account-key.json` inside `.env`. `load-env.bat` / `load-env.ps1` export the same variables for local runs.

```bash
cp peerprep-frontend/.env.example peerprep-frontend/.env
```

- Point the `VITE_*` URLs to either the Docker Compose ports (`http://localhost:808x`) or deployed load balancers; WebSocket endpoints use `ws://` (or `wss://`).
- Seed data lives in `deploy/seeds/questions.seed.js` and `deploy/seeds/users.seed.sql` (creates `test_1` and `test_2`, password `Password123!`).

```bash
docker compose -f deploy/docker-compose.yaml up --build
```
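Put together, the frontend env file might look like the sketch below. The exact variable names are assumptions (only the `VITE_*` prefix and the `VITE_*_WEBSOCKET_BASE` pattern appear in this README), so check `peerprep-frontend/.env.example` for the real keys.

```env
# Illustrative values only — the real keys live in peerprep-frontend/.env.example
VITE_USER_API_BASE=http://localhost:8081
VITE_QUESTION_API_BASE=http://localhost:8082
VITE_COLLAB_WEBSOCKET_BASE=ws://localhost:8084
```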
What this gives you:
- user: `http://localhost:8081`
- question: `http://localhost:8082`
- match: `http://localhost:8083`
- collab: `http://localhost:8084`
- voice: `http://localhost:8085`
- ai: `http://localhost:8086`
- sandbox: `http://localhost:8090`
- Prometheus: `http://localhost:9090`
- Grafana: `http://localhost:3000`
- MongoDB `mongodb://localhost:27017`, Redis `redis://localhost:6379`, Postgres `postgres://localhost:5432`

Seeding (recommended on first run):
```bash
docker compose -f deploy/docker-compose.yaml run --rm mongo-seed
docker compose -f deploy/docker-compose.yaml run --rm postgres-seed
```
Rebuild just one service after code changes:
```bash
docker compose -f deploy/docker-compose.yaml build collab
docker compose -f deploy/docker-compose.yaml up collab -d
```
```bash
cd peerprep-frontend
npm install
npm run dev    # starts on http://localhost:5173
```
Ensure `peerprep-frontend/.env` points to running backend endpoints (Docker or remote).
- Go services expose Prometheus metrics via `internal/metrics`; their endpoints live at `/api/v1/<service>/metrics` (sandbox uses `/metrics`). Match exposes `/api/v1/match/metrics` without the middleware, and the AI service currently has no Prometheus endpoint.
- The Prometheus config (`deploy/prometheus/prometheus.yml`) defines scrape jobs for user, question, match, collab, voice, and sandbox. Update each job's `metrics_path` (default `/metrics`) or add passthrough routes before expecting samples, and add an `ai` job if needed.
- Grafana provisioning lives under `deploy/grafana/`; access it via `http://localhost:3000` (admin/admin).
- Add dashboards by dropping JSON into `deploy/grafana/dashboards/` and restarting the Grafana container to reload them.
- Tail logs with `docker compose logs -f <service>`.

Run backend tests per service:

```bash
cd services/<service>
go test ./... -cover
```
The CI pipeline (`.github/workflows/ci.yml`) runs lint + tests for every service on pushes/PRs and produces per-service coverage badges.

```bash
cd peerprep-frontend
npm run build
```
| Symptom | Likely cause | Fix |
|---|---|---|
| Sandbox refusing requests | Docker socket not mounted or Docker Desktop not running | Ensure /var/run/docker.sock is shared and restart sandbox container |
| ai service fails to start | Missing Gemini API key or GOOGLE_APPLICATION_CREDENTIALS path | Populate .env and mount the secrets/ volume |
| Frontend cannot connect via WebSockets | Using http:// instead of ws:// in .env | Update VITE_*_WEBSOCKET_BASE to ws://localhost:PORT |
| Seed scripts exit early | Databases not ready | Rerun mongo-seed/postgres-seed after docker compose up shows Mongo/Postgres healthy |
| Grafana dashboards empty | Prometheus scraping /metrics while services expose /api/v1/<svc>/metrics | Update metrics_path in deploy/prometheus/prometheus.yml or expose /metrics routes before reloading Prometheus |
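As the WebSocket row above notes, only the URL scheme differs between the HTTP and socket endpoints. A small helper illustrating that mapping (`toWebSocketURL` is a hypothetical name; the frontend actually reads `VITE_*_WEBSOCKET_BASE` directly):

```go
package main

import (
	"fmt"
	"net/url"
)

// toWebSocketURL rewrites an http(s) base URL into its ws(s) equivalent.
// Illustrative only — it shows the scheme mapping the table describes.
func toWebSocketURL(httpBase string) (string, error) {
	u, err := url.Parse(httpBase)
	if err != nil {
		return "", err
	}
	switch u.Scheme {
	case "http":
		u.Scheme = "ws"
	case "https":
		u.Scheme = "wss"
	default:
		return "", fmt.Errorf("unexpected scheme %q", u.Scheme)
	}
	return u.String(), nil
}

func main() {
	ws, err := toWebSocketURL("http://localhost:8084") // collab service port
	if err != nil {
		panic(err)
	}
	fmt.Println(ws) // ws://localhost:8084
}
```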
AI tools were used in this project to assist with code autocompletion, refactoring, code review suggestions, and documentation improvements. All outputs were reviewed and verified by the development team before inclusion.