Local AI for everyone who can't -- or won't -- send their data to the cloud. LocoPuente bridges the digital divide -- equitable, privacy-first AI access for every student, running on hardware the university owns.
A single on-campus server running locally hosted AI services, accessible to all students on the university network.
LocoPuente is where the entire LocoLabo research programme converges into something a student can use.
The full stack is operational on hardware that costs less than a single semester of commercial AI subscriptions for a cohort of students.

| Machine | GPU | VRAM | Role |
|---|---|---|---|
| Pulpo | RTX 3060 | 12 GB | Primary LLM + image generation |
| Puente | RTX 2060 Super | 8 GB | Voice (TTS/STT) + secondary LLM |

| Service | Tool | GPU |
|---|---|---|
| LLM inference (primary) | Ollama | RTX 3060 |
| LLM inference (secondary) | Ollama | RTX 2060 Super |
| Voice STT/TTS | Speaches + Whisper + Kokoro | RTX 2060 Super |
| Chat interface | Open WebUI | -- |
| Research & notes | Open Notebook AI | -- |
| Image generation | ComfyUI | RTX 3060 |
All services expose OpenAI-compatible APIs. All run without internet access. All student data stays on university hardware.
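Because the services speak the OpenAI wire format, any standard client can talk to them by pointing at the local server. A minimal stdlib-only sketch, assuming Ollama's default port (11434) and a hypothetical campus hostname (`pulpo.campus.local`) and model name -- substitute your actual addresses:

```python
import json
from urllib import request

# Hypothetical hostname for the primary LLM server (Ollama on Pulpo) -- an assumption.
PULPO_API = "http://pulpo.campus.local:11434/v1"

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible /chat/completions POST request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires being on the university network; no cloud endpoint is involved:
# with request.urlopen(build_chat_request(PULPO_API, "llama3.2", "Hola!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same pattern works against the voice server, which exposes endpoints such as `/v1/audio/speech` in the same OpenAI-compatible style.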
Part of LocoLab -- frontier AI on a budget.