Open Source · Local-First · Zero Cloud

Data panels powered by AI that stays on your machine.

Omma is a panel-based platform for data retrieval and analysis, powered entirely by local LLMs. Each panel is an independent screen — fetch, analyse, and query sensitive data without a single byte leaving your hardware.

omma — python -m omma.main
# Install and launch
$ pip install omma
Collecting omma...
Installing PySide6, Qt Advanced Docking System...
$ omma --inference-engine lmstudio --host localhost:1234
Context Bus initialized
Panel registry loaded — 8 panels active
Connected to LM Studio at localhost:1234
Ready — 0 bytes sent externally

Panels that retrieve, analyse, and reason over your data.

Every panel in Omma is an independent screen. Built-in panels handle common retrieval and analysis tasks. Custom panels are just Python plugins.

Panel-Based Interface

Each data source, analysis view, or AI interaction lives in its own panel. Drag, split, float, and arrange them exactly how your workflow demands.

Qt Docking

Custom Panels as Plugins

Write a Python file and drop it in. It becomes a fully featured screen inside Omma, with its own UI, data connections, and AI capabilities. No framework to learn.

Python Plugins
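A plugin can be as small as one class in one file. The sketch below is illustrative only, assuming Omma discovers plugin classes from Python files in a plugins directory; the class name, `title` attribute, and `on_event` hook are hypothetical, not Omma's actual plugin API.

```python
# Hypothetical custom panel plugin. Names here (WordCountPanel, title,
# on_event) are illustrative assumptions, not Omma's real plugin API.

class WordCountPanel:
    """A minimal panel: counts words in text selected in another panel."""

    title = "Word Count"

    def on_event(self, topic: str, payload: str):
        # React to selections other panels publish on the Context Bus
        if topic == "selection.text":
            return {"words": len(payload.split())}
        return None
```

Because the panel only reacts to named events, it needs no imports from, or references to, the panels that produce its input.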

Context Bus (Pub/Sub)

Panels communicate through a central event bus. A selection in one panel can trigger analysis in another — without hardcoded dependencies between them.

Event-Driven

LM Studio & Ollama

Connect directly to your local inference engine. Swap models mid-session. No API keys, no rate limits, no costs beyond your own hardware.

Local LLM
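Both LM Studio (default port 1234) and Ollama (port 11434) expose an OpenAI-compatible HTTP endpoint, so talking to a local model needs nothing beyond the standard library. A minimal sketch, assuming a model is already loaded; the `local-model` name and the helper functions are illustrative, not part of Omma:

```python
import json
from urllib import request

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat payload. No API key is needed locally."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, host: str = "localhost:1234",
         model: str = "local-model") -> str:
    """POST to a local OpenAI-compatible endpoint (LM Studio defaults to
    port 1234; Ollama's compatible endpoint listens on 11434)."""
    body = json.dumps(build_chat_request(prompt, model)).encode()
    req = request.Request(
        f"http://{host}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # request never leaves localhost
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Swapping engines mid-session is just a different `host` (and model name); the request shape stays the same.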

Sensitive Data Ready

Designed for financial analysts, legal teams, and engineers who need AI over proprietary data without violating compliance or NDAs.

Compliance-Ready

Open Source

MIT licensed. Audit every line. Fork and extend without asking permission. Your data, your panels, your infrastructure.

MIT License

Panels talk through the Context Bus. Never directly.

Every panel is fully decoupled. They publish and subscribe to events on the Context Bus — so adding a new panel never requires touching existing code.
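The decoupling above can be sketched in a few lines. This is a minimal in-memory pub/sub bus for illustration, assuming topics are plain strings; Omma's actual Context Bus API may differ.

```python
from collections import defaultdict
from typing import Any, Callable

class ContextBus:
    """Minimal pub/sub sketch: panels subscribe to topics by name, so
    publishers never hold direct references to subscribers."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

# A selection in one panel triggers analysis in another:
bus = ContextBus()
received = []
bus.subscribe("selection.rows", received.append)  # analysis panel listens
bus.publish("selection.rows", [1, 4, 9])          # retrieval panel publishes
```

Adding a new panel means one more `subscribe` call; no existing panel's code changes.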

[Architecture diagram: Data Retrieval, AI Analysis, Document Viewer, Chart/Visualisation, and custom panels exchange pub/sub events over the Context Bus, which routes requests to local models via LM Studio, Ollama, or a custom engine.]

Supported engines: LM Studio · Ollama · Custom (extensible)

Your data is yours. Full stop.

0 bytes sent to external servers
100% local inference on your own hardware
No keys, no subscriptions, no vendor lock-in
  • Sensitive documents never leave your machine — legal, financial, or proprietary
  • No telemetry, no analytics, no phoning home
  • Fully air-gappable for high-security environments
  • Open source — every line of network code is auditable
  • Works offline by design
  • Compatible with on-premise enterprise inference deployments

Build your own panels. Keep your data local.

Open source, self-hosted, and ready to extend. Drop in a Python file and it becomes a screen.

↗ View on GitHub · Read the Docs