My Projects
This is a full-stack AI system I built to match how I think, write, and work. I structured it around three principles:
- Action — Build instead of theorizing
- Autonomy — Own every layer, avoid lock-in
- Resilience — Design for failure, censorship, and drift
The system has three layers I developed independently:
- Ghostrun handles infrastructure and model access
- Synaptica manages writing and thematic intelligence
- nathanstaffel.com is the public interface and deployment surface
Each is a standalone project I created, but they operate as an integrated system. Everything is modular, swappable, and tested under real use.
Ghostrun — Model-Agnostic Inference Operating System
Ghostrun is the infrastructure layer I built. It provides a unified API I designed to route requests across multiple LLM providers: OpenAI, Claude, Groq, Mistral, Cohere, Gemini, and local models. It abstracts away vendor-specific formats, handles authentication centrally, and provides direct control over routing logic.
Capabilities I implemented (a request sketch follows the list):
- Single /generate endpoint for any model
- Swap providers with a single provider parameter
- Multi-turn request threading, even across providers
- Plug-in RAG pipelines via rag_pipeline_id
- No markup on provider usage—token-level pricing is transparent
- Full credential storage and management via Ghostrun key
- Adds minimal latency (~30–60 ms) and performs at near-native speed
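Here is a minimal sketch of what a call looks like. The /generate endpoint, the provider parameter, and rag_pipeline_id match the list above; the host, headers, message format, thread_id, and the response field are placeholders rather than the actual schema.

```ts
// Placeholder host and field names; only /generate, provider, and
// rag_pipeline_id come from the capability list above.
const GHOSTRUN_URL = "https://api.ghostrun.example/generate";

interface GenerateRequest {
  provider: string;                                   // "openai", "claude", "groq", "mistral", ...
  messages: { role: "user" | "assistant"; content: string }[];
  thread_id?: string;                                 // multi-turn threading, even across providers
  rag_pipeline_id?: string;                           // optional plug-in RAG pipeline
}

async function generate(req: GenerateRequest, apiKey: string): Promise<string> {
  const res = await fetch(GHOSTRUN_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,              // one Ghostrun key; provider credentials stay upstream
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Ghostrun error: ${res.status}`);
  const data = await res.json();
  return data.text;                                   // assumed response field
}

async function main() {
  const key = process.env.GHOSTRUN_API_KEY ?? "";
  const ask = (provider: string) =>
    generate({ provider, messages: [{ role: "user", content: "Outline the essay." }] }, key);

  // Swapping providers is a one-field change; everything else stays identical.
  console.log(await ask("openai"));
  console.log(await ask("mistral"));
}

main().catch(console.error);
```

The last two calls show the point of the abstraction: switching from OpenAI to Mistral changes one field and nothing else in the request.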
Ghostrun is not a wrapper. It's an operating layer I designed that lets you route around failure, cost spikes, censorship, or platform decay. It gives you one interface to any LLM, and full control over what gets called, when, and how. The rest of my stack depends on Ghostrun for its flexibility and durability.
Synaptica — Constrained Generative Engine
Synaptica is the generative layer I developed. It's a writing engine I built that uses Ghostrun to call models, then applies strict constraints I designed on style, structure, and output quality. It doesn't assist; it disciplines.
The system pulls from a custom RAG corpus I created from my writing, notes, and hand-curated source material. I designed it to do the following; a simplified constraint-loop sketch follows the list:
- Extract high-signal themes, not summaries
- Enforce rewrites based on rhetorical structure
- Control tone, cadence, and form
- Strip filler and eliminate repetition
- Align output with fixed voice profiles (e.g., Hemingway, Harrison)
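To make "disciplines, not assists" concrete, here is a simplified sketch of the loop: generate, check, force a rewrite that names the failures. The filler list and repetition check are crude stand-ins for the real constraint set, and the generate callback stands in for a Ghostrun-backed call.

```ts
// Simplified stand-ins: a short filler list, a crude repetition check, and a
// rewrite loop. `generate` represents a call routed through Ghostrun.
type Generate = (prompt: string) => Promise<string>;

const FILLER = ["in order to", "it is important to note", "needless to say"];

function violations(text: string): string[] {
  const found: string[] = [];
  const lower = text.toLowerCase();
  for (const phrase of FILLER) {
    if (lower.includes(phrase)) found.push(`filler: "${phrase}"`);
  }
  // Repetition check: the same two-word sentence opener used three or more times.
  const openers = text
    .split(/[.!?]\s+/)
    .map((s) => s.trim().split(/\s+/).slice(0, 2).join(" ").toLowerCase())
    .filter(Boolean);
  const counts = new Map<string, number>();
  for (const o of openers) counts.set(o, (counts.get(o) ?? 0) + 1);
  for (const [opener, n] of counts) {
    if (n >= 3) found.push(`repeated opener: "${opener}"`);
  }
  return found;
}

async function disciplinedDraft(generate: Generate, brief: string, maxPasses = 3): Promise<string> {
  // First pass: ask for the draft under a fixed voice profile.
  let draft = await generate(`Write in a terse, declarative voice: ${brief}`);
  for (let pass = 0; pass < maxPasses; pass++) {
    const issues = violations(draft);
    if (issues.length === 0) return draft;            // constraints satisfied
    // A constraint failed: force a rewrite and name the failures explicitly.
    draft = await generate(`Rewrite the draft. Fix: ${issues.join("; ")}\n\n${draft}`);
  }
  return draft;                                       // best effort after maxPasses
}
```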
I use Synaptica for essays, conceptual development, and narrative testing. It's tuned to create friction—forcing both the model and the human to reach clarity through structure. It sits at the intelligence layer, translating raw model capacity into shaped, usable language.
nathanstaffel.com — Public Interface and Deployment Layer
This is the application layer I built. My website runs live, with both Ghostrun and Synaptica connected under the surface. It's not a placeholder or a brand page—it's the actual operations interface I created.
It includes:
- Longform writing and projects surfaced through Synaptica
- Live tool demos and AI utilities powered via Ghostrun (a server-side proxy sketch follows this list)
- Full self-hosting, with no platform dependencies
- Documentation and deployment logs
- Frontend architecture that's portable and modular
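One way to keep the demos self-hosted while the key stays server-side is a plain proxy route. The sketch below is illustrative: the route path, request body shape, and Ghostrun host are placeholders; only /generate and the provider parameter come from the Ghostrun section above.

```ts
// Placeholder route and host; assumes Node 18+ for the built-in fetch.
import { createServer } from "node:http";

const GHOSTRUN_URL = "https://api.ghostrun.example/generate";

const server = createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/api/demo") {
    res.writeHead(404).end();
    return;
  }

  // Collect the request body sent by the browser.
  let body = "";
  for await (const chunk of req) body += chunk;
  const { prompt } = JSON.parse(body);

  // Proxy to Ghostrun; the key stays on the server and never reaches the client.
  const upstream = await fetch(GHOSTRUN_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GHOSTRUN_API_KEY ?? ""}`,
    },
    body: JSON.stringify({ provider: "openai", messages: [{ role: "user", content: prompt }] }),
  });

  res.writeHead(upstream.status, { "Content-Type": "application/json" });
  res.end(await upstream.text());
});

server.listen(3000); // self-hosted: one process, no managed platform in the path
```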
The site is fully owned, fully controlled, and used as the primary place for my deployment, publishing, and public experimentation. It makes my stack visible without outsourcing control to a third party.
System Integration
These three layers form a working architecture I designed:
- Ghostrun gives routing, control, and failover across any model
- Synaptica applies structure and direction to generation
- nathanstaffel.com puts it into use in the open
Everything is modular. If a model disappears, my system reroutes. If a constraint fails, Synaptica forces a rewrite. If a platform goes down, my site stays live.
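The rerouting behavior reduces to a small policy: try providers in preference order and fall back on any failure. This sketch is illustrative rather than the production routing logic; the generate callback stands in for a call to Ghostrun's /generate endpoint.

```ts
// Provider names match Ghostrun's list; the ordering policy is illustrative.
type GenerateFn = (provider: string, prompt: string) => Promise<string>;

async function generateWithFailover(
  generate: GenerateFn,
  prompt: string,
  providers: string[] = ["openai", "claude", "groq", "mistral", "local"],
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await generate(provider, prompt);  // first provider that answers wins
    } catch (err) {
      lastError = err;                          // outage, rate limit, refusal: move to the next
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```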
The system runs daily. It's not theoretical. It reflects my worldview: build your tools, own your stack, and design for the world you actually live in—not the one you're promised.