Lab Portfolio
Experiments
Games, AI interfaces, and tools — built by a small indie lab with big ideas. Each project pushes a boundary, tests a theory, or solves a real problem.
Interactive experiments disguised as entertainment — each one tests a hypothesis about human behavior, AI reasoning, or creative code generation.
🗣️ Live
Candor
Anonymous Expression Platform
A social experiment in radical honesty. What happens when people can say anything with zero consequences? We're finding out, using LLM-based distillation and real-time usage counts to track collective sentiment as it evolves. Do your part by expressing yourself with candor.
⚖️ Live
Who's Right?
De-escalation Game
A fun little game with three modes — one of which packs real de-escalation intelligence designed by actual doctors and coaches. An extension of Harm Helper logic, wrapped in a game that makes conflict resolution engaging instead of clinical.
🏰 Beta
Thornfield
Code → Narrative → Code
An experiment in narrative-driven code generation. We took Tic-Tac-Toe source code, converted it into a story, changed the story, then turned it back into code. The result: a completely different territory strategy game with claiming, immunity, renting, and eviction — born from the DNA of a grid game.
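The pipeline itself fits in a few lines. Here's a minimal sketch of the three passes, built around a hypothetical `llm(prompt)` helper; none of these names come from Thornfield's actual code:

```python
# Code -> narrative -> code as three LLM passes. All names here are
# hypothetical stand-ins; this is not Thornfield's actual source.

def llm(prompt: str) -> str:
    """Stand-in for whatever chat-completion API you have on hand."""
    raise NotImplementedError

def code_to_story(source: str) -> str:
    return llm("Retell this program as a short story about its rules:\n\n" + source)

def mutate_story(story: str) -> str:
    return llm(
        "Rewrite this story so territory can be claimed, rented, "
        "granted immunity, or evicted:\n\n" + story
    )

def story_to_code(story: str) -> str:
    return llm("Implement the game this story describes as runnable code:\n\n" + story)

if __name__ == "__main__":
    source = open("tic_tac_toe.py").read()
    print(story_to_code(mutate_story(code_to_story(source))))
```

The interesting part is that nothing constrains the round trip to reproduce the original: the story is a lossy, mutable intermediate, and the mutation step is where the new game comes from.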
🔍 Live
Lie-O-Meter
Political Truth Tracker
A single-question accountability tracker: do a person's actions match their words? Party-blind, mechanically scored, and open to dispute. Every claim requires corroboration from at least two independent sources with different editorial leans. This is accounting — not opinion.
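The corroboration rule is mechanical enough to sketch. Below is an illustrative version (our names, not the tracker's real schema): a claim is only admissible with at least two sources spanning different editorial leans, and the score is the fraction of admissible claims where actions matched words.

```python
# Illustrative sketch of the corroboration + scoring rule.
# Field and function names are ours, not Lie-O-Meter's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Source:
    name: str
    lean: str  # e.g. "left", "center", "right"

@dataclass
class Claim:
    statement: str
    action_matches_words: bool
    sources: list[Source]

def is_admissible(claim: Claim) -> bool:
    """A claim counts only with >= 2 sources spanning different editorial leans."""
    leans = {s.lean for s in claim.sources}
    return len(claim.sources) >= 2 and len(leans) >= 2

def consistency_score(claims: list[Claim]) -> float:
    """Fraction of admissible claims where actions matched words."""
    admitted = [c for c in claims if is_admissible(c)]
    if not admitted:
        return 0.0
    return sum(c.action_matches_words for c in admitted) / len(admitted)
```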
💎 Live
Prismatica
Crystal Light Puzzles
A laser-routing puzzle game built on real additive RGB color physics. Route beams through prisms, filters, and mirrors to illuminate crystal targets across 15 handcrafted levels. Red + Green = Yellow. The mechanic is the actual physics of light.
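If you want to see how little machinery the color system needs, here's a minimal sketch of additive mixing. The names are illustrative, not Prismatica's actual source:

```python
# Additive RGB mixing: overlapping light beams combine per channel.
# Illustrative sketch only; not Prismatica's actual source.

def mix(*beams: tuple[int, int, int]) -> tuple[int, int, int]:
    """Combine light beams additively, clamping each channel at 255."""
    return tuple(min(255, sum(channel)) for channel in zip(*beams))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

assert mix(RED, GREEN) == (255, 255, 0)          # yellow
assert mix(RED, BLUE) == (255, 0, 255)           # magenta
assert mix(RED, GREEN, BLUE) == (255, 255, 255)  # white
```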
🎮 Live
Vic's Playground
Autonomous AI Sandbox
Our first experiment letting a bot loose with its own capabilities. Vic — our AI — was given a blank canvas and the tools to build whatever it wanted. No prompt engineering, no guardrails, just autonomous creation. The result is a living playground that evolves as Vic experiments with what it can do.
Bespoke AI workspaces built for real people — not generic chatbots, but command centers shaped around individual workflows.
Our push toward embodied AI — building the perception and safety layers that must exist before any machine moves in human spaces.
👁️ Depth Perception
JARVIS Vision
Our computer vision pipeline fuses an Orbbec Femto Mega depth camera with LLM-powered scene understanding. JARVIS Vision captures color + depth streams in real time, processes frames through Gemini Flash for structured scene analysis, and maintains spatial awareness of the environment. The system understands object positions in 3D space, not just pixels on a screen.
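In outline, the loop is capture, analyze, update. The sketch below uses hypothetical stand-ins (`grab_rgbd_frame`, `describe_scene`) for the Orbbec SDK capture and the Gemini Flash request; it shows the shape of the pipeline, not its real API:

```python
# Capture -> analyze -> update spatial map. All names are hypothetical
# stand-ins; this is not JARVIS Vision's actual source.
from dataclasses import dataclass

@dataclass
class RGBDFrame:
    color: bytes  # encoded color image
    depth: bytes  # aligned depth map

def grab_rgbd_frame() -> RGBDFrame:
    """Hypothetical stand-in for the Orbbec Femto Mega capture call."""
    raise NotImplementedError

def describe_scene(frame: RGBDFrame) -> dict:
    """Hypothetical stand-in for a Gemini Flash request returning
    structured JSON: object labels and estimated 3D positions."""
    raise NotImplementedError

def run_pipeline() -> None:
    spatial_map: dict[str, tuple[float, float, float]] = {}
    while True:
        frame = grab_rgbd_frame()
        scene = describe_scene(frame)
        # Fuse LLM labels with depth so each object keeps a live 3D position.
        for obj in scene.get("objects", []):
            spatial_map[obj["label"]] = tuple(obj["position_m"])
```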
🧠 Full Awareness
JARVIS Sense
The full sensory fusion daemon. Combines YOLOv8 object detection, MediaPipe pose/face/hand tracking, depth mapping, predictive physics, and activity classification into a single perception state. Runs at 15 FPS on the Alienware's RTX 3080, rendered on a 4K HUD display. Sense doesn't just see: it understands posture, gestures, and gaze direction, and it can predict physical interactions before they happen.
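The single perception state is the load-bearing design choice: every module writes into one structure per frame, so downstream consumers read one snapshot instead of juggling five separate outputs. A rough sketch of that shape, with field names of our choosing rather than Sense's real schema:

```python
# One fused snapshot per frame (~15 FPS). Field names are illustrative,
# not JARVIS Sense's actual schema.
from dataclasses import dataclass, field

@dataclass
class PerceptionState:
    timestamp: float
    detections: list[dict] = field(default_factory=list)       # YOLOv8 boxes + labels
    pose_landmarks: list[tuple[float, float, float]] = field(default_factory=list)  # MediaPipe
    depth_map: list[list[float]] = field(default_factory=list)  # meters per pixel
    predicted_tracks: list[dict] = field(default_factory=list)  # physics extrapolation
    activity: str = "unknown"                                   # classified activity label

def fuse(timestamp, yolo_out, mediapipe_out, depth, physics_out, activity) -> PerceptionState:
    """Merge each module's per-frame output into the single shared state."""
    return PerceptionState(
        timestamp=timestamp,
        detections=yolo_out,
        pose_landmarks=mediapipe_out,
        depth_map=depth,
        predicted_tracks=physics_out,
        activity=activity,
    )
```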
🤖 In Development
Robotics Safety Research
Everything we build in Vision and Sense feeds into our long-term robotics safety research. The core thesis: before any AI controls a physical body, it needs to deeply understand human movement, spatial boundaries, and predictive physics. We're building the perception layer first — so when the actuators come online, the intelligence behind them already knows how to exist safely in human spaces.