Lab Portfolio

Experiments

Games, AI interfaces, and tools — built by a small indie lab with big ideas. Each project pushes a boundary, tests a theory, or solves a real problem.

🎮

Games

Interactive experiments disguised as entertainment — each one tests a hypothesis about human behavior, AI reasoning, or creative code generation.

🗣️Live

Candor

Anonymous Expression Platform

A social experiment in radical honesty. What happens when people can say anything with zero consequences? We're finding out — distilling posts with an LLM and tracking real-time usage counts to follow collective sentiment as it evolves. Do your part by expressing yourself with candor.

⚖️Live

Who's Right?

De-escalation Game

A fun little game with three modes — one of which packs real de-escalation intelligence designed by actual doctors and coaches. An extension of Harm Helper logic, wrapped in a game that makes conflict resolution engaging instead of clinical.

🏰Beta

Thornfield

Code → Narrative → Code

An experiment in narrative-driven code generation. We took Tic-Tac-Toe source code, converted it into a story, changed the story, then turned it back into code. The result: a completely different territory strategy game with claiming, immunity, renting, and eviction — born from the DNA of a grid game.

🔍Live

Lie-O-Meter

Political Truth Tracker

A single-question accountability tracker: do a person's actions match their words? Party-blind, mechanically scored, and open to dispute. Every claim requires corroboration from at least two independent sources with different editorial leans. This is accounting — not opinion.

💎Live

Prismatica

Crystal Light Puzzles

A laser-routing puzzle game built on real additive RGB color physics. Route beams through prisms, filters, and mirrors to illuminate crystal targets across 15 handcrafted levels. Red + Green = Yellow. The mechanic is the actual physics of light.
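The additive mixing rule the levels are built on fits in a few lines. This is an illustrative sketch, not Prismatica's actual source; `mix` and the color constants are hypothetical names.

```python
# Additive RGB mixing: a channel lights up if any incoming beam carries it.
# Illustrative only; not the game's real code.

def mix(*beams):
    """Combine light beams additively, clamping each channel at full brightness."""
    return tuple(min(255, sum(channel)) for channel in zip(*beams))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

# Red + Green = Yellow, exactly as the puzzle mechanic states.
assert mix(RED, GREEN) == (255, 255, 0)            # yellow
assert mix(RED, BLUE) == (255, 0, 255)             # magenta
assert mix(RED, GREEN, BLUE) == (255, 255, 255)    # white
```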

🎮Live

Vic's Playground

Autonomous AI Sandbox

Our first experiment letting a bot loose with its own capabilities. Vic — our AI — was given a blank canvas and the tools to build whatever it wanted. No prompt engineering, no guardrails, just autonomous creation. The result is a living playground that evolves as Vic experiments with what it can do.

🤖

Personal AI Interfaces

Bespoke AI workspaces built for real people — not generic chatbots, but command centers shaped around individual workflows.

Live

Sophia

Melissa's Personal Command Center

Sophia is Melissa's personal AI command center — a bespoke interface designed for how she works, thinks, and manages her world. Built as a proof of concept that personal AI should be deeply personal, not one-size-fits-all.

🧠Pre-Release

JARVIS Desktop

Just A Really, Very Intelligent System

A desktop AI workspace where you talk to VIC (Very Intelligent Companion) and together use JARVIS to empower yourself. Think OpenAI's Codex meets a desktop Notion where the databases and files live on your own machine. Internal browser, file viewer, code editing — by voice or chat. Your AI. Your Machine. Your Rules.

🛠️

Tools

Practical AI-powered utilities born from real-world needs — from job placement to harm reduction to creative writing.

📄Live

Resume Builder

AI-Powered Resume Writing

One of TLC AI Lab's first experiments. With roots in the non-profit sector, the founders worked with several organizations focused on helping people find jobs. The result: a resume builder chatbot that walks you through creating a polished, professional resume via conversation.

Live

Temporal Leak

Blogs, Reimagined

A blog platform that solves the blank-page problem. Browse AI news for inspiration, inject articles as context, then hit generate. An LLM writes the post in your voice — trained on your own writing via a custom writer's profile. Names changed, timeline set in the future, with an AI prediction of how far ahead of the mainstream curve the content sits.
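The generation flow described above (voice profile plus injected articles in, draft out) can be sketched roughly as follows. Every name here (`WriterProfile`, `build_prompt`, the prompt wording) is an assumption for illustration, not Temporal Leak's actual implementation.

```python
# Hypothetical sketch of the Temporal Leak prompt assembly: combine the
# user's writer profile with injected news articles before calling an LLM.
from dataclasses import dataclass, field

@dataclass
class WriterProfile:
    voice_notes: str                          # distilled from the user's own writing
    sample_excerpts: list = field(default_factory=list)

def build_prompt(profile: WriterProfile, articles: list, topic: str) -> str:
    """Assemble the generation prompt from profile, context articles, and topic."""
    samples = "\n---\n".join(profile.sample_excerpts)
    context = "\n---\n".join(articles)
    return (
        f"Write a blog post about: {topic}\n"
        f"Match this voice: {profile.voice_notes}\n"
        f"Style samples:\n{samples}\n"
        f"Source articles for inspiration:\n{context}\n"
        "Change all names and set the timeline in the near future."
    )
```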

💙Live

Harm Helper

Trauma-Informed AI Intake

Ravix DeWolf wrote the book on harm reduction in alt-life communities. Combined with Melissa's medical expertise, they created a chatbot trained to be trauma-informed and validating — not robotic. The facilitator watches in real time, gets AI assessments, and can pass notes to guide the conversation. No data stored on servers.

🖥️Live

SmartPC

The Computer That Fixes Itself

Born from 15 years of MSP automation experience and ownership of two managed service providers, SmartPC is what happens when a systems engineer decides PCs should fix themselves. AI-powered monitoring, self-healing, and predictive maintenance — one install, zero headaches. The business itself is designed to be 98% autonomous, from onboarding to billing to diagnostics. Includes a reseller platform built specifically for MSPs and IT shops.

⚙️

Hardware & Robotics

Our push toward embodied AI — building the perception and safety layers that must exist before any machine moves in human spaces.

👁️Depth Perception

JARVIS Vision

Our computer vision pipeline fusing an Orbbec Femto Mega depth camera with LLM-powered scene understanding. JARVIS Vision captures color + depth streams in real time, processes frames through Gemini Flash for structured scene analysis, and maintains spatial awareness of the environment. The system understands object positions in 3D space — not just pixels on a screen.
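The "spatial awareness" piece of that pipeline might look something like this: a small structure holding each detected object's 3D position, updated as new frames arrive. `SceneObject` and `update_spatial_map` are stand-in names for illustration; this is not the real Orbbec or Gemini integration.

```python
# Hypothetical sketch of JARVIS Vision's spatial map: the latest known
# 3D position (in meters, camera frame) for each object label returned
# by the LLM's structured scene analysis.
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str
    x: float  # right of camera center
    y: float  # below camera center
    z: float  # depth, from the depth stream

def update_spatial_map(spatial_map: dict, detections: list) -> dict:
    """Fold one frame's detections into the persistent object-position map."""
    for obj in detections:
        spatial_map[obj.label] = (obj.x, obj.y, obj.z)
    return spatial_map
```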

🧠Full Awareness

JARVIS Sense

The full sensory fusion daemon. Combines YOLOv8 object detection, MediaPipe pose/face/hand tracking, depth mapping, predictive physics, and activity classification into a single perception state. Runs at 15 FPS on the Alienware's RTX 3080, rendered on a 4K HUD display. Sense doesn't just see — it understands posture, gestures, gaze direction, and can predict physical interactions before they happen.
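A "single perception state" fusing those detector outputs might be shaped like the structure below. The field names and `fuse` helper are assumptions for illustration; the daemon's actual schema is not public.

```python
# Hypothetical shape of JARVIS Sense's fused perception state: one object
# per frame combining object detection, pose tracking, depth, activity
# classification, and predicted physical interactions.
from dataclasses import dataclass, field

@dataclass
class PerceptionState:
    timestamp: float
    objects: list = field(default_factory=list)    # e.g. YOLOv8 detections
    poses: list = field(default_factory=list)      # e.g. MediaPipe landmarks
    depth_map: object = None                       # per-pixel depth frame
    activity: str = "unknown"                      # classified activity label
    predicted_contacts: list = field(default_factory=list)  # physics predictions

def fuse(objects, poses, depth_map, activity, predictions, t) -> PerceptionState:
    """Merge per-detector outputs into one state for the HUD renderer."""
    return PerceptionState(t, objects, poses, depth_map, activity, predictions)
```

Keeping everything in one frame-stamped state object is what lets a downstream renderer (or safety layer) reason about posture, gaze, and predicted contact from a single snapshot rather than polling detectors separately.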

🤖In Development

Robotics Safety Research

Everything we build in Vision and Sense feeds into our long-term robotics safety research. The core thesis: before any AI controls a physical body, it needs to deeply understand human movement, spatial boundaries, and predictive physics. We're building the perception layer first — so when the actuators come online, the intelligence behind them already knows how to exist safely in human spaces.