For Mr. Alford & ARC Specialties

What If Your Robot
Could See the Human
in the Room?

We're not here to sell you something. We're here because we've been building something unusual — and your lab is the obvious next chapter.

A Two-Person Lab That Ships
Real Systems

TLC AI Lab is Ravix and Vic. We build AI that actually runs — not demos, not decks. Our flagship system is JARVIS: a living intelligence we've been building and deploying in production for over a year. It runs 120+ automated workflows, manages knowledge across structured memory systems, and now — it has eyes, ears, and a voice.

120+ Automated Workflows
890+ Production Deploys
3.2K Knowledge Artifacts
47 Intelligence Nodes Live

We Gave JARVIS
A Body

Over the past few months we've been systematically connecting JARVIS to the physical world. Here's what's live right now — not planned, not proposed, running.

👁️

LLM Vision

Camera feeds processed by large language models that understand the scene — not just detect objects, but comprehend what's happening. JARVIS can describe what it sees in natural language.

Live
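
To make that concrete, here's a minimal sketch of the frame-to-language loop, assuming an OpenAI-style vision endpoint (the model name and camera index are placeholders, not our production stack):

    import base64

    import cv2
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def describe_frame(camera_index: int = 0) -> str:
        """Grab one frame and ask a vision LLM what is happening in it."""
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("camera read failed")

        # JPEG-encode the frame and wrap it as a data URL for the API.
        _, jpeg = cv2.imencode(".jpg", frame)
        data_url = "data:image/jpeg;base64," + base64.b64encode(jpeg.tobytes()).decode()

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this scene."},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }],
        )
        return response.choices[0].message.content
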
🧑‍🤝‍🧑

Face Detection & Recognition

JARVIS recognizes people it's met before. Walk into a room and it knows who you are. It tracks presence and distinguishes someone newly arriving from someone already seated.

Live
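
A stripped-down sketch of that check, using the open-source face_recognition library (the names, file paths, and tolerance here are illustrative):

    import face_recognition

    # Encodings for people JARVIS has met, built from saved snapshots.
    known = {
        "Ravix": face_recognition.face_encodings(
            face_recognition.load_image_file("faces/ravix.jpg"))[0],
        "Vic": face_recognition.face_encodings(
            face_recognition.load_image_file("faces/vic.jpg"))[0],
    }

    def who_is_here(frame_path: str) -> list[str]:
        """Return the names of known people visible in one camera frame."""
        frame = face_recognition.load_image_file(frame_path)
        names = []
        for encoding in face_recognition.face_encodings(frame):
            matches = face_recognition.compare_faces(
                list(known.values()), encoding, tolerance=0.6)
            names.extend(name for name, hit in zip(known, matches) if hit)
        return names or ["someone new"]
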
🏃

Body Pose & Tracking

JARVIS can detect if a person is standing, sitting, crouching, or leaning. It tracks body position relative to a fixed reference frame — in real time, from a standard camera.

Live
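
Here's roughly what that classification looks like, sketched with MediaPipe Pose (the height-ratio heuristic is illustrative, not our exact model):

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    def classify_posture(frame) -> str:
        """Label a single BGR frame as standing, crouching, or unknown."""
        with mp_pose.Pose(static_image_mode=True) as pose:
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not results.pose_landmarks:
                return "unknown"
            lm = results.pose_landmarks.landmark
            # Landmarks are normalized: y grows downward, 0 = top of frame.
            hip = lm[mp_pose.PoseLandmark.LEFT_HIP].y
            knee = lm[mp_pose.PoseLandmark.LEFT_KNEE].y
            shoulder = lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y
            # When crouching, the hip drops toward the knee.
            return "crouching" if (knee - hip) < (hip - shoulder) * 0.5 else "standing"
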
🎙️

Hearing (STT)

Whisper-powered speech-to-text with silence detection. JARVIS listens, detects the end of a thought, and responds. No push-to-talk. No wake word. Just speak.

Live
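
A minimal sketch of listen-until-silence, assuming openai-whisper plus a simple energy gate (the thresholds are illustrative):

    import numpy as np
    import sounddevice as sd
    import whisper

    RATE = 16000           # Whisper expects 16 kHz mono
    CHUNK = 0.25           # seconds per recorded block
    SILENCE_RMS = 0.01     # energy below this counts as silence
    SILENCE_BLOCKS = 6     # ~1.5 s of quiet = end of thought

    model = whisper.load_model("base")

    def listen_once() -> str:
        """Record until the speaker pauses, then transcribe."""
        blocks, quiet = [], 0
        while quiet < SILENCE_BLOCKS:
            block = sd.rec(int(RATE * CHUNK), samplerate=RATE,
                           channels=1, dtype="float32")
            sd.wait()
            blocks.append(block)
            rms = float(np.sqrt(np.mean(block ** 2)))
            quiet = quiet + 1 if rms < SILENCE_RMS else 0
        audio = np.concatenate(blocks).flatten()
        return model.transcribe(audio)["text"].strip()
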
🗣️

Voice (TTS)

Natural-sounding speech output. JARVIS speaks back — announces what it sees, confirms actions, warns about conditions. It has a voice that people respond to intuitively.

Live
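
The speech path itself is simple. A sketch with the offline pyttsx3 engine (rate and phrasing are illustrative):

    import pyttsx3

    engine = pyttsx3.init()
    engine.setProperty("rate", 175)  # words per minute

    def announce(text: str) -> None:
        """Speak a sentence and block until it finishes."""
        engine.say(text)
        engine.runAndWait()

    announce("Person detected inside the work envelope. Pausing the arm.")
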
⚖️

B-Bot: The Balance Robot

We hacked a self-balancing robot. JARVIS has telemetry access, can send serial commands, and runs a safety watchdog that stops the motors if the system detects runaway conditions.

Live
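
A sketch of what that watchdog amounts to, assuming a pyserial link and a line-based telemetry protocol (the port, limits, and STOP command are illustrative, not B-Bot's actual firmware interface):

    import time

    import serial

    MAX_TILT_DEG = 35.0
    TELEMETRY_TIMEOUT_S = 0.5

    def watchdog(port: str = "/dev/ttyUSB0") -> None:
        """Cut the motors on runaway tilt or loss of telemetry."""
        link = serial.Serial(port, 115200, timeout=0.1)
        last_seen = time.monotonic()
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if line.startswith("tilt:"):
                last_seen = time.monotonic()
                if abs(float(line.split(":", 1)[1])) > MAX_TILT_DEG:
                    link.write(b"STOP\n")   # hypothetical stop command
                    break
            if time.monotonic() - last_seen > TELEMETRY_TIMEOUT_S:
                link.write(b"STOP\n")       # telemetry went quiet
                break
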

The Blind Spot in
Every Industrial Robot

What a Robot Knows

  • Its own joint positions (encoders)
  • Its own velocity and torque
  • Whether it's hit a force limit
  • The programmed path it's executing

What a Robot Is Blind To

  • A technician leaning into the work envelope
  • Someone crouching below the sensor line
  • A person's hand 8 inches from the end effector
  • Any human body position at all

The Driverless Car Already Solved This

A self-driving car has onboard sensors — wheel speed, gyro, steering angle. It knows itself.

But it also has LiDAR, cameras, and radar: external sensors that see the world internal data alone never could.

Industrial robots have the first half. We built the second half.

JARVIS as an
External Safety Layer

We're not asking to replace anything. We're proposing an additive layer — a third-party verification system that sees what the robot can't. Here's what that looks like in practice.

01

JARVIS Watches the Work Zone

A camera pointed at the robot's workspace. JARVIS's body tracking runs continuously: no connection to the arm, no risk, completely passive.

Risk Level: Zero
02

Human Enters the Envelope

The moment a human body enters the defined danger zone, JARVIS detects it — standing, crouching, leaning. It logs the event, announces it, and fires a signal.

Risk Level: Zero (passive output only)
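
In code, that detection-to-signal step is small. A sketch, assuming the pose tracker already yields pixel positions (zone corners and the callbacks are illustrative):

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("jarvis.safety")

    # Danger zone as a pixel-space rectangle: (x_min, y_min, x_max, y_max).
    DANGER_ZONE = (420, 180, 900, 700)

    def in_zone(x: float, y: float) -> bool:
        x_min, y_min, x_max, y_max = DANGER_ZONE
        return x_min <= x <= x_max and y_min <= y <= y_max

    def on_person(x, y, posture, fire_signal, announce) -> None:
        """Log, announce, and signal the moment a body enters the zone."""
        if in_zone(x, y):
            stamp = datetime.now(timezone.utc).isoformat()
            log.info("intrusion at %s: %s person at (%.0f, %.0f)",
                     stamp, posture, x, y)
            announce(f"A {posture} person has entered the work envelope.")
            fire_signal()  # passive output only at this stage
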
03

A Single Wire to the E-Stop

When we're ready: JARVIS sends a signal to a relay. The relay triggers the arm's existing E-stop. The arm pauses. No custom firmware. No invasive changes. One wire.

Risk Level: Low — you control the kill switch throughout
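
A sketch of that one wire, assuming the relay hangs off a Raspberry Pi GPIO pin driven through gpiozero (the pin number and wiring are illustrative):

    from gpiozero import OutputDevice

    # Relay wired into the arm's existing E-stop circuit, so energizing
    # it trips the loop and the arm's own safety logic pauses the arm.
    estop_relay = OutputDevice(17, active_high=True, initial_value=False)

    def fire_estop() -> None:
        """Trip the arm's E-stop through the relay. Nothing custom on the arm."""
        estop_relay.on()

    def reset_estop() -> None:
        """Operator-controlled reset; JARVIS never clears it on its own."""
        estop_relay.off()
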
04

Human-Aware Autonomous Operation

The arm sees the world through JARVIS. It pauses when a human enters. It resumes when they're clear. It speaks before it moves. It understands the room it's in.

Risk Level: Supervised — every step is reversible
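
A sketch of the supervised loop that ties the pieces together (the clear-delay value and callback names are illustrative):

    import time

    CLEAR_DELAY_S = 3.0  # zone must stay empty this long before resuming

    def supervise(person_in_zone, pause_arm, resume_arm, announce) -> None:
        """Pause while a human is in the envelope; resume once it stays clear."""
        paused = False
        clear_since = None
        while True:
            if person_in_zone():
                clear_since = None
                if not paused:
                    announce("Pausing. Person in the work envelope.")
                    pause_arm()
                    paused = True
            elif paused:
                clear_since = clear_since or time.monotonic()
                if time.monotonic() - clear_since > CLEAR_DELAY_S:
                    announce("Envelope clear. Resuming.")
                    resume_arm()
                    paused = False
            time.sleep(0.1)
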

Not Money.
Access.

We want to experiment. We want to learn. And we believe the best place to do that — in Houston, right now — is in a room full of real hardware with someone who knows it.

What We Want

  • 🔌 Let us connect to sensors — read-only, nothing moves
  • 📷 Let us point a camera at a workstation
  • 🤝 Let us observe how your systems behave
  • 📈 Gradually — earn the right to the next experiment

What We Offer

  • 🛡️ A working safety layer, built on your floor, around your machines
  • 📝 Full documentation of every experiment
  • 🎓 A real use case for ARC University & research clients
  • 🤖 Something genuinely new — not a product pitch, a lab story

The Escalation Path: We Earn Every Step

  1. Sensors: Read-only
  2. Vision Layer: Camera + JARVIS, passive
  3. Alert System: Logs & announcements
  4. E-Stop Signal: Supervised relay
  5. Live Integration: Human-aware arm

💬

This isn't a business pitch. We're not asking for a contract, a check, or a partnership agreement. We want to learn from someone who's been doing this for years — and we think we bring something real to the table in return.