Responsive AI
Three loops running in parallel. Each one was a different sponsor's sandbox. Together they made the lecturer feel alive.
Real-time voice with end-of-turn detection. The lecturer can be interrupted at any point; it yields the floor gracefully and picks the thread back up.
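A minimal sketch of how that loop can be mocked client-side with the browser's Web Speech API (the same API this page's demo uses). The silence threshold and the turn handler below are illustrative stand-ins, not the team's tuned values:

```ts
// Interruptible lecturer with end-of-turn detection, mocked on the
// Web Speech API. TURN_SILENCE_MS and handleStudentTurn are
// illustrative stand-ins, not the original build's values.
const TURN_SILENCE_MS = 900;

const Recognition =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognizer = new Recognition();
recognizer.continuous = true;
recognizer.interimResults = true;

let turnTimer: number | undefined;

function lecture(text: string) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Barge-in: the moment the student starts talking, cut the lecturer off.
recognizer.onspeechstart = () => {
  if (speechSynthesis.speaking) speechSynthesis.cancel();
};

// End-of-turn: a final recognition result followed by a stretch of
// silence is treated as the student yielding the floor.
recognizer.onresult = (event: any) => {
  const result = event.results[event.resultIndex];
  if (!result.isFinal) return;
  window.clearTimeout(turnTimer);
  turnTimer = window.setTimeout(
    () => handleStudentTurn(result[0].transcript),
    TURN_SILENCE_MS
  );
};

function handleStudentTurn(transcript: string) {
  // The real build routed this to the backend; the mock just has the
  // lecturer acknowledge the interruption and pick the thread back up.
  lecture(`Good question about "${transcript}". Back to where we were.`);
}

recognizer.start();
```

Treating end-of-turn as a final result plus silence, rather than silence alone, keeps the lecturer from jumping in mid-pause.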
Slides and whiteboard regenerate in response to verbal cues. Wikipedia, YouTube transcripts, and image search feed the same context window.
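One way that can be wired, assuming each source reduces to an async fetcher whose snippets land in one shared context. Only the Wikipedia call below hits a real public endpoint; every other name here (buildContext, onVerbalCue, the stubs) is hypothetical:

```ts
type Snippet = { source: string; text: string };

// Wikipedia's public REST summary endpoint is real; the other two
// sources are stubbed because transcript scraping and image search
// need API keys or server-side help.
async function fetchWikipedia(topic: string): Promise<string> {
  const res = await fetch(
    `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(topic)}`
  );
  const data = await res.json();
  return data.extract ?? "";
}
const fetchYouTubeTranscript = async (_topic: string) => "(transcript stub)";
const fetchImageCaptions = async (_topic: string) => "(image-caption stub)";

// All three sources feed the same context window.
async function buildContext(topic: string): Promise<Snippet[]> {
  const [wiki, yt, images] = await Promise.all([
    fetchWikipedia(topic),
    fetchYouTubeTranscript(topic),
    fetchImageCaptions(topic),
  ]);
  return [
    { source: "wikipedia", text: wiki },
    { source: "youtube", text: yt },
    { source: "image-search", text: images },
  ];
}

// A verbal cue ("wait, show me a diagram") triggers a regeneration pass
// over the same context the voice loop reads from.
async function onVerbalCue(cue: string, topic: string) {
  const context = await buildContext(topic);
  console.log("regenerating slides for:", cue, context); // stand-in for the real pass
}
```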
A side channel watches attention and confusion. The lecturer pauses, slows down, or backtracks based on what it sees on your face.
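A sketch of what that control policy might look like, assuming some vision model (not shown) emits attention and confusion scores in [0, 1] a few times a second; the thresholds and smoothing factor are illustrative guesses, not the team's tuned numbers:

```ts
// Side-channel control policy: map face signals to a lecturer action.
// Thresholds and ALPHA are illustrative, not the original build's values.
type LecturerAction = "continue" | "slow_down" | "pause" | "backtrack";

let smoothedConfusion = 0;
const ALPHA = 0.2; // exponential smoothing: one odd frame can't trigger a backtrack

function onFaceSignal(attention: number, confusion: number): LecturerAction {
  smoothedConfusion = ALPHA * confusion + (1 - ALPHA) * smoothedConfusion;
  if (attention < 0.3) return "pause";             // student looked away: hold the thread
  if (smoothedConfusion > 0.8) return "backtrack"; // sustained confusion: re-explain
  if (smoothedConfusion > 0.5) return "slow_down";
  return "continue";
}
```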
AdaptEd shipped at LA Hacks 2024, UCLA's annual hackathon and the largest collegiate hackathon in Southern California. It ran April 19–21 with 704 participants across 142 teams, sponsored by Google, Intel, Fetch.ai, MongoDB, Auth0, and a dozen others.
Out of those 142 teams, the original four (Bill Zhang, Spike O'Carroll, Jay Wu, and Jasmine Wu) took first in the Google Gemini Challenge and also won Fetch.ai's Agentified Winners prize. Their thesis: of 16 million U.S. university students, half fall behind on static, one-sided lectures, while fewer than three percent have access to quality tutoring programs. So they built a lecturer that talks back.
This page is a frontend showcase of that build; the original FastAPI backend is preserved in this repo as a historical artifact. The voice loop in the demo here is mocked client-side via the Web Speech API, but everything you see in the demo reel above is real footage from the night of the hackathon.