Note: Event details may change. Please refer to the University of Toronto Robotics Institute’s events page for the most current information.
Speaker: Sergey Stavisky
Talk title: A multi-modal brain-computer interface for restoring lost communication
Date: Friday, March 20, 2026
Time: 3-4 p.m.
Location: MY580 and Online via Zoom
Abstract:
Restoring the ability to communicate to people with neurological injuries has long been a goal of neurotechnology research; today, this dream is on the verge of coming to fruition, with commercial cursor-and-click brain-computer interface (BCI) clinical trials already underway. I will describe our lab’s development of an intracortical speech BCI, the next frontier in restoring communication. First, we built a “brain-to-text” speech BCI with 99% word accuracy. To this core capability, we’ve added neural cursor control over the participant’s personal computer (despite recording from orofacial cortex). We’ve also augmented text decoding with a loudness layer and a gesture (emoji) layer, both of which provide added expressivity, and we prototyped a neural error decoder that can reduce user frustration. Lastly, I’ll describe our progress toward an instantaneous voice synthesis BCI aimed at functionally replacing the paralyzed vocal system.
Bio:
I’m a neuroscientist and neural engineer working at the intersection of systems and computational neuroscience, neural engineering, and machine learning. I’m trying to understand how the brain controls movements, and to use this knowledge to build brain-computer interfaces (BCIs) that treat brain injury and disease. My immediate goal is to develop BCIs for restoring speech. Closely related, I’m developing next-generation neural interfaces for human use.
As an Assistant Professor in the Department of Neurological Surgery at the University of California, Davis, I co-direct the UC Davis Neuroprosthetics Lab. Prior to that, I was a postdoctoral fellow in the Stanford Neural Prosthetics Translational Laboratory led by Jaimie Henderson and Krishna Shenoy.
