Researchers at Meta have come up with a wristband that picks up your muscle twitches and turns them into real-time computer commands – no cameras or implants required.
According to a peer-reviewed paper from scientists at the company formerly known as Facebook, the wrist-worn device prototype allows users to interact with computers through hand gestures, including handwriting and pinch movements. It streams muscle signals wirelessly over Bluetooth and decodes them into computer commands in real time.
The paper, published in the research journal Nature on Wednesday, says the history of the user interface has seen the introduction of keyboards, mice, and touchscreens, all of which require contact and are difficult to use on the move. While gesture-capture devices have also been developed, some require line-of-sight camera sensors, while others are intrusive.
Meanwhile, researchers have long imagined tapping the electrical signals of the brain or muscles to control computers without any physical input device. In practice, this has usually required invasive implants and software custom-trained to each person's unique signal patterns.
The authors claim their new system can capture and decode hand movements without the need for personalized calibration or invasive procedures.
Led by Meta Reality Labs’ research science director Patrick Kaifosh and research VP Thomas Reardon, the team showed how the wristband could be used to recognize gestures in real time to control a one-dimensional cursor, select commands, and even create text on the screen by detecting handwriting gestures.
It's worth noting that the handwriting recognition achieves about 20.9 words per minute (WPM), compared with roughly 36 WPM for typing on mobile-phone keyboards and over 40 WPM for proficient touch typists.
Nonetheless, it's impressive that the wrist-worn surface electromyography (sEMG) device can decode movements in a way that generalizes across users. sEMG places electrodes on the skin to record the tiny electrical signals your muscles produce when they contract.
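To give a rough flavor of how sEMG signals become commands: a decoder typically extracts features such as root-mean-square (RMS) amplitude from short windows of each electrode channel, then feeds them to a classifier. The toy sketch below is purely illustrative, not Meta's actual pipeline; the function names, sample values, and threshold are invented, and real systems use learned neural-network decoders rather than a fixed cutoff.

```python
import math

def rms(channel):
    """Root-mean-square amplitude of one electrode channel's window."""
    return math.sqrt(sum(x * x for x in channel) / len(channel))

def detect_activation(window, threshold=0.5):
    """Return True if any channel's RMS exceeds the threshold,
    i.e. a muscle contraction was likely present in this window.
    (A real decoder would classify *which* gesture, not just detect one.)"""
    return any(rms(ch) > threshold for ch in window)

# One window of samples from two hypothetical electrode channels:
quiet = [0.01, -0.02, 0.015, -0.01]   # relaxed muscle: tiny voltages
active = [0.8, -0.9, 0.85, -0.75]     # contracting muscle: large swings

print(detect_activation([quiet]))          # → False (no contraction)
print(detect_activation([quiet, active]))  # → True  (contraction detected)
```

In a real interface this detection-and-classification loop would run continuously on windows streamed over Bluetooth, emitting cursor moves, selections, or characters in real time.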
“We developed a highly sensitive, easily donned sEMG wristband and a scalable infrastructure for collecting training data from thousands of consenting participants. Together, these data enabled us to develop generic sEMG decoding models that generalize across people,” the authors write in the paper.
“We demonstrate that the decoding performance of handwriting models can be further improved by 16 percent by personalizing sEMG decoding models. To our knowledge, this is the first high-bandwidth neuromotor interface with performant out-of-the-box generalization across people.”
The Reality Labs team is publicly releasing a repository containing over 100 hours of sEMG recordings from 300 participants, covering all three tasks in the publication, in an effort to encourage further research into the wrist-based interface.
They also suggest that the system could be adapted to help those with reduced mobility interact with computers more easily.
“It is unclear whether the generalized models developed here and trained on able-bodied participants will be able to generalize to clinical populations, although early work appears promising. Personalization can be applied selectively to users for whom the generic model works insufficiently well due to differences in anatomy, physiology or behavior.
“However, all of these new applications will be facilitated by continued improvements in the sensing performance of future sEMG devices, increasingly diverse datasets covering populations with motor disabilities, and potentially combining with other signals recorded at the wrist,” the paper said. ®