handsfree-discourse
This component uses head tracking through a webcam to control a cursor. Tilt your head up or down to scroll the page, and smile or smirk to one side to click (clicking doesn't work in Firefox yet).
GitHub: https://github.com/BrowseHandsfree/handsfree-discourse
Demo (no content yet): https://browsehandsfree.com
Component Stats
Filesize: 3.28 MB before gzip
Browsers: All major browsers, excluding some "default" browsers on mobile devices (clicks don't work in Firefox yet)
Distance: 5m+ in good lighting, ~1m in total darkness
License: Apache License 2.0
Background
Hi! So for the last year, I've been working on a tool called Handsfree.js, which helps people with disabilities use the web. With this one library, you're able to:
- Move cursors on a webpage and interact with elements through native events
- Trigger webservices with gestures
- Control robotics and devices
- and your desktop!
The library uses computer vision to determine where on the screen you're facing and places a cursor there. I started it while I was homeless, to help a friend recovering from a stroke use the web and connect with friends and family.
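To make that idea concrete, here's a minimal sketch (not Handsfree.js's actual API): given a head pose from a face tracker, map yaw/pitch to a point on the screen and move a cursor element there each frame. The `pose` shape and the ±0.5 rad range are assumptions for illustration only.

```js
// Rough illustrative sketch (not Handsfree.js's actual API): map head
// rotation to a point on the screen and move a cursor element there.
const cursor = document.createElement('div')
cursor.style.cssText =
  'position:fixed; width:20px; height:20px; border-radius:50%; background:red; pointer-events:none; z-index:9999'
document.body.appendChild(cursor)

// Assumption: `pose` is { yaw, pitch } in radians from some face tracker,
// and roughly ±0.5 rad of head rotation spans the full screen.
const RANGE = 0.5

function updateCursor (pose) {
  const x = (0.5 + pose.yaw / (2 * RANGE)) * window.innerWidth
  const y = (0.5 - pose.pitch / (2 * RANGE)) * window.innerHeight
  cursor.style.left = `${x}px`
  cursor.style.top = `${y}px`
  return { x, y }
}
```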
This year, my aim is to bridge the Creative Code and Accessibility communities with a series of Discourse components, the main one being the component in this topic!
About this project
This component is being designed to help people access Discourse communities hands-free. Specifically, my intent is to create a "Creative Code" category on my forum where people can copy/paste code sandboxes (from CodePen, CodeSandbox, Glitch, etc.) that others can then use hands-free.
Because Discourse is a single-page application, people can hop from topic to topic, chatting and trying out different embedded apps, without ever re-initializing the camera (like in that first GIF). This is the first and main component in a set of upcoming components that will work together to create a 100% hands-free experience!
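The trick is that the tracker only needs to be set up once per session. A hypothetical sketch of that pattern (the tracker setup shown here is a placeholder, not the component's real init code):

```js
// Hypothetical sketch: start the webcam/tracker once and reuse it across
// Discourse's client-side navigation, instead of re-initializing per page.
let trackerPromise = null

function ensureTracker () {
  // The first call kicks off the expensive webcam + model setup; later
  // calls (e.g. on every route change) just reuse the same promise.
  if (!trackerPromise) {
    trackerPromise = navigator.mediaDevices
      .getUserMedia({ video: true })
      .then(stream => ({ stream })) // placeholder: a real tracker would also load its model here
  }
  return trackerPromise
}

// Because Discourse never does a full page reload, the expensive setup
// only ever runs once, no matter how many topics you visit.
ensureTracker()
```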
Features
- Built on top of the Jeeliz deep learning library, so it’s extremely fast and lightweight
- 100% client side and bundled into the component so there are zero external requests
- Scroll pages by moving the cursor above or below the screen (see the sketch after this list)
- Click elements and focus input fields (no virtual keyboard yet)
- Comes with a webpack environment, deploy scripts, and a sandbox template for local development
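For the curious, here's roughly how the scroll and click behaviors could be wired up. This is an illustrative sketch, not the component's actual source; the `onFrame` callback and its `{ x, y, smiling }` payload are assumptions.

```js
// Illustrative only (not the component's actual code): scroll when the
// cursor sits above or below the viewport, and dispatch a native click
// when a "smile" gesture fires.
const SCROLL_SPEED = 15 // px per frame, arbitrary

function onFrame ({ x, y, smiling }) {
  // Edge scrolling: cursor pushed past the top or bottom of the screen
  if (y < 0) window.scrollBy(0, -SCROLL_SPEED)
  else if (y > window.innerHeight) window.scrollBy(0, SCROLL_SPEED)

  // Click via native events so links, buttons, and inputs behave normally
  if (smiling) {
    const el = document.elementFromPoint(
      Math.min(Math.max(x, 0), window.innerWidth - 1),
      Math.min(Math.max(y, 0), window.innerHeight - 1)
    )
    if (el) {
      el.dispatchEvent(new MouseEvent('click', { bubbles: true, clientX: x, clientY: y }))
      if (typeof el.focus === 'function') el.focus() // focus input fields too
    }
  }
}
```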
Roadmap
- Virtual keyboards
- Stability controls (for people with tremors)
- Custom gestures and macros
- Full-body pose estimators (for using arms as well)
- Eye tracking
- Hand trackers
- Predictive clicking/navigation
- Predictive typing
- Lip reading
- Voice to Text
- EEG peripherals (for mind control)
- …more info coming soon…
More soon
I’ve been invited to Eyeo Festival as a Fellow, and my goal is to give a lightning talk on this component/project, so I’ll have tons more info soon!