Hands-free Discourse - An accessibility component with head-tracked pointers


This component uses head tracking through a webcam to control a cursor. Tilt your head up/down to scroll the page. Smile :blush: or smirk to one side :smirk: to click (clicking doesn’t work in Firefox yet).

GitHub: https://github.com/BrowseHandsfree/handsfree-discourse
Demo (no content yet): https://browsehandsfree.com

Component Stats

Filesize: 3.28 MB before gzip
Browsers: All major browsers, excluding some “default” browsers on mobile devices (clicks don’t work in Firefox yet)
Distance: 5m+ in good lighting, ~1m in total darkness
License: Apache License 2.0

:spiral_calendar: Background

Hi! So for the last year, I’ve been working on a tool called Handsfree.js, which helps people with disabilities use the web. With this one library, you’re able to:

  • Move cursors on a webpage and interact with elements through native events
  • Trigger webservices with gestures
  • Control robotics and devices
  • and your desktop!

The library uses computer vision to determine where on the screen you’re facing and places a cursor there. I started it while I was homeless to help a friend who was recovering from a stroke use the web and connect with friends/family.
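
To make that concrete: given an estimated head yaw/pitch per frame, you can project the angles onto the viewport and move a cursor element there. This is only a rough sketch of the idea, not Handsfree.js’s actual code; the `onHeadPose` callback and the ±20° range are assumptions.

```js
// Illustrative sketch only - the library's real API differs.
// Assume some tracker calls onHeadPose({ yaw, pitch }) each frame,
// with angles in radians relative to "facing the screen center".

const cursor = document.createElement('div');
cursor.style.cssText =
  'position:fixed; width:20px; height:20px; border-radius:50%;' +
  'background:rgba(255,0,0,.6); pointer-events:none; z-index:9999';
document.body.appendChild(cursor);

// Map roughly ±20° of head rotation to the full viewport (assumed range).
const RANGE = 20 * Math.PI / 180;

function onHeadPose({ yaw, pitch }) {
  const x = (0.5 + yaw   / (2 * RANGE)) * window.innerWidth;
  const y = (0.5 - pitch / (2 * RANGE)) * window.innerHeight;
  cursor.style.left = `${Math.min(Math.max(x, 0), window.innerWidth)}px`;
  cursor.style.top  = `${Math.min(Math.max(y, 0), window.innerHeight)}px`;
}
```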

This year, my aim is to try and bridge the Creative Code and Accessibility communities with a series of Discourse components, the main one being this topic!

:bookmark: About this project

This component is being designed to help people access Discourse communities hands-free. Specifically, my intent is to create a “Creative Code” category on my forum where people can copy/paste code sandboxes (from CodePen, CodeSandbox, Glitch, etc.) that others can use hands-free.

Because Discourse is a SPA, people can hop around from topic to topic chatting and trying out different embedded apps, without ever re-initializing the camera (like in that first GIF). This is the first/main component in a set of upcoming components that will work together to create a 100% hands-free experience!
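
The SPA point is what makes the always-on camera practical: the tracker only needs to be started once. Here’s a minimal sketch of that pattern in a theme component initializer, assuming a hypothetical `startHeadTracker` helper bundled by the component’s webpack build (the file path and import are illustrative, not this repo’s actual layout):

```js
// javascripts/discourse/initializers/handsfree.js (sketch)
import { withPluginApi } from "discourse/lib/plugin-api";
import { startHeadTracker } from "../lib/handsfree-bundle"; // hypothetical bundled module

export default {
  name: "handsfree-discourse",

  initialize() {
    withPluginApi("0.8", (api) => {
      // Runs once when the app boots: the webcam stays on as the
      // user navigates between topics, since Discourse is a SPA.
      startHeadTracker();

      api.onPageChange(() => {
        // No camera re-init needed here; hook left for per-page tweaks.
      });
    });
  },
};
```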

:electric_plug: Features

  • Built on top of the Jeeliz deep learning library, so it’s extremely fast and lightweight
  • 100% client side and bundled into the component so there are zero external requests
  • Scroll pages by moving the cursor above or below the screen
  • Click elements and focus input fields (no virtual keyboard yet; a rough sketch of the scroll/click logic follows this list)
  • Comes with a webpack environment, deploy scripts, and a sandbox template for local development
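
Scrolling and clicking can both be driven with native DOM events, as mentioned in the Handsfree.js list above. A hedged sketch of how that per-frame logic might look; the `onFrame` callback, the smile flag, and the thresholds are all illustrative assumptions rather than this component’s actual code:

```js
// Illustrative sketch: turn a screen-space cursor + a smile flag into page actions.
// onFrame({ cursorX, cursorY, isSmiling }) is a hypothetical per-frame callback.

const SCROLL_SPEED = 15; // px per frame, assumed value
let wasSmiling = false;

function onFrame({ cursorX, cursorY, isSmiling }) {
  // Scroll when the cursor moves above/below the viewport.
  if (cursorY < 0) {
    window.scrollBy(0, -SCROLL_SPEED);
  } else if (cursorY > window.innerHeight) {
    window.scrollBy(0, SCROLL_SPEED);
  }

  // Click (and focus) the element under the cursor on a new smile.
  if (isSmiling && !wasSmiling) {
    const el = document.elementFromPoint(
      Math.min(Math.max(cursorX, 0), window.innerWidth - 1),
      Math.min(Math.max(cursorY, 0), window.innerHeight - 1)
    );
    if (el) {
      el.dispatchEvent(new MouseEvent("click", { bubbles: true, cancelable: true }));
      if (el.focus) el.focus();
    }
  }
  wasSmiling = isSmiling;
}
```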

:world_map: Roadmap

  • Virtual keyboards
  • Stability controls (for people with tremors)
  • Custom gestures and macros
  • Full-body pose estimators (for using arms as well)
  • Eye tracking
  • Hand trackers
  • Predictive clicking/navigation
  • Predictive typing
  • Lip reading
  • Voice to Text
  • EEG peripherals (for mind control)
  • …more info coming soon…

More soon

I’ve been invited to Eyeo Festival as a Fellow, and my goal is to try to give a lightning talk on this component/project, so I’ll have tons more info soon!


:scream: :blush:

Oz, this does look super cool and like something that could help certain people a lot. I wonder if you should go the “browser plugin” route here to get the best reach.

The people who need this kind of tech would love it on every Discourse site. Convincing admins to install an optional component everywhere is going to be tricky.


Thanks Sam, and that’s a great suggestion about the browser plugin! Actually, I think I’ll give that a try over the weekend.

The main reason I started with Discourse is that it’s super important to me that people can use it on locked-down computers (public libraries, hospitals, etc.) with no extra downloads/setup. The other reason is that I think my initial target audience will actually have to be parents, nurses, and caretakers, and I want to remove as much friction as possible to get them to try it.