Interfaces without touch

By: Michael Diedrick on Sep 22, 2020

Tags: Libraries, API Design & Development, Local Business, Kiosks / Touchscreens

The Cleveland Museum of Art has been a digital-forward museum for many years, and it has created an impressive way to explore its collection through the ArtLens Wall, a 40-foot interactive, multi-touch, floor-to-ceiling interface. I met CMA's Chief Digital Information Officer, Jane Alexander, at the Best in Heritage Festival in Croatia, and she stood firm on an opinion that was pretty amazing to me at the time: touchscreens are dead. (I think she went on to say that there will never be another touchscreen at CMA again for as long as she's alive, but c'mon, the POS can't be a touchscreen?)

We've been working with the Milwaukee Public Library for many years and created a display monitor management system that lets them manage multiple screens across multiple libraries via the CMS. I've always felt we needed a way to let people interact with these screens, but in a post-COVID world, and given where the displays are installed, a touchscreen wouldn't do the job right, or at least not on the screen itself.

We've explored a variety of options for using sensors like the Kinect or the Leap Motion controller, but we've always wanted to just use a camera, since they're hardware-independent. Interpreting a live camera feed is quite a feat, actually, but modern advances in machine learning have recently made our dream a potential reality.

Of course, we're not the only digital agency interested in finding touchless ways to control devices. Hello Monday created a proof-of-concept application to show that it's possible to navigate an experience with computer vision using hands alone. They created Touch-less and have open-sourced the code so others can build on it. The demo shows that the approach can definitely work, but it also shows how far it is from being usable in a production or real-life environment.

This demo uses TensorFlow by Google, an open-source platform for machine learning, along with libraries built on top of it, specifically Handpose. Handpose's demo is pretty amazing, although it definitely heats up our CPUs. Another library we're keen on exploring is Fingerpose, which tracks individual fingers and recognizes the hand's overall shape (like a victory sign or a thumbs-up).
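Handpose's output is a list of 21 hand landmarks per detected hand (the wrist plus four joints per finger, each an [x, y, z] triple), and libraries like Fingerpose work by interpreting those landmarks into poses. As a rough illustration of what that interpretation can look like, here's a sketch in plain JavaScript that flags a flat, open hand from Handpose-shaped landmark data. The curl heuristic, threshold, and function names are our own assumptions for the sketch, not Fingerpose's API.

```javascript
// Handpose-style landmark layout: index 0 is the wrist; each finger
// contributes 4 joints ending at its tip (index finger = 5..8,
// middle = 9..12, ring = 13..16, pinky = 17..20).
const WRIST = 0;
const FINGERS = {
  index: [5, 6, 7, 8],
  middle: [9, 10, 11, 12],
  ring: [13, 14, 15, 16],
  pinky: [17, 18, 19, 20],
};

// Euclidean distance between two [x, y, z] points.
function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// A finger counts as extended when its tip sits farther from the
// wrist than its middle (PIP) joint: a crude but cheap curl test.
function isExtended(landmarks, joints) {
  const wrist = landmarks[WRIST];
  const pip = landmarks[joints[1]];
  const tip = landmarks[joints[3]];
  return dist(tip, wrist) > dist(pip, wrist);
}

// "Open hand" here means all four non-thumb fingers are extended.
function isOpenHand(landmarks) {
  return Object.values(FINGERS).every((joints) =>
    isExtended(landmarks, joints)
  );
}
```

In a real app, the `landmarks` array would come from `model.estimateHands(video)` each frame; a production gesture recognizer would also smooth the signal over several frames rather than trusting a single one.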

The bigger question in the end is how to build a system that people will intuitively know how to use, or can easily be taught. Touch-less leaves the user exhausted pretty quickly: its default position is a flat, open hand, and holding your hand flat and open for more than a minute wears on you fast, much less holding it in one place for a long time. But that's the point of a good proof of concept: show what's possible, let UX folks start thinking through better ways for the humans, and let programmers find better ways for the burning CPUs.

Will computer vision with machine learning make interfaces without touchscreens possible? We think so, but it's going to take a bit more research and development before we can make something that stands out.