The user is disabled: solving for physical limitations in VR

Motion tracking has several accessibility challenges, not just because of physical disabilities, but because we currently can't do a good job with complex movements involving body parts that can't hold controllers. We're doing a good job on the head tracking front because, duh, we're wearing those goggles on our heads. But when it comes to the 1:1 translation of movement into VR, we're fairly limited (for now).

Consider optic motion tracking, where we already have rudimentary full-body tracking, or finger tracking — but only within certain bounds as dictated by the limitations of how optics tracking fundamentally works.

Optics tracking has real problems like occlusion, or being designed for people who fit a certain body stereotype. As some people may have experienced in the past with the Kinect, if the optics software was designed to expect you to have four limbs clearly visible, or to be standing, your user experience sucks. You might not even be able to use the technology at all!

Present-day optics do a bad job of detecting where you're located or what your limbs are doing if you don't match what the software expects your body to look like.

Defining the design challenges

So, we don't have good 1:1 full-body tracking for everyone, or anyone, whether it's with Lighthouse-style lasers or the more traditional optics solution. And based on the designs of the Vive and Rift controllers, we also know that most(?) of our finger interactions are going to be abstracted through button inputs.

These two design factors mean that every single user, disabled or otherwise, will be unable to perform many basic physiological human functions within virtual reality, simply because we have no way of translating those actions into the game world. Obvious examples: eye tracking isn't really a thing (yet), and inputs like finger gestures or moving your feet aren't possible either.

In essence, there are two challenges here:

  1. People who are fully or mostly able-bodied in real life won't have full-body functionality in virtual reality due to hardware limitations

  2. People who have some kind of physical limitation in real life will struggle with assumptions that the user is able-bodied, to the point where certain VR experiences are impossible (thanks optics)

In a bad-good way, having to accommodate #1 will level the playing field for lots of people who are impacted by #2. Because we can't run around freely with our VR goggles on, walls be damned, everyone is limited to the physics and geography of real-space, not just people with physical disabilities.

What works well?

The Lighthouse system in particular is actually great for accessibility all-around because, unlike optics, it makes no assumptions about the completeness or orientation of the user's body. All that Lighthouse cares about is how high your head is from the ground and whether you can hold and manipulate the controllers.
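To make that concrete, here's a minimal, engine-agnostic sketch in TypeScript of what designing off measured head height (rather than an assumed posture) can look like. Everything here is hypothetical and illustrative, not taken from any actual SDK:

```typescript
// Hypothetical sketch: derive interaction heights from the user's
// tracked head position instead of assuming a standing body.

interface Vec3 {
  x: number;
  y: number; // height above the floor, in meters
  z: number;
}

// Average head height over a short calibration window. This works the
// same whether the user is standing, seated, or in a wheelchair,
// because we never assume a posture.
function calibrateHeadHeight(headSamples: Vec3[]): number {
  const total = headSamples.reduce((sum, p) => sum + p.y, 0);
  return total / headSamples.length;
}

// Place interactive elements at a comfortable fraction of head height
// (roughly chest level), rather than at a hard-coded "standing" height.
function comfortableInteractionHeight(headHeight: number): number {
  return headHeight * 0.75; // illustrative ratio, tune with user testing
}

// Example: a seated user calibrates at ~1.2 m, so UI lands near 0.9 m.
const seatedHead = calibrateHeadHeight([
  { x: 0, y: 1.19, z: 0 },
  { x: 0, y: 1.21, z: 0 },
]);
console.log(comfortableInteractionHeight(seatedHead)); // ≈ 0.9
```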

But, since Lighthouse is made for room-scale VR, we also need to consider how the user is being asked to move around in real-space in order to manipulate themselves in VR-space. This is where content creators come in!

Sitting VR experiences are perfectly suited to anyone who can sit and has a place to do so comfortably, but they deny locomotion and severely restrict anything that can't be done from one's seat. These are great for games that keep your user in a cockpit, or sitting at a control panel where every interactive piece is easily within reach. Lots of accessibility needs are met, since there are few physical demands.

Standing-only VR experiences start allowing for full-body movements like ducking and dodging, but also make an assumption that the user is able to stay standing the entire time.

When we get out as far as room-scale, where the user is able to move freely around within the bounds of the active tracking space, we start making assumptions that the user can walk, crouch, kneel, and sometimes do so repeatedly over a long period of gameplay. You don't have to be physically disabled to understand how this might not be the best user experience.

Anyone with bad knees or an old injury who is asked to repeatedly crouch down and pick things up off the ground can quickly hit their limit and put your game down simply because they can't meet the physical demands. Even reaching for the floor can be difficult, depending on the user's range and flexibility.

[Image: Brian, using the Vive in a wheelchair, reaching down to pick an object up off the floor inside a Vive demo]
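One way a game could soften that demand is a "reach assist": once an object sits below a comfort threshold, let the controller grab it from a distance instead of requiring a crouch. Here's a hedged sketch of that idea, with purely illustrative names and thresholds (this isn't how any shipping title actually does it):

```typescript
// Illustrative "reach assist": objects below a comfort threshold can be
// grabbed from a distance, so the user never has to crouch to the floor.

interface Vec3 { x: number; y: number; z: number; }

interface Grabbable {
  name: string;
  position: Vec3; // meters, y = height above the floor
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// comfortFloor: below this height, a normal grab would require crouching.
// assistRadius: how far the "magnetic" grab can reach. Both are settings
// a player could tune to their own range and flexibility.
function canAssistGrab(
  controller: Vec3,
  item: Grabbable,
  comfortFloor = 0.6,
  assistRadius = 1.5
): boolean {
  if (item.position.y >= comfortFloor) {
    return false; // within comfortable reach: use the normal grab rules
  }
  return distance(controller, item.position) <= assistRadius;
}

// Example: a key on the floor, controller held at waist height.
const key: Grabbable = { name: "key", position: { x: 0.3, y: 0.05, z: 0.4 } };
console.log(canAssistGrab({ x: 0, y: 0.9, z: 0 }, key)); // true
```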

Accessibility options for UI design

Right now, virtual reality is primarily limited to seated or standing experiences, with a growing space for room-scale thanks to the Vive. That leaves content creators with a big design challenge on their plate: maximize awesome interactions while minimizing both physical discomfort and the dissonance between user input and game output (e.g. pressing a button to walk forward, instead of just, you know, using your legs).

We can also provide alternative interaction models or user interfaces within the VR experiences we're creating. This could be something as simple as providing a Seated Mode of gameplay, where all of the interactive objects in the game are within arm's reach while sitting down, or maybe it changes player movement in-game to suit users who can't stand or move around easily.
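As a sketch of what a Seated Mode toggle might do under the hood (assuming a hypothetical object model, with illustrative numbers): pull anything outside a seated reach envelope in toward the player, preserving direction so the scene still reads the same, just closer.

```typescript
// Hypothetical Seated Mode: interactables outside a seated reach
// envelope get pulled in toward the player so everything stays
// operable from a chair.

interface Vec3 { x: number; y: number; z: number; }

interface Interactable {
  name: string;
  position: Vec3; // meters, relative to the seated player's chest
}

// Clamp an interactable into the reach envelope by scaling its offset
// from the player down to maxReach. Direction is preserved, so the
// scene layout still "reads" the same.
function clampIntoReach(item: Interactable, maxReach: number): Interactable {
  const d = Math.hypot(item.position.x, item.position.y, item.position.z);
  if (d <= maxReach) return item; // already reachable, leave it alone
  const scale = maxReach / d;
  return {
    ...item,
    position: {
      x: item.position.x * scale,
      y: item.position.y * scale,
      z: item.position.z * scale,
    },
  };
}

function enableSeatedMode(scene: Interactable[], maxReach = 0.7): Interactable[] {
  return scene.map((item) => clampIntoReach(item, maxReach));
}

// Example: a lever 1.4 m away gets pulled in to 0.7 m.
console.log(enableSeatedMode([{ name: "lever", position: { x: 1.4, y: 0, z: 0 } }]));
```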

Players who have trouble making use of VR controllers can be offered the option to use gaze as an alternative UI to interact with the environment. This is something that we can do on any VR headset, not just the Vive. A little Gear VR demo called Bazaar makes great use of this model of interaction along with nodding to confirm. The win for accessibility here is the happy side effect of being designed for a VR headset that can't rely on the user having a controller at all.
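Bazaar's actual code isn't shown here, but the general model usually boils down to a dwell timer plus a confirmation gesture. A hypothetical sketch of gaze-plus-nod selection:

```typescript
// Illustrative gaze-plus-nod interaction (not Bazaar's actual code):
// gazing at a target for a dwell period highlights it, and a nod (head
// pitching down past a threshold) confirms the selection.

class GazeSelector {
  private dwellElapsed = 0;
  private armed = false; // target highlighted, waiting for a nod

  constructor(
    private dwellSeconds = 1.0,
    private nodPitchRadians = 0.35 // ~20 degrees of downward pitch
  ) {}

  // Call once per frame with the frame time, whether the gaze ray hits
  // the target, and the current head pitch (positive = looking down).
  update(
    dt: number,
    gazingAtTarget: boolean,
    headPitch: number
  ): "none" | "highlight" | "confirm" {
    if (!gazingAtTarget) {
      this.dwellElapsed = 0;
      this.armed = false;
      return "none";
    }
    this.dwellElapsed += dt;
    if (this.dwellElapsed >= this.dwellSeconds) this.armed = true;
    if (this.armed && headPitch >= this.nodPitchRadians) {
      this.armed = false; // reset after a confirmed selection
      this.dwellElapsed = 0;
      return "confirm";
    }
    return this.armed ? "highlight" : "none";
  }
}

// Example: after ~1 s of gazing, a downward nod confirms.
const gaze = new GazeSelector();
gaze.update(0.6, true, 0); // "none" (still dwelling)
gaze.update(0.6, true, 0); // "highlight" (dwell met)
console.log(gaze.update(0.016, true, 0.4)); // "confirm"
```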

And if you think you can make a safe assumption that most people will be able to play your game comfortably, check your assumptions with tons of user testing and feedback. Almost 33% of adults in the U.S. have at least one basic action difficulty or complex activity limitation. That's a huge chunk of VR's potential consumer market, so interaction designs should be evaluated for any experiences that require uncomfortable or even impossible interactions from users.

Another smart response to this design need is The Gallery, a game coming out later this year for the Vive, which puts the player on a platform that moves both vertically and horizontally, serving as a way to guide the player around the game world while keeping interactive controls close by. This is great for users who can't continually crouch, reach, stoop down, or perform other physical activity that causes discomfort.

By defining user input and interface limitations on both the hardware side and the human side, we can better tailor VR experiences to specific degrees of physical demands and limitations for everyone. We can also offer accessibility settings, a more real-life kind of "difficulty" setting, that match the VR interactions and UI to the capabilities of the end user.
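To make that "difficulty setting" idea concrete, here's one hypothetical shape it could take in data, where a profile of the user's physical capabilities drives the interaction adjustments described above (every name and threshold here is illustrative):

```typescript
// Hypothetical accessibility profile: a real-life "difficulty" setting
// that matches interactions and UI to what the user can actually do.

interface AccessibilityProfile {
  canStand: boolean;        // standing-only content is off-limits if false
  canCrouch: boolean;       // gate floor-level interactions on this
  maxReachMeters: number;   // drives Seated Mode / reach-assist thresholds
  preferGazeInput: boolean; // fall back to gaze + nod instead of controllers
}

interface InteractionSettings {
  seatedMode: boolean;
  reachAssist: boolean;
  gazeUI: boolean;
}

function settingsFor(profile: AccessibilityProfile): InteractionSettings {
  return {
    seatedMode: !profile.canStand,
    reachAssist: !profile.canCrouch || profile.maxReachMeters < 0.7,
    gazeUI: profile.preferGazeInput,
  };
}

// Example: a seated player with limited reach who can use controllers.
console.log(settingsFor({
  canStand: false,
  canCrouch: false,
  maxReachMeters: 0.6,
  preferGazeInput: false,
})); // { seatedMode: true, reachAssist: true, gazeUI: false }
```

Designing with accessibility in mind from the beginning truly makes it possible for everyone to experience the magic of VR.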

AUTHOR

Adrienne Hunter

Founder @ Tomorrow Today Labs. Co-creator of NewtonVR.com and insistent on good UX in virtual reality.
