Reducing cognitive load in VR

http://www.vrinflux.com/reducing-cognitive-load-in-vr-ux/ (Fri, 02 Sep 2016)

First of all, what is cognitive load and where does it come from? Simply put, cognitive load is the amount of mental processing power needed to use your product. It's a term that is used a lot in UX design, and can be affected positively or negatively by every single design choice you make. Even the simplest decisions, such as the basic composition of content elements, or the color palette of your user interface, can help or hurt cognitive load.

The consequence of high cognitive load is fairly significant. Kathryn Whitenton said it best in her article "Minimize Cognitive Load to Maximize Usability":

“When the amount of information coming in exceeds our ability to handle it, our performance suffers. We may take longer to understand information, miss important details, or even get overwhelmed and abandon the task.”

Ignoring the impact that cognitive load can have on a VR experience means risking your users missing critical details or instructions, or even giving up on the experience entirely. Below are just a handful of VR experience design topics that you can use to critically examine whether what you’ve built is working for — or against — your users.

1. Introduce new objects or concepts one at a time

For those of us who are in VR regularly, it's easy to forget how overwhelmed you can get as a new user. Plop someone down into the middle of a VR experience full of virtual objects just screaming to be picked up and played with — you'll easily short-circuit their ability to focus on anything, especially the one thing you want them to be paying attention to. Populating the environment with one object at a time, removing clutter, and limiting the number of distractions will help your users focus on learning the new concept you're trying to show them.
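The staged-introduction pattern above can be sketched in code. Here's a minimal, engine-agnostic illustration in Python (the object and task names are hypothetical) of a tutorial that only reveals each new object once the task for the previous one is complete:

```python
# Hypothetical sketch of a staged tutorial: each concept is introduced
# only after the player completes the task for the previous one,
# keeping the number of active objects (and the cognitive load) low.

class StagedTutorial:
    def __init__(self, stages):
        self.stages = list(stages)   # ordered (object_name, task_name) pairs
        self.current = 0             # index of the stage being taught

    def visible_objects(self):
        # Only objects introduced so far are present in the scene.
        return [obj for obj, _ in self.stages[:self.current + 1]]

    def complete_task(self, task):
        # Advance only when the player performs the *current* task.
        if self.current < len(self.stages) and self.stages[self.current][1] == task:
            self.current = min(self.current + 1, len(self.stages) - 1)
            return True
        return False
```

Because `visible_objects` only ever returns what has been introduced so far, the scene stays uncluttered until the player is ready for more.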

Example: Fantastic Contraption introduces all the moving parts of their VR game one at a time and waits until the player performs each task before moving on to the next.

2. Rely on reality (to start)

Designing virtual environments that your users are already familiar with will cut down hugely on cognitive load — this is arguably the most impactful change you can make. When an environment is familiar to your users, they will be able to rely on their assumptions about what they can do there.

Similarly, virtual objects that look and behave like their real-life counterparts are also easier for users to predict and understand. A wrecking ball is heavy in real life, so swinging a virtual wrecking ball into the side of a building should cause some serious damage. If your intention is to deviate heavily from real life in the design of your environment and interface, look to industrial design or product design concepts that convey their uses through form and the way they behave when someone interacts with them.

Example: Job Simulator takes advantage of the design of familiar objects and uses realistic physics to make them behave in an expected way, while Cosmic Trip sets the bar high when it comes to conveying how to interact with unknown objects in an unfamiliar environment.

3. Put similar objects or actions close to each other

A term for this is "content chunking," which refers to the way we group similar pieces of information. Think about the way you organize your kitchen: if we have multiple items meant for a specific task (spices, cooking utensils), those objects tend to be stored, stacked, or placed next to each other. It's easier to remember where you can find the paprika if you know all of the spices are kept in the same place. Where and how to group things is one of the fundamentals of good UX design — a common real-life example is where to put the raisins in a grocery store so people can easily find them.

In every VR experience, there will be many different ways to group objects together: type, use, size, color, etc. Focusing on what each object is meant to be used for will help you decide where it should be located in order to reduce cognitive load. This is an iterative process, and it affects many different aspects of the user experience, so it is unusual to get it right on the first try.
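As a rough illustration of content chunking, here's a small Python sketch (the object names and zones are made up for the example) that groups objects by their intended use and then places each group in a single zone, so related items always end up next to each other:

```python
# A minimal sketch of "content chunking": objects are grouped by their
# intended use, and each group is assigned one spatial zone so related
# items always end up next to each other. Names are illustrative.

from collections import defaultdict

def chunk_by_use(objects):
    """Group (name, use) pairs into a use -> [names] mapping."""
    groups = defaultdict(list)
    for name, use in objects:
        groups[use].append(name)
    return dict(groups)

def assign_zones(groups, zones):
    """Give every object in a group the same zone."""
    placement = {}
    for (use, names), zone in zip(sorted(groups.items()), zones):
        for name in names:
            placement[name] = zone
    return placement
```

With this layout, the paprika and the salt always land in the same place, which is exactly the property that makes them easy to find.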

Example: Itadakimasu, a therapeutic VR experience, puts hand gesture icons in the environment next to animal characters that respond to those gestures.

4. Map core actions to already-known or easy to learn input methods

The easier the input method, the easier it will be for your users to learn & recall how to perform the task that is associated with that input. Save your product's core tasks or mechanics for basic gestures, single-button inputs, or familiar input methods.

What constitutes a "known" input method will depend on your VR hardware. With NUI (natural user interface) devices in particular, such as the Leap Motion, users may be familiar with how hands work, but unfamiliar with how hands are tracked by the device. They may expect to be able to use hands in all the ways they are used to using them, even if your VR experience only supports a limited set of inputs.

Also keep in mind that "simple input" doesn't necessarily mean it will be easier for users to remember. The grip buttons on the HTC Vive controllers are notorious for tripping up new VR users, who may frequently forget that they exist or what they are used for.
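To make the principle concrete, here's a toy Python sketch of pairing actions with inputs. The importance and difficulty scores are assumptions you'd replace with your own judgment and usability testing, not measured data:

```python
# An illustrative sketch: rank actions by how core they are and inputs
# by how easy they are to learn, then pair them off so the most
# important actions land on the simplest inputs. Scores are assumptions.

def map_actions_to_inputs(actions, inputs):
    """actions: (name, importance); inputs: (name, difficulty).
    Returns {action: input} with core actions on easy inputs."""
    by_importance = sorted(actions, key=lambda a: -a[1])
    by_ease = sorted(inputs, key=lambda i: i[1])
    return {a[0]: i[0] for a, i in zip(by_importance, by_ease)}
```

The point isn't the algorithm; it's the discipline of deciding explicitly which actions are core before deciding which inputs they get.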

Example: Tilt Brush made it easy to change brush size on the fly by mapping the brush size tool to the touch pad of the same controller that users paint with. (Presumably this means that brush size changing is a core action for artists in Tilt Brush, since it's being treated like one.)

5. Make permanent actions hard to fuck up

Being under a lot of cognitive load can make it easier to make mistakes. VR users can feel mentally pummeled with a myriad of new tasks to learn and environments to adapt to. Users under significant cognitive load are more prone to errors, which makes putting the "Delete" interactable next to the "Save" interactable dangerous.

If your VR experience has a way to delete, remove, destroy, or otherwise permanently affect something in a big way, you should make it harder for users to perform those actions. Permanent actions that take effort will go a long way toward preventing mistakes that are difficult to recover from, if recovery is even possible.
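One common way to add that effort is a hold-to-confirm timer. This is a hedged sketch, not a prescription; the dwell time is an assumption you'd tune through testing:

```python
# A sketch of one way to add friction to destructive actions: a
# hold-to-confirm timer. Deletion only fires if the user keeps holding
# the control for a full dwell period, so a stray press does nothing.

class HoldToDelete:
    def __init__(self, hold_seconds=2.0):
        self.hold_seconds = hold_seconds
        self.held_for = 0.0

    def update(self, holding, dt):
        """Call once per frame; returns True when the delete fires."""
        if holding:
            self.held_for += dt
            if self.held_for >= self.hold_seconds:
                self.held_for = 0.0
                return True   # deliberate, sustained input: allow it
        else:
            self.held_for = 0.0  # released early: no delete, no residue
        return False
```

Pairing a timer like this with a visual fill indicator tells the user that something permanent is about to happen, which is half the battle.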

Example: Instead of selecting a button that says "Delete", Fantastic Contraption has players throw levels they don't want anymore into the mouth of a tiny volcano that’s located off to the side of the main menu interface.

6. Run your design through usability testing to make sure it’s working with users and not against them

A wise designer once said, "You are not your user." Users rely on a lifetime's worth of knowledge about systems, relationships, and communication, as well as their beliefs about how products work. This is their "mental model," the cognitive framework that each user refers to in order to determine what’s possible and what’s likely to happen while inside your VR experience.

Mental models can vary from person to person, sometimes wildly and sometimes in patterns. If you have ever had users tell you that a feature feels out of place, a part of the interface should be moved somewhere else, or that something fundamental seems to be missing from your product, this is a symptom of a mismatch in your design and their mental model. You can either change the design to better represent the user's mental model, or improve the user's mental model by making the interface more accurately reflect the systems underlying it.

Optional: Not sure what's wrong? Get help from an expert

If you are struggling with ways to improve the UX of your VR thing, you might want to hire a UX designer. I've been told it's difficult to find a UX designer who has experience with VR right now — if you're on the hunt, I can point you in the direction of some professionals I know.

If you need help, but can't hire a full-time designer, you might want to get a heuristic evaluation (fancy way of saying "expert review") of your VR thing from a UX professional.

Generally speaking, getting a UX person involved will probably cost money if it's more than just a simple question... but trust me, catching UX problems early will save you a lot of time, money, and grief later on.

Additional Resources

Thanks for reading! If you want to hear more about UX design in virtual reality, see cool stuff my friends are making in VR/AR, or add a steady stream of unicorn emoji to your feed, you can follow me on Twitter @snickersnax 🦄

Img source: http://www.emdocs.net/cognitiveload/

Accessibility in VR: Head Height

http://www.vrinflux.com/vr-accessibility-why-changing-head-height-matters/ (Tue, 05 Jul 2016)

Greetings Internet! My name is Brian Van Buren, and I am the Narrative Designer at Tomorrow Today Labs, a VR studio based in Seattle. I am also a wheelchair user — spinal cord injury at the L2/L3 level for those in the know — and am unable to walk or stand. This is the first post of a series focusing on ways to make VR experiences accessible for disabled/mobility impaired users.

VR has tremendous potential to change the way people work, learn and play, but with this unique medium comes unique challenges. Making sure that designers and developers are solving them at this early stage is paramount. My goal in this Accessibility in VR series is to identify the challenges and show ways I and others have attempted to solve them. This post will focus on something unique to room-scale VR: head height.

Head Height Tracking

Before VR, we designed game environments and interactions knowing we had control over the view angle and interaction models. In first-person games you controlled the camera for the player: you could set the view angle, take over the camera when necessary, and design the space and interactions around that.

While this works fine for the classic monitor and mouse-and-keyboard/controller configuration, in VR it is problematic. We don't have control of the camera any more; users can look at whatever they want whenever they want. Taking control of the camera in VR can make users disoriented and nauseous. With the HTC Vive, which uses sensors to determine the head height of the user, we don't even have control of the camera height.

With the Vive, the head height (and controller location) of the user is automatically tracked, placing the "camera level" of the user at something closely approximating their natural head height. The view of the user in the HMD is accurately placed and tracked within the environment, as are the controllers.

Accurate head height/controller placement creates a high degree of immersion; that magic state where you believe that the artificial space is real and that you have presence within it. Since immersion is key to creating engaging VR experiences, and having your natural head height helps increase immersion, this is a great thing, right? Well...

When creating room scale VR spaces, you cannot assume the head height of your user.

Users come in all shapes and sizes; some tall, some short, some seated. Since the user interacts in a room-scale VR space with a realistic approximation of their body, the physical dimensions of both the space and the user matter. Depending on the design of the space and the dimensions/limitations of that user, they may not be able to interact with the space in an ideal fashion, if at all.

Take the Job Simulator Demo for example. In the cooking level, one of the tasks asks the user to put ingredients into a pot on a stove, which is placed at about the same height as a real-life stove top. It was difficult for me to put objects in the pot, and just like in real life I couldn't see into it from my seated perspective. However, in real life I would prop myself up using a counter top to look in; doing that in VR ends up with me doing this.

Standing vs. Seated Perspectives

The following gifs show the differences in interacting with objects at different head heights. In the first, the user is interacting with the environment from a standing perspective (in our own NewtonVR sandbox experience).

This next gif shows a user interacting with the same environment from a seated perspective.

From the seated perspective, it is difficult to see into the drawer, but easy to interact with objects on the floor. From the standing perspective, seeing into the drawer is easy, but the user must duck down to interact with the floor objects. So while the space is accessible from a seated perspective, it is not ideal; neither is forcing a standing user to bend over or kneel down, especially if done often.

So what do we do to ensure an accessible experience for all users when we can't control their physical dimensions? Here are some different ways to create head height accommodations in VR.

Adjustable Head Height

The most obvious accommodation is to allow the user the option to adjust the head height. This allows the user to set a comfortable head height that gives them access to the environment without requiring changes to the design of the space. This is not only important for seated users, but all short users - if you've ever seen a 5-year old try to function in a room scale VR space you know what I'm talking about.

In this example from Hover Junkers, the head height is fully adjustable. Hover Junkers is a competitive shooter where the player must duck behind walls to avoid attack, so creating an accurate representation of the body model through head height tracking is important.

As a design solution, it is functional and fits within the game world and concept — you're being measured for a casket, so the game needs to know how tall you are. Also, the head height adjustment option was added and tested early in development.

Recently, Job Simulator added a "shorter human" mode that raises the head height by a fixed amount; it helps, but it is a one-size-fits-all solution. The design of the option — a switch — is elegant and simple, and has a real-world analog that is common and understandable.

Though the feature was added post-release, the solution works for the most part. It does illustrate the need to identify and fix accessibility issues in the design phase, because users will encounter them eventually.

Adjustable head height is a good accommodation, but it can be problematic for some games. In a multiplayer shooter, for example, if the player's hit box is tied to a body model, does the hit box change when the head height changes? Players may exploit this by choosing the smallest body model possible to avoid being hit.

When changing head height, make sure that the Vive controllers also change height to match. Increasing head height without raising the controllers by the same amount may fix visual problems, but it won't fix interaction problems. Also, raising the head height may put floor objects out of reach, so dropped items may become inaccessible.
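A simple way to guarantee the pairing is to apply the offset to the whole camera rig rather than to the HMD alone. This Python sketch uses a hypothetical rig structure to illustrate the idea:

```python
# A minimal sketch of the pairing requirement: any head-height offset is
# applied to the camera rig as a whole, so the HMD *and* both tracked
# controllers rise by the same amount. The rig structure is hypothetical.

def apply_height_offset(rig, offset):
    """rig: {"hmd": (x, y, z), "left": (x, y, z), "right": (x, y, z)};
    returns a new rig with every tracked device raised by `offset`."""
    return {device: (x, y + offset, z) for device, (x, y, z) in rig.items()}
```

Offsetting the rig root (rather than each device separately) is also how you avoid the hands desynchronizing from the head in the first place.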

Designing Accommodating Spaces

As a designer, I'm partial to design-based solutions. The gifs to follow are from an unreleased prototype from Tomorrow Today Labs. The ping pong ball dispenser went through a few iterations before we came to this one. Initially, the balls were dispensed directly onto the floor, and being small objects they were hard to grab and the repeated kneeling/standing needed to perform the task became uncomfortable quickly. The solution was to dispense the balls into a raised container, but where should it go?

From the seated perspective, the container is about shoulder height.

From the standing perspective, the container is about waist height.

The space is comfortable for a standing user without being difficult for the seated user. This is an example of Universal Design: designing so that all users can function within an inclusive space without that inclusiveness being obtrusive. Designing accommodating spaces requires forethought and planning, but it is a more robust solution than bolting accommodations on after the fact.
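The compromise height can be reasoned about as the overlap of two comfortable-reach bands. The numbers in this sketch are illustrative assumptions, not anthropometric data:

```python
# An illustrative calculation of the compromise above: given comfortable
# reach bands for seated and standing users, place shared interactables
# in the overlap. The band values (in meters) are assumptions.

def shared_reach_band(seated, standing):
    """Each band is (low, high) in meters; returns their overlap, or
    None if no single placement works for both users."""
    low, high = max(seated[0], standing[0]), min(seated[1], standing[1])
    return (low, high) if low < high else None

# If seated users comfortably reach ~0.4-1.3 m and standing users
# ~0.7-1.8 m, a container placed around 0.7-1.3 m works for both:
# roughly shoulder height seated, roughly waist height standing.
```

When the bands don't overlap at all, that's the signal that a design-only solution won't cut it and an adjustable option is needed.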

Bypassing Content

Sometimes, the design of the experience is based around a set of actions that cannot be performed from a seated position. Crawling, lying prone, reaching objects set at a great height, crouching to hide behind chest-high walls — these actions may not be possible for the seated user. In other instances, creating an accurate representation of a real-world space may trump accessibility needs; take it from me, the real world isn't always accessible.

As a designer, you must ask yourself why these actions are required for your experience. Is there another way to maintain/increase immersion without forcing the user to perform the physical actions? Is this mechanic so central to your design that without it the experience is lessened/impossible? Is the accuracy of presenting a real-world space more important than a user's ability to utilize that space?

In Unseen Diplomacy, there is a limited mobility option that allows the user to bypass certain content that cannot be completed from a seated position. If the user cannot advance through the experience, nothing else about it really matters, so providing a bypass option is better than nothing. This interview with designer Katie Good about the accessibility option in Unseen Diplomacy details her thoughts about accessibility design in VR.

When Accessibility is Required

The above examples come from the gaming world, but VR has tremendous potential in other fields. The entertainment field is the first major adopter, and as VR movies and interactive field trips become more popular even more types of experiences will become available. Also, educational and training software in VR has huge advantages in that real-world spaces and actions can be mimicked in safe environments and at lower costs.

In many countries software used in a business setting or for government use is required to have accessibility options. In the United States, for example, the Americans with Disabilities Act is the law regarding accommodation for and discrimination against disabled people. "The law forbids discrimination when it comes to any aspect of employment, including training", so training software used in the United States should be designed with reasonable accessibility options. Microsoft productivity software has very robust accessibility support, so look to them and others for role models on how to make business software accessible.

Final Thoughts

Creating accessible VR spaces to operate in is not difficult. It requires forethought, planning and testing, and sometimes requires creators to make hard choices. But above all it needs awareness within the development/design community.

VR spaces aren't simply meant to be traversed or navigated through; they are meant to be inhabited, by everyone and anyone.

I hope you enjoyed this Accessibility in VR post, and have learned as much from reading it as I did from writing it. Questions? Suggestions? Ideas for future topics in this series? Post them in the comments below, and make sure to follow me on Twitter. Thanks for reading, and see you around the Internet!

Related Reading


http://www.vrinflux.com/the-basics-of-virtual-reality-ux/ (Wed, 09 Mar 2016)

Get started with VR: user experience design

User experience in VR is already a very broad topic. If you're just getting started with virtual reality, you'll quickly realize that we're all standing on the tip of an iceberg, with a lot of undiscovered VR interactions and experiences laying unexplored beneath the surface.

Below is a collection of insights that I've had in the VR design work I've done myself, as well as observations I've made going through a wide variety of VR experiences made by others. Developers and designers who are new to the medium can use this guide to get a jumpstart on their own journey into VR.

VR: it's like theatre in the round

In a lot of my own work, and the way I will talk about some of the topics here, I draw a lot of inspiration from theatre. In particular, theatre in the round is very relevant. Both VR and acting in the round have a lot of the same unique features, most notably:

  1. No place to hide, no angle your audience won't be able to see things from.
  2. Deep connection and engagement with the audience due to the intimacy of the setting and proximity of everyone involved.

In VR, the line between the audience and the actors is blurred beyond recognition. A user in VR isn't an audience member, they're an actor you've invited on-stage to perform with the rest of the company. And even more perplexing, they haven't read the script!

A maybe not totally inaccurate depiction of dropping new users into a VR experience

This places huge importance on the first few minutes of the VR experience: as far as the user is concerned, it's all improv. Make sure you've considered what scaffolding guidance you can offer to ensure that your users know where they are, who they are, and what they're supposed to do next.

Other topics like stage directions, set design, and using props are all areas that someone building VR experiences should familiarize themselves with. Here are some handy rules about staging for theatre in the round that you can consider when considering the user experience your virtual world is providing. I also recommend the book Computers as Theatre for theatre-inspired design thinking that dives deep into the details.

Drawing attention

When you're given the freedom to move around and look at whatever you want, it can be challenging to get users to pay attention when you want them to. It's easy to miss action outside your field of vision, or instructions/hints for how to complete a puzzle.


Lighting

How everything is lit can help direct, guide, and hold attention. Spotlights are handy for pointing out specific areas / objects that you want users to focus on, especially if they come with a directional "turning on" sound effect. The way certain areas remain lit or unlit can provide passive information about where users are able to go or what they're able to interact with.

Lowering the "house lights" and using directional lighting on active NPCs can be a good way to lead the user's attention through scenes where they're not directly interacting with anyone.


Set design and environment design go hand-in-hand when you're working in VR. The objects that the player most often manipulates directly with their hands should face the player and sit well within reach, making them easy to find.

Adding visual effects to objects is a great way to call out specific objects in the environment. There are a handful of VR experiences that highlight objects or places where objects can be used in order to clarify that there is a possible interaction.

The example below is from Job Simulator, showing how color can be used to indicate potential interactions with objects near the player's hands:

Interactable objects highlighted in blue in Job Simulator by Owlchemy Labs

Audio cues

Audio provides a steady, passive stream of information about the user's surroundings, including where everything is located and where the action is happening.

Use directional audio in 3D space to direct attention where you want it to go. Sound effects that are carefully placed in the virtual environment can help turn heads so your players don't miss important events, especially when used in tandem with attention-catching visual effects.

If a character is talking, their voice should be coming from their physical location, and you may even need to move the character around while they're talking in order to direct attention where it's needed.
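As a rough illustration of the idea (real spatializers use HRTFs and are far more sophisticated), here's a constant-power stereo pan driven by the source's position relative to the listener:

```python
# A simplified sketch of directional audio as an attention cue: pan gain
# is derived from where the source sits relative to the listener, so a
# sound to the right is louder in the right ear. Real VR audio engines
# use HRTFs; this constant-power pan is just the idea in miniature.

import math

def stereo_gains(listener_pos, source_pos):
    """Constant-power pan from the source's left/right offset.
    Positions are (x, y, z) with +x to the right and +z ahead."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[2] - listener_pos[2]
    azimuth = math.atan2(dx, dz)                        # 0 = straight ahead
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))  # -1 left .. +1 right
    angle = (pan + 1.0) * math.pi / 4                   # 0 .. pi/2
    return math.cos(angle), math.sin(angle)             # (left_gain, right_gain)
```

Even this crude version demonstrates the payoff: a sound placed off to one side gives the user an immediate, pre-attentive reason to turn their head that way.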

Eye contact

Humans are naturally attracted to looking at faces and eyes. Getting someone's attention can mean using virtual eye contact to direct their gaze and tell them what to focus on. Are there characters in your game that the player is directly interacting with?

Henry, a VR experience from Oculus Story Studio that uses eye contact to increase the feeling of presence

Henry is a character in a VR experience made by the Oculus Story Studio who makes eye contact with the player at meaningful moments to create presence in the space with him.

When used in moderation, meaningful eye contact is very good at creating a sense of presence in VR, and can be effective even to the point of creepiness, e.g. if a character is following you around with their eyes at all times. (It depends on what mood or feeling you're trying to evoke!)

What doesn't work as well?

There are a handful of attention-grabbing techniques that are hit or miss, depending on how they're implemented:

  • text placed on or near the user's hands
  • text floating around in the environment
  • static signs placed in the user's field of view that try to convey vital info

Your surroundings in VR can be so immersive and arresting that it's easy for some users to miss mundane tutorial signs or text near the controllers (the VR equivalent of tooltips).

Fantastic Contraption is a great example of where big text anchored to the controllers is effective enough to serve as an instructional guide that helps people understand how to play the game.

Your mileage may vary: these methods are unreliable. It's not immediately intuitive for users to look at their hands in order to receive instructions, and users who don't notice your helper text anchored to the controllers might end up lost or confused about what they're supposed to do.

While it's true that anything can get attention if there are very few things to look at, or if something is so huge you can't help but see it, big floating lines of text come at the cost of obscuring the user's (beautiful, immersive) surroundings. Use text or signage in VR intentionally, and make every effort to integrate it into the visual style and atmosphere you're trying to create.

Height and accessibility

The VR headset you will be working with places the camera at face-level for the person wearing the HMD. In some VR prototypes, it's easy to guess how tall the designer was because every virtual object tends to get placed at the perfect height for someone exactly as tall as they are.

If you're 5' 10" and built the environment to suit people as tall as you are, you are missing crucial accessibility issues, not just for people shorter than you but for users with different physical abilities as well.

Women tend to have a shorter head height, as do people sitting in wheelchairs or users who are bed-bound. Can people sitting down still play your game or move around your VR environment?

Wheelchair range of use while in the HTC Vive

We also need to consider kids: with short legs and arms, they might not be able to see or reach everything an adult can. Is something placed high up on a shelf or counter, putting it out of sight for anyone under 5 feet tall? Can an 8-year-old easily reach everything they need to interact with?

A height setting that users adjust before they begin the VR experience can alleviate some of these problems. Adapted environments and interaction systems can be provided to users who are unable to use the controllers, or who are unable to navigate VR by moving their body.


Simulation sickness

It wouldn't be a good user experience guide for VR without talking about simulator sickness. This topic has already received more attention than any other topic we're discussing here.

The single best thing you can do to avoid nausea is to maintain the ideal framerate that has been suggested by HMD hardware companies like HTC and Oculus: 90fps in each eye. Beyond that, it depends on the design of your user experience.
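A cheap way to keep an eye on that budget during testing is to log how often frames run long. A minimal sketch:

```python
# A small sketch of framerate monitoring against the 90 fps target:
# each frame must be rendered within ~11.1 ms. Tracking how often the
# frame time blows the budget is a cheap early warning of nausea risk.

TARGET_FPS = 90
FRAME_BUDGET = 1.0 / TARGET_FPS  # about 0.0111 s per frame

def dropped_frame_ratio(frame_times):
    """Fraction of frames (in seconds) that exceeded the 90 fps budget."""
    if not frame_times:
        return 0.0
    return sum(t > FRAME_BUDGET for t in frame_times) / len(frame_times)
```

Watching this ratio climb during a particular scene is a strong hint about where to spend your optimization effort.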

The most straightforward guide on designing to prevent nausea is UploadVR's article presenting five ways to reduce motion sickness. There are also other solutions that people have tried, like giving the player a virtual nose or using audio design to alleviate symptoms.

There isn't, and probably never will be, a one-size-fits-all answer that prevents nausea entirely for every user in every experience. Each VR project will have its own design challenges when it comes to VR sickness, depending on your method of locomotion, variable framerates, and so on.

A minority of people seem to be completely immune to VR-induced nausea, and can comfortably go through experiences in virtual reality that would make other users instantly sick. Testing early and often on a variety of users is the best way to tell if your user experience is turning people's stomachs.

Room-scale and beyond

If you're working on a VR experience that provides motion tracking, you will want to consider the space people have available at home or in their office, as well as what movements are possible with the hardware you're making your VR experiences for.

Here is an example of a room-scale VR setup from Stress Level Zero's studio, with the boundaries outlined in 3D space around the VR user:

A visualization of room-scale playspace using the HTC Vive

How you design within the limits of users' available space is up to each project. In the picture above, a desk sits inside the active play space; the HTC Vive's chaperone system handles warning the user about obstacles like this.

But what can we do if the virtual space exceeds the physical space available to move around in?


Teleporting is a solution that many have implemented, and seems to work best when it’s integrated into the environment. Show users where they can teleport to, or give them something that lets them teleport whenever they want to.

Here's an example of teleporting mid-action from Bullet Train by Epic Games:

Teleporting as a seamless part of gameplay in Bullet Train by Epic Games

And another example of teleporting from Budget Cuts by Neat Corp.:

Teleporting to maneuver in Budget Cuts

Players who are allowed to teleport quickly over and over again can make themselves sick, so be careful when relying on this kind of locomotion design.
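One simple guardrail is a cooldown between teleports. This is an illustrative sketch; the cooldown length is an assumption to tune with real users:

```python
# A sketch of rate-limiting teleports: a short cooldown stops users
# from chaining rapid jumps that can make them sick. The default
# cooldown value is an assumption, not a recommendation.

class TeleportCooldown:
    def __init__(self, cooldown=0.5):
        self.cooldown = cooldown       # minimum seconds between teleports
        self.last_teleport = None

    def try_teleport(self, now):
        """Returns True if a teleport is allowed at time `now` (seconds)."""
        if self.last_teleport is None or now - self.last_teleport >= self.cooldown:
            self.last_teleport = now
            return True
        return False
```

Whether a cooldown feels protective or frustrating depends heavily on the game, which is exactly why it belongs in usability testing.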

There is also a teleportation system in development called Blink VR that is worth taking a look at, as well as a variety of other approaches that may be more or less successful depending on how they are implemented.

Moving the player camera

If your experience design requires the player camera to move independently of the player's head, try using linear movement with instantaneous acceleration (no ease-in or ease-out). You can read a more in-depth exploration of why and how to design using linear movement from the developers of a VR game called Dead Secret. Here is an example of linear movement in action from an upcoming VR game in the Attack on Titan series.
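The difference from typical camera tweening is just the absence of easing: constant speed, abrupt start and stop. A minimal sketch:

```python
# A minimal sketch of linear movement with instantaneous acceleration:
# the camera advances toward the target at constant speed with no
# ease-in or ease-out, starting and stopping abruptly.

def step_camera(pos, target, speed, dt):
    """Advance `pos` toward `target` at constant `speed`; no easing."""
    dx = [t - p for p, t in zip(pos, target)]
    dist = sum(d * d for d in dx) ** 0.5
    if dist <= speed * dt:
        return tuple(target)          # arrive exactly, stop instantly
    scale = speed * dt / dist
    return tuple(p + d * scale for p, d in zip(pos, dx))
```

Note there is deliberately no smoothing curve here; the constant-velocity motion is the point, since acceleration is what tends to trigger discomfort.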

Beware, even this approach might make sensitive users nauseous. Be sure to test linear movement often and with a wide variety of users.
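To make the "no easing" idea concrete, here is a language-agnostic sketch (in Python rather than engine code; the function name is mine) of one frame of linear movement: constant velocity toward the target, with instantaneous acceleration and an instantaneous stop.

```python
def step_linear(position, target, speed, dt):
    """One frame of linear camera movement: constant velocity, no
    ease-in or ease-out. Positions are (x, y, z) tuples; speed is in
    units per second. Illustrative sketch, not code from Dead Secret."""
    delta = tuple(t - p for p, t in zip(position, target))
    dist = sum(d * d for d in delta) ** 0.5
    step = speed * dt
    if dist <= step:
        return target               # arrive and stop instantly
    scale = step / dist
    return tuple(p + d * scale for p, d in zip(position, delta))
```

Called once per frame, this produces a constant-speed glide with no acceleration curve at either end, which is the property the Dead Secret developers recommend.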

Full-screen camera motion

A VR experience where users are piloting a ship or plane that moves through space with them inside can also spell nausea. Give the user a near-field frame of reference, like a cockpit or dashboard interior, so their vestibular and proprioceptive systems don't go crazy with all the contradictory information they're getting visually.

Here is a hands-on example of what near-field framing looks like in Elite: Dangerous, and another example using near-field objects and structures from Hover Junkers for the HTC Vive.

Atmosphere and emotion

Because VR transports users so well into their new surroundings, the atmosphere and emotional impact of the virtual world will color the user experience heavily. Is the mood ominous or playful? Is the world close-quarters, or impossibly huge? High up in the air or underwater?

Mood created with lighting, music, and visual style will influence feelings like how trustworthy the space feels, or whether the user is calm or anxious. Take every effort you can to harmonize the VR environment with the emotions you want your users to have.

Object scale

You can also play with environment / prop scale to create a specific feeling. Small objects will feel cute and toy-like, easy to pick up with your hands. Bigger objects placed nearby will make users feel fenced-in, as if they have to lean or walk around them.

Prop furniture can take advantage of this if it's life-size: objects with hard surfaces might come across as so realistic that some users forget they're not real and try to place a hand or their controller on a nearby table!

Environment & world setting

Transporting users to places they've never been also means being able to take them to beautiful locations. Think outside of the box when it comes to the environment your users are in and how it will influence their emotional state. If anything, VR is a chance to get creative and artistic with your environment.

User interfaces

In real life, a lot of the objects we use are part of the user interface of our environment, e.g. light switches. One of the best parts of VR is the enjoyment and immersion that comes from those same physical interactions we get in real life with objects that look and feel like the real thing.

The freedom to interact with virtual UI the same way we interact with objects in reality will help increase immersion and can bring a sense of delight when users decide to engage with the interface. Fantastic Contraption is a great example of making a user interface fun to interact with.


Yes, the cat is part of the UI! (Its name is Neko, of course.)

Here's another example of a menu that's been hidden inside a briefcase as physical objects from another VR game coming out soon:

Menu choices as physical objects in I Expect You to Die

If your UI can't actually be part of the environment (or a cat), allow the player to call it up and move it out of the way whenever they want to. Tilt Brush does a great job of this by mapping the majority of its UI menu to a very specific action: pointing the right controller at the left controller.

2D menu drawn in 3D and attached to the controller in Tilt Brush

As soon as you access the menu, you can quickly make a selection from it. When you move your hand away to use the tool you've selected, the menu hides out of the way.

Bringing 2D into 3D

What worked for UI on flat screens and mobile devices might not translate well to virtual reality. 2D user interfaces commonly use abstractions of real objects, like buttons and switches, to represent actions you can perform. Since VR puts us inside a 3-dimensional virtual space, being abstract in the way we represent objects isn't really necessary anymore.

If we don't need to be abstract, there's no reason to. Instead of giving your users a laser pointer and having them select the "turn on" button from a flat 2D panel floating in mid-air, try offering them a physical switch panel that clicks into place and turns the lights on when they flip it.

Various physical object interactions using a physics-based interaction system

Make your interactions tactile wherever possible and try a physical object approach to your user interface. Only bring in 2D screens when your UI absolutely needs them, e.g. when displaying large or complex sets of data or options. Take care to consider where and how the UI itself integrates into your virtual environment. Space Pirate Trainer uses 2D menus projected in space and lets the user shoot menu options with their laser guns to select them.

Below is an example from The Gallery of a 2D user interface integrated into a 3D tablet that the player takes out to access menu options:

A physical tablet menu in The Gallery by Cloudhead Games

Interaction triggers and feedback

The design of our interactable components is important: it's one of the most direct ways we can let users know that their actions have had an impact on the environment.

Make triggers obvious by providing sound effects, visual effects, and animations as feedback whenever you can, even to the point of over-exaggeration. Mechanical components and devices are fun for users to interact with, encouraging a feeling of immersion. Look to physical buttons, switches, levers and dials that move up and down, light up, change colors, etc.
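One way to keep that feedback consistent is to route every interaction trigger through a single dispatcher that fires all registered channels at once. Here's a hypothetical sketch in Python (the class, channel names, and callback style are mine, not tied to any engine API):

```python
class InteractionFeedback:
    """Routes one interaction trigger to several redundant feedback
    channels (audio, visual, haptic). Illustrative sketch only."""

    def __init__(self):
        self._channels = {}

    def register(self, channel, callback):
        # e.g. register("audio", play_click) or register("haptic", pulse)
        self._channels.setdefault(channel, []).append(callback)

    def trigger(self, event):
        fired = []
        for channel, callbacks in self._channels.items():
            for callback in callbacks:
                callback(event)     # fire every cue for this trigger
                fired.append(channel)
        return fired
```

The design point: because every trigger goes through one place, a button press can't ship with a sound but no flash, or a flash but no haptic pulse, unless you chose that deliberately.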

Making virtual objects feel real

We've already gone over several different ways to help support the feeling of immersion, but I wanted to go over a couple of more specific design applications.

Can I interact with that?

Users view the virtual world the same way they view the physical world. If an object looks like it can be picked up, knocked over, or pushed, users will try to do so. Every effort should be made to allow for those interactions. Users being able to modify the environment by physically interacting with it helps create a sense of immersion.

The more objects you put in the environment that can't be interacted with, the less the user will feel like they physically exist in the space, and may begin to think their actions are futile or will have no impact.

To physics or not to physics?

Using physics as a foundation for your interaction design can provide realistic physical qualities to virtual objects. For example, we can use mass to make sure that heavier objects won't budge when lighter objects are used to try to knock them over. NewtonVR is a free physics-driven interaction system for VR that uses Unity, a popular software and game development engine.

Showing how a difference in mass affects object interactions when using NewtonVR, a physics-based interaction system

Physics might not solve every problem unique to your design. There are times in certain VR experiences where you will want to let the user defy physics (using absolute position) in order to improve the feel of the interactions themselves. Getting in the HMD yourself and testing out various approaches using physics or absolute position is key to finding the right balance.
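The mass-based intuition above ("a balloon can't push a box") is just conservation of momentum. As a refresher, here's a 1D elastic collision sketched in Python; this is textbook physics for illustration, not NewtonVR source code (in Unity, PhysX resolves these collisions for you):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1D elastic collision.
    Shows why a light object barely budges a heavy one."""
    total = m1 + m2
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / total
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / total
    return v1_after, v2_after

# A 0.01 kg balloon at 1 m/s hits a 10 kg box at rest:
# the box ends up moving at roughly 0.002 m/s, and the
# balloon bounces back with nearly all of its speed.
balloon_v, box_v = elastic_collision_1d(0.01, 1.0, 10.0, 0.0)
```

Giving objects masses that match their appearance is what makes these outcomes feel right to users.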

Haptic feedback

If you're designing an experience that uses controllers, you have the ability to make the controllers vibrate at strategic moments to provide haptic feedback.

Carefully consider where and when it would make sense to use vibrations or patterns of vibration to tell users something about the world around them, or the way they're interacting with it. In Valve's Longbow demo, if you draw the string of the bow back in order to fire an arrow (depicted in the video below), the controller vibrates in the hand that's drawing the bowstring back, which lends a bit of realism to the interaction.
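As a sketch of that kind of mapping, vibration amplitude can scale with how far the string is drawn. The function name, the clamping behavior, and the numbers here are hypothetical, in the spirit of the Longbow demo rather than Valve's actual implementation:

```python
def bowstring_haptics(draw_distance, max_draw, max_amplitude=1.0):
    """Scale controller vibration with bowstring draw, clamped to
    [0, max_amplitude]. Hypothetical names and values."""
    if max_draw <= 0:
        raise ValueError("max_draw must be positive")
    t = min(max(draw_distance / max_draw, 0.0), 1.0)
    return t * max_amplitude
```

On the Vive, the returned amplitude would be converted into repeated short haptic pulses each frame while the string is held; the key idea is that the feedback intensity tracks a physical quantity the user controls.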

There are a lot of people currently exploring more sophisticated methods of haptic feedback: gloves, running platforms, chairs, steering wheels, etc. Haptic feedback options will continue to grow in the near future. The solutions that become widely adopted will give designers another vector for providing real-time feedback to users about their interactions and the world around them.

Experiment, get messy, make mistakes

There’s plenty still to learn about what works well in VR and under what circumstances. Test your designs out with real people as often as you can. Get users who have little or no experience with VR; they will be able to offer a perspective that you might not otherwise get to hear. People who haven’t seen what you’re working on will also provide good feedback on what's working well versus what needs more work.

Every VR experience is different and unique, which means lots of trial and error. What works for someone else’s experience might not work for yours, not just because of the emotional impact VR can have, but also because of the design choices you make as you create new interactions, environments, and UI.

I hope this intro will help you create amazing user experiences in VR. Drop me a line on Twitter or in the comments below if you have any questions or need clarification on anything.


Can game music and sound combat VR sickness?

http://www.vrinflux.com/can-game-music-and-sound-combat-vr-sickness/
Tue, 08 Mar 2016 19:08:00 GMT

Virtual Reality Sickness: the nightmare of VR developers everywhere. We all know the symptoms. Nausea. Headache. Sweating. Pallor. Disorientation. All together, these symptoms are a perfect recipe for disaster. No one wants their game to make players feel like they've been spinning on a demon-possessed merry-go-round. So, how do we keep this affliction from destroying the brand new, awesome VR industry before it even gets a chance to get off the ground?

In response to this possible VR apocalypse, the top manufacturers have taken big steps to improve their popular devices. Oculus improved the display on its famous Rift device, Valve introduced a motion-tracking system that helps us orient ourselves and not get nauseous when wearing the Vive, and PlayStation VR incorporated a wider field of view designed to make players feel more comfortable. Even with these efforts, players are still reporting motion sickness symptoms, and the creators of the VR systems have responded by pointing the finger of blame at game developers. So, if the developers of VR games have to solve the problem, then how can the music and sound folks help? Can game music and sound combat VR sickness?

Let's start with some good news

First, before we explore the potential of music and sound to address the symptoms of VR sickness, let's celebrate one morsel of good news for the game audio community: VR sickness is not our fault (generally speaking)! According to research, the presence or absence of sound does not make the nausea more likely to occur, nor does the presence or absence of sound worsen the nausea once it has started.

In a 2012 study conducted at Johannes Gutenberg University in Germany, 69 study subjects were shown a presentation specifically designed to turn stomachs. It turned out that the biggest culprit for motion sickness was the immersive 3D display - when shown in 2D, the video didn't bother people at all, and sound didn't make any difference, whether it was turned on or off.

While there is one example of a very specific sonic element exacerbating VR sickness (more on this subject later), the general presence of sound in a VR experience isn't an issue when it comes to the comfort and happiness of players. So, if (generally speaking) sound isn't a part of the problem, how can it be a part of the solution?

Whistle a happy tune...


At the Toronto Rehabilitation Institute, researchers wondered if music might reduce the symptoms of what they called Visually Induced Motion Sickness (VIMS) commonly experienced in VR environments. They recruited 93 brave souls to endure a 14 minute presentation of a bicycle ride from hell. These woozy folks were split into experimental groups, with some groups hearing music during the presentation, while other groups did not. The findings showed that the VIMS was significantly reduced by music, but only if the study subjects found the music to be pleasant. This description was key. The music had to be pleasant for the motion sickness to be best reduced.

The idea of using music to combat nausea isn't particularly new. The Centers for Disease Control and Prevention recommend listening to music as a way to prevent or treat motion sickness without using medications. Going a step further, a study conducted by Ohio State University in 1998 showed that playing music for patients during high-dose protracted chemotherapy administration resulted in a significant reduction in nausea. Music is thought to provide a compelling distraction from those queasy physical sensations that might otherwise be felt in full force.


However, the Toronto study is especially interesting because it specifically targeted visually induced motion sickness, as it is experienced in virtual environments and simulators. The effectiveness of pleasant music as a remedy for this specific type of nausea is significant for us as game composers, because it suggests a possible course of action. Whether we actually follow this course of action is a more complicated issue.

Should we be composing pleasant music for VR? That's a tough question to answer. We know that pleasant music can make players feel less queasy. But pleasant music isn't going to always meld well with a game's environment and activities.


For instance, we wouldn't expect pleasant, happy music during a first-person shooter. On the other hand, first-person shooters are among the biggest culprits when it comes to VR sickness. Owen O'Brien, executive producer of the EVE: Valkyrie VR game from CCP Games, does a good job of explaining the problem with first-person shooters in VR. "The problem with first-person shooters is that you're running or crouching or jumping in the game but not in the real world, and because it's so realistic it can make some people (not everybody) feel nauseated if they start doing it for extended periods of time."

So, if we can't compose the music to be outright pleasant, perhaps we should think about lightening it up a bit? At the very least, we can try to adjust our grim, dirge-like musical atmospheres to feel a bit more neutral. And when we have the choice between an agreeable musical score and a dour one, we might want to let our music occupy the sunnier side of the street.

Now on to the bad news

We previously looked at a research study showing that sound, in a general sense, does not contribute to visually induced motion sickness. However, there is one very specific aspect of the aural environment that can exacerbate VR sickness: low frequency sound.

As it relates to noise pollution and unhealthy environments, people have been aware of the low frequency sound problem for a long time. In 1973, the first global seminar was held in Paris to discuss infrasound (sound at frequencies below the threshold of human hearing). The first international conference on low frequency noise took place in 1980. In 1982, a paper published in the Journal of Low Frequency Noise and Vibration detailed the problem in depth. The paper described the ordeals endured by clerical workers whose offices were unfortunately in close proximity to a testing site for aircraft engines. The low-pitched humming soon caused fatigue, headaches, nausea, disorientation... all the classic symptoms of motion sickness. These symptoms also showed up in lots of other situations, including:

  • Communities situated near power stations with gas turbines that emitted low vibrations,
  • Occupants in buildings with large ventilation fans that produced a low hum,
  • Factory workers in close proximity to the deep throb of reciprocating air compressors, and
  • Homeowners plagued with low frequency vibrations from their hot-water heaters.


All these miserable people shared the common experience of woozy nausea induced by their aural environment. This effect, however, was brought about by a specific cause: an ever-present, consistent humming or throbbing in the low frequency range. In the aforementioned paper from the Journal of Low Frequency Noise and Vibration, the offending frequency range was defined as any frequencies lower than 261 Hz.

So, do we completely avoid any low frequency sound below this threshold? Well, that would certainly make for a depressingly treble-dominant mix. Sound effects would have no whoomph, music would have no bass, and most gamers and game audio pros would be very sad. However, when composing music or designing sound for VR, we can try to avoid consistent, ever-present, throbbing sounds in this frequency range. For instance, in a VR game set inside a space station, perhaps it might not be absolutely essential to hear the low thrum of the engines and ventilation systems all the time. Likewise, a musical score for a VR game may want to avoid using a consistently low and sustained bass note over an extended period.
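To sanity-check where score material sits relative to that threshold, here's a small sketch (the function names are mine) converting equal-temperament MIDI note numbers to frequency and flagging fundamentals below 261 Hz. Note it only considers a note's fundamental; the paper's warning is about sustained, throbbing energy in that range, not any brief low note:

```python
def midi_to_hz(note):
    """Equal-temperament pitch of a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12)

def in_risky_low_range(note, threshold_hz=261.0):
    """True if the note's fundamental falls below the ~261 Hz
    threshold discussed above. Illustrative helper, names are mine."""
    return midi_to_hz(note) < threshold_hz
```

For example, A2 (MIDI 45, 110 Hz) lands well inside the risky range, while middle C (MIDI 60, about 261.6 Hz) sits just above the cited threshold.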

The paper from the Journal of Low Frequency Noise and Vibration also offers an alternative solution: masking the low noise with a simultaneous high one. "It is not unreasonable to expect that a higher frequency noise of even moderate level may effectively mask either low frequency or infrasonic noise," the paper suggests. "Practical experience shows that this is indeed so." So, when it is possible to finely control the mix and ensure that high frequencies are consistently masking lower ones, then we can include those low sounds without worrying about their queasy side effects.

Some interesting theories to consider

We've taken a look at some well-documented effects that music and sound can have on players, and how those relate to the issue of VR sickness. Now, just for fun, let's take a look at a few miracle devices that were designed to be motion sickness cures:

The VR Sickness Device

In 1999, inventor Bruce Kania filed for a patent described as an "Apparatus and method for relieving motion sickness." The device was described in the patent as an invention "to relieve motion sickness which may occur during video games including virtual reality games." In theory, the device would have included a sensor that could detect motion and then deliver "sensory signals" to the user that corresponded with that movement. Theoretically these signals could have served to resolve the conflict between visually perceived movement and the corresponding motion detected physically by the user, thus preventing motion sickness. If this patent had been realized as an actual device, then our imaginations can conjure up some sort of built-in component in all VR headsets, delivering these sensory signals and solving the VR sickness problem once and for all.


How does this relate to game audio? Because some of the sensory signals would have been aural. The patent describes these signals as "white noise signals, pink noise signals, brown noise signals, and audio tone signals." Would these audio signals have interfered with the sounds of a game, or could they have been subliminally buried inside a mix? Would they even have worked? Who knows. The device was never actually produced. It's an intriguing idea, though. This brings us to another technology that promises to cure motion sickness ... and this one is actually available to the public:

You're never sick when you use Nevasic


In a 2003 study conducted by several psychology professors from the University of Westminster and the Imperial College School of Medicine in London, 24 volunteers were placed on a rotating turntable and spun around like tops. The control group were left to twirl around and turn green with no additional help. The rest of the subjects were split into two groups, and these suffering souls were given one of two remedies. Half were told to focus on their breathing. The other half listened to a special music recording. According to the research paper that was published in the Journal of Travel Medicine, "subjects were played a commercially available music audiotape (Travelwell) of “specific frequencies and rhythms blended with orchestral music,” which are claimed by the manufacturers to reduce motion sickness."

The study found that the Travelwell music helped the musically-accompanied spinners to not feel motion sick for a longer period of time than those who had been concentrating on their breathing, or who had just been doing nothing while helplessly whirling around. So, is this Travelwell music the answer? And what exactly is it doing that's alleviating the queasiness?

Well, we can hear it for ourselves, because now the Travelwell music is available as an app called Nevasic. The makers of Nevasic proclaim on their website, "We have demonstrated that issuing the specifically identified and constructed tones, frequencies and pulses in this programme to the ear in a direct mode we disrupt the normal signal chain at the Vestibular level and therefore affect the chain. By affecting the chain in this manner we have also demonstrated the ability to stop and prevent emesis or sickness."

This solution, as described on the web site, seems to have elements in common with Bruce Kania's proposed device for relieving motion sickness. Both approaches refer to audio signals that affect the inner ear and alleviate the conflict between actual and perceived motion. So what are the audio signals that Nevasic uses? We don't know. No information about this is offered on the company web site or within the application itself.


In the interest of science, I purchased the Nevasic app and listened to the 27 minutes and 10 seconds of audio content that is designed to relieve and prevent motion sickness. Here's a brief description of what I heard:

The recording begins with a series of bell-like tones. Then waves crash against a shoreline. After a while, a walking synthetic bass takes over, accompanied by some synth chords, high-pitched mallet accents and a snare/kick drum combo. This goes on for some time, and then the ocean waves return. The waves recede in favor of another synth-driven musical groove with a bell-like melody. By the time the Nevasic program reaches 12 and a half minutes, the music has introduced some kind of synthetic drum sound in a high pitch with a randomized rhythm that is alternately hard panned to stereo right or stereo left. This continues for quite a while. Finally, near the end of the program, the rhythmic elements disappear and we again hear the bell-like warbling tones from the beginning of the recording, which fade to silence once we reach the full 27 minutes and 10 seconds of the Nevasic audio program.

From my listening experience, I couldn't identify the "specifically identified and constructed tones, frequencies and pulses" that were being used, though it seemed clear that pink noise was being delivered via the ocean waves (and perhaps the snare drums as well). The hard panning and stereo effects seemed deliberate, and I imagine that the randomized quality of these sonic events was designed according to a plan... but I couldn't discern it, and Nevasic isn't offering any information through their site. If the Nevasic folks have the answer to the motion sickness problem, they aren't sharing their secrets. That's a shame, because if the solution to VR sickness is a special cocktail of audio signals, then we game audio pros would definitely want to know about it.


In this article I've tried to gather together some useful thoughts about the relationship between VR sickness and game music/sound. Some of the concepts in this article are practical and can be applied to our projects, while others are strictly theoretical. However, all of these ideas outline the significance of audio in attaining an honest-to-goodness VR sickness solution. Game audio has a role to play, and I hope we'll continue experimenting and thinking about how game audio can improve the experience of our VR gamers, and keep their stomachs feeling happy and strong. Thanks for reading! If you've heard any ideas that relate to this blog, please feel free to share them below in the comments!

About the author


Winifred Phillips is an award-winning video game music composer. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER'S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. Follow her on Twitter @winphillips.

NewtonVR: Physics-based interaction on the Vive (Part 2 + Github)

http://www.vrinflux.com/newtonvr-a-free-physics-based-interaction-system-for-unity-and-the-htc-vive/
Thu, 07 Jan 2016 17:00:46 GMT

Tomorrow Today Labs is working on an unannounced VR game for the HTC Vive in Unity and we've spent a lot of design and development time trying to find a method of interacting with objects that feels good to us. Using a mouse to move a box on a screen is a pretty straightforward process: you've only got two axes of input to worry about. But we're in VR now, with all three positional axes plus rotation. This requires a new approach to object interaction.

There are so many amazing experiences yet to be created for VR and we'd like to help accelerate their development by releasing our interaction system for other developers to use - free of charge (MIT license).

Newton VR

Our system allows players to pick up, drop, throw, and use held objects. Items never change their parent during this process and are never set to kinematic. This means that items won't pass through other items (rigidbodies), or the environment (non-rigidbodies). Held items interact with other rigidbodies naturally - taking mass into account.

For example, if you have two boxes of the same mass they can push each other equally, but a balloon, with considerably less mass, can't push a box. For more information on this style of mass based interaction see this post by Nick Abel.


By default, items are configured to be picked up at any point. But if you have something you want to hold at a certain point, items can be set up to rotate and position themselves to match a predefined orientation. This lets you pick up a box from any corner as well as pick up a gun and have it orient to the grip. Again, we let physics do the heavy lifting here, so items don't clip through walls or the ground while reorienting.


We've created a few physical UI elements to help with basic configuration and menu type scenarios. We also give you the option to dynamically let the controllers turn into physical objects on a button press. This lets you interact with the world as if your controllers were physical objects, which means that in this mode they are no longer a one-to-one representation of your real-world controllers. I know this may sound sketchy, but in practice it's awesome.


Grip buttons

A hotly debated issue is whether or not to use the grip buttons to pick things up. We feel like the benefit gained by using the grip buttons outweighs the trouble users can have with them. One of the benefits of releasing the code with this system is that if you disagree you're welcome to change the mappings. But, if you use the system with the defaults, then pressing grip button(s) will let you pick something up and releasing it will drop (or throw) the item.

Using the grip buttons to hold an item frees up other buttons on the Vive controller for items that are designed to be used while held (for example holding a gun and then pressing the trigger button to fire). If your controller is not hovering over an interactable object, and you hit the grip button, your controller becomes a physical object that you can use to interact with the world. This mode can be used to press buttons on a control panel or push objects out of your way.


Clone or download our repo here: https://github.com/TomorrowTodayLabs/NewtonVR/

We've included SteamVR so the project compiles, and we will try to keep the version updated. The meat of the project is in the NewtonVR folder. I recommend you clone the repo locally and create a symbolic link to your project so you can get updates and merge changes cleanly.

You can get the desktop GitHub client here: http://desktop.github.com. On Windows, open a command line as administrator and use the following command to create a link:

mklink /D c:\git\MyProject\Assets\NewtonVR c:\git\NewtonVR\Assets\NewtonVR

The first parameter is the location where you want to put NewtonVR and the second parameter is the location of your local NewtonVR repo. This is not required, just recommended.

After you've got the project you can check out our example scene in NewtonVR/Example/NVRExampleScene. We've got everything scaled up by a factor of 10 because PhysX seems to work more reliably with larger colliders. The scene includes one of each of everything:


There are some stacked boxes with NVRInteractableItem components on them. There's a tiny box on top that you can use to try to push over the stack of boxes to see the mass-based system in action. In the drawer there's a gun that has a configured NVRInteractableItem.InteractionPoint set to the handle. When you pick it up, the system tries to rotate and position the gun in your hand and keep it at that orientation.


The door is an example of an object with a hinge that has a static position but that you want to rotate by dragging a specific point. We've got the interaction script on just the door knobs, and NVRInteractableRotator.Rigidbody is set to the door's rigidbody. You could also just stick the actual script on the whole door if that makes more sense for your application. To get the currently selected angle (from a zeroed rotation) you can access NVRInteractableRotator.CurrentAngle.


There's a letter selection spinner that inherits from NVRInteractableRotator. You can grab and spin it to select a letter. This isn't necessarily the best text input method for VR, but it is a fun one. You can get the currently selected letter by calling NVRLetterSpinner.GetLetter().


There's a slider example that lerps the color of a sphere between black and yellow. To get the slider's value, check NVRSlider.CurrentValue. To set up this slider outside of the example, set the NVRSlider.StartPoint transform to the slider's starting location and NVRSlider.EndPoint to its ending location. Like a lot of these UI elements, it has a Configurable Joint attached to handle the limits and lock position/rotation.
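Reading the slider each frame to drive the sphere's color might look something like this (a sketch assuming the NVRSlider API described above; the component and field names are illustrative):

```csharp
using UnityEngine;

// Sketch: lerp a sphere's color from black to yellow based on the slider's value.
public class SliderColorExample : MonoBehaviour
{
    public NVRSlider Slider;        // assign in the inspector
    public Renderer SphereRenderer; // the sphere to tint

    private void Update()
    {
        // CurrentValue runs from 0 (at StartPoint) to 1 (at EndPoint)
        SphereRenderer.material.color = Color.Lerp(Color.black, Color.yellow, Slider.CurrentValue);
    }
}
```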


The interactable item class can also be used to create dial or knob type elements. There's an example of this that reports the current angle of the knob. You can get the current rotation by simply reading the local Euler angles: NVRInteractableItem.transform.localEulerAngles.y.


To interact with a button you can either enable NVRPlayer.PhysicalHands and then press the grip buttons to turn your controllers physical, or put pressure on it with another object. The button in the example scene has a script on it called NVRExampleSpawner, which spawns a cube when the button registers as pressed. Button presses are based on NVRButton.DistanceToEngage. If you move a button far enough from its initial location, NVRButton.ButtonDown will trigger for a single frame. NVRButton.ButtonIsPushed will be true for as long as the button is down. Then, when the button moves back into its initial position, NVRButton.ButtonUp and NVRButton.ButtonWasPushed will trigger for that frame.
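Polling those button states might look like this (a sketch assuming the NVRButton API described above; the spawn logic stands in for NVRExampleSpawner):

```csharp
using UnityEngine;

// Sketch: poll NVRButton each frame and spawn a cube on the press frame.
public class ButtonPollExample : MonoBehaviour
{
    public NVRButton Button;      // assign in the inspector
    public GameObject CubePrefab;

    private void Update()
    {
        // ButtonDown is true only for the single frame the press engages
        if (Button.ButtonDown)
        {
            Instantiate(CubePrefab, transform.position + Vector3.up, Quaternion.identity);
        }

        // ButtonIsPushed stays true for as long as the button is held down
        if (Button.ButtonIsPushed)
        {
            // e.g. keep the button highlighted while pressed
        }
    }
}
```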


Like NVRButton, NVRSwitch requires either physical hands or another physical object to interact with it. The switch example in the scene controls a spotlight next to it. On Awake() it sets its rotation to match the value of NVRSwitch.CurrentState.


There's a gun in the drawer that is a nice example of how to use pickup points with NVRInteractableItem, as well as how to get input from that component. You can pick up the gun with the grip buttons and shoot with the trigger.

Basic Integration

To integrate NewtonVR into a project you can use our included player prefab in NewtonVR\NVRCameraRig. This is a copy of the SteamVR camera rig prefab with the NewtonVR scripts added. Specifically, there's an NVRPlayer component on the root, an NVRHead component on the head, and NVRHand components on both hands. Alternatively, you can just add those components to your player. Take note, though: if you're not using the standard controllers in your project, the physical hand option will not work correctly.

When you've got an item you'd like to pick up, simply drop an NVRInteractableItem component on it. You'll need to give it a Rigidbody (and ideally set the mass) if you haven't already. If the item has a specific point that you'd like to pick it up at, create a new GameObject, parent it to your item, and position it at the location and rotation where you'd like the controller to be. Then set NVRInteractableItem.InteractionPoint to that new GameObject.
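Those setup steps could be scripted roughly like this (a sketch assuming the NVRInteractableItem API described above; the handle offset and whether InteractionPoint takes a Transform are assumptions):

```csharp
using UnityEngine;

// Sketch: make an object grabbable with an optional pickup point, at runtime.
public static class InteractableSetupExample
{
    public static void MakeGrabbable(GameObject item)
    {
        // An interactable needs a Rigidbody, ideally with a sensible mass
        Rigidbody body = item.GetComponent<Rigidbody>();
        if (body == null)
        {
            body = item.AddComponent<Rigidbody>();
        }
        body.mass = 1f;

        NVRInteractableItem interactable = item.AddComponent<NVRInteractableItem>();

        // Optional: a child transform marking where the controller should attach
        GameObject grip = new GameObject("InteractionPoint");
        grip.transform.SetParent(item.transform, false);
        grip.transform.localPosition = new Vector3(0f, -0.1f, 0f); // hypothetical handle offset
        interactable.InteractionPoint = grip.transform;
    }
}
```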


We hope that this system can help you make an awesome VR experience! If you do, be sure to let us know. Anybody is free to use it for basically any purpose: game jams, commercial games, educational apps, etc. Check the license for more info.

We are actively using this system in our game and plan to update it as development continues. If you have questions or comments you can contact us by leaving a message below, on Twitter at @TTLabsVR, or by creating issues on GitHub.

Development: Keith Bradner, Nick Abel
UX: Adrienne Hunter

<![CDATA[NewtonVR: Physics-based interaction on the Vive (Part 1)]]>http://www.vrinflux.com/newton-vr-physics-based-interaction-on-the-vive/0e6b713c-2bca-4ff6-a13c-7048457fde2dMon, 28 Dec 2015 21:39:35 GMT

Hey everyone, my name’s Nick, and I’m a Virtual Reality Developer at Tomorrow Today Labs. We’re currently working on an unannounced title using the Unity3D game engine. Like most everyone working in Virtual Reality, I'm constantly running into unique and new challenges and experimenting to find the best solutions. When I think it's a really interesting or useful experiment, I'll be sharing it here with the community.

We’ll be following up this post in just a few days: the Tomorrow Today Labs interaction system is going up on GitHub later this week, free to use under a Creative Commons license as a token of our appreciation to the community.

** Quick edit: Part 2 is located here with a GitHub release and brief readme.


We recently decided to rebuild our interaction system, which acts as the glue connecting player input from the Vive controllers to objects in game. For an interaction system, our initial methods felt very… un-interactive. Objects under interaction were free from physics. External forces were meaningless. Position was set absolutely. We wanted players to feel like they were using their hands to pick up objects, and instead it felt like players were replacing their hands with objects. It became obvious that if we wanted to create a really dynamic VR experience, physics based interaction was the way to go.

Under the old interaction system, when a player tried to pick up or interact with an object with the Vive controller, that object would be parented to the controller and made kinematic. Through parenting, the position and rotation of the object were matched to the position and rotation of the player’s controller. When a player dropped an object, it would be re-parented to its original parent. This led to a number of problems.

Old System - Position and Rotation Set Via Parenting

Since picked up objects were kinematic, forces from gravity, collisions, and hinges no longer had any effect. The held object could still apply forces to other rigidbodies, but if it hit a wall or any object without a non-kinematic rigidbody, the held object would clip right through. And if you dropped the interactable object while it was clipping through a wall, it would end up stuck there. The world was suddenly non-Newtonian. Equal and opposite reactions were not observed. The mass and velocity of an object under interaction meant nothing.

Until we have perfect, constant force feedback (hello holodeck), convincing the player that objects in game have actual mass and presence will be challenging. The easy way to deal with the technical limitations is to say: “Ok player, we can’t prevent you from breaking our game by walking through walls or putting items in walls, but we can pretend you didn't do it”. However, I believe a game world where physics are always present and always working is much more compelling, even critical, in VR.

Creating a functional (and believable) virtual reality is about consistency. We want the player to put on the headset and immediately be able to unconsciously understand the laws that govern that world. Game physics don’t necessarily need to be identical to earth physics, but if we want the player to think critically about these virtual laws, to use and subvert them to solve problems, then we have to make sure we apply these laws consistently across the game.

Old System - Inconsistent World Interaction
Notice how the tennis racket is interacting with the ping pong balls. Sometimes the racket correctly knocks the balls around; other times the racket seems to pass right through the balls.


protected void OnAttach(VRHand hand)
{
    // Old system: make the held object kinematic and parent it to the controller
    this.GetComponent<Rigidbody>().isKinematic = true;
    this.transform.parent = hand.transform;
    this.transform.forward = hand.transform.forward;
    this.transform.position = hand.transform.position;
}

Wanting a physics based interaction system is all well and good, but there are some serious hurdles to consider:

1) As objects under interaction would no longer clip through colliders, we had to deal with the scenario of a virtual object being blocked while a player’s real hand continues moving. Suddenly the player’s hand is disconnected from the object. And what about external forces on the object?

  • Do we provide a visual clue to remind the player that they are still holding the object? Like a skeletal hand with the items outline, à la Surgeon Simulator?
  • Or do we immediately knock the object out of the player’s hand, like what happens in Job Simulator?

2) What if the object under interaction is attached by a hinge (like a switch, door handle, knob)? What happens if the player moves the controller in a way that would be impossible for an object to follow?

  • Either the player remains interacting with the control at a distance and we provide a visual clue.
  • Or the player loses interaction, as though the object slipped from their grasp.

Physics is Hard

Even though we wanted a physics based interaction system, we didn’t limit ourselves when experimenting. We tried both physics based and non-physics based solutions for a large variety of scenarios.

  • Unattached Interactables: Our game is full of interactable items that can be picked up and moved freely.

Our solution was to create a new position or rotation every update and use interpolation to smooth out the movement.

Position and Rotation via Interpolation

protected void UpdatePosition(VRHand hand)
{
    // Step toward the held hand's pose, then interpolate for smooth movement
    Vector3 toHandPos = Vector3.MoveTowards(this.transform.position, HeldHand.transform.position, 1);
    Quaternion toHandRot = Quaternion.RotateTowards(this.transform.rotation, HeldHand.transform.rotation, 1);
    RigidBody.MovePosition(Vector3.Lerp(RigidBody.transform.position, toHandPos, Time.deltaTime * AttachedPositionMagic));
    RigidBody.MoveRotation(Quaternion.Slerp(RigidBody.transform.rotation, toHandRot, Time.deltaTime * AttachedRotationMagic));
}

  • Attached Interactables: Some interactables were attached in place using a hinge (doors, hatches, windows, etc).

Since these objects follow a single path, if you change the position programmatically, it might cause the object to be moved to an invalid position. On the flip-side, if you know the path, magnitude, and starting point, it’s relatively easy to figure out the end point. By measuring the relative position of the hand to the starting point, we can figure out what percentage of the total path the object has traveled, and the interpolation does the rest of the work.

Fixed Object Interaction via Interpolation

float currentAngle = Vector3.Angle(startDir, lockedHandDir);
float pull = currentAngle / DegreesOfFreedom; // fraction of the total path traveled
sphericalMapping.value = pull;
if (repositionGameObject && inBounds)
{
    RotationPoint.rotation = Quaternion.Slerp(start.rotation, end.rotation, pull);
}

In the animation above, when the player started interacting with the hatch, it was turned into a kinematic object, and the hatch position was set through the code segment. Upon ending interaction, the hatch once again became non-kinematic and physics took back over. That way, the player could slam a hatch closed, or let go and watch physics drag it down.

There is a caveat: We haven’t used configurable joints yet, or joints whose rotation is not limited to a single axis, and I’m not sure how, or if, slerp could be used for that purpose.

Use Physics

Our preferred solution was always an entirely physics based approach, in which the object under control remains non-kinematic at all times, but this brought significant challenges of its own.

First, if we are going to move the object only through physics quantities (force, velocity, acceleration), which one do we use and how do we apply it?

Unity gives the user multiple options:

1) I can change the velocity of any Rigidbody directly. Unity’s documentation does discourage this, as it bypasses acceleration and can result in unnatural looking motion, but it was important for us to try every possible solution.

2) I can also apply a force or torque to a Rigidbody with AddForce(). Unity will apply this force vector according to the chosen ForceMode:

  • Force (Mass * Acceleration)
  • Acceleration (Velocity / Time)
  • VelocityChange (Position / Time)
  • Impulse (Mass * Velocity)

Second, if we use forces, how do we go about measuring and calculating the vectors? If we can’t come up with a magnitude and direction each frame that matches the player’s intention, then the game is unplayable.

Our Solution

In the end, we came to a surprising solution. Instead of applying forces, we found that directly changing the velocity of the interactable was the best way to simulate picking up an object and still allow that object to be affected by outside forces.

We originally tried to change the object position and rotation via AddForce() with ForceMode=Force. The base vector was the difference between the position of the object under interaction and the hand. That way the direction of the vector always pointed at the hand.

Position and Rotation via AddForce() w/ ForceMode.Force
(How a Jedi plays tennis)

protected void UpdateForceOnControl(VRHand hand)
{
    // The force vector points from the object toward the hand
    Vector3 toHandPos = (hand.transform.position - this.transform.position);
    AttachedRigidbody.AddForce(toHandPos * AttachedPositionMagic, ForceMode.Force);
}

The problem with this approach was that the magnitude of our vector was very small, because players only ever try to pick up objects they can already reach; the object is usually overlapping the hand. As a workaround we used a multiplier. I’m not a fan of magic numbers, but it was the fastest solution in this case.

This did not generate the desired results. Items would fly around as the position of the object converged with the hand. Gravity would pull the object down, increasing the distance difference, until the distance was great enough to slingshot the item back toward the hand.

Then we tried applying the forces with the ForceMode equal to acceleration.

protected void UpdateForceOnControl(VRHand hand)
{
    // Derive a force from the hand's estimated acceleration
    Vector3 force = hand.GetComponent<Rigidbody>().mass * this.GetComponent<VelocityEstimator>().GetAccelerationEstimate();
    AttachedRigidbody.AddForce(force * AttachedPositionMagic, ForceMode.Acceleration);
}

The problem here was that forces were only applied when the player’s controller was moved. If the player moved quickly, the resulting force would be large; if the player moved slowly, the resulting force would be small.

These results make sense, but what happens if the player is not moving at all? In that case no force is applied, so the object simply falls under the effect of gravity. And the vector was still too small; we still needed a magic number.

And just to see what the result would be, we decided to try changing the velocity directly. The vector would still be the difference between the interactable’s position and the hand’s position, but we would not be applying any forces using AddForce().

Position and Rotation Set via Rigidbody Velocity

PositionDelta = (AttachedBy.transform.position - InteractionPoint.position);
this.Rigidbody.velocity = PositionDelta * AttachedPositionMagic * Time.fixedDeltaTime;

This turned out to be a huge success. Items picked up would quickly move towards the controller, and would follow the controller with a slight delay. When the item position matched the controller position, the velocity was set to zero. There was no need to continually add forces, and then counteract those forces with additional forces (causing the item to shake).

All Unity rigidbodies have a velocity, which should usually not be modified directly, but because we needed the object under interaction to move smoothly toward the player’s controller and stop when it got there, modifying the velocity made sense. It also meant that the object could be affected by external forces, but would continue to move toward the player unless the controller moved outside of the interaction trigger faster than the velocity could be updated.

Although a direct velocity change is unrealistic because it does not take mass into account, this makes perfect sense from the player's perspective. A player never feels the weight of a virtual object, and the speed they can move an object is determined only by how fast they can swing the controller.

Quaternion Quagmire

Rotations are quaternions, and quaternion arithmetic is not handled through operators in Unity, so we cannot simply take (Hand Rotation - Object Rotation) and end up with the rotational difference. The other issue is that angular velocity is a vector quantity, not a quaternion, which means we actually have to calculate the angular velocity ourselves.

Our solution to the problem of rotation finally came from representing the quaternion rotation (from our starting orientation to our ending orientation) in an angle-axis format.

We knew that we could find the rotation quaternion between two orientation quaternions (GameObject.transform.rotation is an orientation quaternion) by multiplying the ending orientation by the inverse of the current orientation (RotationDelta = AttachedHand.transform.rotation * Quaternion.Inverse(this.transform.rotation);).

If we did this every frame, we would end up with a small rotation delta, a quaternion representing the rotation of that frame. We could get the angle-axis representation from here (RotationDelta.ToAngleAxis(out angle, out axis)).

Using the angle and the axis, we could calculate the three-dimensional angular velocity. Axis was our unit vector, Angle was the angular displacement, and our time was the time the frame took to complete (Time.fixedDeltaTime * angle * axis).

At this point we still had major problems.


1) The calculated rotations would sometimes be the "long way around". That is, if the desired rotation was only 10 degrees off the starting rotation, we saw situations where the object would rotate 350 degrees to get there.

2) If we increased the speed of rotation, we saw a corresponding increase in this really terrible vibration that would occur right at the end of the rotation, right when the orientation of the object had nearly matched the orientation of the controller.

It took a while to figure it out, but when we got there, the solutions were not complicated.

Our first problem was related to the fact that our rotation only ever occurred in one direction. Technically, angular velocity is a pseudovector, with the direction of rotation specified by the right-hand rule. This meant that while the magnitude of our calculated angular velocity was correct, the sign was not. The solution was to look at the magnitude of the angular displacement: if the angle was greater than 180 degrees, we knew it would be faster to reverse the direction of rotation, so we simply subtracted 360 degrees from the angle, flipping the sign of the vector and causing the rotation to occur in the other direction.

The second problem was related not to our math, but to Unity. All rigidbodies have a max angular velocity. In order to have our rotation occur fast enough, we used a large multiplier, but this also caused the angular velocity to go far beyond the default max angular velocity for that rigidbody. The solution was to simply raise the max angular velocity during the instantiation of an interactable item.

RotationDelta = AttachedHand.transform.rotation * Quaternion.Inverse(this.transform.rotation);
PositionDelta = (PickupTransform.position - this.transform.position);
RotationDelta.ToAngleAxis(out angle, out axis);
if (angle > 180)
    angle -= 360;
this.Rigidbody.angularVelocity = (Time.fixedDeltaTime * angle * axis) * AttachedRotationMagic;
this.Rigidbody.velocity = PositionDelta * AttachedPositionMagic * Time.fixedDeltaTime;
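The max-angular-velocity fix described above might look like this (a sketch; the cap value is illustrative, and the class name is hypothetical):

```csharp
using UnityEngine;

// Sketch: lift the cap on angular velocity when an interactable initializes,
// so fast hand rotations aren't clamped by the rigidbody's default limit.
public class InteractableInit : MonoBehaviour
{
    protected virtual void Awake()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        body.maxAngularVelocity = 100f; // Unity's default cap is much lower
    }
}
```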

Equal and opposite

More importantly, interactions between objects still take mass into account. If the player wishes to push a large mass in game, they’ll either have to use a mass of equal or greater value, or quickly accelerate a small mass into the larger one. Pushing a box with a balloon is not nearly as effective as using another box. None of these interactions had to be manually implemented; they’re handled by Unity and PhysX because we use a physics based interaction system.

Mass and Interaction - Hitting a Box with a Balloon

Mass and Interaction - Hitting a Box with a Box

Final Feelings

When we started down the path of an interaction system fully grounded in physics I don't think any of us on the team realized how much time and effort it would require. It was also surprising just how right using velocity to control objects in VR feels. Ultimately, having VR interaction done through physics has given us the freedom to move forward in implementing interactables in the game knowing that no hacks or gimmicks will be required for them to act in a believable and consistent way in the game world.

Interaction plays such a huge role in virtual reality, and can be such a gripping experience, that we think it can be an integral part of many virtual reality titles.

We'll be posting our interaction code to GitHub in the near future so that other people can use it freely. We very much believe in the promise of Virtual Reality to change the way people interact not just with games, but with technology. And we believe that, as a community, we succeed together.

<![CDATA[Using the Vive controllers for Unity's UI (UGUI)]]>While doing rapid prototyping at VREAL it has been really nice to be able to interact with Unity UI elements with the vive controllers. Displaying a lot of debug information or just trying to make quick interactions is really complex if you're trying to do it in physical space. So

http://www.vrinflux.com/using-the-vive-controllers-for-unity-4-6-ui-ugui/6eed4a9c-701b-49d7-ab3e-fb699d3326ebSun, 20 Dec 2015 22:31:01 GMT

While doing rapid prototyping at VREAL it has been really nice to be able to interact with Unity UI elements with the Vive controllers. Displaying a lot of debug information or making quick interactions is really complex if you're trying to do it in physical space. So I created a Unity UI input module for the Vive that lets you interact with UGUI elements in world space. It works for most of the basic use cases and is easily expandable.

To help get other people started, and so we're not duplicating work, we've decided to release it on GitHub here.

The meat of the project is ViveControllerInput.cs in the root directory, but you'll also need UIIgnoreRaycast.cs. I've included the standard 4.6 UGUI example as well to start you off. In the Scenes folder you'll find the unmodified scenes, and in the root is Menu 3D, to which I've added the Vive camera rig and the input module as an example.


Carefully consider whether a 2D UI is the best input method for your VR game before you use it in a consumer-facing project. We have the opportunity to invent new and interesting ways to interact with our worlds, and we should do so. But there is the occasional situation where interacting with a 2D plane using point-and-click UI is the best route. And sometimes, we just don't have time to reinvent human-computer interfaces.


If you generate canvases at runtime, you will need to set their Canvas.worldCamera to ViveControllerInput.Instance.ControllerCamera. The script does this on Start for existing canvases (line 81).
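Wiring up a runtime-generated canvas might look like this (a sketch assuming the ViveControllerInput singleton described above):

```csharp
using UnityEngine;

// Sketch: canvases created at runtime must point at the input module's camera
// so UI raycasts from the controllers can reach them.
public static class RuntimeCanvasExample
{
    public static Canvas CreateWorldCanvas()
    {
        GameObject go = new GameObject("RuntimeCanvas");
        Canvas canvas = go.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;
        canvas.worldCamera = ViveControllerInput.Instance.ControllerCamera;
        return canvas;
    }
}
```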

Using two Input Modules in an event system doesn't seem to be supported at time of writing.


Start off by downloading the project off github at this link: https://github.com/VREALITY/ViveUGUIModule

If you are using the Valve camera rig, this is very straightforward. On the camera rig, create a child object called Input Module and add a ViveControllerInput component to it. Set up the sprite you want to use as a cursor, and optionally give it a material. You can adjust the scale later if it's too big or small. That's it; everything else should be automatic.

How it works

With this system I wanted to allow players to be looking at something else but still hit buttons on a UI. The Unity UI event system is relatively straightforward: you tell it when and where a "pointer" is and it does the rest. The tricky bit was the UI raycasting, which seems like it can only be done from a camera. On Start I create a camera and set it to not render anything; then on each frame I set its position to each controller and UI raycast from there. It's a bit hacky, but it works well and has a tiny overhead.
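The per-frame trick described above can be sketched roughly like this (simplified and hypothetical; the real implementation lives in ViveControllerInput.cs):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// Simplified sketch of the approach described above: a non-rendering camera is
// moved to each controller every frame and used as the origin for UI raycasts.
public class ControllerUIRaycastSketch : MonoBehaviour
{
    public Transform[] Controllers; // the two Vive controller transforms
    private Camera controllerCamera;

    private void Start()
    {
        controllerCamera = new GameObject("Controller UI Camera").AddComponent<Camera>();
        controllerCamera.clearFlags = CameraClearFlags.Nothing;
        controllerCamera.cullingMask = 0; // renders nothing; used only for raycasting
    }

    private void Update()
    {
        foreach (Transform controller in Controllers)
        {
            // Put the camera at the controller so the "pointer" looks down its forward axis
            controllerCamera.transform.position = controller.position;
            controllerCamera.transform.rotation = controller.rotation;

            // Fake a pointer at the center of the camera's view and raycast the UI
            PointerEventData pointerData = new PointerEventData(EventSystem.current);
            pointerData.position = new Vector2(controllerCamera.pixelWidth / 2f, controllerCamera.pixelHeight / 2f);

            List<RaycastResult> results = new List<RaycastResult>();
            EventSystem.current.RaycastAll(pointerData, results);
            // ...then feed the top result into the event system as hover/press events
        }
    }
}
```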

<![CDATA[Vergence-accommodation conflict is a bitch—here’s how to design around it.]]>I really do enjoy a good design challenge, regardless of medium. But there’s challenges, and then there’s the vergence-accommodation conflict (VAC), which is the single most shitty, unavoidable side-effect of how VR headsets are designed.

In a nutshell: the way that the lenses of your eyes focus on

http://www.vrinflux.com/vergence-accommodation-conflict-is-a-bitch-heres-how-to-design-around-it/25883cb4-0c4e-4aa4-b88f-8ccdb3a6362aMon, 09 Nov 2015 01:39:00 GMT

I really do enjoy a good design challenge, regardless of medium. But there’s challenges, and then there’s the vergence-accommodation conflict (VAC), which is the single most shitty, unavoidable side-effect of how VR headsets are designed.

In a nutshell: the way that the lenses of your eyes focus on an object is totally separate from the way that your eyes physically aim themselves at the object you’re trying to focus on. In the image below, A & B are depictions explaining what our weird human eyeballs are doing when we're in VR, and C & D are simulations of how we experience depth when in the real world vs in a VR HMD:

Above image as found in "Vergence-accommodation conflicts hinder visual performance and cause visual fatigue."

What complicates things further is how long it takes your eyes to adjust their vergence when looking from near-field to far-field. You can actually see how slowly vergence and accommodation work together. Try it for yourself and see how long it takes your eyes to adjust:

  1. Go outside or sit in front of a window, anywhere you’ll be able to look out to the horizon (infinity).
  2. Hold your index finger up 3–4 inches from your eyes and focus on it — this should cross your eyes considerably.
  3. Quickly shift your gaze from your finger and look out far away into the distance.

It should have taken your eyes a noticeably long time to adjust to the new focal point: hefty vergence movements take the eyes a second or more, much longer than the fractions of a second it takes our eyes to make saccade movements (looking back and forth, scanning the environment).

So what’s the VR problem? Current-gen HMDs for virtual reality are a flat square screen inside a pair of goggles that simulate depth of field. There is a disparity between the fixed focal distance of the physical screen (accommodation) and the apparent depth of the simulated object you’re staring at (vergence).

Virtual reality headsets like the Rift or the Vive ask you to stare at a screen literally inches from your eyes, but also focus on a point in the simulated world that’s much further away, when human eyes normally do both to one single fixed point. In fact, your brain is so used to performing both of these physiological mechanisms at the same time toward the same fixed points that activating one of them instinctively triggers the other.

All kinds of bad, nasty things happen when people are asked to separate the two: discomfort and fatigue that can cause users to end their sessions early, headaches that persist for hours afterward, even nausea in some people (though it is often hard to separate VAC from all of the other things that make people sick).

Quickly moving objects closer to or further away from the user can fatigue eyes faster.

So it’s in everyone’s best interests to figure out how to solve this problem, whether it’s hardware changes, light fields, or writing code to account for these kinds of effects. I’m not qualified to go into much detail on the programming side of things, but there are some pretty clever developers & academics actively trying to solve this on the software side.

Since we’re stuck with the hardware we’ve got for now, and the software approach is still TBD, we have a band-aid solution in design. Fortunately, there are some good best practices in virtual reality UX design that we can use to reduce or avoid VAC-induced discomfort. The following are some solutions presented by Hoffman et al in their paper located here, combined with my own experiences:

  1. Use long viewing distances when possible — focus cues have less influence as the distance to the display increases. Beyond 1 meter should suffice.

  2. Match the simulated distance in the display and focal distance as well as possible. That means placing in-game objects that are being examined or interacted with in the same range as the simulated static focal point.

  3. Move objects in and out of depth at a pace that gives the user’s eyes more time to adjust. Quickly moving objects closer to or further away from the user can fatigue eyes faster.

  4. Maximize the reliability of other depth cues. Accurate perspective, shading realism, and other visual cues that convey realistic depth help take a cognitive load off brains already coping with VAC. Note: simulated blur falls into this category, but current approaches to blur effects tend to exacerbate the negative impacts of VAC, ymmv.

  5. Minimize the consequences of VAC by making existing conflicts less obvious. Try not to stack multiple smaller objects at widely-varying depths overlapping each other, especially not when your users will be viewing them head-on for an extended period of time. Also try to increase the distance to the virtual scene whenever possible.

These best practices won’t solve the problem entirely, but they’ll definitely make a difference in the comfort and stamina of your users.

Further reading:

Resolving the Vergence-Accommodation Conflict in Head Mounted Displays by Gregory Kramida and Amitabh Varshney

<![CDATA[The user is disabled: solving for physical limitations in VR]]>Motion tracking has several accessibility challenges, and not just because of physical disabilities, but because we currently can't do a good job with complex movements involving body parts that can't hold controllers. We are doing a good job on the head tracking front because, duh, we're wearing those goggles on

http://www.vrinflux.com/the-user-is-disabled-solving-for-physical-limitations-in-vr/7d60a55b-68a5-4e60-b0f2-44e7fe72dc48Tue, 11 Aug 2015 00:38:00 GMT

Motion tracking has several accessibility challenges, and not just because of physical disabilities, but because we currently can't do a good job with complex movements involving body parts that can't hold controllers. We are doing a good job on the head tracking front because, duh, we're wearing those goggles on our heads. But when it comes to the 1:1 translation of movement into VR, we're fairly limited (for now).

As some people may have experienced in the past with the Kinect, if the optics software was designed to expect you to have four limbs clearly visible, or to be standing, your user experience sucks.

Consider optic motion tracking, where we already have rudimentary full-body tracking, or finger tracking — but only within certain bounds as dictated by the limitations of how optics tracking fundamentally works.

Optics tracking has real problems like occlusion, or being designed for people who fit a certain body stereotype. As some people may have experienced in the past with the Kinect, if the optics software was designed to expect you to have four limbs clearly visible, or to be standing, your user experience sucks. You might not even be able to use the technology at all!

Present-day optics do a bad job of detecting where you're located or what your limbs are doing if you don't match what the software expects your body to look like.

Defining the design challenges

So, we don't have good 1:1 full-body tracking for everyone, or anyone, whether it's with Lighthouse-style lasers or the more traditional optics solution. And based on the designs of the Vive and Rift controllers, we also know that most(?) of our finger interactions are going to be abstracted through button inputs.

These two design factors mean that every single user, disabled or otherwise, will be unable to perform many basic physiological human functions within virtual reality, simply because we have no way of translating those actions into the game world. One obvious example: eye tracking isn't really a thing (yet), and inputs like finger gestures or moving your feet won't be possible either.

In essence, there are two challenges here:

  1. People who are fully or mostly able-bodied in real life won't have full-body functionality in virtual reality due to hardware limitations

  2. People who have some kind of physical limitation in real life will struggle with assumptions that the user is able-bodied, to the point where certain VR experiences are impossible (thanks optics)

In a bad-good way, having to accommodate #1 will level the playing field for lots of people who are impacted by #2. Because we can't run around freely with our VR goggles on, walls be damned, everyone is limited to the physics and geography of real-space, not just people with physical disabilities.

What works well?

The Lighthouse system in particular is actually great for accessibility all-around because it makes no assumptions about the completeness or orientation of the user's body, unlike the assumptions optics has to rely on. All that Lighthouse cares about is how high your head is from the ground and whether you can hold and manipulate the controllers.

But, since Lighthouse is made for room-scale VR, we also need to consider how the user is being asked to move around in real-space in order to manipulate themselves in VR-space. This is where content creators come in!

Sitting VR experiences are perfectly suited to anyone who can sit and has a place to do so comfortably, but denies locomotion and severely restricts anything that can't be done from one's seat. These are great for games that keep your user in a cockpit, or sitting at a control panel where every interactive piece is easily within reach. Lots of accessibility needs met, since there are few physical demands.

Standing-only VR experiences start allowing for full-body movements like ducking and dodging, but also make an assumption that the user is able to stay standing the entire time.

...we start making assumptions that the user can walk, crouch, kneel, and sometimes do so repeatedly over a long period of gameplay. You don't have to be physically disabled to understand how this might not be the best user experience.

When we get out as far as room-scale, where the user is able to move freely around within the bounds of the active tracking space, we start making assumptions that the user can walk, crouch, kneel, and sometimes do so repeatedly over a long period of gameplay. You don't have to be physically disabled to understand how this might not be the best user experience.

Anyone with bad knees or an old injury who is asked to repeatedly crouch down and pick things up off the ground can quickly hit their limit and put your game down simply because they can't meet the physical demands. Even reaching for the floor can be difficult, depending on the user's range and flexibility.

(Image: Brian reaching down to pick an object up off the floor inside a Vive demo)

Accessibility options for UI design

Right now, virtual reality is primarily limited to seated or standing experiences, with a growing space for room-scale thanks to the Vive. Content creators have a big design challenge on their plate: maximizing awesome interactions while minimizing physical discomfort and dissonance between user input and game output (e.g. pressing a button to walk forward, instead of just, you know, using your legs).

Players who have trouble making use of VR controllers can be offered the option to use gaze as an alternative UI to interact with the environment.

We can also provide alternative interaction models or user interfaces within the VR experiences we're creating. This could be something as simple as providing a Seated Mode of gameplay, where all of the interactive objects in the game are within arm's reach while sitting down, or maybe it changes player movement in-game to suit users who can't stand or move around easily.

Players who have trouble making use of VR controllers can be offered the option to use gaze as an alternative UI to interact with the environment. This is something that we can do on any VR headset, not just the Vive. A little Gear VR demo called Bazaar makes great use of this model of interaction along with nodding to confirm. The win for accessibility here is the happy side effect of being designed for a VR headset that can't rely on the user having a controller at all.
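Gaze-based selection like this is usually implemented with a dwell timer: the user looks at an object, and if their gaze stays on it long enough, it counts as a click. Below is a minimal, engine-agnostic sketch of that idea; the class name, the `dwell_seconds` parameter, and the per-frame `update` call are all illustrative, not from any particular VR SDK.

```python
class GazeSelector:
    """Selects whatever the user has gazed at continuously for dwell_seconds."""

    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.gaze_time = 0.0

    def update(self, gazed_target, dt):
        """Call once per frame with the object under the gaze ray (or None)
        and the frame time dt. Returns the selected object once the dwell
        timer fills, otherwise None."""
        if gazed_target != self.current_target:
            # Gaze moved: restart the dwell timer on the new target.
            self.current_target = gazed_target
            self.gaze_time = 0.0
        if gazed_target is None:
            return None
        self.gaze_time += dt
        if self.gaze_time >= self.dwell_seconds:
            self.gaze_time = 0.0  # require a fresh dwell before re-selecting
            return gazed_target
        return None
```

A nod-to-confirm scheme, like Bazaar's, can layer on top of this by only firing the selection once a head-pitch gesture is detected while the dwell target is active.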

And if you think you can make a safe assumption that most people will be able to play your game comfortably, check your assumptions with tons of user testing and feedback. Almost 33% of adults in the U.S. have at least one basic action difficulty or complex activity limitation. That's a huge chunk of VR's potential consumer market, so interaction designs should be evaluated for any experiences that require uncomfortable or even impossible interactions from users.

Another smart consideration of this design need is a game coming out later this year for the Vive called The Gallery, which puts the player on a platform that moves both vertically and horizontally, serving as a way to guide the player around the game world while keeping interactive controls close-by. This is great for users who can't continually crouch, reach, stoop down, or other types of physical activity that cause discomfort.

By defining user input and interface limitations on both the hardware side and the human side, we can better tailor VR experiences to specific degrees of physical demand and limitation for everyone. We can also offer accessibility settings, similar to a real-life "difficulty" setting, that match the VR interactions and UI to the capabilities of the end user. Designing with accessibility in mind from the beginning truly makes it possible for everyone to experience the magic of VR.

<![CDATA[Virtual gateways and event boundaries]]>Gateways are quickly becoming a common method of transitioning users from one scene to the next. You can build portals, doors, or windows, and the idea is the same for each: being able to move from one side of a gateway to the other marks a transition from one environment

http://www.vrinflux.com/virtual-gateways-and-event-boundaries/b2ae506a-e047-40bd-9139-9a32f9829bf3Sat, 13 Jun 2015 00:34:00 GMT

Gateways are quickly becoming a common method of transitioning users from one scene to the next. You can build portals, doors, or windows, and the idea is the same for each: being able to move from one side of a gateway to the other marks a transition from one environment or viewpoint to the next.

Using gateways comes with some interesting psychological impacts that help explain why they work so well as transitional elements. Aside from gateways being a pretty cool feature in general — stepping through a literal doorway into another world is the tangible act of what VR is metaphorically providing to humankind — our working memory is impacted in a cool way as well.

When you walk through a doorway, it serves as a trigger for the mind to file memories away.

A psychologist named Gabriel Radvansky performed a series of studies on the cognitive impacts of passing through a doorway, or "event boundary". Radvansky found that subjects in his experiment had forgotten more after walking through a doorway as opposed to staying in the same room, suggesting that the event boundary itself actually inhibits a subject's ability to retrieve thoughts or decisions made in the room prior.

The impact of passing through an event boundary manifests as a release of working memory load. When you walk through a doorway, be it real or simulated, the act serves as a trigger for the mind to file memories away, freeing up space for us to process the new environment we're in. All of us have experienced this in realtime at some point: getting up to take care of something in a different room, then finding ourselves standing in that room with no idea why we're there or what we were intending to do.

The most compelling aspect? Radvansky did his doorway experiments in both the physical world and a virtual one. Physical or simulated, the results were the same: people who experience moving through a doorway make memory adjustments, discarding information gathered in the previous room.

"When we walk from one room to another, information about people and objects that we were dealing with in the old room is less likely to be relevant, and it appears our memory machinery is optimized to take advantage of this, releasing that old information to make information about the new situation more accessible."
Jeffrey M. Zacks, psychology professor and director of The Dynamic Cognition Laboratory

The human brain holds onto a limited model of the world that stays active in our awareness, relying on the fact that we can always check in with reality if we need a quick update. Whether you see it as lazy or efficient, brains are great at making decisions for us about what's worth expending cognitive resources on in order to recreate the real world inside our heads. When reality is constantly available to perceive through our senses, why waste the effort and energy on modeling a detailed high-fidelity version of reality when we can perpetually check in with it whenever we want to?

But if the environment changes, or if we move to a new environment, say, through a doorway, then we may lose information that we really only built in broad strokes inside our heads and subsequently fail to notice changes in the environment. Noticeable ones, even. There have been plenty of studies, cousins to Radvansky's doorway experiments, on how people react to changes in their environment. They all strike upon the numerous failures that can occur when our brains fail to recognize discrepancies between the real world and our cognitive working model of it.

VR designers can use gateways as a method of supporting the cognitive transition happening in the between-space.

It doesn't matter if you're designing a lobby for launching applications, a communal space for socializing, or a game with multiple discrete locations. VR designers can use gateways as a method of supporting the cognitive transition happening in the between-space. Freeing up mental resources can be a great way to visually punctuate changes in location, time, scenery, or state. We can also make educated choices about what features or events to keep in the same "room" so we don't fight against the user's brain wanting to wipe the working memory slate clean if we have them traverse through a gateway in the middle of a mentally-taxing activity.

Of course, this type of mental state change can also be used against the user's brain: a mechanic that centers around changing details of the environment when you're not paying attention to them, like the VR experience Sightline uses, forces the user to pay attention to what they would otherwise miss and puts even more pressure on their working memory. It can also be used to create a powerful sense of dread or uneasiness, like we saw in the Silent Hill PT demo, since the world around us is behaving in an unexpected way. We could also use a strategically-placed gateway to help the user take a cognitive load off and mentally prepare for the next experience or task.

Above: a demo playthrough of Sightline for the Rift

How the brain processes and stores information about the reality it experiences offers a new lens through which to examine the work being done in virtual reality. Insights from studies like Radvansky's help work toward best practices and the optimal user experience within VR design.

Further reading:

"Walking through doorways causes forgetting: Further explorations." by Radvansky et al

<![CDATA[Oculus Touch / Lighthouse emulation with a Kinect V2 and Xbox controllers]]>About a month ago I got a chance to try the HTC/Valve Vive. It is completely amazing. Room scale VR is super awesome and half of the reason for that is input. The controllers themselves are so-so. I'm not yet totally sold on these funky touchpad wiimote things. But

http://www.vrinflux.com/oculus-touch-lighthouse-emulation-with-a-kinect-v2-and-xbox-controllers/76a87751-4702-4354-81a2-80b8f29d9db8Tue, 02 Jun 2015 21:27:00 GMT

About a month ago I got a chance to try the HTC/Valve Vive. It is completely amazing. Room-scale VR is super awesome, and half of the reason for that is input. The controllers themselves are so-so; I'm not yet totally sold on these funky touchpad wiimote things. But having any sort of three-dimensional input solution for VR worlds is no longer a nice-to-have, it's a requirement. The difference between a keyboard/mouse and high-precision tracking of head/hands is roughly the same as the difference between an LCD monitor and an HMD. The Vive really seems like the full package we need for consumer-grade VR. I want one, and I want one now. I tried the Phillip J. Fry approach - but it was not very effective.


After realizing it was going to take more than half an hour to get my hands on a Vive setup of my own, I quickly set out to create an intermediary setup that I could use to test with until I get my hands on the real thing. I'm using a Kinect v2 to do the positional tracking of my head and hands, letting the Rift do rotation, and using an Xbox 360 controller in each hand for buttons. This setup will cost you a few hundred dollars for a solution that maybe a dozen developers will support and that will be irrelevant in six months. So if you're a consumer, this is not for you. If you're a developer looking to start working on concepts for the Vive before you can actually get one, here's what you'll need to make one yourself:

Parts List

Software List

Kinect Setup

I recommend clearing out a roughly 10 foot by 10 foot space. I'm currently working at a rad VR startup and we have the room, but if your space is a bit smaller that's okay. Just keep in mind that the bigger the space you have, the more awesome it's going to be. The Kinect has a minimum range of about 2 feet and a maximum of about 12. The tracking does get a little janky at either extreme.

It's also a good idea to have a solid wall as the backdrop. At first I had it pointed at my computer desk and was getting a lot of jitter. Once I moved it so it was pointed at a wall the jitter significantly decreased.

Place the Kinect at about waist height to get the best accuracy at the widest variety of ranges.

You can easily check to see what the range of the Kinect's tracking is by opening up Kinect Studio v2.0 which should have installed with the Kinect for Windows SDK. (Mine's at C:\Program Files\Microsoft SDKs\Kinect\v2.0_1409\Tools\KinectStudio)

(Image: checking the Kinect's tracking range in Kinect Studio v2.0)

Computer Location

Placing your computer at the far reach of your Kinect seems to work the best for me. That way the cord for the DK2 is usually behind or to one side of me instead of coming over my shoulder. Ideally, we'd install some sort of cable boom solution so we never trip over the cord. But for now this is ok.

Known Issues

The Kinect is an amazing piece of technology, but optical tracking is not ideal for VR. The main issue is called occlusion. If you're facing away from the camera, then it can't see your hands, and doesn't know where they are. This would be a bigger issue if we were trying to use this for consumer grade VR, but for early development it doesn't cause too many problems.

It's hard to tell which avatar has the headset on. I've had the best luck with attaching it to the first avatar the Kinect picks up and sticking with that until it loses tracking, then reassigning to the next avatar that's closest to the camera. That seems to account for most of the scenarios I run into.
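The heuristic above is simple enough to sketch in a few lines. This is an illustrative, engine-agnostic version (the function name and the `avatar_id -> distance` data shape are assumptions, not the template's actual API):

```python
def pick_headset_avatar(current_id, tracked):
    """Decide which tracked avatar the headset should be attached to.

    tracked: dict mapping avatar_id -> distance from the camera in meters.
    Returns the chosen avatar id, or None if nobody is in view."""
    if current_id in tracked:
        return current_id  # stick with the current avatar while it's tracked
    if not tracked:
        return None        # tracking lost and no candidates in view
    # Tracking lost: reassign to whichever avatar is closest to the camera.
    return min(tracked, key=tracked.get)
```

Called once per frame, this keeps the headset "sticky" on one avatar and only reassigns when that avatar drops out of tracking.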

The version of InControl included with this package is the free version which is a bit old now. It's throwing an error for me about having multiple XInput plugins (which I don't see). That error doesn't seem to be causing any actual problems though, so I'm ignoring it for now. You also have to have the game window selected for InControl to recognize controller input. (Had fun troubleshooting this while making the gifs!)

Rotation doesn't work really well. Trying to parse joint rotation out of depth data is not a trivial problem. The Kinect does pretty well at it given the data it gets, but it is not super reliable.

The Kinect is built to do motion tracking - not position tracking. So you're going to get a bit of jitter trying to do positional tracking with it. The common solution to this problem is to just average out the location over multiple frames. This has a latency cost, but you get the benefit of not vomiting all over your equipment.
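The frame-averaging approach described above can be sketched as a small rolling-window smoother. This is a minimal illustration of the technique, not the template's actual `KinectAvatar.PositionSmoothing` implementation; the window size is the latency-vs-smoothness knob.

```python
from collections import deque

class PositionSmoother:
    """Averages the last `window` raw positions to damp tracking jitter.

    A bigger window means smoother output but more latency."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)  # most recent raw (x, y, z) samples

    def update(self, position):
        """Feed one raw (x, y, z) sample; returns the smoothed position."""
        self.samples.append(position)
        n = len(self.samples)
        # Per-axis mean over the current window.
        return tuple(sum(axis) / n for axis in zip(*self.samples))
```

Feeding in one raw Kinect joint position per frame and rendering the returned average is the whole trick; the jitter is averaged away at the cost of the output trailing the true position by a few frames.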


Unity Quickstart Template

I've taken the Oculus SDK, the Kinect V2 SDK, and an input mapper called InControl (which is awesome) and edited them a bit to create a template to easily get started on new projects. The Kinect V2 Unity plugin is (as of this posting) in the very early days of development. So I've made some improvements and added some functionality that has been common to most of the prototyping I've been doing.

  • Tracks the first avatar who enters the space and assigns the OVR headset to them. When tracking is lost on that avatar it is reassigned to the next closest avatar (KinectManager.Instance.CurrentAvatar)
  • Stubs for avatar initialization and destruction (KinectAvatar.Initialize / KinectAvatar.Kill)
  • Kinect joint to Unity transform mapping (KinectAvatar.JointMapping)
  • Common joint transform shortcuts (KinectAvatar.LeftHand, .RightHand, .Head)
  • A shortcut for getting a Ray for where the user is pointing (KinectAvatar.GetHandRay)
  • Position / Rotation smoothing. Higher numbers means faster (KinectAvatar.PositionSmoothing, .RotationSmoothing)
  • Avatar Scale (KinectAvatar.Scale)
  • "Chaperone" type wall warning system (WallChecker.cs / Walls.prefab)
  • Recenter OVR camera with Start button (look at the kinect camera when hitting this)

Template Example

In the template I've included an example scene that lets you build simple structures with cubes, color them, and shoot them. You can use one or two controllers; they both do the same things right now.

  • Start - Recenters OVR camera
  • Bumper - Hold to aim, release to shoot a bullet that adds a rigidbody to blocks you've created
  • Trigger - Hold inside a block to move. Press outside a block to create. Release to place.
  • Stick - Press down to bring up color menu. Move stick to select color. I took this concept (poorly) from the absolutely amazing TiltBrush.



Set up your bounds first or you will run into things! I included WallChecker.cs with this package that I use to do this. It will fade in all the walls when you get close to one to account for backing into things. But you have to manually position the walls to match your space. This could technically be accomplished with the Kinect using the raw depth data but I haven't had a chance to do it.
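The wall-fade behavior WallChecker.cs provides boils down to mapping headset-to-wall distance onto wall opacity. Here's a hedged sketch of that ramp; the function name and the specific distance thresholds are illustrative, not the values the script actually uses.

```python
def wall_opacity(distance, fade_start=1.0, fade_end=0.25):
    """Map distance to a wall (meters) onto an opacity in [0, 1].

    Fully transparent beyond fade_start, fully opaque at fade_end or
    closer, with a linear ramp in between."""
    if distance >= fade_start:
        return 0.0  # far away: wall stays invisible
    if distance <= fade_end:
        return 1.0  # dangerously close: wall fully visible
    return (fade_start - distance) / (fade_start - fade_end)
```

Evaluating this once per frame for each wall (including the ones behind you) is what handles the backing-into-things case the paragraph above mentions.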


I've got the project up on GitHub here: https://github.com/zite/ViveEmuTemplate

Let me know if you have any questions or issues with this setup and I'll do my best to help. Also shoot me a message here or on Twitter (@zite00) if you end up building something cool! This is quite a lot of fun to play with and has definitely made me a believer in room-scale VR.

<![CDATA[The critical problem with optical body and hand tracking in VR]]>There is a critical problem with optical body and hand tracking in VR that has so far gone unaddressed - perspective. I'm not talking about how tired we all are of watching fish-eye lens videos. I mean we're only using one point of view. Current tracking solutions can't see around

http://www.vrinflux.com/the-critical-problem-with-optical-body-and-hand-tracking-in-vr/2d7c7061-feda-4a75-ba38-77aa38879274Fri, 13 Feb 2015 19:47:00 GMT

There is a critical problem with optical body and hand tracking in VR that has so far gone unaddressed - perspective. I'm not talking about how tired we all are of watching fish-eye lens videos. I mean we're only using one point of view. Current tracking solutions can't see around things.

Let's go back in time a bit to the announcement of the Kinect at E3 2009. As you may remember, we were initially promised one-to-one body tracking. People were really excited. If you punched, your avatar in the game would punch. If you dodged, your avatar in game would dodge.


But games that "harnessed the full power of the Kinect" never came. Partially because the resolution of the IR cameras on the Kinect was not high enough. With the Kinect 2 Microsoft has significantly increased that resolution to provide much higher fidelity and that has helped a lot. But there is still the problem of perspective.

If I have my body facing the Kinect it can get a pretty accurate picture of where all my joints are. In this gif the blue colored lines are the Kinect 2's tracking of my joints overlaid on top of the Kinect 2's video feed.

(Gif: the Kinect 2's tracking of my joints, in blue, overlaid on its video feed)

As you can see, it does a very good job of tracking your body if you are directly facing it. That does allow for the kind of direct one-to-one body-to-avatar tracking experience shown in the initial demo. For a traditional gaming experience you're generally going to be facing the screen. But generally is not always, and that's where we run into trouble.

If I turn with the left side of my body facing the Kinect, it can no longer see the right side of my body. The Kinect is a stationary device. This means that it has a static view of the user. If you turn your body it can't circle around you to maintain an ideal tracking position. It has to attempt to use fancy prediction algorithms to guess where the parts of your body are that it can't see.

With VR this becomes a much larger problem. The beauty of virtual reality is that we get whole environments to explore and be immersed in. If the user turns around to look at something cool suddenly the Kinect can't see the front of their body. For the user this manifests as arms / hands freezing in a static location in the best case, or flopping around randomly in the worst case. Here I've got the Kinect's view and tracking on the left. On the right is another camera pointed at me from the side to show you what I'm actually doing.

(Gif: the Kinect's view and tracking on the left; a side camera showing what I'm actually doing on the right)

You'll notice the Kinect barely recognizes a change at all. It has some nice prediction with my left arm, correctly guessing that its idle position is in front of my body. But when I move my arms up it has no way to track that and guesses that they stay stationary.

This kind of behavior in game would completely and immediately break immersion. And that's not even the worst of it. If you've got hand/body tracking then you're likely using it as some form of input too. So now, not only is the user violently aware of the limitations of their simulation, but they can't interact with it either.

Dynamic directional tracking with the Leap Motion

The Leap Motion is basically a tiny Kinect, except instead of doing whole-body tracking, it specializes in finger tracking. The Leap Motion launched in early 2013, initially developed as a gesture-based system for anything from games to spreadsheets. The idea was that developers would invent whole new user interfaces for computing to capitalize on this new form of input. You set it up by placing it on your desk in front of the keyboard and plugging it into USB.


Developers quickly discovered that interaction with a 2D screen using a 3D input device made zero sense. There were some cool demos, but I never saw any applications that amounted to much more than fancy gimmicks. Conveniently, as the Leap Motion was failing to deliver us into the "future of computing" - VR started its comeback.

Having finger tracking in VR is absolutely amazing. I can't stress this enough. As our lord and savior Palmer Luckey mentions every time he gets a chance, the first thing users do when they put on a headset is look for their hands. Seeing them there is just awesome.

Like most of VR it's one of those things that is hard to express if you haven't tried it. But for as awesome as having your hands in VR is, having to keep your hands positioned over a tiny device - that you can't see - is terrible. One of the other drawbacks of the Leap Motion is that it has a very small range. This limits the position of your hands to directly in front of you in the environment. You have to be very conscious of where the sensor is so your hands maintain tracking. If you move your hands outside of these bounds then suddenly they disappear.

Some developer had a stroke of genius and realized that the majority of the time you're using your hands - you are also looking at them. And an elegant solution was born.

(Image: a Leap Motion attached to the front of the Rift)

There are actual elegant solutions, but they require a 3D printer or cost money. Duct tape works okay, though it does mess up the DK2's positional tracking.

Attaching the Leap Motion to the Rift greatly improves the perspective problem. Now, instead of being a static perspective you get a dynamic directional perspective. The device can virtually see what you see.



The Leap Motion seeing what you see isn't good enough. Even if we imagine the Leap Motion's other issues (resolution, field of view, range) being resolved, there is still the issue of singular perspective. Prediction algorithms could be significantly improved, but they can still only do so much.

If I close my fist with the back of my hand facing the Leap Motion, it cannot see any of my fingers. If I then extend my fingers it can, at best, see little slivers of one section of my fingers. That's simply not enough data to create a full 3D model from.

In the gif below you can see what I'm talking about. The left is the Leap Motion's IR camera view with the fingers it is tracking overlaid on top. The right is a side view of what my hand is doing. As I move my fingers, the Leap Motion cannot see them and so does not track the changes in their position. It just leaves my fingers as a clenched fist. Bad news if I'm in VR trying to let go of an object I'm holding.

(Gif: the Leap Motion's IR camera view with tracked fingers overlaid on the left; a side view of my hand on the right)

Supplementing the dynamic-directional camera of the Leap Motion with the DK2's positional tracking camera would in theory give you more data. Adding another perspective would significantly help the tracking issues. However, the DK2's positional camera has a static position. Once you turn to the side, or your hand goes below the level of that camera, you run into the same issue as before: lack of data. Even in the example gif above, it's hard for the human eye to detect that I'm extending my fingers one by one; an IR camera isn't going to do any better.

Just add lots of cameras!

As simple as this is, it's not a bad idea, technically. With 3 Kinect-style cameras positioned triangularly around you, you have three separate perspectives to look at the user from. If one camera can only see the back of your palm, then it's nearly assured that at least one other camera can see the front. Combining the data from multiple perspectives makes for very accurate models.
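One simple way to combine perspectives like this is a confidence-weighted average per joint: each camera reports an estimated position plus a confidence (occluded joints get zero weight), and the fused position is the weighted mean. This is an illustrative sketch of that fusion idea, not any shipping SDK's API:

```python
def fuse_joint(estimates):
    """Fuse one joint's position from several cameras.

    estimates: list of ((x, y, z), confidence) pairs, one per camera,
    where confidence is 0.0 for an occluded view up to 1.0 for a clear one.
    Returns the confidence-weighted average position, or None if no camera
    can see the joint at all."""
    total = sum(conf for _, conf in estimates)
    if total == 0:
        return None  # occluded from every perspective
    return tuple(
        sum(pos[i] * conf for pos, conf in estimates) / total
        for i in range(3)
    )
```

With three cameras placed triangularly, a palm occluded from one viewpoint simply contributes zero weight from that camera while the other two carry the estimate, which is exactly the redundancy argument made above.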

Though, this is pretty cumbersome for a user.

Those Kinects have to be set up at about waist height. They also need power running to them. Then they need cables running to 3 USB3 ports on a desktop computer. You also have to position the Kinect 2 a minimum of 3 feet away from you for proper tracking. Imagine a 6-foot tracking radius for the user to extend their arms inside of and you've got a space requirement of about 10 feet square. And that's not accounting for the user stepping forward at all. Tack on the $600 (pre-tax) price tag of those three devices (without mounts) and the setup is arguably out of the average VR enthusiast's reach.

No other solution

There really is no other solution to this issue. The data simply isn't there, and the only way to get it is to add more sensors and average the data. Ignoring this problem or trying to get around it with shmancy algorithms is just going to make for a crappy user experience due to the inevitable loss of tracking.

The good news is that the concept of this type of multi-sensor setup makes a lot of sense for a system like the Virtuix Omni or the Cyberith Virtualizer. Though, there is still the issue of added cost to an already expensive product. And they may not look quite as slick with a few sensors sticking out on all sides. But it could provide real, 360-degree, full-body and finger tracking. No other solution out there can claim that.