Best Practices Guide
http://developer.oculusvr.com/best-practices
January 9, 2015 version

A note on Gear VR: Welcome to the Oculus Best Practices Guide! This guide describes guidelines for developing content for the Oculus Rift Development Kit 2 (DK2). At this time, the guide does not explicitly address the Samsung Gear VR. Although many of the same best practices apply across the entire medium of VR, please keep in mind the following key differences between the two products:
● The DK2 has six-degree-of-freedom position tracking, but the Gear VR does not.
The goal of this guide is to help developers create VR content that promotes:
● Oculomotor comfort - avoiding eye strain.
● Bodily comfort - preventing feelings of disorientation and nausea.
● Positive user experience - providing fun, immersive, and engaging interactions.
Reducing the eye-render buffer resolution while maintaining display resolution can improve performance with less of an effect on visual quality than reducing both.

Head-tracking and Viewpoint
● Avoid visuals that upset the user's sense of stability in their environment. Rotating or moving the horizon line or other large components of the user's environment in conflict with the user's real-world self-motion (or lack thereof) can be discomforting.
In VR, acceleration means any change in the motion of the user: slowing down or stopping, turning while moving or standing still, and stepping or getting pushed sideways are all forms of acceleration.
● Have accelerations initiated and controlled by the user whenever possible.
● Shaking, jerking, or bobbing the camera will be uncomfortable for the player.
● Consider implementing mechanisms that allow users to adjust the intensity of the visual experience. This will be content-specific, but adjustments might include movement speed, the size of accelerations, or the breadth of the displayed FOV. Any such settings should default to the lowest-intensity experience.
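The adjustable-intensity idea above can be sketched as a simple settings structure. All names, fields, and numeric presets here are illustrative assumptions of ours, not part of any Oculus SDK; the one point taken from the guide is that every field defaults to the gentlest option.

```python
# Hypothetical comfort-settings sketch; field names and values are
# illustrative, not from any Oculus API.
from dataclasses import dataclass

@dataclass
class ComfortSettings:
    movement_speed: float = 1.0       # m/s, roughly walking pace (slowest preset)
    acceleration_scale: float = 0.0   # 0 = instantaneous transitions only
    fov_scale: float = 0.8            # fraction of the full display FOV shown

    def intensity_up(self) -> "ComfortSettings":
        """Return a copy with each field nudged toward a more intense preset."""
        return ComfortSettings(
            movement_speed=min(self.movement_speed * 1.5, 6.0),
            acceleration_scale=min(self.acceleration_scale + 0.25, 1.0),
            fov_scale=min(self.fov_scale + 0.1, 1.0),
        )

# Defaults are the lowest-intensity experience, per the guideline above.
defaults = ComfortSettings()
```

Users then opt in to more intense presets explicitly, rather than having to discover how to tone the experience down after becoming uncomfortable.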
Controlling the Avatar
● User input devices can't be seen while wearing the Rift. Allow the use of familiar controllers as the default input method. If a keyboard is absolutely required, keep in mind that users will have to rely on tactile feedback (or trying keys) to find controls.
● Consider using head movement itself as a direct control or as a way of introducing context sensitivity into your control scheme.
…whether or not they wish to experience it.
● Don't rely entirely on the stereoscopic 3D effect to provide depth to your content; lighting, texture, parallax (the way objects appear to move in relation to each other when the user moves), and other visual features are equally (if not more) important to conveying depth and space to the user. These depth cues should be consistent with the direction and magnitude of the stereoscopic effect.
Appendices for further reading and detail
Appendix A - Introduction to Best Practices
Appendix B - Binocular Vision, Stereoscopic Imaging and Depth Cues
  Basics
  Monocular depth cues
  Comfortable viewing distances inside the Rift
  Effects of Inter-Camera Distance
  Potential Issues with Fusing Two Images
Appendix C - Field of View and Scale (0.4 SDK)
Appendix H - User Interface
  Heads-Up Display (HUD)
  Avatars
  Weapons and Tools
Appendix I - User Input and Navigation
  Mouse, Keyboard, Gamepad
  Alternative input methods
  Navigation
Appendix J - Content Creation
  Novel Demands
  Art Assets
  Audio Design
  User and Environment Scale
Appendix K - Closing thoughts on effective VR (for now)
Appendix L - Health and Safety Warnings

© January 2015, Oculus VR, LLC
Appendix A - Introduction to Best Practices
● This guide will help you make comfortable, usable VR content. Visit http://developer.oculusvr.com/best-practices for the most up-to-date information.
These appendices serve to elaborate on the best practices summarized above for producing Virtual Reality (VR) experiences for the Oculus Rift.
Appendix B - Binocular Vision, Stereoscopic Imaging and Depth Cues
● The brain uses differences between your eyes' viewpoints to perceive depth. Don't neglect monocular depth cues, such as texture and lighting.
● The most comfortable range of depths for a user to look at in the Rift is between 0.75 and 3.5 meters (1 unit in Unity = 1 meter).
● Set the distance between the virtual cameras to the distance between the user's pupils, as provided by the OVR config tool.
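The 0.75-3.5 m comfort range above can be enforced mechanically when placing attention-critical objects (menus, reticles, readable text). A minimal sketch; the function name and usage are ours, not from any SDK:

```python
# Comfortable fixation distances from the guide (meters; 1 Unity unit = 1 m).
COMFORT_NEAR_M = 0.75
COMFORT_FAR_M = 3.5

def clamp_to_comfort_range(distance_m: float) -> float:
    """Clamp a proposed object-placement distance into the comfortable
    viewing range recommended above for content the user will fixate on."""
    return max(COMFORT_NEAR_M, min(distance_m, COMFORT_FAR_M))
```

A placement system could run every HUD-replacement element through a check like this before spawning it in the world.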
…stereoscopic 3D.

Comfortable viewing distances inside the Rift
Two issues are of primary importance to understanding eye comfort when the eyes are fixating on (i.e., looking at) an object: accommodative demand and vergence demand. Accommodative demand refers to how your eyes have to adjust the shape of their lenses to bring a depth plane into focus (a process known as accommodation).
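Both demands can be quantified with standard optics conventions: accommodative demand in diopters (the reciprocal of the fixation distance in meters), and vergence demand as the angle between the two eyes' lines of sight. A sketch under those standard definitions (the 0.064 m default IPD is only an illustrative typical value, not from this guide):

```python
import math

def accommodative_demand_diopters(distance_m: float) -> float:
    """Diopters of accommodation needed to focus at distance_m
    (standard definition: 1 / distance in meters)."""
    return 1.0 / distance_m

def vergence_demand_deg(distance_m: float, ipd_m: float = 0.064) -> float:
    """Angle (degrees) between the eyes' lines of sight when fixating at
    distance_m, given an inter-pupillary distance ipd_m. Demand shrinks
    rapidly with distance, which is why distant objects are easy to fuse."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))
```

The accommodation-vergence conflict arises because the Rift's optics fix accommodative demand at one depth plane while stereoscopic rendering varies vergence demand freely.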
…world in focus when the lenses of their eyes are accommodated to the depth plane of the virtual screen. This can potentially lead to frustration or eye strain in a minority of users, as their eyes may have difficulty focusing appropriately. Some developers have found that depth-of-field effects can be both immersive and comfortable for situations in which you know where the user is looking.
Figure 1: The inter-camera distance (ICD) between the left and right scene cameras (left) must be proportional to the user's inter-pupillary distance (IPD; right). Any scaling factor applied to the ICD must also be applied to the entire head model and to the distance-related guidelines provided throughout this guide.

Potential Issues with Fusing Two Images
We often face situations in the real world where each eye gets a very different viewpoint, and we generally have little problem with it.
Appendix C - Field of View and Scale (0.4 SDK)
● The FOV of the virtual cameras must match the visible display area (abbreviated cFOV and dFOV here, respectively). In general, don't mess with the default FOV.
Field of view can refer to different things, which we will first disambiguate. The term display field of view (dFOV) refers to the part of the user's physical visual field occupied by VR content; it is a physical characteristic of the hardware and optics.
…impact visual-motor functioning after removing the Rift. The SDK will allow manipulation of the cFOV and dFOV without changing the scale, and it does so by adding black borders around the visible image. Using a smaller visible image can help increase rendering performance or serve special effects; just be aware that if you select a 40° visible image, most of the screen will be black: that is entirely intentional and not a bug.
Appendix D - Rendering Techniques
● Be mindful of the Rift screen's resolution, particularly with fine detail. Make sure text is large and clear enough to read, and avoid thin objects and ornate textures in places where users will focus their attention.

Display resolution
The DK2 Rift has a 1920 × 1080 low-persistence OLED display with a 75 Hz refresh rate. This represents a leap forward in many respects from the DK1, which featured a 1280 × 720, full-persistence, 60 Hz LCD display.
Understanding and Avoiding Display Flicker
The low-persistence OLED display of the DK2 has pros and cons. The same mechanisms that reduce motion blur (millisecond-scale cycles of lighting up and turning off illumination across the screen) are also associated with display flicker for more sensitive users.
…stereopsis might allow you to tell which of two objects on your desk is closer on the scale of millimeters. This becomes more difficult further out; if you look at two trees on the opposite side of a park, they might have to be meters apart before you can confidently tell which is closer or farther away. At even larger scales, you might have trouble telling which of two mountains in a mountain range is closer to you until the difference reaches kilometers.
Appendix E - Motion
● The most comfortable VR experiences involve no self-motion for the user besides head and body movements to look around the environment.
● When self-motion is required, slower movement speeds (walking/jogging pace) are most comfortable for new users.
● Keep any form of acceleration as short and infrequent as possible.
● User and camera movements should never be decoupled.
● Don't use head bobbing in first-person games.
…speed of forward movement,” acceleration can also refer to decreasing the speed of movement or stopping; rotating, turning, or tilting while stationary or moving; and moving (or ceasing to move) sideways or vertically. Instantaneous accelerations are more comfortable than gradual accelerations. Because any period of acceleration constitutes a period of conflict between the senses, discomfort will increase as a function of the frequency, size, and duration of acceleration.
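The preference for instantaneous over gradual accelerations is why discrete "snap" turning is often more comfortable than smooth joystick rotation: the visual-vestibular conflict lasts a single frame instead of a sustained interval. A hedged sketch (the function and the 30° step are our illustrative choices, not from this guide):

```python
def snap_turn(current_yaw_deg: float, direction: int, step_deg: float = 30.0) -> float:
    """Apply an instantaneous yaw change in one frame, with no easing.
    direction is +1 (turn right) or -1 (turn left). One discrete jump keeps
    the period of visually presented acceleration as short as possible."""
    return (current_yaw_deg + direction * step_deg) % 360.0
```

A smooth-turn alternative would interpolate yaw over many frames; per the text above, that prolongs the sensory conflict and tends to be less comfortable.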
In general, you should respect the dynamics of human motion. There are limits to how people can move in the real world, and you should take this into account in your designs. Moving up or down stairs (or steep slopes) can be discomforting for people. In addition to the unusual sensation of vertical acceleration, the pronounced horizontal edges of the steps fill the visual field of the display, all moving in the same direction.
Appendix F - Tracking
● The Rift sensors collect information about user yaw, pitch, and roll.
● The DK2 brings six-degree-of-freedom (6DoF) position tracking to the Rift.
  ○ Allow users to set the origin point based on a comfortable position for them, with guidance for initially positioning themselves.
  ○ Do not disable or modify position tracking, especially while the user is moving in the real world.
…at the pivot point of the user's head and neck when they are sitting up in a comfortable position in front of the camera. You should give users the ability to reset the head model's origin point based on where they are sitting and how their Rift is set up. Users may also shift or move during gameplay, and therefore should have the ability to reset the origin at any time.
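Resetting the origin amounts to storing the current raw head position as the new reference and reporting all subsequent positions relative to it. Real SDKs expose this directly as a recenter call; the standalone class below is only our sketch of the bookkeeping involved:

```python
class TrackingOrigin:
    """Minimal sketch: report head position relative to a user-set origin.
    Illustrative only; not an Oculus SDK interface."""

    def __init__(self):
        self.origin = (0.0, 0.0, 0.0)

    def recenter(self, raw_head_pos):
        # Let the user redefine "home" wherever they are currently sitting.
        self.origin = raw_head_pos

    def relative(self, raw_head_pos):
        """Head position in the recentered frame used by the renderer."""
        return tuple(p - o for p, o in zip(raw_head_pos, self.origin))
```

Binding `recenter` to an easily reachable button satisfies the "reset the origin at any time" guideline above.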
…the environment. On the other hand, users may be able to uncover technical shortcuts you might have taken in designing the environment that would normally be hidden without position tracking. Take care to ensure that art and assets do not break the user's sense of immersion in the virtual environment. A related issue is that the user can potentially use position tracking to clip through the virtual environment by leaning through a wall or object.
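A common mitigation for leaning through geometry is fading the view toward black as the tracked head approaches a collider, so the camera never shows the inside of a wall. A sketch of the fade curve; the thresholds are illustrative assumptions, not values from this guide:

```python
def clip_fade_alpha(head_to_wall_m: float,
                    start_m: float = 0.25,
                    black_m: float = 0.05) -> float:
    """Screen-fade opacity (0 = clear, 1 = fully black) as a function of the
    head's distance to the nearest collider. Fading begins at start_m and
    reaches full black at black_m, before the near plane can clip geometry."""
    if head_to_wall_m >= start_m:
        return 0.0
    if head_to_wall_m <= black_m:
        return 1.0
    return (start_m - head_to_wall_m) / (start_m - black_m)
```

The linear ramp gives the user continuous feedback that they are leaning somewhere unintended, instead of an abrupt cut.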
At Oculus, we believe the threshold for compelling VR to be at or below 20 ms of latency. Above this range, users tend to feel less immersed and comfortable in the environment. When latency exceeds 60 ms, the disjunction between one's head motions and the motions of the virtual world starts to feel out of sync, causing discomfort and disorientation; large latencies are believed to be one of the primary causes of simulator sickness.
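The 20 ms budget above can be checked against a rough motion-to-photon estimate. The decomposition into tracking, rendering, and display components below is a simplification of ours, and all numbers in the usage note are illustrative:

```python
MAX_COMFORT_LATENCY_MS = 20.0  # threshold stated above

def motion_to_photon_ms(tracking_ms: float,
                        frame_ms: float,
                        display_ms: float) -> float:
    """Rough estimate of total latency from a head movement to the
    corresponding photons leaving the display: sensor/tracking delay,
    plus one frame of rendering, plus display scan-out delay."""
    return tracking_ms + frame_ms + display_ms
```

For example, at 75 Hz a full frame costs about 13.3 ms, so only roughly 7 ms remain for tracking and scan-out before the budget is blown; techniques like late-frame pose prediction exist precisely to claw that time back.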
Appendix G - Simulator Sickness
● “Simulator sickness” refers to symptoms of discomfort that arise from using simulated environments. Conflicts between the visual and bodily senses are to blame.
Simulator sickness comprises a constellation of symptoms, but is primarily characterized by disorientation (including ataxia, a sense of disrupted balance), nausea (believed to stem from vection, the illusory perception of self-motion), and oculomotor discomfort (e.g., eyestrain). These are reflected in the subscales of the simulator sickness questionnaire (SSQ),11 which researchers have used to assess symptomatology in users of virtual environments.
…extended, gradual acceleration to the same movement velocity. Discomfort will increase as a function of the frequency, size, and duration of acceleration. Because any period of visually presented acceleration represents a period of conflict between the senses, it is best to avoid them as much as possible. (Note that the vestibular organs do not respond to constant velocity, so constant visual motion represents a smaller conflict for the senses.
Binocular Display
Although binocular disparity is one of the Rift's key and compelling depth cues, it is not without its costs. As described in Appendix C, stereoscopic images can force the eyes to converge on one point in depth while the lens of the eye accommodates (focuses itself) to another.
…also confer a similar benefit for the same reasons. Note also that the smaller the user's view of their environment, the more they will have to move their head or virtual cameras to maintain situational awareness, which can also increase discomfort.
…uncomfortable process that leads to further discomfort when the user adjusts back to the real world outside of the Rift. The experience is similar to getting on and off a cruise ship.
Experience
The more experience a user has had with a virtual environment, the less likely they are to experience simulator sickness.24 Theories for this effect involve learned (sometimes unconscious) mechanisms that allow the user to better handle the novel experience of VR. For example, the brain learns to reinterpret visual anomalies that previously induced discomfort, and user movements become more stable and efficient to reduce vection.
…brain to form an interpretation in which the visual and vestibular senses are consistent: the user is indeed stationary with the background environment, but the foreground environment is moving around the user. Our particular implementation has used a player-locked skybox that is rendered at a distance farther away than the main environment which the player navigates.
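The player-locked skybox amounts to a background whose transform follows the camera's translation while the foreground environment moves independently, so the distant world never appears to stream past the user. A minimal sketch of that transform rule (function and parameter names are our own):

```python
def skybox_position(player_world_pos, skybox_offset=(0.0, 0.0, 0.0)):
    """Lock the skybox to the player: its center translates with the camera
    (plus a fixed offset), so vehicle or avatar motion through the foreground
    environment produces no vection from the distant background."""
    return tuple(p + o for p, o in zip(player_world_pos, skybox_offset))
```

Each frame, the skybox is re-centered on the player before rendering; only the foreground geometry exhibits relative motion.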
…move them through a space. Although this method can be effective at reducing simulator sickness, users can lose their bearings and become disoriented.27 Some variants attempt to reduce the amount of vection the user experiences through manipulations of the camera. An alternative take on the “teleportation” model pulls the user out of first-person view into a “god mode” view of the environment with the player's avatar inside it.
Appendix H - User Interface
● Heads-Up Display (HUD)
  ○ Foregoing the HUD and integrating information into the environment would be ideal.
  ○ Paint reticles directly onto targets rather than a fixed depth plane.
● Close-up weapons and tools can lead to eyestrain; make them a part of the avatar that drops out of view when not in use.
Figure 3: Example of a very busy HUD rendered as though it appears on the inside of a helmet visor.
Instead, consider building informational devices into the environment itself. Remember that users can move their heads to glean information in a natural and intuitive way that might not work in traditional video games.
Figure 4: A user avatar, seen at the bottom of the screen.
An avatar can have its pros and cons. On the one hand, an avatar can give the user a strong sense of scale and of their body's volume in the virtual world. On the other hand, presenting a realistic avatar body that contradicts the user's proprioception (e.g., a walking body while they are seated) can feel peculiar.
There are some possible “cheats” to rendering weapons and tools in the player's view, and although we do not endorse them, your content might require or be suited to some variation on them. One possibility is to render weapons in 2D, behind your HUD if you have one. This takes care of some of the convergence and fusion problems at the expense of making the weapon look flat and artificial. Another possible approach is to employ multi-rigging, so that close-up objects (e.g.
Appendix I - User Input and Navigation
● No traditional input method is ideal for VR, but gamepads are currently our best option; innovation and research are necessary (and ongoing at Oculus).
● Users can't see their input devices while in the Rift; let them use a familiar controller that they can operate without sight.
● Leverage the Rift's sensors for control input (e.g.
…for instance, if a tooltip appears peripherally outside a menu that is navigated by ray-casting. User testing is ultimately necessary to see whether ray-casting fits your content. The Rift sensors use information on orientation, acceleration, and position primarily to orient and control the virtual camera, but these readings can all be leveraged for unique control schemes, such as gaze- and head-/torso-controlled movement.
…user's head and the direction of locomotion can become misaligned—a user who wants to move straight forward in the direction they are looking may actually be moving at a diagonal heading just because their head and body are turned in their chair. Anyone using this method for navigation should therefore include an easy way for users to reset the heading of the “tank” to match the user's direction of gaze, such as clicking in an analog stick or pressing a button.
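The heading-reset described above is, mechanically, just snapping the locomotion ("tank") yaw to the current head yaw when the user presses the bound input. A sketch, with names of our own invention:

```python
def reset_tank_heading(body_yaw_deg: float,
                       head_yaw_deg: float,
                       pressed: bool) -> float:
    """On the recenter input, align the locomotion heading with gaze so that
    'forward' on the stick matches where the user is currently looking;
    otherwise the tank heading is left unchanged."""
    return (head_yaw_deg if pressed else body_yaw_deg) % 360.0
```

After the reset, stick-forward and gaze direction coincide until the user's head and body drift apart again.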
Appendix J - Content Creation
● Keep in mind that users can and should be able to look in any direction at any time; doing so should not break immersion.
● Beware of limitations in pixel density when creating detailed art assets.
● Low-polygon “cheats” (like bump mapping or flat objects) can become glaringly obvious in stereoscopic 3D, particularly up close.
● Sound is critical to immersion; design soundscapes carefully and consider the output devices users will use.
Most real-time 3D applications, like games, use a number of techniques that allow them to render complex scenes at acceptable frame rates. Some effects that effectively accomplish that goal look obviously fake in stereoscopic 3D. Billboard sprites can look very obviously flat, especially when viewed up close or when they have sharp detail on them (e.g., lightning, fire). Try to use billboards only for hazy objects, such as smoke or mist, or for distant background elements.
…size of objects in relation to their own body, and it's easy to tell when any object (or the entire world) is set to the wrong scale. For most games, you'll want to make sure to get everything scaled correctly. The Oculus Rift software, which handles inter-camera distance and field of view, expects everything to be measured in meters, so you'll want to use the meter as your reference unit. As mentioned elsewhere, 1 unit of distance in Unity is roughly equal to 1 meter.
…scale up the user's head inside the virtual environment. Just be aware that recommendations about object distance above must then be scaled accordingly; if you double the size of the user's head, you must ensure that objects on which the user will focus are 1.5 to 7 meters away. It is important to make sure you have properly scaled all three vectors in the user's head model.
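Scaling the head model and the distance guidelines by the same factor can be expressed directly; the function below is our sketch, using the 0.75-3.5 m comfort range stated earlier in this guide:

```python
def scaled_comfort_range(scale: float,
                         near_m: float = 0.75,
                         far_m: float = 3.5):
    """If the user's virtual head (ICD and the full head model) is scaled by
    `scale`, every distance guideline must be scaled by the same factor to
    keep the rendered world looking correctly proportioned."""
    return (near_m * scale, far_m * scale)
```

With a doubled head scale this yields the 1.5-7 m focus range mentioned above; forgetting to scale one of the two quantities makes the whole world read as miniature or gigantic.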
…distinct cue of how high your eyes are off the ground. For further discussion of this and related issues, we refer you to Tom Forsyth's GDC 2014 talk, available online.31
31 http://www.gdcvault.
Appendix K - Closing thoughts on effective VR (for now)
● With the Rift, you are taking unprecedented control over the user's visual reality; this presents an unprecedented challenge to developers.
The question of “What makes for effective virtual reality?” is a broad and contextual one, and we could fill tomes with its many answers. Virtual reality is still a largely uncharted medium, waiting for creative artists and developers to unlock its full potential.
Appendix L - Health and Safety Warnings
* These health & safety warnings are periodically updated for accuracy and completeness. Check oculus.com/warnings for the latest version.
HEALTH & SAFETY WARNINGS: Please ensure that all users of the headset read the warnings below carefully before using the headset, to reduce the risk of personal injury, discomfort, or property damage.
…symptoms described below, and should limit the time children spend using the Headset and ensure they take breaks during use. Prolonged use should be avoided, as this could negatively impact hand-eye coordination, balance, and multi-tasking ability. Adults should monitor children closely during and after use of the headset for any decrease in these abilities.
…increase your susceptibility to adverse symptoms.
● Do not use the headset while in a moving vehicle such as a car, bus, or train, as this can increase your susceptibility to adverse symptoms.
● Take at least a 10- to 15-minute break every 30 minutes, even if you don't think you need it. Each person is different, so take more frequent and longer breaks if you feel discomfort. You should decide what works best.
…use symptoms can include the symptoms above, as well as excessive drowsiness and decreased ability to multi-task. These symptoms may put you at an increased risk of injury when engaging in normal activities in the real world.
● Do not drive, operate machinery, or engage in other visually or physically demanding activities that have potentially serious consequences (i.e.
…implanted medical device, do not use the headset without first consulting your doctor or the manufacturer of your medical device.

Electrical Shock: To reduce the risk of electric shock:
● Do not modify or disassemble any of the components provided.
● Do not use the product if any cable is damaged or any wires are exposed.
If a power adapter is provided:
● Do not expose the power adapter to water or moisture.