
VR Series - Oculus Best Practices: 6. Tracking

2017-03-02 20:40
The FOV of the virtual cameras must match the visible display area. In general, Oculus recommends not changing the default FOV.

The Rift sensors collect information about the user's yaw, pitch, and roll.

DK2 brings 6-D.O.F. position tracking to the Rift.

Allow users to set the origin point based on a position that is comfortable for them, with guidance for initially positioning themselves.

Do not disable or modify position tracking, especially while the user is moving in the real world.

Warn users if they are about to leave the camera's tracking volume; fade the screen to black before tracking is lost.

With position tracking, the user can position the virtual camera virtually anywhere; make sure they cannot see any technical shortcuts or clip through the environment.

Implement the "head model" code available in our SDK demos whenever position tracking is unavailable.

Optimize your entire engine pipeline to minimize lag and latency.

Implement Oculus VR's predictive tracking code (available in the SDK demos) to further reduce latency.

If latency is truly unavoidable, variable lag is worse than a consistent lag.

Orientation Tracking

The Oculus Rift headset contains a gyroscope, accelerometer, and magnetometer. We combine the information from these sensors through a process known as sensor fusion to determine the orientation of the user's head in the real world and to synchronize the user's virtual perspective in real time. These sensors provide data to accurately track and portray yaw, pitch, and roll movements.
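Oculus's sensor fusion is proprietary and lives inside the SDK, so applications never implement it themselves; still, the basic idea can be sketched with a minimal complementary filter: integrate the gyroscope for fast orientation updates and use the accelerometer's gravity estimate to slowly correct pitch and roll drift. Everything in the sketch below (types, axis conventions, the blend factor) is an illustrative assumption, not SDK code.

```cpp
#include <cmath>

// Minimal illustration of complementary-filter sensor fusion. All types,
// constants, and axis conventions here are hypothetical; the Rift SDK performs
// its own (proprietary) fusion and simply reports the fused orientation.
struct Vec3 { float x, y, z; };

struct Orientation {          // Euler angles in radians
    float yaw, pitch, roll;
};

Orientation fuse(Orientation prev, Vec3 gyroRate, Vec3 accel, float dt)
{
    // 1. Integrate gyroscope rates: fast and smooth, but drifts over time.
    Orientation o = prev;
    o.yaw   += gyroRate.y * dt;
    o.pitch += gyroRate.x * dt;
    o.roll  += gyroRate.z * dt;

    // 2. Estimate pitch/roll from the gravity vector: noisy, but drift-free
    //    (assumes a Y-up sensor frame; real devices differ in axis conventions).
    float accelPitch = std::atan2(accel.z, accel.y);
    float accelRoll  = std::atan2(-accel.x, accel.y);

    // 3. Blend: trust the gyro short-term and the accelerometer long-term.
    const float alpha = 0.98f;
    o.pitch = alpha * o.pitch + (1.0f - alpha) * accelPitch;
    o.roll  = alpha * o.roll  + (1.0f - alpha) * accelRoll;
    // Yaw drift is corrected by the magnetometer on the real device (omitted here).
    return o;
}
```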

We have found a very simple model of the user's head and neck to be useful in accurately translating sensor information from head movements into camera movements. We refer to this in short as the head model, and it reflects the fact that movement of the head in any of the three directions actually pivots around a point roughly at the base of your neck, near your voice box. This means that rotation of the head also produces a translation at your eyes, creating motion parallax, a powerful cue for both depth perception and comfort.
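As a rough sketch of why this matters: if the eyes sit at a fixed offset above and in front of the neck pivot, any rotation of the head moves the eye point, and the virtual camera should be translated to match. The offsets and helper functions below are placeholders chosen for illustration, not values or APIs from the SDK.

```cpp
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };                 // unit quaternion

// Rotate vector v by unit quaternion q (v' = q v q^-1), standard two-cross form.
Vec3 rotate(const Quat& q, const Vec3& v)
{
    Vec3 t = { 2.0f * (q.y * v.z - q.z * v.y),
               2.0f * (q.z * v.x - q.x * v.z),
               2.0f * (q.x * v.y - q.y * v.x) };
    return { v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
             v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
             v.z + q.w * t.z + (q.x * t.y - q.y * t.x) };
}

// Hypothetical head model: the eyes sit a few centimetres above and in front
// of a pivot near the base of the neck (placeholder values, not SDK constants).
Vec3 eyeCenterPosition(const Vec3& neckPivot, const Quat& headOrientation)
{
    const Vec3 neckToEye = { 0.0f, 0.08f, -0.08f };   // up, then forward (-Z)
    Vec3 offset = rotate(headOrientation, neckToEye);
    return { neckPivot.x + offset.x, neckPivot.y + offset.y, neckPivot.z + offset.z };
}
```

Because the offset is rotated about the neck pivot rather than about the eyes themselves, a pure head rotation still shifts the eye position slightly, which is exactly the parallax described above.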

Position Tracking

Development Kit 2 introduces 6-degree-of-freedom position tracking to the Rift. Underneath the DK2's IR-translucent outer casing is an array of infrared micro-LEDs, which are tracked in real space by the included infrared camera. Position tracking should always correspond 1:1 with the user's movements as long as they are inside the tracking camera's volume. Augmenting the response of position tracking to the player's movements can be discomforting.
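A small, hypothetical illustration of the 1:1 rule (the names are placeholders): the tracked head offset should be added to the player's position unmodified.

```cpp
struct Vec3 { float x, y, z; };

// Apply position tracking 1:1. Scaling the tracked offset (e.g. by 1.5f) to
// "amplify" head movement is exactly the kind of augmentation that tends to
// cause discomfort.
Vec3 cameraPosition(const Vec3& playerOrigin, const Vec3& trackedHeadOffset)
{
    return { playerOrigin.x + trackedHeadOffset.x,
             playerOrigin.y + trackedHeadOffset.y,
             playerOrigin.z + trackedHeadOffset.z };   // no gain, no smoothing
}
```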

The SDK reports a rough model of the user's head in space based on a set of points and vectors. The model is defined around an origin point, which should be centered approximately at the pivot point of the user's head and neck when they are sitting up in a comfortable position in front of the camera.

You should give users the ability to reset the head model's origin point based on where they are sitting and how their Rift is set up. Users may also shift or move during gameplay, and therefore should have the ability to reset the origin at any time. However, your content should also provide users with some means of guidance to help them position themselves in front of the camera so they can move freely during your experience without leaving the tracking volume. Otherwise, a user might unknowingly set the origin to a point on the edge of the camera's tracking range, causing them to lose position tracking when they move. This can take the form of a set-up or calibration utility separate from gameplay.
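Whether you rely on an SDK recenter call or provide your own, the underlying idea can be sketched as capturing the current tracked pose as a reference and reporting later poses relative to it. The types below are simplified placeholders; only yaw is recentered so that "down" always matches the real world.

```cpp
struct Vec3 { float x{}, y{}, z{}; };
struct Pose { Vec3 position; float yaw{}; };   // simplified: position + heading only

// Capture the current raw pose as a reference and report subsequent poses
// relative to it. Pitch and roll are deliberately left untouched so that
// gravity in the virtual world still matches gravity in the real one.
struct TrackingRecenter {
    Pose reference;

    void recenter(const Pose& currentRaw) { reference = currentRaw; }

    Pose apply(const Pose& currentRaw) const {
        Pose p;
        p.position = { currentRaw.position.x - reference.position.x,
                       currentRaw.position.y - reference.position.y,
                       currentRaw.position.z - reference.position.z };
        p.yaw = currentRaw.yaw - reference.yaw;
        return p;
    }
};
```

Binding `recenter` to a button or menu item gives users the "reset at any time" behavior recommended above, and a separate calibration screen can call the same function.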

The head model is primarily composed of three vectors. One vector roughly maps onto the user's neck; it begins at the origin of the position-tracking space and points to the "center eye," a point roughly at the user's nose bridge. Two vectors originate from the center eye, one pointing to the pupil of the left eye and the other to the right. More detailed documentation on user position data can be found in the SDK.
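The composition described above might be sketched as follows; the structure and names are illustrative, and real applications should read the per-eye poses the SDK reports rather than hard-coding vectors.

```cpp
struct Vec3 { float x, y, z; };

Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

struct HeadModel {
    Vec3 origin;         // pivot of head and neck (position-tracking origin)
    Vec3 neckToCenter;   // vector 1: origin -> "center eye" near the nose bridge
    Vec3 centerToLeft;   // vector 2: center eye -> left pupil
    Vec3 centerToRight;  // vector 3: center eye -> right pupil
};

// Eye positions are simply the origin plus the neck vector plus the per-eye
// vector. (In practice all three vectors are rotated by the tracked head
// orientation before being summed.)
Vec3 leftEye (const HeadModel& h) { return add(add(h.origin, h.neckToCenter), h.centerToLeft); }
Vec3 rightEye(const HeadModel& h) { return add(add(h.origin, h.neckToCenter), h.centerToRight); }
```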

Position tracking opens new possibilities for more comfortable, immersive experiences and gameplay elements. Players can lean in to examine a cockpit console, peer around corners with a subtle shift of the body, dodge projectiles by ducking out of their way, and much more.

Although position tracking holds a great deal of potential, it also introduces new challenges. First, users can leave the viewing area of the tracking camera and lose position tracking, which can be a very jarring experience. (Orientation tracking functions both inside and outside the camera's tracking range, based on the proprietary IMU technology carried over from DK1 to complement the new camera-based orientation and position tracking.) To maintain a consistent, uninterrupted experience, you should warn users as they begin to approach the edges of the camera's tracking volume, before position tracking is lost. They should also receive some form of feedback that helps them better position themselves in front of the camera for tracking.

We recommend fading the scene to black before tracking is lost, which is a much less disorienting and discomforting sight than seeing the environment without position tracking while moving. When position tracking is lost, the SDK defaults to using orientation tracking and the head model. Although this merely simulates the experience of using the DK1, moving with the expectation of position tracking and not having the rendered scene respond accordingly can be discomforting.
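One way to implement the warning and the fade is to measure how close the tracked head position is to the boundary of the tracking volume and drive a full-screen fade from that distance. The bounds and margin below are made-up numbers, and the volume is approximated as a box rather than the camera's actual frustum.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Approximate the camera's tracking volume as an axis-aligned box in tracking
// space (placeholder bounds; the real volume is a camera frustum).
struct TrackingBox { Vec3 min{-0.6f, -0.4f, 0.4f}; Vec3 max{0.6f, 0.4f, 2.5f}; };

// Returns 1.0 while the head is comfortably inside the volume and falls toward
// 0.0 as it nears an edge; use it directly as a scene brightness multiplier.
float trackingFade(const Vec3& head, const TrackingBox& box, float warnMargin = 0.15f)
{
    float dx = std::min(head.x - box.min.x, box.max.x - head.x);
    float dy = std::min(head.y - box.min.y, box.max.y - head.y);
    float dz = std::min(head.z - box.min.z, box.max.z - head.z);
    float edgeDistance = std::min({dx, dy, dz});          // metres to nearest face
    return std::clamp(edgeDistance / warnMargin, 0.0f, 1.0f);
}
```

Pairing the fade with an on-screen hint that points the user back toward the camera gives them the feedback they need to recover tracking, rather than just a darkening scene.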

The second challenge introduced by position tracking is that users can now move the virtual camera into unusual positions that might previously have been impossible. For instance, users can move the camera to look under objects or around barriers to see parts of the environment that would be hidden from them in a conventional video game. On the one hand, this opens up new methods of interaction, like physically moving to peer around cover or examine objects in the environment. On the other hand, users may be able to uncover technical shortcuts you might have taken in designing the environment that would normally be hidden without position tracking. Take care to ensure that art and assets do not break the user's sense of immersion in the virtual environment.

A related issue is that the user can potentially use position tracking to clip through the virtual environment, for example by leaning through a wall or an object. One approach is to design your environment so that it is impossible for the user to clip through an object while still inside the camera's tracking volume; following the recommendations above, the scene would then fade to black before the user could clip through anything. Similar to preventing users from approaching objects closer than the optical comfort zone of 0.75-3.5 meters, however, this can make the viewer feel distanced from everything, as if surrounded by an invisible barrier. Experimentation and testing will be necessary to find an ideal solution that balances usability and comfort.
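For the clipping case, a similar hedged approach is to drive the same fade from the camera's distance to the nearest collision surface, so the screen is already dark by the time the near plane would intersect geometry. The distance value is assumed to come from whatever collision or physics query your engine provides.

```cpp
#include <algorithm>

// distanceToSurface: distance in metres from the camera position to the nearest
// solid surface, as reported by your collision or physics system (negative
// meaning the camera is already inside geometry). Returns a brightness
// multiplier: 1.0 fully visible, 0.0 fully black.
float clipFade(float distanceToSurface, float fadeStart = 0.25f)
{
    return std::clamp(distanceToSurface / fadeStart, 0.0f, 1.0f);
}
```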

Although we encourage developers to explore innovative new solutions to these challenges of position tracking, we discourage any method that takes position tracking away from the user, or otherwise changes its behavior, while the virtual environment is in view. Seeing the virtual environment stop responding (or respond differently) to position tracking, particularly while moving in the real world, can be discomforting to the user. Any method for combating these issues should provide the user with adequate feedback about what is happening and how to resume normal interaction.

Latency

We define latency as the total time between movement of the user's head and the updated image being displayed on the screen ("motion-to-photon"), and it includes the times for sensor response, fusion, rendering, image transmission, and display response.
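To make the definition concrete, here is a back-of-the-envelope motion-to-photon budget with purely illustrative stage times (not measurements of any particular setup); the point is simply that every stage contributes, so savings anywhere in the pipeline count.

```cpp
#include <cstdio>

int main()
{
    // Illustrative stage times in milliseconds (placeholders, not measurements).
    const float sensorResponse    = 1.0f;
    const float sensorFusion      = 0.5f;
    const float rendering         = 8.0f;   // CPU + GPU work for one frame
    const float imageTransmission = 2.0f;
    const float displayResponse   = 4.0f;   // pixel switching and scan-out

    const float motionToPhoton = sensorResponse + sensorFusion + rendering
                               + imageTransmission + displayResponse;
    std::printf("motion-to-photon: %.1f ms\n", motionToPhoton);   // 15.5 ms here
    return 0;
}
```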

Minimizing latency is crucial to immersive and comfortable VR, and low-latency head tracking is part of what sets the Rift apart from other technologies. The more you can minimize motion-to-photon latency in your game, the more immersive and comfortable the experience will be for the user.

One approach to combating the effects of latency is our predictive tracking technology. Although it does not actually shorten the motion-to-photon pipeline, it uses information currently in the pipeline to predict where the user will be looking in the future. This compensates for the delay between reading the sensors and rendering to the screen: instead of drawing the environment where the user was looking at the time of the sensor reading, it anticipates where the user will be looking at the time of rendering and draws that part of the environment instead. We encourage developers to implement the predictive tracking code provided in the SDK. For details on how this works, see Steve LaValle's blog post "The Latent Power of Prediction" as well as the relevant SDK documentation.
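The SDK's predictive tracking is more sophisticated than this, but the core idea can be sketched as extrapolating the current orientation forward by the expected remaining pipeline time using the measured angular velocity. The quaternion math below is standard; treating the prediction interval as a single fixed number is a simplifying assumption.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // unit quaternion

// Hamilton quaternion product a * b.
Quat mul(const Quat& a, const Quat& b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Extrapolate the current orientation forward by dtSeconds using the measured
// angular velocity (radians/second, world frame).
Quat predictOrientation(const Quat& current, const Vec3& angularVelocity, float dtSeconds)
{
    float speed = std::sqrt(angularVelocity.x * angularVelocity.x +
                            angularVelocity.y * angularVelocity.y +
                            angularVelocity.z * angularVelocity.z);
    float angle = speed * dtSeconds;
    if (angle < 1e-6f) return current;                    // effectively no rotation

    Vec3 axis = { angularVelocity.x / speed,
                  angularVelocity.y / speed,
                  angularVelocity.z / speed };
    float s = std::sin(angle * 0.5f);
    Quat delta = { std::cos(angle * 0.5f), axis.x * s, axis.y * s, axis.z * s };
    return mul(delta, current);                           // predicted orientation
}
```

Here dtSeconds would be the time expected to elapse between the sensor sample and the moment the frame actually reaches the display.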

At Oculus we believe the threshold for compelling VR to be at or below 20 ms of latency. Above this range, users tend to feel less immersed and less comfortable in the environment. When latency exceeds 60 ms, the disjunction between head motion and the motion of the virtual world starts to feel out of sync, causing discomfort and disorientation; large latencies are believed to be one of the primary causes of simulator sickness.[1] Independent of comfort issues, latency can be disruptive to user interactions and presence. Obviously, in an ideal world, the closer we are to 0 ms the better. If latency is unavoidable, the more variable it is, the more uncomfortable it will be. You should therefore aim for the lowest and least variable latency possible.

[1] Kolasinski, E.M. (1995). Simulator sickness in virtual environments (ARTI-TR-1027). Alexandria, VA: Army Research Institute for the Behavioral and Social Sciences. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA295861
