
Extending the example to broader use cases, not all data or information must
conform to this extreme limit. In fact, various levels of processing and latency
requirements may exist within the same ecosystem.
For example, the closed loop processing requiring 20 ms ultra-low latency, or
ULL, as discussed for AR/VR and other interactive devices, may need to be
maintained between the headset and the end user system, or alternatively may
be offloaded through acceleration within the headset.
Other services, however, may still need real time service but at slightly higher
latency. For example, even if tracking latency for a headset is handled through
offload, the need for a constantly updated graphics stream, in addition to other
services, would likely require a low latency (LL) path, greater than 20
milliseconds but likely less than 50 milliseconds, between local devices.
In addition, the end user system may have access to edge services, such as
machine learning algorithms for object recognition or other services, that
support tasks at the end user but are offloaded from the end user system.
These services may be required to support an intermediate or mid-range
latency (ML), possibly greater than 50 milliseconds but still less than 200 ms,
given that they may relate to specific user interactions yet may be allowed to
update in the background.
Finally, the user may have access to cloud services that are completely
offloaded from the end user system and may tolerate higher latencies (HL) of
greater than 200ms.
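
As a rough illustration of the hierarchy above, the following Python sketch maps
a round-trip latency budget to the four tiers just described. The names
(LatencyTier, classify_round_trip) are hypothetical, and the boundary handling
(treating exactly 20 ms as ULL, 50 ms as LL, and so on) is an assumption rather
than anything specified here.

    from enum import Enum

    class LatencyTier(Enum):
        """Latency tiers described above (round-trip budgets in milliseconds)."""
        ULL = "ultra-low latency"   # closed-loop tracking, roughly 20 ms or less
        LL = "low latency"          # local real-time streams, >20 ms to 50 ms
        ML = "mid-range latency"    # edge services, >50 ms to 200 ms
        HL = "high latency"         # cloud services, >200 ms

    def classify_round_trip(latency_ms: float) -> LatencyTier:
        """Map a round-trip latency budget to one of the four tiers."""
        if latency_ms <= 20:
            return LatencyTier.ULL
        if latency_ms <= 50:
            return LatencyTier.LL
        if latency_ms <= 200:
            return LatencyTier.ML
        return LatencyTier.HL

    # Example: an edge-hosted object recognition request budgeted at 120 ms
    # falls into the mid-range tier.
    assert classify_round_trip(120) is LatencyTier.ML
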
Backchannel requirements also fit this model, given the amount of video
capture, motion-tracking, context, and control information that must be
backchanneled to the various levels of compute to meet the latency requirements
at each level of processing. For example, the closed loop ultra-low latency
tracking discussed above can be handled locally within the headset through
offload, or between the end user system and the head mounted device.
Other control and video capture information that is streamed in real time back
to the end user system may withstand slightly higher latency in support of the
20 to 50 millisecond low latency round trip responses. Detailed mapping,
including depth mapping or upload of video capture in support of edge services
such as detailed SLAM maps or video archiving, may be supported at mid-range
latencies to the edge. Finally, larger datasets uploaded in support of cloud
services may not be sensitive to latency and can be supported by higher round
trip latencies greater than 200 milliseconds.
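
The same tiers can be sketched from the backchannel side. In the illustration
below, the traffic categories and the meets_budget helper are hypothetical
examples; only the round-trip budgets follow the figures above.

    # Illustrative mapping of backchannel traffic to the latency tiers in the text.
    # Keys and groupings are examples, not normative requirements.
    BACKCHANNEL_BUDGET_MS = {
        "headset_motion_tracking": ("ULL", 20),           # closed loop; local offload or headset link
        "control_and_video_capture_stream": ("LL", 50),   # supports 20-50 ms round trips
        "depth_map_and_slam_uploads": ("ML", 200),        # edge services, 50-200 ms
        "cloud_dataset_uploads": ("HL", float("inf")),    # latency tolerant, >200 ms acceptable
    }

    def meets_budget(traffic_type: str, measured_rtt_ms: float) -> bool:
        """Check whether a measured round trip fits the tier budget for a traffic type."""
        _tier, budget_ms = BACKCHANNEL_BUDGET_MS[traffic_type]
        return measured_rtt_ms <= budget_ms

    # Example: a 150 ms round trip is acceptable for SLAM map uploads to the edge.
    assert meets_budget("depth_map_and_slam_uploads", 150)
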
The hierarchy described in the examples above is represented below as a
multi-prong latency model.