Strasbourg | July 8–10, 2019
Author: Greg Gutmann
Affiliation: Tokyo Institute of Technology
Predictive Simulation: Negating Latency for Networked Interactive VR
Extended Abstract for HPG:
Current VR systems are typically limited by performance, usability (cost, size), or both. By using a networked environment, we aim to alleviate both limitations by offloading computation to a server, enabling the use of lightweight clients. While this helps with the two common limitations of VR, it introduces a new problem: increased latency. Interactive networked virtual environments such as games and simulations have existed since nearly the birth of the internet and have consistently faced latency issues. We propose a solution for negating the effects of latency in networked interactive virtual environments with lightweight clients, with respect to the server being used. Our method extrapolates future client states and incorporates them into the server's updates, which helps synchronize actions on the client side with the results from the server when there is latency. We call this approach predictive simulation. In this work we begin to examine the strength of our proposed predictive simulation method using regression as the predictor; in practice, however, any prediction method could be used.
We have been developing a coarse-grained VR molecular simulation environment for use in developing molecular robots and artificial muscles. So far we have worked with microtubule gliding assays and molecules such as tubulin and DNA. Our motivation for the work covered in this poster is to maintain, and where possible scale up, our interactive VR simulation while also making it more accessible. Using a client–server approach, we offload the simulation to a server, increasing the computing power available to the simulation and reducing the performance demand on the VR client.
Past works have used various methods, including client-side prediction, interpolation and local perception filters, lag compensation (time warp), and dead reckoning. Most of these involve client-side simulation or interpolation of object updates from the server, which creates a sizable computational or memory demand when working with a large-scale dynamic environment. Instead of taking measures locally to negate latency, our approach utilizes the server to negate the effects of latency.
Our method keeps track of user motion and uses a predictor to extrapolate the user's future position for use in a simulation step. How far into the future we predict is determined by the user's round-trip latency, so that by the time the simulation results are presented to the user, they are in sync with the point in time to which the user's motion was extrapolated.
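The idea above can be sketched as follows. This is a minimal illustration, not the poster's implementation: the function name, the 1-D position data, and the use of NumPy's least-squares polynomial fit are assumptions; the poster only specifies a 2nd-order polynomial regression over recent motion, extrapolated by the round-trip latency.

```python
import numpy as np

def predict_position(timestamps, positions, rtt):
    """Fit a 2nd-order polynomial to recent position samples and
    extrapolate to the time the simulation result reaches the user.

    timestamps, positions: recent 1-D motion samples (illustrative).
    rtt: measured round-trip latency in seconds.
    """
    coeffs = np.polyfit(timestamps, positions, deg=2)  # quadratic fit
    t_future = timestamps[-1] + rtt                    # look ahead by the RTT
    return np.polyval(coeffs, t_future)

# Example: a user accelerating uniformly (x = t^2), 100 ms round trip
t = np.array([0.0, 0.1, 0.2, 0.3])
x = t ** 2
predicted = predict_position(t, x, rtt=0.1)  # extrapolated to t = 0.4
```

Because the fit is quadratic, constant-acceleration motion is extrapolated exactly; the poster's results concern how well this holds up when the user's velocity changes irregularly.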
Our results have shown that, even when the user regularly changes velocity, our predictive simulation method using 2nd-order polynomial regression can achieve about 3x lower error rates and 2x lower standard deviations than having no predictor, while utilizing only 100 to 300 ms of time-course data, enough to cope with fluctuating network conditions. Most importantly for us, the client does not need to do any extra work to negate latency. Qualitatively, when looking at the visual results of our method, the user's actions and the simulation results match up. This work has enabled us to interact with large-scale simulations using a multi-GPU server and a gaming PC as a client. Our near-future goals for this method are interacting with moderate-scale systems using a gaming PC as the server and a phone as the client, and testing in WAN environments. Server-side rendering remains future work, as compression rates suffer when viewing hundreds of thousands of fast-moving particles, and VR HMD resolution is constantly increasing.
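Keeping only 100 to 300 ms of time-course data amounts to a sliding window over the motion samples. The sketch below shows one plausible way to maintain such a window; the class name, the default window length, and the `(timestamp, position)` sample layout are assumptions for illustration, not details from the poster.

```python
from collections import deque

class MotionWindow:
    """Retain only recent motion samples for the regression predictor.

    window_s is the window length in seconds; the poster reports using
    0.1 to 0.3 s of data, so 0.2 s is an assumed middle value.
    """
    def __init__(self, window_s=0.2):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, position) pairs, oldest first

    def add(self, t, pos):
        self.samples.append((t, pos))
        # Evict samples older than the window relative to the newest one
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()

# Samples arriving every 100 ms: only those within 0.2 s survive
win = MotionWindow(window_s=0.2)
for i in range(5):
    win.add(i * 0.1, float(i))
```

A short window like this lets the fit adapt quickly when network conditions or user motion change, at the cost of fitting fewer points.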
A continuation of this work will be submitted for publication.
Copyright © 2019 by Gregory Gutmann