The VR industry is evolving rapidly, and it now seems only a matter of time before VR hardware is mass-produced. In its early stages, VR will appear mainly as high-end gaming equipment, but its range of applications will certainly expand quickly. Before VR becomes known to everyone, let's slow down and discuss the problems it still faces.

First of all, we need a clear definition of latency: it is the time the system takes to turn an actual head movement into the corresponding image on the headset's screen. These two events must occur close enough together that, just as in the real world, you cannot perceive the gap between them; if the latency is too long or too variable, the immersive experience feels unnatural, and the brain's defensive mechanisms kick in to make you feel sick or dizzy, which is not a pleasant sensation. Industry research indicates that motion-to-photon latency must stay below 20 milliseconds (ms), or the system cannot create a smooth, natural VR experience. At the standard 60 Hz refresh rate, one refresh interval is about 16.7 ms, so the entire pipeline must fit within a single frame. This goal is not easy to reach, but with the right techniques it is achievable.
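As a quick sanity check on those figures, the short Python sketch below (illustrative only; the 20 ms comfort threshold is the one cited above) prints the refresh interval at some common VR display rates:

```python
# Motion-to-photon budget check: one refresh interval at common display
# rates, compared against the ~20 ms comfort threshold cited above.
COMFORT_THRESHOLD_MS = 20.0

for hz in (60, 72, 90, 120):
    frame_ms = 1000.0 / hz  # duration of a single refresh interval
    verdict = "within" if frame_ms <= COMFORT_THRESHOLD_MS else "over"
    print(f"{hz:>3} Hz -> {frame_ms:5.2f} ms per refresh ({verdict} budget)")
```

At 60 Hz this prints roughly 16.67 ms, which is where the one-frame figure in the text comes from.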

Tips for reducing latency

A low-latency VR system can be built by combining several specific techniques. First, let's talk about front buffer rendering. Graphics applications, including those on Android devices, usually use double or triple buffering: the GPU renders pixels into an off-screen back buffer, which is swapped with the on-screen buffer at the end of each display refresh, producing a smooth experience. This keeps the timing between adjacent frames even, but it also adds latency, the opposite of what VR needs. With front buffer rendering, the GPU bypasses the off-screen buffers and renders directly into the on-screen buffer, reducing latency. Front buffer rendering must be precisely synchronized with the display to ensure that GPU writes always stay ahead of the display's reads. The context priority extension available on Mali GPUs enables fast scheduling of GPU work, so the front buffer rendering process takes priority over less urgent tasks, improving the user experience.
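The latency cost of extra buffering can be seen with a toy model. As a rough rule of thumb (an assumption for illustration, not a measured figure), each buffer a frame queues behind adds about one refresh interval of worst-case latency:

```python
def worst_case_latency_ms(queued_buffers, refresh_hz):
    """Toy model: each buffer the frame waits behind costs roughly one
    refresh interval before the pixels reach the panel."""
    return queued_buffers * 1000.0 / refresh_hz

# Double vs. triple buffering vs. front buffer rendering at 60 Hz.
# Front buffer rendering is modeled as a fraction of one interval,
# since the GPU writes just ahead of the display's scanout.
print(worst_case_latency_ms(2, 60))     # double buffering
print(worst_case_latency_ms(3, 60))     # triple buffering
print(worst_case_latency_ms(0.25, 60))  # front buffer, racing the beam
```

Under this simplified model, triple buffering alone can consume the entire 20 ms budget, which is why VR pipelines go to such lengths to remove queued buffers.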

Integrating the GPU and display in VR designs

Eliminating extra buffering to reduce latency

The second important technique is choosing the right display type for your VR device. The organic light-emitting diode (OLED) display is a powerful tool for improving the VR experience. Its working principle differs substantially from the familiar, mature LCD: driven by a thin-film transistor array, each pixel of an OLED panel acts as its own light source, whereas an LCD relies on a white LED backlight. The brightness of an OLED pixel is determined by the current flowing through the organic film, and color is produced by independently driving the red, green and blue sub-pixels. OLED panels can therefore deliver high brightness, high contrast and highly saturated colors. In addition, individual parts of the screen can simply be switched off, producing far deeper blacks than an LCD, which can only block its backlight. That is usually marketed as a selling point of OLED screens, but it also matters for VR, because per-pixel light control makes low persistence much easier to achieve. A full-persistence display keeps the screen lit continuously, so the view is only correct for a brief moment and quickly becomes stale; a low-persistence display lights the image only while the view is correct, then goes dark. At a high enough refresh rate this is hard to detect, creating the illusion of a continuous image.
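The link between persistence and blur can be made concrete with a back-of-the-envelope formula: the smear is roughly the persistence time multiplied by how fast the image sweeps past the eye. The head speed and pixel density below are made-up illustrative values, not measurements:

```python
def retinal_smear_px(persistence_ms, head_deg_per_s, px_per_deg):
    """Approximate blur width in pixels: the distance the scene sweeps
    across the panel while a single frame stays lit."""
    return persistence_ms / 1000.0 * head_deg_per_s * px_per_deg

HEAD_SPEED = 100.0  # deg/s, a brisk head turn (illustrative)
DENSITY = 15.0      # pixels per degree of view (illustrative)

full = retinal_smear_px(16.7, HEAD_SPEED, DENSITY)  # lit the whole frame
low = retinal_smear_px(2.0, HEAD_SPEED, DENSITY)    # lit ~2 ms per frame
print(f"full persistence: ~{full:.0f} px smear; low persistence: ~{low:.0f} px")
```

Under these assumed numbers, a frame that stays lit for a full 60 Hz interval smears across roughly 25 pixels, while a 2 ms flash smears across only about 3, which is why low persistence looks so much sharper during head motion.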

This principle is essential for reducing motion blur. Low persistence also allows greater flexibility: the display can present several partial images within a single refresh, adjusting the intermediate portions using fresh motion data from the headset's sensors. So as the user's gaze sweeps across the scene, the rendered head position keeps pace, something an LCD with an always-on backlight cannot do. The key to a low-latency VR experience, then, is to use timewarp to render the front buffer in blocks or strips and drive a low-persistence OLED screen. The image on screen adapts to head rotation almost immediately, and no other approach comes close.
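The strip-based approach described above can be sketched as a timing simulation. The strip count and constant-rate head model are illustrative assumptions; the point is that each strip is drawn with a pose at most a fraction of a frame old, rather than a full frame old:

```python
REFRESH_HZ = 60
NUM_STRIPS = 4
FRAME_MS = 1000.0 / REFRESH_HZ
STRIP_MS = FRAME_MS / NUM_STRIPS

def head_yaw_deg(t_ms, deg_per_s=100.0):
    """Toy head model: constant-rate yaw rotation."""
    return deg_per_s * t_ms / 1000.0

# Just before scanout reaches each strip, re-sample the head pose and
# render that strip into the front buffer with the freshest data.
for strip in range(NUM_STRIPS):
    draw_t = strip * STRIP_MS    # when scanout reaches this strip
    yaw = head_yaw_deg(draw_t)   # freshly sampled pose for this strip
    print(f"strip {strip}: drawn at {draw_t:5.2f} ms, yaw {yaw:.2f} deg, "
          f"pose at most {STRIP_MS:.2f} ms stale")
```

With four strips at 60 Hz, pose staleness drops from one whole frame (~16.7 ms) to about 4.2 ms per strip under this model.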

Asynchronous timewarp

The next key technique is asynchronous timewarp. Because scene changes in immersive VR applications are relatively gradual, the difference between successive frames is small and easy to predict. Warping means shifting a previously rendered image to match the new head pose. This decouples, to a degree, the application's frame rate from the display's refresh rate, helping achieve low latency in suitable application scenarios. Such a shift responds only to head rotation; it is blind to changes in head position and to animation within the scene. Although timewarp is a stopgap, it serves as an effective safety net: it lets a device rendering at 30 fps present an experience that, at least for head tracking, follows the user's head movement at 60 fps or above.
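A rotation-only warp can be approximated crudely in one dimension: shift the last rendered frame sideways by the yaw change since it was drawn. This is a toy (a real timewarp reprojects every pixel with the full head rotation; the function name and the wrap-around behavior are illustrative inventions), but it shows the core idea:

```python
def timewarp_shift(image_rows, yaw_delta_deg, px_per_deg):
    """Toy 1-D timewarp: approximate a small yaw change by shifting the
    previously rendered rows horizontally (wrapping, for the demo)."""
    shift = round(yaw_delta_deg * px_per_deg)
    return [row[shift:] + row[:shift] for row in image_rows]

# A 1x4 'frame'; the head has turned 1 degree since it was rendered.
frame = [list("abcd")]
warped = timewarp_shift(frame, yaw_delta_deg=1.0, px_per_deg=1.0)
print(warped)  # the old frame, shifted one pixel to match the new pose
```

Because the warp is just a cheap re-projection of an already-rendered frame, it can run every refresh even when the application misses its own frame deadline, which is exactly the safety-net behavior described above.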

The secret weapon of VR technology

In this article, we discussed how to achieve deep integration between the GPU and the display, but this is only the tip of the iceberg. Once we also want to play video (possibly DRM-protected) and integrate system notifications, the problem becomes far more complicated. High-quality VR support requires multimedia components with strong synchronization and bandwidth-efficient communication, both to create the best experience for end users and to maximize power efficiency and performance. With efficient tools such as Arm Frame Buffer Compression (AFBC) and Arm TrustZone, the Arm Mali Multimedia Suite (MMS) enables deep integration of the GPU, video and display processors, making it a leading toolset for VR device development.

Editor in charge: CT
