The blog of Oliver Kreylos, a research computer scientist at the University of California, Davis, covers many aspects of immersive computer graphics from a developer's (rather than a user's) perspective, and serves as a useful reference for the industry. A few days ago, Kreylos published an article quantitatively comparing the fields of view of VR headsets.

Quantitative Comparison of VR Headset Fields of View

Although I have used calibrated wide-angle cameras to take through-the-lens pictures of several common VR headsets, I have been trying to figure out how to quantitatively compare the resulting fields of view, how to put them into context, and how to visualize them properly. When trying to answer questions such as "which VR headset has a larger field of view?", "how much larger is the field of view of headset A than that of headset B?", or "how does the field of view of headset C compare with that of the human eye?", one of the basic problems you face is how to compare fields of view fairly across a range of possible shapes and sizes. If someone quotes a single angle, how should it be measured? As a diagonal field of view? What if the field of view is not rectangular? If someone quotes two angles, such as horizontal ⨉ vertical, the same question arises: what if the field of view is not rectangular?

Then, assuming we do measure the field of view as one or two angles, how can we fairly compare different fields of view? If one headset has a 100-degree field of view and another has 110 degrees, does that mean the latter shows 10% more of the virtual 3D environment? If one headset's field of view is 100 ⨉ 100 degrees and another's is 110 ⨉ 110 degrees, does that mean the latter shows 21% more of the virtual 3D environment?
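Even for idealized rectangular fields of view the answer is no. As a quick sanity check (a minimal sketch of my own, not from the article), the solid angle of a rectangular pyramid with full horizontal and vertical angles 2α ⨉ 2β is Ω = 4·arcsin(sin α · sin β), where "solid angle" is the concept defined carefully in the "Solid angle" section below. The following snippet applies that formula to the two hypothetical headsets:

```python
import math

def rect_fov_solid_angle(h_deg, v_deg):
    """Solid angle (in steradians) of an idealized rectangular field of view
    with full horizontal angle h_deg and full vertical angle v_deg, using the
    closed-form expression for a rectangular pyramid."""
    a = math.radians(h_deg) / 2.0
    b = math.radians(v_deg) / 2.0
    return 4.0 * math.asin(math.sin(a) * math.sin(b))

small = rect_fov_solid_angle(100.0, 100.0)   # ~2.51 sr
large = rect_fov_solid_angle(110.0, 110.0)   # ~2.94 sr
print(f"ratio: {large / small:.3f}")         # ~1.17, i.e. about 17% more, not 21%
```

The larger of the two shows only about 17% more, because angles do not combine linearly on a sphere; for the non-rectangular fields of view of real headsets, one or two angles predict even less.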

In order to find a reasonable answer, let us start from the basic definition: what does the field of view actually measure? Basically, the field of view is a measure of how much of the virtual 3D environment the observer can see at any given moment. A larger field-of-view value should mean that more can be seen; ideally, doubling the value should mean that twice as much can be seen.

What does "being able to see" mean? If light from something reaches our eye, passes through the cornea, pupil, and lens, and finally lands on the retina, then we can see it. In principle, light can travel towards our eyes from all possible directions, but due to various obstructions (for example, we cannot see behind our heads), only light from some directions actually reaches the retina. Therefore, a reasonable field-of-view measure (for one eye) should be a tally of the different 3D directions from which light reaches that eye's retina. The problem is that light can come from infinitely many directions, so simple counting does not work.

1. Solid angle

Another way of thinking about it is to place an imaginary sphere of arbitrary radius around the observer's eye, with its center coinciding with the eye's pupil. There is then a one-to-one correspondence between 3D directions and points on the imaginary sphere: each ray enters the sphere through exactly one point. As a result, instead of counting 3D directions, one can define the field of view as the total area of the set of all points on the sphere that correspond to 3D directions the eye can see.

As it happens, if the radius of the imaginary sphere is set to 1, this is exactly the definition of solid angle. (A solid angle is the angle an object subtends at a specific point in three-dimensional space; it is the 3D analogue of the plane angle. It measures how large an object appears to an observer standing at that point: if a unit sphere is constructed around the observation point, the area of the object's projection onto that unit sphere is the object's solid angle relative to the point.) If nothing can be seen, i.e. the set of all "visible" points on the sphere is empty, the area of the set is zero. If everything can be seen, the set of visible points is the entire surface of the sphere, with an area of 4π. If only half of everything can be seen (for example, because the observer is standing on an infinite plane), the observer's field of view is 2π, and so on. Incidentally, if the radius of a sphere is 1, its surface area is dimensionless (it has no unit of measurement), but in order to distinguish solid angles from other dimensionless quantities, we use the steradian (sr) as their unit, just as regular (2D) angles, which are also basically dimensionless, are given in radians (rad) or degrees (°).

In a word, the solid angle is a reliable way to measure the field of view: it captures a field of view of any shape and size in a single number, and there is a direct linear relationship between that number and how much is "visible".

2. Solid angle and the field of view of VR headsets

So far, we have discussed how much of a 3D environment can be seen with the "naked eye", i.e. without a headset. That number is important in itself, as we will see below, but the real question is how to measure the field of view of a VR headset. The general idea is the same: calculate how much of the virtual 3D environment the user can see. But unlike in a real 3D environment, light from the virtual environment does not reach the observer's eyes from all possible directions; it comes only from the display. If a ray is traced backwards from the eye, it passes through one of the headset's lenses and finally lands on the display screen behind that lens. The field of view is still calculated the same way, but now in reverse: it is the area of all points on the unit sphere around the user's eye that correspond to directions arriving from the screen (assuming that those screen locations are actually used to show image data by the VR rendering pipeline, but that is a separate issue).

Fortunately, we can measure this area in a fairly simple way. All cameras project the 3D environment onto an imaging surface (a film plate or photosensitive sensor); specifically, each point of the imaging surface is assigned the 3D direction of the ray entering through the camera's focal point. In a calibrated camera, the mapping from image points to 3D directions is known precisely (the exact calculation method will be the subject of another article).
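To make the idea concrete, here is a toy version of such a mapping (my own sketch, not the calibration used in the article, which relies on a calibrated wide-angle camera with a proper distortion model): a plain pinhole model with hypothetical intrinsic parameters.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths and principal point, in pixels
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_direction(u, v, K=K):
    """Map an image point (u, v) to the unit 3D direction of the ray that
    entered the camera through its focal point (pinhole model, no distortion)."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)
```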

The concrete procedure is to place the calibrated camera (ideally a wide-angle camera) in front of a headset lens, simulating a user wearing the headset and perceiving the virtual environment through that lens, and then to photograph the image seen through the lens. Next, look at each picture pixel, determine whether that pixel shows some part of the screen, and sum the solid angles of all such pixels (the last step is a bit involved and is left as an exercise for the reader; a sketch follows below). But the most important thing is the pictures themselves.
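Here is one way that exercise could be carried out (again a sketch under my own assumptions, not the author's code). It expects a boolean mask marking which photo pixels show the lit screen, plus a calibration function mapping pixel corners to viewing directions (for instance, something like the hypothetical pixel_to_direction above); each pixel's solid angle is then the area of the spherical quadrilateral spanned by its four corner directions, computed with the Van Oosterom–Strackee triangle formula.

```python
import numpy as np

def tri_solid_angle(r1, r2, r3):
    """Solid angle of the spherical triangle spanned by three unit vectors
    (Van Oosterom & Strackee, 1983)."""
    num = np.abs(np.dot(r1, np.cross(r2, r3)))
    den = 1.0 + np.dot(r1, r2) + np.dot(r2, r3) + np.dot(r3, r1)
    return 2.0 * np.arctan2(num, den)

def fov_solid_angle(mask, pixel_dir):
    """Sum per-pixel solid angles over all pixels flagged as showing the screen.

    mask      -- 2D boolean array, True where the photo shows the lit screen
    pixel_dir -- function (row, col) -> unit 3D direction of that pixel corner,
                 obtained from the camera calibration (hypothetical here)
    """
    total = 0.0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if not mask[r, c]:
                continue
            # Viewing directions at the four corners of this pixel
            d00 = pixel_dir(r, c)
            d01 = pixel_dir(r, c + 1)
            d11 = pixel_dir(r + 1, c + 1)
            d10 = pixel_dir(r + 1, c)
            # Split the spherical quadrilateral into two triangles
            total += tri_solid_angle(d00, d01, d11)
            total += tri_solid_angle(d00, d11, d10)
    return total  # steradians
```

The pure-Python double loop is slow for megapixel photos; vectorizing it with NumPy is straightforward but would obscure the idea.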

3. Visualizing the field of view of VR headsets

Calculating a single solid angle for a given headset is suitable for quantitative comparison, but a picture is worth a thousand words. So how can the field of view be visualized in a fair way? After all, a field of view is a portion of the surface of a sphere, and anyone familiar with world maps knows that you cannot show the surface of a sphere in a flat image without introducing distortion. Fortunately, there is a class of map projections that preserve area. In other words, the area of a region in the projected map is proportional to (or, ideally, equal to) the area of the same region on the sphere. Since solid angle, i.e. spherical area, is a reasonable measure of the field of view, using such an area-preserving projection should produce an honest visualization: if the field of view of one headset is twice that of another, its image will be exactly twice as large.
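One common area-preserving choice, used here purely as an illustration and not necessarily the projection used in the article, is the Lambert azimuthal equal-area projection, sketched below for viewing directions given as unit vectors with +z pointing straight ahead.

```python
import numpy as np

def lambert_azimuthal_equal_area(d):
    """Project a unit direction vector d = (x, y, z), with +z pointing straight
    ahead, onto the plane so that equal areas on the unit sphere map to equal
    areas in the plane; the whole sphere fills a disk of radius 2 (area 4*pi)."""
    x, y, z = d
    s = np.sqrt(2.0 / (1.0 + z))   # singular only at the single point z = -1
    return np.array([s * x, s * y])
```

A convenient side effect: project a field-of-view contour point by point with this function, and the planar area it encloses is directly the solid angle in steradians.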

4. Putting it all together

For this article, I measured the fields of view of the three VR headsets I happen to have: HTC Vive Pro, Oculus Rift CV1, and PSVR. I also measured the "average" human naked-eye field of view in the same way, based on a commonly used chart (Figure 1).

Figure 1: Field of vision of the right eye, from Chapter 1 of An Introduction to Clinical Perimetry by H. M. Traquair. The image above uses an azimuthal equidistant projection, which is different from the area-preserving projection used in this article.

I traced the outer boundary of the field of view in that figure, re-projected the contour into the area-preserving projection (see Figure 2), and calculated its solid angle: the solid angle of one eye, the total solid angle of both eyes (assuming the two fields of view are mirror-symmetric), and the solid angle of the intersection of the two eyes' fields of view, i.e. the binocular overlap. The specific values are: one eye, 5.2482 sr (or 1.6705 π sr); both eyes, 6.5852 sr (or 2.0961 π sr); overlap, 3.9112 sr (or 1.2450 π sr). In case the latter form is easier to visualize: 2 π sr is a hemisphere and 4 π sr is the complete sphere. Interestingly, the combined field of view of the two eyes is slightly larger than a hemisphere. Although the field of view varies from person to person, this average measurement can be used to put the field-of-view values of VR headsets into context.
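A quick arithmetic check with the numbers just quoted makes the hemisphere comparison concrete:

```python
import math

fov = {"one eye": 5.2482, "both eyes": 6.5852, "overlap": 3.9112}  # steradians
for name, omega in fov.items():
    print(f"{name}: {omega / math.pi:.4f} pi sr, "
          f"{100.0 * omega / (4.0 * math.pi):.1f}% of the full sphere")
# both eyes -> about 52.4% of the full sphere, i.e. slightly more than a hemisphere
```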

Figure 2: The average human naked-eye field of view in the area-preserving projection, including eye movement; the numbers indicate angles in degrees from the straight-ahead viewing direction; red is the left-eye field of view, blue is the right-eye field of view, purple is the binocular field of view (stereo overlap).

Next, I processed the through-the-lens photographs of the three headsets in the same way: I traced the outline of the visible part of the screen, re-projected the outline with the same area-preserving projection, calculated the monocular, binocular, and overlapping solid angles (see Table 1), and superimposed each field of view on the average human field of view (see Figures 3-5). As shown in the figures, for each headset I only used the eye-relief setting that maximizes the field of view. Since the field of view depends strongly on eye relief, it would be ideal to take a series of photos over a range of eye-relief values and list them all.

Figure 3: The maximum field of view of the HTC Vive Pro (at 8 mm eye relief) superimposed on the average human field of view using the area-preserving projection; the numbers indicate angles in degrees from the straight-ahead viewing direction; red is the left-eye field of view, blue is the right-eye field of view, purple is the binocular field of view (stereo overlap).

Figure 4: The maximum field of view of the Oculus Rift CV1 (at 15 mm eye relief) superimposed on the average human field of view using the area-preserving projection; the numbers indicate angles in degrees from the straight-ahead viewing direction; red is the left-eye field of view, blue is the right-eye field of view, purple is the binocular field of view (stereo overlap).

Figure 5: The maximum field of view of the PSVR (at 10 mm eye relief) superimposed on the average human field of view using the area-preserving projection; the numbers indicate angles in degrees from the straight-ahead viewing direction; red is the left-eye field of view, blue is the right-eye field of view, purple is the binocular field of view (stereo overlap).

Table 1: The average human field of view (including eye movement) and the fields of view of the three VR headsets. Each field-of-view measurement is given in steradians and in multiples of π steradians. The headset measurements are also given as percentages of the corresponding measurement of the average human field of view.
