Bernard Kress is the partner optical architect on the Microsoft HoloLens team. In the early 2010s he was principal optical architect on Google Glass; he joined Microsoft in 2015 to work on the research and development of the HoloLens mixed reality headset.
As an introduction to EDO19 Digital Optical Technologies, Kress wrote an overview of current AR/VR/MR digital optical components and technologies. What follows is Yingwei's edited arrangement of that overview.
EDO19 Digital Optical Technologies, held June 24-27, 2019, covered new optical elements for AR/VR/MR systems, such as free-space optics, waveguide combiners, diffractive, holographic and metasurface optics, and tunable, switchable and reconfigurable optics. The next conference will be held June 21-24, 2021.
Defense was the first application field for augmented and virtual reality, with roots going back to the 1950s. The first consumer VR/AR wave rose in the 1990s, but it was too far ahead of its time and too immature, and enthusiasm for immersive displays faded. Yet precisely because consumer-grade display technology and the related sensors were lacking, a series of novel optical display concepts was born then that still looks cutting-edge today, such as Reflection Technology's "Private Eye" smart glasses (1989) and Nintendo's Virtual Boy (1995), both based on scanned displays rather than flat panels. Advanced as those displays were, the lack of consumer-grade IMU sensors, low-power 3D rendering hardware, and wireless data transmission doomed the first VR wave of the 1990s. Another reason was the lack of digital content, and of any clear understanding of what enterprise and consumer VR/AR content should be.
This is very similar to General Magic's early anticipation of the iPod. In the late 1990s the company had the hardware concept, but wireless transmission technology and an online music library did not yet exist. Ten years later, Apple finally integrated all three elements (hardware, Wi-Fi, an online music store), and the iPod concept defined an era: not just an ideal music player, but a product that resonated strongly with the consumer market.
Similarly, no matter how good a consumer AR/VR headset is, the consumer market will not take off without the equivalent of an "iTunes Store / App Store".
In the decade after the first wave receded, the defense industry continued to expand AR/VR technology, for example in flight simulation and training (helmet-mounted displays for rotary-wing aircraft and head-up displays for fixed-wing aircraft). In the 2000s, the main consumer efforts were in automotive head-up displays and personal head-mounted video players.
Today's engineers have grown up with ever more capable flat-panel display technology and are used to such innovation, unlike their peers 20 years ago who had to invent novel immersive display technology from scratch. Since 2012 we have seen initial immersive AR/VR implementations based on off-the-shelf smartphone display panels (LTPS-LCD, IPS-LCD, AMOLED) or microdisplay panels (HTPS-LCD, micro-OLED, DLP, LCoS), together with IMUs, cameras, and depth sensors (structured light and time of flight). Headset architectures are now slowly evolving toward more dedicated technologies, better suited to immersive experiences than flat panels, much like the wave of invention that arose 20 years ago (inorganic microLED panels, one-dimensional scanned arrays, two-dimensional laser/VCSEL MEMS scanners, and so on). The industry is booming again.
In terms of connectivity and sensors, the smartphone technology ecosystem shaped the second wave of VR/AR, and the smartphone has always been the most obvious target use case for early products. This legacy display technology serves as the initial catalyst for the next wave.
However, the immersive AR/VR display experience is a paradigm shift away from the traditional panel display experience that has dominated for more than half a century: from CRT TVs to LCD computer monitors and laptop screens, to OLED tablets and smartphones, to LCoS, DLP and MEMS-scanner digital projectors, and on to LED smartwatches (see Figure 1).
Figure 1: immersive near-eye displays: a paradigm shift in personal information display
When flat-panel display technologies and architectures (smartphone or microdisplay panels) are used to build immersive near-eye display devices, limited étendue (optical extent), fixed focus, and low brightness become serious limitations. To support near-eye displays matched to the characteristics of the human visual system, alternative display technologies are needed.
The second surge of virtual reality / augmented reality / smart glasses in the early 2010s brought a new naming trend: mixed reality, merged reality, extended reality, or collectively XR. "Smart eyewear" (combining digital information display with prescription glasses) has tended to replace the original naming convention of "smart glasses".
2. Smart glasses / AR / MR / VR market
Unlike in the first AR/VR boom, today's investors, market analysts, AR/VR/MR developers and enterprise users all expect this technology to deliver real return on investment within the next five years, as shown in the 2017/2018 Gartner technology maturity curves (see Figure 2).
Figure 2: Gartner technology maturity curve
The 2017 Gartner chart shows AR and VR reaching stable development within two to ten years, with VR several years ahead of AR, a view accepted by most AR/VR analysts. Interestingly, the revised 2018 chart no longer shows VR but instead MR: VR reached a more mature stage in 2018 and even became a commodity, so Gartner removed it from the emerging-technology category.
But we have to be careful. So far, the proven, sustainable market is enterprise MR, where return on investment (ROI) comes mainly from cost avoidance:
New employees ramp up faster, make fewer mistakes, and become productive sooner.
Collaborative design, remote expert guidance, better service, enhanced monitoring.
Higher manufacturing quality assurance.
Improved product demonstration and presentation, providing a better end-user experience.
Mixed reality has shown significant ROI in manufacturing (automotive, avionics, heavy industrial products), power, energy, mining and utilities, technology, media and telecommunications, healthcare and surgery, financial services, and retail/hospitality/leisure.
The evidence for a smart glasses / AR / MR consumer market is comparatively weak. Smart glasses (Google Glass, Snap Spectacles, Intel Vaunt, North Focals) have been tried; VR headset momentum has recently slowed (Oculus/Facebook VR, Google Daydream, Sony PSVR), while other manufacturers' VR projects have been terminated, such as Intel's Alloy and the Acer/StarVR wide-field headset. In the long run, however, the potential of MR use cases remains great. In 2018, mid-sized companies such as Meta (Meta 2), castAR, and ODG (R8 and R9) went bankrupt despite strong initial products and strong venture backing, and others, such as Avegant, underwent major restructuring. The rebirth of Meta and castAR in mid-2019 shows that these areas still hold great uncertainty and surprise. Other companies, such as Vuzix, achieved sustained growth, and DigiLens continued to attract backing throughout 2019.
Among smart glasses, audio-focused models have achieved a strong return on investment. Audio smart glasses providing audio immersion and world-locked audio (based on an IMU alone) are not a new concept, but they have recently advanced with surround sound and noise reduction. They provide basic input and commands for consumers and enterprises and are an important component of an augmented-world experience. Huawei and other large companies have recently launched audio AR smart glasses. By contrast, camera glasses such as Snap Spectacles (v1 and v2) have struggled to win consumer approval, much like Google Glass Explorer.
Beyond spatially world-locked audio, a device with an IMU (such as the Bose AR Frames) can also detect a range of head and body gestures, including push-ups, squats, nods, head shakes, double taps, looking up, looking down, and head turns.
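As a rough illustration of how IMU-only gesture detection can work, here is a toy sketch (the function name, thresholds, and sampling rate are my own assumptions, not any vendor's implementation): integrate the pitch-axis gyro and look for a down-then-up swing within the sample stream.

```python
# Toy head-nod classifier over gyro samples. Real devices fuse
# accelerometer + gyro data and typically use trained classifiers;
# this only shows the basic integrate-and-threshold idea.

def detect_nod(pitch_rates_dps, dt=0.01, swing_deg=10.0):
    """Return True if pitch swings down then back up by >= swing_deg."""
    angle, min_angle = 0.0, 0.0
    went_down = False
    for rate in pitch_rates_dps:          # deg/s samples at 100 Hz
        angle += rate * dt                # integrate angular rate
        min_angle = min(min_angle, angle)
        if min_angle <= -swing_deg:
            went_down = True              # head dipped far enough
        if went_down and angle >= min_angle + swing_deg:
            return True                   # recovered upward: a nod
    return False

# Simulated nod: 0.3 s pitching down at -60 deg/s, then back up.
samples = [-60.0] * 30 + [60.0] * 30
print(detect_nod(samples))  # True
```

The same skeleton, with different axes and thresholds, covers head shakes and look-up/look-down gestures.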
Table 1: adoption of smart glasses / AR / VR / MR headsets in the consumer / enterprise / defense fields.
Led by the Intel Vaunt project, compact smart glasses with a smaller display (about a 10-degree monocular field of view) and vision correction re-emerged in 2018 (after Google Glass failed in 2014). However, Intel discontinued the project in the second half of that year and instead invested in Focals, developed by North. The price of that product was then cut by nearly 50% in early 2019, and North laid off many employees. The short-term outlook for consumer smart glasses is therefore unclear. Enterprise-only smart glasses concepts are seeing quiet but steady growth, such as the RealWear headset (Vancouver) and the more fashionable Google Glass Enterprise V2.
On the other hand, venture capital is frantically boosting startup unicorns such as Magic Leap (more than $3 billion raised at a $7 billion valuation). This reveals the "fear of missing out" mentality of late-stage investors (Alibaba, Singapore's Temasek, Saudi funds), eager to follow the early investment decisions of major technology VCs (Google, Amazon, Qualcomm). It is also notable that Magic Leap's last two rounds (late 2018 and mid-2019) came from major telecom companies, AT&T (amount undisclosed) and NTT DoCoMo (US$280 million). They are betting that a future AR market will drive demand for high-bandwidth communications (5G, WiGig, etc.), bringing continuing returns on MR cloud services (while returns on AR/MR hardware remain extremely low).
Whatever the investment hype, a large consumer electronics company will likely have to create both the final consumer headset architecture (solving visual comfort, wearable comfort, and immersion at once) and the consumer market itself. For the enterprise market, content is customized to each company's specific needs, while the consumer market depends entirely on the development of a whole MR ecosystem, from general-purpose hardware to general-purpose content and applications.
Although global sales of smartphones and tablets fell for the first time in Q3 2018, foreshadowing a 30% decline in Apple's stock in Q4 of the same year, it is unclear whether MR consumer hardware has the potential (or even the ambition) to replace existing smartphones and tablets, or whether it will act as a smartphone accessory, providing immersive experiences that traditional display concepts cannot.
Beyond the consumer and enterprise markets discussed here, MR headsets show great potential in the defense market. In Q4 2018 Microsoft won a $480 million US defense contract to develop a special version of HoloLens for the US Army: IVAS (Integrated Visual Augmentation System). This is the largest contract in AR/VR/MR history and will help drive the global MR ecosystem.
3. MR is emerging as the next computing platform
Smart glasses (also called digital eyewear) are mainly an extension of prescription eyewear, adding a contextual digital display (see Google Glass in Figure 3). This concept is very different from AR or MR. Typical smart glasses have a very small field of view (under 15 degrees diagonal), usually offset from the line of sight. With few sensors (other than an IMU) they achieve only about three degrees of freedom of head tracking, and without binocular displays they can show only simple 2D text and images. A monocular display does not require the frame rigidity of a binocular system (which must limit the horizontal and vertical disparities that cause eye strain). Most smart glasses developers also offer vision correction as a standard feature (North's Focals or Google Glass V2).
Figure 3: the rise of smart glasses, AR/MR, and VR headsets.
Powerful connectivity (3G, 4G, Wi-Fi, Bluetooth) combined with a camera makes smart glasses a powerful smartphone accessory, offering contextual display, a virtual assistant, GPS, and (thanks to the camera) social networking. Smart glasses are not intended to replace smartphones but to complement them, much like smartwatches.
The VR headset is an extension of the game console, as seen from the major game-platform vendors (Sony, Oculus, HTC, Microsoft WMR) and game content partners such as Steam; headsets usually ship with game controllers (see Figure 3). Early outside-in tracking (Oculus CV1 and HTC Vive in 2016) has evolved into today's inside-out tracking, enabling more compact hardware (such as WMR headsets and the Samsung Odyssey). While such high-end VR systems still require an expensive PC or laptop with an advanced GPU, standalone VR headsets entered the consumer market in 2018, such as the Oculus Go (3DoF, IMU-only) and HTC Vive Focus, and they can become the foundation of a booming VR consumer market. More recently, standalone headsets with inside-out tracking have further expanded this category, such as the Oculus Quest (six degrees of freedom).
Meanwhile, high-end VR headsets with inside-out tracking have also launched, such as the Rift S and Vive Pro from Oculus and HTC in 2019. Other WMR headsets, such as the Samsung Odyssey+, doubled the perceived resolution of the 2017 version.
AR/MR systems aim to become the next computing platform, replacing desktop and laptop hardware as well as declining tablet hardware. Most of this hardware is untethered (see HoloLens V1 in Figure 3) and requires high-end optics, combiner components, and sensors (depth-mapping cameras, head-tracking cameras, precise eye trackers, and gesture sensors). These are the most demanding devices of all, especially in their optical hardware. Eventually, if technology permits, the three categories will merge into a single hardware concept; to get there, they need better connectivity (5G, WiGig), visual comfort (display technology), and wearability (battery life, thermal management, weight/size).
The decline in global smartphone and tablet sales in Q3 2018 is a major signal that will prompt large consumer electronics companies and venture capitalists to focus on "the next epoch-making product". Whatever that turns out to be, MR is an excellent candidate.
4. The keys to the ultimate MR experience
For consumers and enterprises alike, the ultimate MR experience is defined along two main axes: comfort and immersion. Comfort has two aspects, wearability and visual comfort; immersion takes many forms, from display to audio to gesture and touch.
Bringing comfort and immersion together, a convincing MR experience needs three things:
Motion-to-photon latency below 10 ms (through optimized sensor fusion and low-latency displays).
World-locked content, realized through continuous depth mapping and semantic recognition.
Fast, precise, and universal eye tracking, a necessary function that enables many of the features listed in this paper.
Most of this can be realized with global sensor-fusion technology integrated into dedicated chips, such as the HoloLens HPU (Holographic Processing Unit).
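To see why the 10 ms figure matters, here is a back-of-the-envelope calculation (my own illustration, not from the article) relating latency to the apparent "drag" of a world-locked hologram during head motion; the function name and the 45 pixels-per-degree default are assumptions tied to the acuity target mentioned later in this article.

```python
# Angular registration error = head angular velocity x latency;
# at high pixel density even a small angular error spans many pixels.

def registration_error_pixels(head_deg_per_s: float,
                              latency_ms: float,
                              pixels_per_degree: float = 45.0) -> float:
    """Pixels a world-locked hologram appears to lag during head motion."""
    error_deg = head_deg_per_s * (latency_ms / 1000.0)
    return error_deg * pixels_per_degree

# A brisk 100 deg/s head turn at various motion-to-photon latencies:
for latency in (20.0, 10.0, 5.0):
    err = registration_error_pixels(100.0, latency)
    print(f"{latency:4.0f} ms latency -> {err:5.1f} px of hologram drag")
```

At 10 ms the hologram still lags by tens of pixels during fast head motion, which is why latency is attacked jointly by sensor fusion, pose prediction, and low-latency display paths.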
4.1 wearing comfort and visual comfort
For any MR headset architecture, wearing comfort and visual comfort are the essential foundations for mainstream adoption. Wearing comfort includes:
An untethered headset for maximum mobility (future wireless connectivity over 5G or WiGig will significantly reduce the on-board compute and rendering load).
Small size and light weight.
Thermal management of the entire head display (passive or active).
Skin contact management of pressure points.
Breathable fabric for sweat and heat control.
A headset center of gravity close to the center of the head.
Visual comfort includes:
A large eyebox covering a wide IPD range. For consumers, the optics might ship in different SKUs; for enterprises, where a headset is shared among employees, a single product must accommodate a wide IPD range.
Angular resolution close to 20/20 visual acuity (at least 45 pixels per degree in the fixation region), with lower pixel density acceptable in the peripheral field of view.
Zero screen-door effect (high pixel fill factor and high PPD) and no mura artifacts.
HDR through high brightness and high contrast (emissive displays such as MEMS scanners and OLED/microLED versus non-emissive displays such as LCoS and LCD).
Minimized artifacts (below roughly 1%).
An unrestricted peripheral see-through field of view of more than 200 degrees (supporting outdoor use cases, particularly valuable for defense and civil engineering).
Active dimming (uniform shutter or soft edge dimming).
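The pixel-density targets in the list above translate into daunting panel resolutions. A quick arithmetic sketch (the helper function is mine, not the article's): 20/20 acuity corresponds to 60 pixels per degree, so even the relaxed 45 ppd fixation target implies thousands of pixels per axis once the field of view grows.

```python
# Pixels along one axis needed to cover a field of view at a given
# angular pixel density (pixels per degree).

def pixels_required(fov_deg: float, ppd: float) -> int:
    """Pixels along one axis to render `fov_deg` at `ppd` pixels/degree."""
    return round(fov_deg * ppd)

# 100-degree horizontal FOV at the 45 ppd fixation-quality target:
print(pixels_required(100, 45))   # 4500 pixels horizontally
# The same FOV at full 20/20 acuity (60 ppd):
print(pixels_required(100, 60))   # 6000 pixels horizontally
```

Numbers like these are one reason foveated rendering and peripheral resolution roll-off (discussed below under immersion) are considered essential rather than optional.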
Visual comfort functions based on precise, universal eye tracking include:
Vergence tracking from differential eye-tracking data, which can mitigate the vergence-accommodation conflict for close objects within the fixation cone (since vergence is a trigger for accommodation).
Active pupil-swim correction for large-field-of-view optics.
Active pixel occlusion (hard-edge occlusion) to increase hologram opacity (for more realistic holograms).
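The vergence tracking in the first item reduces to simple geometry, sketched below (my own illustration, not the article's method): two gaze rays separated by the interpupillary distance converge at a depth set by the vergence angle, and that depth can drive a varifocal or multifocal display.

```python
# Fixation distance from IPD and total vergence angle, assuming the
# gaze is symmetric about the midline (a simplification).

import math

def vergence_distance_m(ipd_m: float, vergence_deg: float) -> float:
    """Fixation distance for symmetric gaze, given the total vergence angle."""
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

# With a 63 mm IPD, a 3.6-degree vergence angle puts the fixation
# point roughly 1 m away; larger angles mean closer fixation.
d = vergence_distance_m(0.063, 3.6)
print(f"{d:.2f} m")
```

The steep nonlinearity of this relation is why eye-tracking precision matters most for near-field content, where small vergence errors map to large depth errors.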
Additional visual comfort and enhancement features include:
Active vision correction with spherical and astigmatic diopters.
If the vergence-accommodation conflict mitigation architecture does not produce optical blur, rendered blur can be added as a 3D cue to improve the 3D visual experience (e.g., ChromaBlur).
"Super vision" functions when the display is off, such as magnification or binocular (telescope-like) viewing.
4.2 display immersion
Immersion is the other key factor in the ultimate MR experience, and it is not just about field of view: field of view is a 2D angular concept, while the immersion volume is a 3D concept that includes the Z distance from the user's eyes (and supports arm's-length display interaction).
The user's immersive experience has many aspects:
A wide field of view, including a peripheral display region with fewer pixels per degree and lower color depth.
Fixed foveated rendering and dynamic (gaze-contingent) foveated rendering.
World-locked holograms and hologram occlusion, realized through depth-map scanning.
World-locked spatial audio.
Precise gesture sensing through dedicated sensors.
Figure 4 summarizes the main requirements, combining immersion with wearing and visual comfort, that justify analysts' optimism about the VR/AR/MR/smart glasses market.
Figure 4: comfort and immersion requirements for the ultimate MR experience
The dark-gray items in Figure 4 mark the key optical technology for the next generation of MR headsets: fast, precise, universal eye/pupil/gaze tracking.
We can think of immersion as a multisensory illusion (display, audio, haptics, etc.). Presence in MR is a state of consciousness in which the headset wearer genuinely believes they are in a different environment. Immersion produces the sense of presence.
To achieve lifelike telepresence, however, we must solve the key challenges facing the headset, not only in the display (refresh rate, field of view, angular resolution, vergence-accommodation conflict, hard-edge occlusion, HDR, etc.) but also in the sensors (6DoF head trackers, depth-mapping and spatial-mapping sensors, eye/pupil/gaze trackers, gesture sensors, etc.). MR's goal is a degree of telepresence that convinces users they are in another (virtual) environment.
5. Human factors
To design a display architecture that achieves the ultimate MR comfort and immersion described in the previous section, we must treat optical design as a human-centric task. This section analyzes specific details of the human visual system, and how to exploit them to reduce the complexity of the optical hardware and software architecture without reducing the user's immersion or comfort in any way.
5.1 human visual system
The fovea, where cone density is highest, covers only 2-3 degrees of the visual field and sits about 5 degrees off the optical axis; it is where resolution is highest. The fovea is shaped by early visual experience, developing into a distinct region from childhood.
Line of sight and optical axis
Human vision is governed by the cone and rod density across the retina, as shown in Figure 5. The optical axis deviates by about 5 degrees from the line of sight (the visual axis), which passes through the fovea. The blind spot (where the optic nerve exits) lies about 18 degrees from the center of the fovea.
Figure 5: cone and rod cell density (left); optical axis and line of sight (right)
Note that the fovea develops slowly in early life in response to actual visual behavior; it is not fixed at birth. Novel visual behaviors quite unlike anything in millennia of human evolution, such as children holding small digital displays at close range, could therefore cause the fovea to drift to a new position on the retina. Another significant change with the same cause is early-childhood myopia.
Lateral chromatic aberration (LCA)
Chromatic aberration is caused by the color dispersion of Fresnel rings, gratings, or conventional refractive lenses. It has both lateral (transverse) and longitudinal components; in the longitudinal case, different colors focus at different depths, depending on the Abbe V-number of the lens material. Reflective optics produce no chromatic aberration, which is why they are widely used in AR displays.
LCA is usually corrected in software, by pre-compensating each color frame in a field-sequential display, or by pre-compensating the full color image on an RGB panel (equivalent to three distortion corrections, one per color). However, this can introduce display artifacts such as color aliasing, and it requires high angular resolution to give reasonable results. Optical dispersion compensation is a better approach, but it requires more complex optics (hybrid refractive-diffractive elements), symmetric coupling structures (as in waveguide combiners with grating or holographic couplers), or reflective rather than refractive optics.
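To make the software route concrete, here is a deliberately simplified sketch (mine, not a production pipeline): each color channel is radially rescaled about the image center so that, after the optics disperse the colors, the three channels land on top of each other. Real systems use full per-channel distortion meshes with sub-pixel interpolation rather than a single scale factor with nearest-neighbor sampling.

```python
import numpy as np

def precompensate_lca(rgb: np.ndarray, scales=(1.002, 1.000, 0.998)) -> np.ndarray:
    """Radially rescale each channel about the image center (toy model)."""
    h, w, _ = rgb.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    out = np.empty_like(rgb)
    for c, s in enumerate(scales):
        # Sample the source at radially scaled coordinates (nearest-neighbor).
        sy = np.clip(np.round(cy + (ys - cy) * s), 0, h - 1).astype(int)
        sx = np.clip(np.round(cx + (xs - cx) * s), 0, w - 1).astype(int)
        out[..., c] = rgb[sy, sx, c]
    return out

img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
corrected = precompensate_lca(img)
print(corrected.shape)  # (64, 64, 3)
```

The nearest-neighbor resampling here is exactly what causes the color-aliasing artifacts mentioned above, which is why real pipelines need high angular resolution and smoother interpolation.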
Since the final detector is the human eye, it is also interesting to analyze the eye's own natural LCA, which turns out to be surprisingly strong. Figure 6 shows the measured LCA of the human eye (left: a compilation of measurements from the past 50 years), amounting to two diopters across the visible spectrum; Figure 6 also illustrates how a color image lands on the retina, on-axis and off-axis (middle). On the right is what the user perceives after processing by the visual cortex.
Figure 6: natural LCA of human eye
The eye's natural LCA is also the basis of a digitally rendered 3D depth cue known as ChromaBlur. Depending on which side of a white object blue and red blur (optical or rendered) appears, the visual system can distinguish far focus from near focus, giving the oculomotor system the information it needs to change accommodation and refocus the image (so that the green component lands in focus on the retina).
LCA varies slightly from person to person, so using external optics to slightly increase or decrease the spectral spread will not noticeably affect vision. However, if one part of the field of view has one LCA (the see-through path in AR) and another part has a different LCA (the digital image in AR), visual discomfort may result.
Visual acuity in the fovea and periphery
Figure 7 shows the measured MTF (modulation transfer function) of the eye, representing mainly photopic vision in the on-axis field close to the fovea (offset about 3 to 5 degrees).
Figure 7: polychromatic MTF of the human eye at different pupil diameters
The eye's ability to resolve small features is called visual acuity. Young adults can resolve patterns of alternating black and white lines as fine as one arcminute (30 cycles per degree, or 60 PPD); this is the definition of 20/20 vision. A few people can resolve finer patterns (at higher MTF), but most of us see them as uniform gray (MTF below 0.1).
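These acuity units convert into each other with simple arithmetic, sketched below (the helper names are mine): 20/20 vision means a minimum angle of resolution (MAR) of 1 arcminute, i.e. 30 cycles per degree or 60 pixels per degree, and other Snellen values scale linearly with MAR.

```python
# Snellen notation <-> resolvable spatial frequency.

def mar_arcmin(snellen_denominator: float) -> float:
    """Minimum angle of resolution for 20/X vision, in arcminutes."""
    return snellen_denominator / 20.0

def cycles_per_degree(snellen_denominator: float) -> float:
    """Resolvable grating frequency: 60 arcmin/deg over one full cycle (2*MAR)."""
    return 60.0 / (2.0 * mar_arcmin(snellen_denominator))

print(cycles_per_degree(20))   # 30.0 cpd, i.e. 60 ppd (20/20 vision)
print(cycles_per_degree(40))   # 15.0 cpd (20/40 vision)
```

Note the factor of two: a display needs two pixels per cycle (one bright, one dark) to show a grating, which is why 30 cycles/degree corresponds to 60 pixels/degree.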
For all pupil sizes, the photopic MTF at the 20/20 frequency stays above 30%; only scotopic vision, with pupils larger than 5 mm, pushes the MTF lower.
Note that MTF50 (50% of the low-frequency MTF) and MTF50P (50% of the peak MTF) are excellent metrics, but mainly for cameras. The human eye, especially a moving eye, still resolves features well at the 30% MTF level. Unlike a camera, the eye's MTF also drops at low spatial frequencies (a neural rather than purely optical effect).
Because the eye is highly aberrated, larger pupils produce lower MTF, unlike diffraction-limited high-end cameras (whose MTF improves with aperture). Interestingly, as the pupil dilates, higher spatial frequencies can actually still be resolved, so better than 20/20 acuity can be achieved in dark viewing conditions.
This means the human visual system cannot be reduced to its pure optical properties the way a camera can; it must be treated as a computational imaging system (whose CPU is the visual cortex). The LCA discussed in the previous section is proof of this.
Details of human visual field
Figure 9 illustrates the horizontal extent of the different angular regions of the human binocular visual system. Although the overall horizontal field of view spans more than 220 degrees, in most people the binocular overlap spans only 120 degrees (depending on nose geometry), and stereopsis (the fusion of left- and right-eye images that provides 3D depth cues) is limited to about ±40 degrees.
Figure 8: human field of view (horizontal and vertical)
Figure 9: human binocular field of view with the static fixation area, within which unconstrained eye movement supports sustained fixation and accommodation
The vertical field of view is similar in size to the horizontal one and is tilted downward by about 15 degrees from the standard line of sight.
The human visual field is a dynamic concept, best described with the eyes' range of motion either constrained or unconstrained (unconstrained meaning it causes no eye strain and allows stable fixation and the subsequent accommodation reflex). Although the mechanical range of eye movement is large (±40 degrees), comfortable, unconstrained eye movement that does not trigger the head-rotation reflex is much smaller, covering only about ±20 degrees. This in turn defines the static fixation area, a field of view of 40 to 45 degrees. Figure 9 shows the binocular field of view, i.e., the overlap of the left and right fields.
Figure 10: for state-of-the-art smart glasses and AR, MR and VR headsets, the field of view generally falls within the binocular and static-fixation display areas.
The binocular field of view is a large region, horizontally symmetric and vertically asymmetric: it spans ±60 degrees horizontally over a vertical range from +55 degrees up to -30 degrees down, while the lower central region extends to -60 degrees with a smaller horizontal span of 35 degrees. The white circle is the static fixation area, which also matches the diagonal field of view of most high-end AR/MR devices today (with 100% stereo overlap). In addition, for a given gaze angle, color recognition spans ±60 degrees, shape recognition ±30 degrees, and text recognition ±10 degrees.
5.2 display hardware adapted to human visual system
Figure 11 shows the fields of view of various headsets. Standard VR headsets (Oculus CV1, HTC Vive, PSVR, WMR) all sit around a 110-degree diagonal, while others (Pimax and StarVR) extend to 200 degrees. Large smartphone-panel displays with free-space combiners (Meta 2, DreamGlass, Mira AR, Leap Motion) achieve fields of view up to 90 degrees, while high-end AR/MR systems use microdisplays, such as the Microsoft HoloLens V1 and Magic Leap One. Smart glasses fields of view typically run from 10-15 degrees (Zeiss "tooz" smart glasses, Google Glass, North Focals) through 25 degrees and up to 50 degrees (Vuzix Blade, DigiLens, Optinvent ORA, Lumus DK50, ODG R9).
Figure 11: display field of view and see-through field of view of VR, smart glasses, and AR headsets.
For VR systems, with no see-through requirement, the field of view can be quite large, from 110 to 150 up to 200+ degrees (Pimax or StarVR). For smart glasses, the off-axis display field of view is about 15 degrees, limited by the lateral display arm. For Magic Leap One, the tunneling effect of the round mechanical enclosure reduces the see-through field of view to a cone of about 70 degrees, while the display field of view is 50 degrees. For HoloLens V2, the horizontal see-through field of view matches the natural human field of 220 degrees, while the display's diagonal field of view is 52 degrees, covering most of the fixation area; the upper field houses the sensor bar, the laser/MEMS display engine and the system-board enclosure, while the lower and lateral see-through fields are unobstructed. Another special feature of HoloLens V2 is that the display visor can be flipped up for a completely unobstructed view. When optimizing a headset's optical architecture for a large field of view, the various field regions described in Figure 9 must be considered so as not to over-design the system. By applying such human-centric optimization in the optical design process, we can build systems that match the human visual system in resolution, MTF, pixel density, color depth, and contrast.
5.3 perceptual resolution and field of view
What ultimately matters in an AR/VR system is the field of view and resolution as perceived by the human visual system: resolution here is a perceptual (subjective) specification rather than a scientifically measured (objective) one.
For example, one way to improve perceived resolution without increasing the GPU rendering load is simply to duplicate the pixels of the physical panel. The latest Samsung WMR headset (2018 version) takes this approach: the display pipeline renders and drives the display at 616 ppi, while the effective physical display reaches 1,233 ppi. This has been shown to reduce the screen-door effect and improve perceived resolution.
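The pixel-duplication idea can be sketched in a few lines (my illustration, not Samsung's pipeline): render at the lower resolution, then replicate each pixel 2x2 on the physical panel, so perceived fill factor rises and the screen-door effect shrinks at zero extra rendering cost.

```python
import numpy as np

def duplicate_pixels(frame: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x upscale by repeating rows and columns."""
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

low = np.arange(6).reshape(2, 3)      # stand-in for a rendered frame
high = duplicate_pixels(low)
print(high.shape)  # (4, 6)
```

The replicated frame carries no new information; the gain is purely perceptual, which is exactly the point the article makes about resolution being a subjective specification.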
The perceived field of view can also be subjective, especially for AR systems. Display quality (high MTF, high resolution, absence of screen-door and mura effects, reduced aliasing and motion blur) helps a display feel larger than comparable architectures. The user's perception of the field of view combines the natural see-through field with the quality of the virtual image.
This article is an introduction to the EDO19 conference on digital optical technologies (dedicated to digital optics for AR/VR and MR systems, including display, imaging, and sensing). We have introduced the field of AR/VR/MR optics and reviewed the most important criterion in AR/VR/MR optical design: matching optical performance to the human visual system. All of the digital optical elements presented at this conference serve that goal: free-space optics, waveguide combiners, diffractive, holographic and metasurface optics, tunable, switchable and reconfigurable optics, and novel computational techniques for display and imaging.