## Research on an image-based virtual reality system - Dr. Wei Xijun

2021-08-03

Abstract: This paper briefly introduces the characteristics and theoretical basis of image-based rendering technology, focusing on the method based on the cylindrical panorama, and realizes a virtual reality system on this basis. With this method, three technical problems are solved: an automatic panorama generation tool, a real-time scene display algorithm, and the design of a panorama browser.

Key words: image-based rendering; plenoptic function; panorama; projection transformation

Traditional graphics rendering operates on a geometric model of the scene, and the rendering process involves complicated hidden-surface removal and illumination calculations. Consequently, rendering realistic images remains difficult on computers with ordinary computing power.

In recent years, image-based rendering (IBR) has emerged, which uses pre-generated images (or environment maps) to synthesize views of a scene from different viewpoints. Compared with traditional geometry-based rendering, it has three distinctive characteristics:

1) Rendering cost is independent of scene complexity and depends only on the resolution of the image to be generated, so the technique can represent very complex scenes;

2) The pre-stored images (or environment maps) can be computer-generated, photographed, or a mixture of the two;

3) The technique makes low demands on computing resources and can display complex scenes in real time on ordinary workstations and personal computers, which meets practical requirements.

1. Image-based graphics rendering technology

The theoretical basis of image-based rendering is the plenoptic function. The plenoptic function is a parameterized function that defines all visible information, at any time and over any wavelength range, from any viewpoint in space. In computer-graphics terms, it describes the set of all possible environment maps of a given scene.

For any point V in space, any line of sight from that viewpoint can be specified by the spherical angles θ and h. If the light wavelength is λ, then at time t the plenoptic function at viewpoint V describes the light arriving along every such ray at any point in the scene; it therefore gives an exact, image-form description of the scene. Substituting a viewpoint (Vx, Vy, Vz), spherical angles θ and h, and a time t into the definition generates one frame: the view from that viewpoint along that direction. This process is a sampling of the plenoptic function, and the resulting view is one sample of it.

The image-based rendering problem can thus be stated as: reconstruct a continuous plenoptic function from a given set of discrete samples of it, then resample the reconstructed function at a new viewpoint position to draw a new view. In other words, image-based rendering is a process of sampling, reconstruction, and resampling of the plenoptic function. By its definition, the general plenoptic function is 7-dimensional, so the amount of image information to be sampled is very large, and it is usually impractical to construct the full function directly. In practical applications, the plenoptic function is reasonably simplified according to the specific application requirements so that the required real-time rendering effect can be achieved.
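Collecting the variables introduced here, the full 7-dimensional plenoptic function can be written out explicitly (a restatement of the definition in the text, using the same symbols):

```latex
P_7 = P(V_x, V_y, V_z, \theta, h, \lambda, t)
```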

By restricting the observer to certain discrete, fixed viewpoints and ignoring the remaining parameters of the 7-dimensional plenoptic function, such as wavelength and time, a 2-dimensional plenoptic function is obtained:

P = P(θ, h)

This formula is the simplest representation of the plenoptic function, also known as the panorama-set method. This paper uses cylindrical panoramas to realize a practical virtual reality system.
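Under this 2-D simplification, "evaluating" P(θ, h) amounts to looking up a pixel of a stored panorama. The sketch below illustrates that sampling step; the array layout (rows = vertical angle, columns = horizontal angle) and the vertical field of view are assumptions of this example, not specifics given in the paper.

```python
import numpy as np

def sample_plenoptic(panorama, theta, h, fov_v=np.pi / 2):
    """Sample the 2-D plenoptic function P(theta, h) from a stored
    cylindrical panorama (assumed layout: rows span the vertical
    angle h, columns span the horizontal angle theta)."""
    height, width = panorama.shape[:2]
    # Horizontal angle theta in [0, 2*pi) maps linearly to a column.
    u = int((theta % (2 * np.pi)) / (2 * np.pi) * width) % width
    # Vertical angle h in [-fov_v/2, +fov_v/2] maps linearly to a row.
    v = int((h + fov_v / 2) / fov_v * (height - 1))
    v = min(max(v, 0), height - 1)
    return panorama[v, u]
```

Generating a frame for a given view direction is then just evaluating this function over the grid of (θ, h) rays covered by the view.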

2. A virtual reality system based on cylindrical panorama

2.1 Selection of the scene representation mode

For an image-based virtual reality system, choosing an appropriate scene representation mode is critically important. A scene representation mode is a formal system that expresses scene information unambiguously; it includes the mathematical forms and data structures used, and it is the foundation of the virtual-environment construction and interaction algorithms. A panorama usually represents a scene in one of three modes: on the faces of a cube, on a cylinder, or on a sphere, and the difficulty of acquiring and manipulating each mode differs greatly.

A spherical panorama centered at the viewpoint is, in principle, ideal for describing a scene. However, the spherical panorama is a non-uniformly sampled representation, which distorts the scene, especially near the poles, and spherical projection lacks a representation well suited to computer storage. The cube panorama overcomes some of these defects, but it is very difficult to acquire from non-computer-generated images: a cube panorama consists of six images, each with a 90° field of view, and stitching them requires accurate camera positioning. Moreover, because plane projection is also non-uniform, over-sampling occurs at the edges and corners of the cube. For these reasons, the system designed here represents the scene with a cylindrical panorama, which is easy to acquire and from which the view along any line of sight is easily generated. In practice, using a bottomless finite cylinder restricts the vertical field of view.
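To make the cylindrical representation concrete, the sketch below maps a viewing ray to pixel coordinates (u, v) on a cylindrical panorama. The coordinate conventions (z up along the cylinder axis, u as the column of the horizontal angle, image origin at the top) are assumptions of this illustration, not details fixed by the paper.

```python
import math

def ray_to_cylinder(dx, dy, dz, width, height, radius=1.0, v_scale=1.0):
    """Project a viewing ray (dx, dy, dz) from a point on the cylinder
    axis onto a cylindrical panorama of size width x height.

    Assumed conventions: dz points up along the cylinder axis, (dx, dy)
    span the horizontal plane, and row 0 is the top of the image."""
    theta = math.atan2(dy, dx) % (2 * math.pi)   # horizontal angle of the ray
    u = theta / (2 * math.pi) * width            # panorama column
    r = math.hypot(dx, dy)                       # horizontal component length
    y = radius * dz / r                          # height where the ray hits the wall
    v = height / 2 - y * v_scale                 # panorama row
    return u, v
```

Note that a ray with dz much larger than its horizontal component lands far outside the image: this is exactly the restricted vertical view of the bottomless finite cylinder mentioned above.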

2.2 Key Technologies

To implement a virtual reality system using panorama, the following technical problems must be solved:

· Automatic panorama generation tool;

· Real-time scene display algorithm;

· Design of panorama browser.

2.2.1 Automatic panorama generation tool

Building an automatic panorama generation tool is the basis of the system's implementation. In this system, a series of overlapping photographs is taken with an ordinary camera and projected onto a standard projection surface: a cylinder. These cylindrical projections are then automatically and seamlessly stitched together to form the panorama at that viewpoint. The seamless stitching of the cylindrical projections uses a feature-based matching algorithm: instead of matching raw pixel values directly, symbolic features are derived from the pixel values and matched, so the algorithm is relatively stable under contrast and significant illumination changes. Because matching reduces to simple comparisons of feature attributes, the algorithm is also fast.
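The paper does not specify which features its matcher uses, so the sketch below is a much-simplified stand-in for the idea: derive a per-column feature from the pixel values, then find the overlap between two adjacent cylindrical projections by normalized cross-correlation of those features, which is insensitive to affine illumination differences between the shots.

```python
import numpy as np

def column_features(img):
    """Derive a simple per-column feature from the pixel values (here
    just the column mean); the paper's actual feature set is not
    specified, so this is an illustrative placeholder."""
    return img.astype(float).mean(axis=0)

def estimate_overlap(left, right, min_shift=5, max_shift=50):
    """Estimate how many columns two adjacent projections overlap by
    matching the trailing features of the left image against the
    leading features of the right one."""
    a, b = column_features(left), column_features(right)
    best_shift, best_score = min_shift, -np.inf
    for s in range(min_shift, min(max_shift, len(a), len(b)) + 1):
        x = a[-s:] - a[-s:].mean()       # trailing columns of the left image
        y = b[:s] - b[:s].mean()         # leading columns of the right image
        denom = np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
        score = np.dot(x, y) / denom     # normalized cross-correlation
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

Because each candidate overlap is compared after removing its mean and normalizing, a brightness or contrast change between the two photographs does not change the estimated overlap, mirroring the stability claimed for the feature-based matcher.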

2.2.2 Real-time scene display algorithm

Given the panorama at a viewpoint, generating the scene along any line of sight is a reprojection from the panorama onto the new view plane.

Let (u, v) denote a pixel of the cylindrical panorama, (x, y) the pixel coordinates of its projection onto an intermediate vertical plane, and (x', y', z') the pixel coordinates on the view plane. Let R be the radius of the cylinder, D the distance from the center of the cylinder to the view plane, and D' the distance from the central axis of the cylinder to the projection of the view plane on the vertical plane. From the geometric relationship between the cylindrical panorama and the view plane, the reprojection along any line of sight, for any view plane, can be derived in two steps as follows:

Step 1. First, project the cylindrical panorama onto the vertical plane. From the geometric relationship, when rotating horizontally around the viewpoint, for a given x (that is, for any vertical scan line) the value of u is fixed, and so is the factor v'. The value of v can then be obtained by multiplying v' by y. To speed up processing, v' can be precomputed and stored in a table, one entry per scan line.

Step 2. Project the vertical plane onto the desired view plane using the geometric relations. Similarly, the geometry shows that when rotating vertically, z' is fixed for any horizontal scan line, so projecting a horizontal scan line from the vertical plane onto the view plane is a simple scaling: only one division is needed per horizontal scan line. The processing speed of the system is thereby greatly improved.
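The per-scan-line invariants of Step 1 can be sketched as lookup tables. The paper gives the geometry only in words, so the formulas below are one consistent reading of it (viewpoint on the cylinder axis, vertical plane at distance D, cylinder radius R); the pixel-grid conventions are assumptions of this example.

```python
import math

def build_cylinder_tables(plane_width, D, R):
    """Precompute, for each vertical scan line x of the vertical plane,
    the panorama angle u(x) and the factor v'(x) such that v = v'(x) * y.

    A ray from the axis through plane point (x, y) at distance D hits the
    cylinder wall of radius R at height y * R / sqrt(x^2 + D^2), so v'
    depends on x only, exactly the invariant exploited in Step 1."""
    u_table, vp_table = [], []
    half = plane_width / 2
    for col in range(plane_width):
        x = col - half + 0.5             # plane coordinate of this scan line
        u_table.append(math.atan2(x, D))          # horizontal angle u(x)
        vp_table.append(R / math.hypot(x, D))     # vertical scale v'(x)
    return u_table, vp_table

def project_point(u_table, vp_table, col, y):
    """Map a vertical-plane pixel (col, y) back onto the cylinder: one
    table lookup and one multiplication per pixel."""
    return u_table[col], vp_table[col] * y
```

With the tables built once per view-plane geometry, filling a whole column of the vertical plane costs one lookup plus one multiply per pixel, which is the speed-up the text describes.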

2.2.3 Design of panorama browser

With the rapid development of the Internet, it is a natural trend to develop a panorama browser suited to the Internet, so that virtual-environment roaming can be carried out online. The whole system is encapsulated as an ActiveX control; Internet users can simply add this control to their web browser, which can then browse panoramas directly without a specially designed browser. The transmission speed of panoramas over the Internet is the next problem to be solved. In the system's design, real-time display of the scene is essential. Therefore, in addition to the cylinder inverse-transformation acceleration algorithm, the design exploits the fact that the size of the display area and the view-to-window mapping relationship are fixed: the precomputed projection-transformation results are stored in a matrix of the same size as the window. At display time, the system simply looks up this matrix to update the display area, which greatly improves display speed and meets practical requirements.
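The precomputed-matrix idea can be sketched as follows: build, once per window size and view-to-window mapping, a lookup table holding the panorama source index for every window pixel, so each redraw is pure table lookup with no projection math. The function names and the callback interface here are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def build_lookup(window_h, window_w, pano_h, pano_w, view_to_pano):
    """Precompute the panorama source index for every window pixel (the
    'matrix with the same size as the window' described in the text).
    view_to_pano maps a window pixel (row, col) to panorama indices (v, u);
    it is a hypothetical callback standing in for the projection math."""
    lut = np.empty((window_h, window_w, 2), dtype=np.int32)
    for r in range(window_h):
        for c in range(window_w):
            v, u = view_to_pano(r, c)
            lut[r, c] = (v % pano_h, u % pano_w)
    return lut

def redraw(panorama, lut):
    """Per-frame display is then a pure table lookup: no projection math."""
    return panorama[lut[..., 0], lut[..., 1]]
```

The table is rebuilt only when the window size or the view-to-window mapping changes, so the per-frame cost is constant regardless of how complicated the projection is.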

3. Conclusion

The system implements a panorama generation tool that automatically produces the panorama at a given viewpoint and adopts dedicated acceleration algorithms, so that scene generation and display meet the real-time requirements of a virtual reality system. To realize virtual-environment roaming on the Internet, the system combines an ActiveX control with the web browser, giving the panorama browser its network capability, and exploits the invariance of the view-to-window mapping relationship to improve scene display speed, achieving real-time display.