To improve the definition and resolution of multi-screen display in video surveillance applications, a real-time FPGA-based video image processing algorithm is proposed. This paper introduces the overall structure of the system, describes the FPGA modules for video image caching and image segmentation, and focuses on the implementation of the bilinear interpolation algorithm required for video output and display. ModelSim simulation results show that the algorithm meets the requirements of the multi-screen display system.

With the spread of networked information, display equipment, as a direct means of obtaining information, plays an indispensable role. To meet users' demand for larger screens and better-organized information display, large-screen splicing technology emerged. The larger image area of a video wall exposes details that are hard to detect on an ordinary display and improves the accuracy of visualization. High-resolution graphics and image processing makes it possible to visualize content that cannot be presented on a single hardware display device.

The splicing controller is the core display-control device in a large-screen system, and its value lies in its processing capacity and stability. A traditional controller built on a computer architecture is only as capable and stable as the computer it runs on: however much it is improved, it gains at most an accumulation of quantity, and a qualitative leap is difficult. By contrast, a splicing controller built on a large-scale FPGA array with a fully embedded hardware architecture integrates high-end image processing technologies such as video signal acquisition, real-time high-resolution digital image processing and two-dimensional high-order digital filtering, and offers strong processing capability.

Against this background, this paper designs an FPGA-based video image processing algorithm that segments, interpolates and magnifies real-time digital video. A parallel processing mechanism guarantees real-time processing and a smooth picture. The images collected by one surveillance camera are displayed on a 2 × 2 LCD splicing screen as a single complete large picture.
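The bilinear interpolation used for magnification can be sketched as a pure-Python software model of the FPGA datapath. This is an illustrative sketch, not the paper's actual implementation; the function name, image sizes and coordinate mapping are assumptions.

```python
def bilinear_resize(src, out_w, out_h):
    """Resize a 2-D list of grayscale pixels with bilinear interpolation."""
    src_h, src_w = len(src), len(src[0])
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        # Map the output coordinate back into source coordinates.
        sy = y * (src_h - 1) / (out_h - 1) if out_h > 1 else 0
        y0 = int(sy)
        y1 = min(y0 + 1, src_h - 1)
        fy = sy - y0
        for x in range(out_w):
            sx = x * (src_w - 1) / (out_w - 1) if out_w > 1 else 0
            x0 = int(sx)
            x1 = min(x0 + 1, src_w - 1)
            fx = sx - x0
            # Weighted sum of the four neighbouring source pixels.
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

if __name__ == "__main__":
    small = [[0, 10], [20, 30]]
    big = bilinear_resize(small, 3, 3)
    print(big[1][1])  # → 15.0, the mean of the four neighbouring pixels
```

On the FPGA the same arithmetic would typically be realized with fixed-point fractions and line buffers rather than floating point.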

Video image processing algorithm based on FPGA

After the video signal collected by the network camera passes through the DVI receiver, data and control signals are sent to the FPGA. The FPGA main control chip segments, interpolates and magnifies the input video; the video output module then sends the processed data to the screens through the DVI interface. As long as the data read speed is higher than the write speed, there is no discontinuity in the image, which achieves the goal of real-time processing.

The DVI interface combines the processed R, G and B digital signals to be displayed by the graphics card with the horizontal synchronization signal (Hsync, line sync) and the vertical synchronization signal (Vsync, field sync). Each pixel is encoded with transition-minimized coding into 10 bits (8 bits of pixel data plus 2 bits of control signal), and the three encoded R, G and B data streams together with the pixel clock are transmitted over four channels as transition-minimized differential signals (TMDS).
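The first, transition-minimising stage of TMDS encoding can be modelled in a few lines. This is a hedged sketch of only the XOR/XNOR stage that maps 8 data bits to 9 bits; the second, DC-balancing stage that produces the final 10-bit symbol is omitted for brevity.

```python
def tmds_minimise(d):
    """Transition-minimising first stage of TMDS encoding (8 bits -> 9 bits)."""
    bits = [(d >> i) & 1 for i in range(8)]
    n1 = sum(bits)
    # Choose XNOR chaining when the byte has many ones, XOR otherwise,
    # so that the serialized symbol has few level transitions.
    use_xnor = n1 > 4 or (n1 == 4 and bits[0] == 0)
    q = [bits[0]]
    for i in range(1, 8):
        if use_xnor:
            q.append(1 - (q[i - 1] ^ bits[i]))
        else:
            q.append(q[i - 1] ^ bits[i])
    q.append(0 if use_xnor else 1)  # bit 8 records which rule was used
    return sum(b << i for i, b in enumerate(q))
```

For example, an all-zero byte has no ones, so the XOR rule applies and the result is `0x100`; an all-one byte takes the XNOR rule and encodes to `0x0FF`.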

When the acquisition-control signal inside the FPGA is 0, the acquisition system stops working and the address generator does not count; when it is 1, the system is in the acquisition state. First, Hsync is allowed to pass only after the rising edge of Vsync arrives, which guarantees that the collected image is a complete frame. When the rising edge of Vsync arrives, all counters and flip-flops are cleared. After the rising edge of Vsync, the field-blanking delay counter counts Hsync; once the field-blanking period has elapsed, images can be collected. The line synchronization counter counts Hsync. Within each counted line, when the rising edge of Hsync arrives, the line-blanking delay counter runs first; after the line blanking, the pixel synchronization counter starts counting. When the line synchronization counter stops, one frame of image acquisition is complete, and the system waits for the next Vsync.
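The counter chain above can be summarized behaviourally: skip the field-blanking lines after Vsync, then within each line skip the line-blanking pixels. The sketch below is a software model under assumed parameters; the blanking widths and frame sizes are illustrative, not the paper's actual timing values.

```python
def grab_frame(raw_lines, v_blank, h_blank, active_h, active_w):
    """Behavioural model of the Vsync/Hsync counter chain.

    raw_lines: one frame of pixels including blanking regions.
    Skips v_blank blanking lines (field-blanking delay), then for each
    active line skips h_blank blanking pixels (line-blanking delay).
    """
    frame = []
    for line_no in range(v_blank, v_blank + active_h):
        line = raw_lines[line_no]
        frame.append(line[h_blank:h_blank + active_w])
    return frame

if __name__ == "__main__":
    # A tiny synthetic frame: 5 lines of 6 pixels each.
    raw = [list(range(i * 10, i * 10 + 6)) for i in range(5)]
    print(grab_frame(raw, v_blank=2, h_blank=1, active_h=2, active_w=3))
```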

The data buffer consists of two SDRAM chips, switched in units of one frame. A ping-pong storage mechanism provides seamless buffering and processing of the data: the "input data selection control" and "output data selection control" switch the two buffers back and forth in step, so the buffered data stream is delivered to the subsequent processing module without pause.
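The ping-pong mechanism can be sketched as a small class with two buffers and one selection bit; the class and method names are illustrative assumptions, not the paper's design.

```python
class PingPongBuffer:
    """Two frame buffers: writes fill one while reads drain the other,
    and the roles swap at each frame boundary."""

    def __init__(self):
        self.buffers = [[], []]
        self.write_sel = 0          # models the "input data selection control"

    def write_frame(self, frame):
        self.buffers[self.write_sel] = list(frame)

    def swap(self):                 # switch roles at a frame boundary
        self.write_sel ^= 1

    def read_frame(self):           # models the "output data selection control"
        return self.buffers[1 - self.write_sel]
```

With this scheme the reader always sees a complete, stable frame while the writer fills the other buffer, which is what makes the downstream processing pause-free.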

Since the output value of a pixel depends only on the corresponding pixel of the input image, point operations can be realized in a pipelined mode that processes each input pixel in turn. Because each pixel is processed independently, point operations are easy to parallelize: the image can be divided into several parts and each part processed separately.
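That divide-and-process property can be demonstrated with a simple point operation; the look-up-table inversion here is just an illustrative example of such an operation.

```python
def point_op(pixels, lut):
    """Apply a point operation (here, a look-up table) to a pixel stream."""
    return [lut[p] for p in pixels]

if __name__ == "__main__":
    invert = [255 - v for v in range(256)]      # an example point operation
    row = [0, 64, 128, 192, 255]
    # Split the pixel stream into two parts, process each independently,
    # then concatenate: the result equals processing the whole stream.
    parts = point_op(row[:3], invert) + point_op(row[3:], invert)
    whole = point_op(row, invert)
    assert parts == whole
```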

The video segmentation module segments and crops a single frame of video into four sub-video pixel streams in complete format and controls the mutual timing of the four sub-videos. Each splicing screen scans its pixels in the same way, progressively, and the display of the sub-video pixels is synchronized in both line and field.
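The 2 × 2 segmentation itself amounts to cutting each frame into four quadrants, one per screen. The sketch below is a software model of that split; on the FPGA it would instead be realized by address generation over the frame buffer.

```python
def split_2x2(frame):
    """Cut one frame into four quadrant sub-frames for a 2 x 2 video wall."""
    h, w = len(frame), len(frame[0])
    hh, hw = h // 2, w // 2
    return [
        [row[:hw] for row in frame[:hh]],   # top-left screen
        [row[hw:] for row in frame[:hh]],   # top-right screen
        [row[:hw] for row in frame[hh:]],   # bottom-left screen
        [row[hw:] for row in frame[hh:]],   # bottom-right screen
    ]
```

Each quadrant would then be magnified by bilinear interpolation to the full resolution of its screen, so that the four screens together show one complete large picture.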
