Otani, the popular Bilibili uploader who restores old footage of Beijing with AI, is back today to take us on another trip through time!

This is old Beijing in 1929. On top of AI colorization, frame interpolation, and resolution upscaling, the footage also keeps the original sound of the era. The old-Beijing flavor couldn't be fuller!

The old Beijing bazaar bustles with people: vendors hawking their wares, onlookers gathering, neighbors chatting.

Drums, flute, sihu, sanxian: a street artisan's tune, "Celebrating the New Year", has all of that authentic flavor.

A dapper young lad at the barber's

He seems very satisfied with his hairstyle. At our age, no special styling is needed; give it a few years and it takes shape all on its own [smug]

A dinner this lively? These days we all have to sit two meters apart!

A surprise sighting of the "stall economy"

In just three minutes of precious footage, you can genuinely feel how people in old Beijing lived 90 years ago. Material life may not look rich, but the joy of the era comes right through the screen. To be exact, the footage records old Beijing between 1927 and 1929 and comes from the film holdings of the University of South Carolina's moving image archive.

This restoration video, too, is the work of Bilibili uploader Otani. He previously used AI to restore footage of old Beijing from 1920 to 1927, which went viral on Bilibili with more than two million views and was covered by CCTV News. Judged purely on restoration quality, though, this release is clearly better at denoising and colorization, and the footage comes with its original sound.

The restoration was released jointly with CCTV News. Within 30 minutes of upload, it had racked up more than 300,000 views on Bilibili, and the danmu (bullet comments) flooded the screen.

How is the AI restoration done?

According to Otani, this restoration uses a new AI technique, DeepRemaster, which performs better than earlier approaches in colorization, frame interpolation, and resolution. The technique was developed by Satoshi Iizuka of the University of Tsukuba and Edgar Simo-Serra of Waseda University, and their paper was accepted at SIGGRAPH Asia 2019, a top venue in computer graphics.

Old footage is generally black and white, with low resolution and quality. Restoring it means raising the resolution, removing noise, and enhancing contrast, among other things. To tackle these tasks jointly, the authors propose the DeepRemaster model.
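For intuition, here is a minimal sketch of those three sub-tasks using classical, fixed operations (my own illustration; DeepRemaster learns them jointly inside one network rather than applying hand-tuned filters):

```python
# Classical stand-ins for the three sub-tasks named above (illustration only;
# DeepRemaster learns these jointly rather than applying fixed filters).
import torch
import torch.nn.functional as F

frame = torch.rand(1, 1, 120, 160)  # one low-res grayscale frame (B, C, H, W)

# 1) raise the resolution
upscaled = F.interpolate(frame, scale_factor=4, mode="bicubic", align_corners=False)
# 2) remove noise (crude box smoothing)
denoised = F.avg_pool2d(upscaled, kernel_size=3, stride=1, padding=1)
# 3) enhance contrast (min-max stretch to the full [0, 1] range)
stretched = (denoised - denoised.min()) / (denoised.max() - denoised.min())
print(stretched.shape)  # torch.Size([1, 1, 480, 640])
```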

Built on a temporal convolutional neural network, it trains a source-reference attention mechanism on video. This attention mechanism can handle any number of reference color images without splitting a long video into segments, so temporal consistency is preserved. Quantitative analysis shows that DeepRemaster's performance improves as the video length and the number of reference color images grow, clearly beating existing restoration models.
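A minimal sketch of the source-reference attention idea, with simplified shapes and module names of my own invention (not the authors' released code):

```python
# Minimal sketch of source-reference attention; shapes and names are
# simplified assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class SourceReferenceAttention(nn.Module):
    """Every video-frame position attends over the features of ALL
    reference color images at once, so no frame-by-frame recurrence
    is needed and long videos need not be segmented."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from grayscale video features
        self.k = nn.Linear(dim, dim)  # keys from reference-image features
        self.v = nn.Linear(dim, dim)  # values: the color information to transfer

    def forward(self, frame_feats, ref_feats):
        # frame_feats: (T, N, dim) -- T frames, N spatial positions each
        # ref_feats:   (R, M, dim) -- R reference images, M positions each
        q = self.q(frame_feats)                   # (T, N, dim)
        k = self.k(ref_feats).flatten(0, 1)       # (R*M, dim)
        v = self.v(ref_feats).flatten(0, 1)       # (R*M, dim)
        attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)  # (T, N, R*M)
        return attn @ v                           # (T, N, dim)

# Any number of reference images R works without splitting the video.
frames = torch.randn(16, 64, 128)  # 16 frames, 64 positions, 128-dim features
refs   = torch.randn(3, 64, 128)   # 3 reference color images
print(SourceReferenceAttention(128)(frames, refs).shape)  # torch.Size([16, 64, 128])
```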

Internal architecture of the DeepRemaster model

Black-and-white frames go in at the model's input. After preprocessing by the temporal convolutional network, the source-reference attention mechanism can combine any number of reference color images to generate the final chroma channels. Throughout this process, source-reference attention lets the model consult similar regions of the reference images when recoloring the video.
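As a rough, hypothetical sketch of that flow (placeholder layers and shapes, not the paper's actual architecture):

```python
# Hypothetical sketch of the flow described above: luminance in, attention
# over reference features, chroma channels out. Layers are placeholders.
import torch
import torch.nn as nn

preprocess = nn.Conv3d(1, 16, kernel_size=3, padding=1)  # temporal-conv restoration stage
attention  = nn.MultiheadAttention(embed_dim=16, num_heads=1, batch_first=True)
to_chroma  = nn.Conv3d(16, 2, kernel_size=1)             # two chroma channels (a, b in Lab)

def colorize(lum, ref_tokens):
    """lum: (1, 1, T, H, W) grayscale clip; ref_tokens: (1, R*M, 16) features
    of R reference color images. Returns an L+ab clip of shape (1, 3, T, H, W)."""
    feats = preprocess(lum)                                  # (1, 16, T, H, W)
    _, C, T, H, W = feats.shape
    tokens = feats.permute(0, 2, 3, 4, 1).reshape(T, H * W, C)
    refs = ref_tokens.expand(T, -1, -1)                      # same references for every frame
    colored, _ = attention(tokens, refs, refs)               # source-reference attention step
    colored = colored.reshape(1, T, H, W, C).permute(0, 4, 1, 2, 3)
    chroma = to_chroma(colored)                              # (1, 2, T, H, W)
    return torch.cat([lum, chroma], dim=1)                   # keep the original luminance

clip = colorize(torch.randn(1, 1, 8, 16, 16), torch.randn(1, 3 * 256, 16))
print(clip.shape)  # torch.Size([1, 3, 8, 16, 16])
```

Generating only the chroma channels while keeping the input luminance is a common colorization design: the reference colors transfer over without disturbing the restored grayscale detail.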

A recurrent convolutional network normally propagates information frame by frame; this cannot be parallelized and builds up a chain of dependencies, so every time a reference color image is consulted, propagation has to restart and temporal correlation is lost. A convolutional network built on the source-reference attention mechanism, by contrast, can use all of the reference information in parallel when processing any frame.
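A toy contrast of the two data flows (illustrative stand-ins only, not either approach's real code):

```python
# Toy contrast: recurrent frame-by-frame propagation vs. parallel attention.
import torch

frames = torch.randn(100, 64)  # 100 frame-feature vectors
refs   = torch.randn(5, 64)    # 5 reference-image feature vectors

# Recurrent style: each step depends on the previous one, so it cannot be
# parallelized, and the link back to the references weakens over time.
state = refs.mean(dim=0)
recurrent_out = []
for f in frames:
    state = 0.9 * state + 0.1 * f  # stand-in for a recurrent update
    recurrent_out.append(state)

# Attention style: every frame attends to ALL references directly in one
# batched operation -- no dependency chain, no drift, fully parallel.
attn = torch.softmax(frames @ refs.T / 64 ** 0.5, dim=-1)  # (100, 5)
parallel_out = attn @ refs                                 # (100, 64)
```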

Comparison of restoration methods

Zhang, Yu, and Vondrick have all run AI restoration experiments on classic films and YouTube videos, with notable results. To verify DeepRemaster's restoration performance, the authors compare it against their methods.

First, the comparison with Zhang and Yu. As restoration targets, the authors randomly sampled 300 videos from the YouTube-8M dataset; the reference color images were cut from the source video, one every 60 frames.
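The sampling scheme itself is simple; a sketch, assuming the frames of one source video are just a list:

```python
# Sketch of the reference setup described above: reference color images are
# frames cut from the source video, one every 60 frames.
def sample_references(video_frames, step=60):
    """Return every `step`-th frame to serve as a reference color image."""
    return video_frames[::step]

frames = list(range(1800))        # e.g. a 60-second clip at 30 fps
refs = sample_references(frames)  # frames 0, 60, 120, ...
print(len(refs))                  # 30 reference images
```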

Denoising: in the restoration results, the present method has a clear edge in denoising. The first column shows the original frame with noise defects; the two baseline columns barely touch the noise, while the DeepRemaster column cleans it up with high fidelity, leaving a result almost indistinguishable from the real image.

Colorization: the first column in the figure is the original frame, the middle columns are the colorization results of the different methods, and the last column is the reference color image. DeepRemaster's colors are nearly identical to the reference, so the model built on source-reference attention colorizes better.

In addition, the authors combined the restoration methods of Zhang and Vondrick and compared against that as well. With the reference color image at the top, frames 5, 85, and 302 are restored in turn; the results again show that the present method colorizes better.

