The rapid development of artificial intelligence technology represented by deep learning has brought great changes to computer vision, natural language processing, autonomous driving, and the life sciences. However, as the physical size of transistors approaches its limit, improvements in electronic computing performance are hitting a bottleneck, and new computing paradigms are urgently needed to run large-scale deep neural network models.

Using photons as the carrier of computation offers the advantages of high speed, high throughput, and low power consumption. Optoelectronic processors that implement photonic neural networks and fuse optical and electronic computing, such as optical diffractive neural networks, photonic interference neural networks, and photonic spiking neural networks, have made important progress in artificial intelligence tasks such as speech recognition and image classification. Photonic computing is expected to lead a new artificial intelligence computing paradigm and has aroused widespread interest in frontier information technology research.

However, existing photonic neural networks only handle data in regular forms such as vectors and matrices, and cannot process graphs and other non-Euclidean data structures. Many scientific fields analyze data that lie beyond Euclidean space. Among them, graph-structured data, as a typical representative, can encode rich relationships between nodes in complex systems and has been widely used in action recognition, recommendation systems, transportation networks, chemical and molecular property prediction, and other fields.

Building on the research group's accumulated work in intelligent optoelectronic computing (Nature Photonics 2021 [4], Physical Review Letters 2019 [5]), on June 15, 2022, the team of Academician Dai Qionghai at Tsinghua University and the team of Professor Xiong Hongkai at Shanghai Jiao Tong University published an online research paper in Science Advances titled "All-optical graph representation learning using integrated diffractive photonic computing units". Based on the diffractive photonic computing unit (DPU) and a library of silicon-based optoelectronic devices, the paper proposes a diffractive graph neural network (DGNN) architecture.

DGNN can learn graph node information and structural features entirely in the optical domain, achieving performance comparable to electronic graph neural networks on benchmark datasets such as Cora-ML, Citeseer, and Amazon Photo, and providing new ideas for graph representation learning with integrated photonic computing.

Yan Tao, a PhD student in the Department of Automation at Tsinghua University, and Yang Rui, a PhD student in the Department of Electronic Engineering at Shanghai Jiao Tong University, are co-first authors of the paper; Assistant Professor Lin Xing of the Department of Electronic Engineering at Tsinghua University, Professor Xiong Hongkai of the Department of Electronic Engineering at Shanghai Jiao Tong University, and Professor Dai Qionghai of the Department of Automation at Tsinghua University are co-corresponding authors.



Based on the idea of message passing, DGNN implements a trainable transformation matrix with the DPU to extract node features and generate messages. The messages are passed to adjacent graph nodes, and node features are aggregated through optical waveguides and waveguide coupling. Finally, the topological features of the graph are generated through a multi-head strategy and applied to tasks such as node classification and graph classification. Within the DPU module, a one-dimensional diffractive metasurface structure (metaline) serves as a diffractive layer that modulates the input optical field, and multiple metalines are cascaded to form a diffractive network that performs optical feature extraction.
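For readers less familiar with message passing, the following minimal numerical sketch shows the computation the DPU and waveguide network realize optically: node features are transformed by a trainable matrix (standing in for the DPU), messages from neighbouring nodes are aggregated, and several heads are concatenated. All names, sizes, and values are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

num_nodes, in_dim, out_dim, num_heads = 6, 3, 2, 2
X = rng.normal(size=(num_nodes, in_dim))            # node attributes
A = np.zeros((num_nodes, num_nodes))                # adjacency: 6 nodes, 5 edges (as in Fig. 1A)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A_hat = A + np.eye(num_nodes)                       # include self-loops

def message_passing_head(X, A_hat, W):
    """One head: transform node features, then aggregate over neighbours."""
    messages = X @ W                                # feature extraction (analogue of the DPU transform)
    degree = A_hat.sum(axis=1, keepdims=True)
    return (A_hat @ messages) / degree              # mean aggregation (analogue of waveguide coupling)

heads = [message_passing_head(X, A_hat, rng.normal(size=(in_dim, out_dim)))
         for _ in range(num_heads)]
node_embedding = np.concatenate(heads, axis=1)      # multi-head concatenation
print(node_embedding.shape)                         # (6, out_dim * num_heads)
```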

Each metaline consists of an array of rectangular silicon dioxide grooves etched into the silicon layer of a silicon-on-insulator (SOI) substrate. Each groove is called a meta-atom, and its amplitude and phase modulation of the optical field are determined by the height and width of the rectangular groove. The DPU module can be extended in parallel horizontally to enlarge the receptive field and capture the complex features of any number of adjacent nodes, or extended in parallel vertically to extract higher-dimensional node features and improve learning capability. In addition, compared with other on-chip photonic computing devices, such as interferometers, DPU modules based on the one-dimensional metasurface structure achieve higher integration density.
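A rough one-dimensional sketch of what a cascaded metaline does to a scalar optical field follows: each meta-atom multiplies the incoming field by an amplitude and phase factor (set in the device by the groove's height and width), and the field then propagates to the next metaline. The wavelength, meta-atom pitch, layer spacing, and modulation values below are illustrative placeholders, not the fabricated device parameters.

```python
import numpy as np

wavelength = 1.55e-6          # telecom wavelength (assumed)
pitch = 0.5e-6                # meta-atom spacing (assumed)
n_atoms = 128
z = 20e-6                     # spacing between metalines (assumed)

def propagate_1d(field, dz):
    """1-D angular-spectrum propagation of a scalar field."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(field.size, d=pitch)
    kz = np.sqrt(np.maximum(k**2 - (2 * np.pi * fx) ** 2, 0.0))
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

rng = np.random.default_rng(1)
field = np.ones(n_atoms, dtype=complex)             # uniform input field
for _ in range(3):                                  # three cascaded metalines
    amplitude = rng.uniform(0.8, 1.0, n_atoms)      # per-meta-atom amplitude factor
    phase = rng.uniform(0, 2 * np.pi, n_atoms)      # per-meta-atom phase delay
    field = propagate_1d(field * amplitude * np.exp(1j * phase), z)

intensity = np.abs(field) ** 2                      # what an output detector would read
```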

Figure 1: Schematic diagram of the principle and structure of DGNN. (A) Graph structure with 6 nodes and 5 edges. (B) Graph node message passing mechanism. (C) All-optical graph feature learning based on the on-chip diffraction computing unit. (D) Graph node classification based on the multi-head strategy.

To verify the accuracy and reliability of the method, the authors first applied DGNN to a synthetic stochastic block model (SBM) graph dataset. The DPU generates a two-dimensional neural message from the three-dimensional attributes of each target node, the features of varying numbers of adjacent nodes are aggregated through waveguides to represent the target node, and an output classifier is then trained to perform semi-supervised graph node classification. Using the finite-difference time-domain (FDTD) method and angular spectrum analysis, the authors verify that the classification performance of DGNN surpasses the electronic graph neural network PPRGo and a multi-layer perceptron (MLP) of the same network size.
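The sketch below mimics this experiment numerically: generate a small SBM graph with three-dimensional node attributes, aggregate neighbour features (as the waveguides do optically), and fit a linear classifier on a few labelled nodes. The graph sizes, edge probabilities, and classifier are assumptions chosen for illustration, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(42)
nodes_per_block, num_blocks, attr_dim = 30, 2, 3
p_in, p_out = 0.2, 0.02                              # intra- / inter-block edge probabilities
n = nodes_per_block * num_blocks
labels = np.repeat(np.arange(num_blocks), nodes_per_block)

# Stochastic block model adjacency
prob = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(n)                              # symmetrize, add self-loops

# 3-D node attributes: class-dependent means plus noise
means = rng.normal(size=(num_blocks, attr_dim))
X = means[labels] + 0.5 * rng.normal(size=(n, attr_dim))

# Feature extraction + neighbour aggregation (optical message-passing analogue)
W = rng.normal(size=(attr_dim, 2))                   # 3-D attributes -> 2-D message
H = (A @ (X @ W)) / A.sum(axis=1, keepdims=True)

# Semi-supervised: fit a least-squares classifier on 10% labelled nodes
train = rng.choice(n, size=n // 10, replace=False)
Y = np.eye(num_blocks)[labels]
coef, *_ = np.linalg.lstsq(H[train], Y[train], rcond=None)
accuracy = (np.argmax(H @ coef, axis=1) == labels).mean()
print(f"node classification accuracy: {accuracy:.2f}")
```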

Figure 2: DGNN applied to SBM semi-supervised node classification. (A) Synthetic Stochastic Block Model (SBM) graph dataset. (B) On-chip Diffraction Computing Unit (DPU) structure and optical graph node feature extraction process. (C) Meta-atom structure. (D) Comparison of classification accuracy between DGNN and electronic neural network.

DGNN was then applied to benchmark datasets including Cora-ML, Citeseer, and Amazon Photo, and compared with electronic computing methods such as PCA, MLP, and PPRGo. The results show that:

(1) models that take the graph structure into account greatly outperform models that ignore it;

(2) all-optical inference with DGNN achieves performance comparable to PPRGo;

(3) compared with PPRGo, DGNN achieves a modest accuracy improvement on the Cora-ML dataset, indicating that the optical approach to feature extraction and message passing can be more effective than its electronic counterpart.



Figure 3: DGNN applied to semi-supervised graph node classification on standard datasets (Cora-ML, Citeseer, Amazon Photo).

DGNN was further applied to human skeleton-based action recognition, verifying its ability to learn and classify graph-level features. The authors evaluate on skeleton action videos from the UTKinect-Action3D dataset, in which the skeleton graph contains the position information of 20 joints.

DGNN learns and aggregates the features of all nodes, feeds the graph-level features of the video subsequences into the classifier, and applies a winner-take-all strategy over all video subsequences to obtain the action recognition result. The DGNN architecture achieves 83.3% subsequence accuracy and 90.0% video accuracy, validating the effectiveness of the proposed method for graph-level learning.
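A minimal sketch of that final voting step, assuming each subsequence has already received per-class scores from the graph-level classifier: the subsequence predictions are tallied and the winner-take-all (majority) vote gives the video label. The scores below are made-up placeholders.

```python
import numpy as np

def video_label(subsequence_scores):
    """subsequence_scores: (num_subsequences, num_classes) classifier outputs."""
    per_subseq = np.argmax(subsequence_scores, axis=1)                      # subsequence predictions
    votes = np.bincount(per_subseq, minlength=subsequence_scores.shape[1])  # tally votes per class
    return int(np.argmax(votes))                                            # winner-take-all

scores = np.array([[0.1, 0.7, 0.2],                  # illustrative 3-class scores
                   [0.5, 0.3, 0.2],
                   [0.2, 0.6, 0.2]])
print(video_label(scores))                           # -> 1
```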

Figure 4: DGNN applied to human skeleton action recognition.

It is worth noting that once the DGNN architecture is optimized and physically fabricated, the on-chip optics for graph feature learning are passive, and inference for graph-based AI tasks can proceed at the speed of light, limited only by the input data modulation speed and output detection speed. The theoretical computing speed can reach 82.6 TOPS (tera-operations per second), the energy efficiency 8.26 POPS/W, and the compute density 130 TOPS/mm², improvements of several orders of magnitude over electronic computing. In the future, the computing performance of DGNN can be further improved by increasing the energy transfer efficiency of the DPU.
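As a hedged consistency check on how these reported figures relate (taking them at face value, not re-deriving the paper's analysis), the computing speed divided by the energy efficiency implies the operating power, and the speed divided by the compute density implies the active area:

```python
# Reported figures, taken at face value
speed = 82.6e12            # operations per second (82.6 TOPS)
efficiency = 8.26e15       # operations per second per watt (8.26 POPS/W)
density = 130e12           # operations per second per mm^2 (130 TOPS/mm^2)

power_w = speed / efficiency   # ~0.01 W implied operating power
area_mm2 = speed / density     # ~0.64 mm^2 implied active area
print(power_w, area_mm2)
```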

DGNN provides an efficient way to process graph-structured data and will offer inspiration for future artificial intelligence research based on integrated photonic computing, enabling it to move beyond Euclidean data and find broader applications.


Reviewing Editor: Liu Qing
