Rising edge-computing performance requirements are complicating memory design, type selection, and configuration, forcing more complex trade-offs across different application markets. Chip architectures are evolving along with new markets, but it is not always clear how data will move between chips, devices, and systems. Data in automotive and AI applications is growing ever more complex, yet chip architectures do not always know which data to prioritize for processing.

This leaves chip designers with a choice: share memory to reduce cost, or add different types of memory to improve performance and cut power consumption.

Overlaying all of this is safety, and requirements differ by market. In automotive, for example, large volumes of data from various image sensors, such as lidar and cameras, must be processed locally. AI chips, meanwhile, aim to improve performance by a factor of 100.

There are several approaches to the memory problem. One is on-chip memory, in which memory is distributed and integrated next to the compute units to minimize data movement. The goal is to break through the memory bottleneck and reduce power by cutting both the load and the number of storage accesses.

“In-memory compute can be analog, digital, or both,” said Dave Pursley, senior principal product manager in Cadence's Digital & Signoff Group. “While the idea of compute-in-memory may be a growing trend, actual implementations of this kind of compute look very different.”

SRAM and DRAM remain the mainstream

Despite changes in the market, on-chip SRAM and off-chip DRAM remain the mainstream. Some experts predicted years ago that DRAM would “die,” but it is still the most economical and reliable choice, offering high density, a simple architecture, low latency, and high performance, along with durability and low power consumption.

DRAM density growth is slowing, but new architectures such as HBM2 increase density vertically by stacking dies rather than using DIMMs. This approach also brings DRAM closer to the processing units.

SRAM, by contrast, is expensive and limited in density, but its high-speed behavior has been proven over many years. The challenge with on-chip memory is whether it should be distributed or shared. In some cases, redundancy must be added to ensure safety.

“All of these requirements will influence the choice of memory type and quantity, and also involve the trade-off between on-chip and off-chip memory, as well as the complexity of the interconnect for accessing each memory,” said Ryan Lim, senior IoT architect at Arm.

Low-power memory is key

A key memory problem is power consumption, which is affected by many factors, including memory type and configuration. For example, a data access in a 7nm memory may burn more power because of RC delay in the wires. That delay also generates heat, which can degrade the integrity of signals entering and leaving the memory.

However, using high-bandwidth memory for slower off-chip data can save power while still matching the speed of high-speed GDDR6. How these decisions are made depends on a number of factors, including the average selling price of the device and the type of memory chosen.

There are also very low-power memories for handheld mobile devices, a category that increasingly includes battery-powered edge devices.

“These memories deliver extremely high performance, which improves the power consumption and data rates achievable in battery-powered equipment,” said Steven Woo, fellow and distinguished inventor at Rambus. “They can also operate in multiple modes. In standby, they consume very little energy, meeting the needs of products such as phones and tablets, and they quickly switch to a higher-performance, higher-power mode when processing is needed.”

Low-power memories also support a variety of packaging options, allowing them to be stacked with a phone's processor to meet the thinness requirements of smartphones, or mounted on a PCB to support the higher-capacity memory configurations of tablets and other consumer devices.

There is no doubt that developing low-power memory is a challenge. “Low-power memories are designed to support a wide range of data rates, and those rates are often quite high for a low-power device,” Woo said. “Development is usually driven by one or two major application markets, because only a market large enough can give birth to a new memory type. Historically, the mobile phone market is the successful example. If you talk to different phone makers, they all want memory with higher performance and better power efficiency, because they want longer battery life. Other companies that want to use low-power memory are glad someone else is doing that work for them.”

Typically, these memories are qualified at several different, but closely spaced, data rates. “One of these memories might run at 4.2 gigabits per second and another at 3.2 gigabits per second,” he explained. “That allows memory manufacturers to grade all of these parts. Some parts do not run at full speed, but manufacturers can still sell them, because some customers want to buy lower-performance memory at a lower price. Binning allows that. The performance of these parts falls within a defined range, and they are all qualified products.”
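The binning process described above can be sketched as a simple sort: each part is tested for the maximum data rate it sustains, then placed in the highest speed grade it qualifies for. The 4.2 and 3.2 Gb/s grades come from the quote; the measured values below are hypothetical.

```python
# Illustrative sketch of speed binning. Parts are tested for their maximum
# stable data rate and assigned the fastest grade they qualify for.
# Grades follow the article's example; test values are hypothetical.
from typing import Optional

SPEED_GRADES_GBPS = [4.2, 3.2]  # highest grade first

def bin_part(measured_gbps: float) -> Optional[float]:
    """Return the fastest grade the part qualifies for, or None if it fails all."""
    for grade in SPEED_GRADES_GBPS:
        if measured_gbps >= grade:
            return grade
    return None  # part fails qualification entirely

# A part measured at 4.1 Gb/s misses the 4.2 grade but still sells as 3.2.
bins = {rate: bin_part(rate) for rate in [4.5, 4.1, 3.3, 2.9]}
```

The key property is that a part that misses the top grade is not scrapped; it falls into the next bin and is sold at a lower price point.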

How does memory affect AI development?

Artificial intelligence figures in nearly every new technology, and memory plays a major role in AI. Very high speed and very low power are always the goal for these chips, but limited die area means that is not always achievable. It does explain why data-center AI chips for training are larger than chips used in edge inference devices. Another approach is to reduce the number of off-chip memory chips to improve data throughput, shorten the distance to memory through design, or limit off-chip data flow.

In any case, the off-chip memory competition largely comes down to two types of DRAM: GDDR and HBM.

“From an engineering and production standpoint, GDDR looks like other types of DRAM, such as DDR and LPDDR,” Woo said. “You can put it on a standard PCB and use a similar manufacturing process. HBM is a newer technology that involves stacking and interposers, because HBM uses many slower connections. Each HBM stack has on the order of a thousand connections, so it requires an interconnect density far beyond what a PCB can handle. That is why some companies are using interposers: the wires can be etched very close together, like on-chip interconnect, and you can get many more connections.”

HBM delivers the highest performance and best power efficiency, but it costs more and demands more engineering time and expertise. With GDDR there are fewer interconnects between the DRAM and the processor, but they run much faster, which stresses signal integrity.
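The wide-and-slow (HBM) versus narrow-and-fast (GDDR) trade-off can be made concrete with a back-of-the-envelope bandwidth calculation. The pin counts and per-pin rates below are representative published figures for HBM2 and GDDR6, used here only for illustration.

```python
# Peak bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8 bits-per-byte.
# Illustrates how HBM reaches high bandwidth through many slow pins while
# GDDR uses few fast pins. Figures are representative, not from the article.

def bandwidth_gb_per_s(bus_width_bits: int, rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a given interface width and per-pin rate."""
    return bus_width_bits * rate_gbps_per_pin / 8

hbm2_stack = bandwidth_gb_per_s(1024, 2.4)  # 1,024 slow pins -> ~307 GB/s
gddr6_chip = bandwidth_gb_per_s(32, 16.0)   # 32 fast pins   -> 64 GB/s
```

Reaching HBM-class bandwidth with GDDR means ganging several devices and driving each pin much harder, which is exactly the signal-integrity pressure the text describes.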

Figure 1: Characteristics of various types of DRAM. Source: Rambus


Power, performance, and area (PPA) remain the key drivers, despite architectural changes and new technologies.

“All three are important, but their weight depends heavily on the application,” said Farzad Zarrinfar, managing director of the IP division at Mentor, a Siemens Business. “For a portable application, for example, power consumption is critical. Power itself splits into two parts, dynamic and static. In wireless communication, where there is a lot of computation, dynamic power matters most. But in a wearable application, where the user cycles between sleeping, waking, exercising, and sleeping again, static/leakage power is what matters.”
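The dynamic/static split Zarrinfar describes follows the standard first-order CMOS power model: switching power scales with capacitance, supply voltage squared, and frequency, while leakage power flows whenever the supply is on. The parameter values below are hypothetical, chosen only to show how each component dominates a different workload.

```python
# First-order CMOS power model. All numeric values are hypothetical
# illustrations, not figures from the article.

def dynamic_power_w(alpha: float, cap_f: float, vdd: float, freq_hz: float) -> float:
    """Switching (dynamic) power: P = alpha * C * Vdd^2 * f."""
    return alpha * cap_f * vdd ** 2 * freq_hz

def static_power_w(leakage_a: float, vdd: float) -> float:
    """Leakage (static) power: P = I_leak * Vdd."""
    return leakage_a * vdd

# A wireless workload computing constantly is dominated by dynamic power;
# a mostly idle wearable accumulates mostly leakage energy over time.
p_dyn = dynamic_power_w(alpha=0.2, cap_f=1e-9, vdd=0.8, freq_hz=1e9)  # 0.128 W
p_stat = static_power_w(leakage_a=1e-3, vdd=0.8)                      # 0.0008 W
```

The V² term is why low-power memories chase lower supply voltages so aggressively: halving Vdd cuts dynamic power by 4x before any other optimization.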

Features such as light sleep let designers sharply reduce leakage. In this mode, memory banks that are not being accessed enter a source-bias state to cut leakage, while the banks being accessed directly continue to operate. In deep sleep, data can be retained through power management by controlling Vdd and minimizing leakage. If data does not need to be retained, a full shutdown mode reduces leakage even further.
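The mode hierarchy above can be sketched as a small lookup: each mode trades leakage against data retention, and a power manager picks the lowest-leakage mode that still meets the retention requirement. The relative leakage numbers are hypothetical placeholders; only the ordering (active > light sleep > deep sleep > off) and the retention property come from the text.

```python
# Sketch of the memory power-mode trade-off described above.
# relative_leakage values are hypothetical; only the ordering and the
# retains_data flags reflect the article.
from dataclasses import dataclass

@dataclass(frozen=True)
class PowerMode:
    name: str
    retains_data: bool
    relative_leakage: float  # fraction of active-mode leakage (hypothetical)

MODES = [
    PowerMode("active",      retains_data=True,  relative_leakage=1.00),
    PowerMode("light_sleep", retains_data=True,  relative_leakage=0.30),
    PowerMode("deep_sleep",  retains_data=True,  relative_leakage=0.05),
    PowerMode("off",         retains_data=False, relative_leakage=0.00),
]

def lowest_leakage_mode(need_retention: bool) -> PowerMode:
    """Pick the lowest-leakage mode consistent with the retention requirement."""
    candidates = [m for m in MODES if m.retains_data or not need_retention]
    return min(candidates, key=lambda m: m.relative_leakage)
```

If retention is required the best choice is deep sleep; if not, shutting the array off eliminates leakage entirely, matching the article's progression.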

Everything related to power efficiency is also crucial in automotive. “In electric vehicles, battery life is critical, so power consumption is critical,” Zarrinfar said. “People want linear characteristics from -40°C to 125°C, and even up to 150°C. They don't want leakage to spike at high temperature; they want to keep it in the linear range as much as possible. So paying attention to power consumption and leakage across the entire temperature range is very important.”

Regardless of the application, power remains the primary consideration. “As SoC designs move to smaller geometries, memory consumes a growing share of the chip, and embedded memory capacity keeps rising,” he said. “Today, we see that more than 50% of the die is memory. So people must pay attention to the power consumption of memory.”


Despite a flood of revolutionary technologies and innovative architectures, memory remains at the core of design. New memory types, such as phase-change and spin-torque, are on the horizon, but most markets still rely on existing memories under a variety of conditions. The biggest change is how designs prioritize, share, select, and ultimately use that existing memory. That may sound like a simple question, but it is not.

“Choosing the right memory solution is often the key to achieving optimal system performance,” Vadhiraj Sankaranarayanan, senior technical marketing manager at Synopsys, noted in a recently published white paper.
