In the technology industry, the word “innovation” gets thrown around almost as freely as “revolution,” so it can be difficult to tell the hyperbolic from the genuinely exciting. The Linux kernel has been called an innovation, but it has also been called the greatest marvel of modern computing, a giant in a microscopic world.

Marketing and trendiness aside, Linux is arguably the most popular kernel in the open source world, and in its nearly 30 years of life it has introduced some real game-changers.

Cgroups (2.6.24)

As early as 2007, Paul Menage and Rohit Seth added the esoteric control groups (cgroups) feature to the kernel (the current implementation of cgroups was rewritten by Tejun Heo). This new technology was initially used as a way to ensure quality of service for a specific set of tasks.

For example, you could create a control group (cgroup) for all the tasks associated with your web server, another cgroup for routine backups, and another for general operating system requirements. You could then control the percentage of resources each group receives, so that your operating system and web server get the bulk of system resources while your backup process has access to whatever remains.
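
As a concrete sketch of that idea: the kernel exposes cgroups as a filesystem, so putting a process on a CPU budget is just a matter of writing a few files. The snippet below is a minimal illustration, not a production tool; it assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the cpu controller enabled, root privileges, and an invented group name and quota.

```c
/*
 * Minimal sketch: create a "backup" cgroup, cap it at roughly 20% of
 * one CPU, and move the current process into it. Assumes cgroup v2 is
 * mounted at /sys/fs/cgroup and that we are running as root.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_file(const char *path, const char *text)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s", text);
	return fclose(f);
}

int main(void)
{
	char pid[32];

	/* Create the group; the kernel populates its control files. */
	if (mkdir("/sys/fs/cgroup/backup", 0755) != 0 && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	/* "20000 100000": 20 ms of CPU time per 100 ms period, ~20% of a core. */
	if (write_file("/sys/fs/cgroup/backup/cpu.max", "20000 100000\n") != 0)
		return 1;

	/* Move this process (and any children it spawns) into the group. */
	snprintf(pid, sizeof(pid), "%d\n", (int)getpid());
	return write_file("/sys/fs/cgroup/backup/cgroup.procs", pid) != 0;
}
```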

However, cgroups owe their fame today to the role they play as the technology driving the cloud: containers. In fact, cgroups were originally named “process containers,” so it came as no surprise when they were adopted by projects such as LXC, CoreOS, and Docker.

Once the floodgates opened, the word “container” became all but synonymous with Linux, and the microservice-style, cloud-based “app” quickly became the norm. Today, it is hard to get away from cgroups; they are that ubiquitous. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a sensible way, making your computing experience more manageable and more flexible than ever.

For example, you might have Flathub or Flatpak installed on your computer, or you may have used Kubernetes and/or OpenShift at work. Either way, if the term “container” is still unclear to you, experimenting with Linux containers directly is a hands-on way to gain a real understanding of them.

LKMM (4.17)

In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others was merged into the mainline Linux kernel to provide a formal memory model. The Linux Kernel Memory Consistency Model (LKMM) subsystem is a set of tools describing the Linux memory consistency model, and it also produces test cases (aptly named klitmus) for testing.

As systems grow more complex in physical design (more CPU cores, larger caches and more RAM, and so on), it becomes harder for them to know which CPU needs which address space, and when. For example, if CPU0 needs to write data to a shared variable in memory and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written to memory in a particular order, there is an expectation that they are also read in that order, regardless of which CPU or CPUs are doing the reading.
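
As a rough illustration of that CPU0/CPU1 scenario (a sketch using C11 atomics and threads rather than the kernel's own primitives; all names are invented for the example), a release store paired with an acquire load is one way to guarantee that a reader who sees the flag also sees the data written before it:

```c
/* Message passing with C11 atomics: if the reader observes flag == 1,
 * the release/acquire pairing guarantees it also observes data == 42. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int data;              /* the payload, a plain variable   */
static atomic_int flag;       /* the "it's published" signal     */

static int writer(void *arg)  /* plays the role of CPU0 */
{
	data = 42;
	atomic_store_explicit(&flag, 1, memory_order_release);
	return 0;
}

static int reader(void *arg)  /* plays the role of CPU1 */
{
	while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
		;                     /* spin until the writer publishes */
	printf("data = %d\n", data);  /* guaranteed to print 42      */
	return 0;
}

int main(void)
{
	thrd_t w, r;

	thrd_create(&w, writer, NULL);
	thrd_create(&r, reader, NULL);
	thrd_join(w, NULL);
	thrd_join(r, NULL);
	return 0;
}
```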

Even on a single processor, memory management requires a specific order of operations. A simple assignment like x = y requires the processor to load the value of y from memory and then store that value in x; the value cannot be placed in x until the processor has read it from memory. There are also address dependencies: x[n] = 6 requires that n be loaded before the processor can store the value 6.

LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called herd, which defines the constraints imposed by a memory model (in the form of logical axioms) and then enumerates all of the possible outcomes consistent with those constraints.
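
To give a sense of the format, here is a litmus test adapted from those shipped with the kernel in tools/memory-model/litmus-tests. The exists clause asks whether P1 could ever see the flag set and still read the stale value of buf; running it through herd under the LKMM reports that this outcome never occurs:

```
C MP+pooncerelease+poacquireonce

{}

P0(int *buf, int *flag)
{
	WRITE_ONCE(*buf, 1);
	smp_store_release(flag, 1);
}

P1(int *buf, int *flag)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(flag);
	r1 = READ_ONCE(*buf);
}

exists (1:r0=1 /\ 1:r1=0)
```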

Low latency patch (2.6.38)

Long ago, in the days before 2011, if you wanted to do multimedia work on Linux, you had to have a low-latency kernel. This mattered mostly for recording with lots of real-time effects (such as singing into a microphone, applying effects, and hearing your processed voice in your headphones with no noticeable delay). Some distributions, such as Ubuntu Studio, reliably shipped such a kernel, so in practice there was no real obstacle; it was just an important consideration for artists choosing a distribution.

However, if you were not using Ubuntu Studio, or if you needed to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.

And then, with the release of kernel version 2.6.38, that process was all over. The Linux kernel suddenly, as if by magic, had low-latency code built in by default (according to benchmarks, latency dropped by a factor of at least 10). No more downloading patches, no more compiling. Everything just worked, all because of a 200-line patch written by Mike Galbraith.

For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 onward that in 2016 I challenged myself to build a digital audio workstation (DAW) on a Raspberry Pi v1 (model B), and found that it worked surprisingly well.

RCU (2.5)

RCU, or read-copy-update, is a system, defined in computer science, that allows multiple processor threads to read from shared memory. It does this by deferring updates but also marking them as updated, ensuring that readers of the data always see the latest version. In effect, reads can proceed concurrently with updates.

A typical RCU cycle is somewhat like this:

1. Remove the pointer to the data, preventing new readers from referencing it.

2. Wait for existing readers to complete their critical sections.

3. Reclaim the memory space.

Dividing the update phase into removal and reclamation phases means that the updater performs the removal immediately, while deferring reclamation until all active readers are complete (either by blocking until they finish or by registering a callback to be invoked when they do).
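
A schematic sketch of that pattern in kernel style (not a standalone program: struct foo, gp, and the function names are hypothetical, while the rcu_* calls and kfree are the kernel's actual API):

```c
struct foo {
	int value;
};

struct foo __rcu *gp;  /* shared pointer, protected by RCU */

/* Reader: cheap, and it never blocks the updater. */
int read_value(void)
{
	struct foo *p;
	int v;

	rcu_read_lock();              /* enter read-side critical section  */
	p = rcu_dereference(gp);      /* safely fetch the shared pointer   */
	v = p ? p->value : -1;
	rcu_read_unlock();            /* leave read-side critical section  */
	return v;
}

/* Updater: publish a new version, then reclaim the old one. */
void update_value(struct foo *newp)
{
	struct foo *old = rcu_dereference_protected(gp, 1);

	rcu_assign_pointer(gp, newp); /* step 1: unpublish the old data    */
	synchronize_rcu();            /* step 2: wait for existing readers */
	kfree(old);                   /* step 3: reclaim the memory        */
}
```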

Although the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technique.

Collaboration (0.01)

The ultimate answer to the question of what the Linux kernel has innovated will always be collaboration. Call it good timing, call it technical superiority, hacker culture, or just open source, but the Linux kernel, and the many projects it has enabled, is a glowing example of collaboration and cooperation.

And it goes far beyond the kernel. People from all walks of life have contributed to open source, arguably because of the Linux kernel. Linux was, and remains to this day, a major force of free software, inspiring people to bring their code, their art, their ideas, or simply themselves, to a global, productive, and diverse human community.
