In the past 20 years, CPU and network performance has improved 10,000-fold. According to Moore's law, it will improve by another factor of 10,000 over the next 20 years. Over the 40-year design life of today's industrial automation (IA) architectures, compute power will thus grow by an astonishing factor of 100,000,000. It is hard to overstate what that means.
Harnessing this power will determine which companies, which industries, and even which economies win or lose. For today's designers of tomorrow's long-lived systems, enabling intelligent software is the single most important consideration.
In fact, this is already happening. In one industry after another, software is becoming the most valuable part of every system. IA has so far been an exception to this rule. Nonetheless, like self-driving cars and intelligent medical systems, IA can use sensor fusion, fast distributed response, and artificial intelligence (AI) to replace rigid or manual processes with intelligent autonomy.
The architectures now under development are designed to address the problems of the last 20 years, such as work-cell reconfiguration, small-lot production, flexible automation, and vendor interoperability. These problems are easier to solve with flexible software than with rigid specifications. The future belongs to software.
Today's discrete automation systems use a simple, hardware-centric architecture. A programmable logic controller (PLC) connects to equipment over a fieldbus. The PLC controls the equipment and manages upstream connections to higher-level software such as human-machine interfaces (HMIs) and historians. The plant's low-level software reads sensors, executes logic, and drives actuators, performing repetitive operations in a "work cell." The plant consists of a series of these work cells, each with dozens of devices.
There is not much programming in a work cell. Manufacturing engineers or technicians configure a set of devices to implement the cell's function. The design goal is to make work cells easy to assemble without much software effort. Unfortunately, this goal of minimal plant software also forgoes advanced computing and intelligent systems. As one industry leader put it eloquently:
"One of the harsh facts about manufacturing software is that it is not developed by software engineers or computer science professionals. We do not routinely ask electrical engineers to design mechanical systems, nor chemical engineers to design electrical systems, yet we routinely ask mechanical, electrical, and chemical engineers to design and develop software systems." (Brandl, Dennis L. (2012). Plant IT: Integrating Information Technology into Automated Manufacturing. Momentum Press.)
Brandl's "harsh fact" is simply not sustainable. Excellent custom software will displace reliability, performance, and interoperability as the key to competition. That means industrial companies will need highly competitive, professional software teams writing their own code. You can't win a software war with someone else's software.
How do we get to that future? First, we need to understand the available industrial architecture frameworks. We can then combine them to enable software-driven IA.
What are OPC UA and DDS?
The leading industrial architecture frameworks are the OPC Unified Architecture (OPC UA, managed by the OPC Foundation) and the Data Distribution Service (DDS, managed by the Object Management Group, OMG). Both are widely used in industrial systems, though not for the same use cases. DDS has traction in medical systems, transportation, autonomous vehicles, defense, power control, robotics, and oil-and-gas applications. OPC UA is also used in many of these industries, but not for the same applications. Instead, OPC UA is used mostly in discrete automation and manufacturing. In practice, the use cases barely overlap.
OPC UA integrates software supplied by outside sources, such as devices with embedded software, HMIs, and historians. By designating standard interfaces as "companion specifications," it focuses on vendor interoperability. It offers no means of writing custom software, so most systems contain almost no end-user code. DDS, by contrast, provides a generic data model, a key requirement for writing distributed software. It therefore enables teams of programmers to build large distributed systems with extensive custom functionality. Unlike OPC UA, there is little vendor-supplied software.
DDS supports publish-subscribe, and the new OPC UA "PubSub" specification does too. However, OPC UA does not (and will not) do what DDS does. DDS is fundamentally a software-development architecture; OPC UA is not. So the question is not whether to choose DDS or OPC UA. The question is to understand what each does and determine which, or both, your design needs.
Of course, this raises the question: Why are they so different?
DDS evolved as a software-development framework for control systems. The first DDS applications were feedback control systems for intelligent distributed robotics communicating over Ethernet. They used large amounts of custom software written by computer scientists. Nearly all DDS applications integrate AI components or highly intelligent algorithms. DDS squarely targets software teams building intelligent distributed machines.
OPC UA, by contrast, grew up in the factory environment, where, as Brandl points out, there are few software engineers. Its main goal is to help with PLC-centric work-cell design, letting users choose hardware from different vendors without writing software. Work cells endlessly repeat their tasks, but they are not really "intelligent." OPC UA seeks to minimize, not enable, software development.
We must distinguish between integrating existing software components and writing new software. OPC UA supports software integration of modules such as HMIs and historians. It does not, however, provide the means to compose intelligent software modules. It is not a software-development architecture for distributed applications.
What will OPC UA PubSub do?
OPC UA PubSub is a simple way to send information from a publisher to many subscribers. The publisher periodically collects a dataset and writes it to its subscribers. The dataset, pulled from the OPC UA information model, is essentially a list of key-value pairs. On the other end, the subscriber unpacks it and pushes it into its own UA information model. Simple structured data types are also supported.
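To make the key-value dataset idea concrete, here is a minimal sketch in plain Python. It is not the OPC UA API; the model layout, the path naming, and both helper functions are illustrative assumptions. It only shows how a publisher can flatten part of an information model into a list of key-value pairs and how a subscriber can push the received pairs back into its own model.

```python
# Illustrative sketch only, NOT the OPC UA API: a publisher flattens part of
# an information model into a key-value "dataset"; a subscriber unpacks it.

def flatten(model, prefix=""):
    """Pull a dataset (list of key-value pairs) out of a nested model."""
    dataset = []
    for key, value in model.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            dataset.extend(flatten(value, path))   # recurse into sub-nodes
        else:
            dataset.append((path, value))          # leaf becomes one pair
    return dataset

def unflatten(dataset):
    """Push a received dataset back into a nested information model."""
    model = {}
    for path, value in dataset:
        node = model
        *parents, leaf = path.split("/")
        for part in parents:
            node = node.setdefault(part, {})       # rebuild the hierarchy
        node[leaf] = value
    return model

# Hypothetical model fragment for a work cell:
publisher_model = {"Motor1": {"Speed": 1450.0, "Temp": 61.2},
                   "Valve3": {"Open": True}}
dataset = flatten(publisher_model)        # what the publisher writes each cycle
subscriber_model = unflatten(dataset)     # what the subscriber reconstructs
```

The round trip reproduces the model exactly, which mirrors PubSub's lockstep character: every subscriber receives the same pairs in the same format.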
Most users plan to run OPC UA PubSub over UDP transport. It offers a few simple options. It can "chunk" data too large to fit into a network packet (usually only about 1.5 KB). It can resend each message a fixed number of times to improve the odds of delivery. It cannot, however, guarantee reliability by detecting and retransmitting lost messages. Moreover, it enforces lockstep execution; every subscriber must get the same data in the same format. MQTT and AMQP middleware options also support non-real-time message delivery.
Fundamentally, OPC UA PubSub provides a simple mechanism to connect variables across a tightly coupled set of devices. Every device gets the same data at the same time and the same rate. Device interoperability is ensured by using the correct, matching companion specifications. UA PubSub is very new; few applications are deployed.
What does DDS do?
Unlike OPC UA, DDS supports modular software-defined systems with a simple concept: a shared global data space. This simply means that all data appears to be in local memory, "as if" it lived in every device and algorithm. Of course, that is an illusion; all the data cannot be everywhere. DDS works by tracking which applications need which data, knowing when they need it, and then delivering it. As a result, the data any application actually needs appears in its local memory just in time. Applications interact only with the "local" data space, never with each other.
That is what data-centricity means: every device and every algorithm, at any level, in any way, at any time, "instantly" accesses all data "locally." It is best thought of as distributed shared memory, a virtual implementation of a distributed control system's (DCS) shared RAM.
Each DDS module declares the schemas (types) it will exchange through that memory. DDS controls the flows into and out of this structured memory via quality-of-service (QoS) parameters that specify the rate, latency, and reliability of each data flow. There are no servers, no objects, no special locations. Because DDS applications interact only with the shared distributed memory, they are independent of how other applications are written, where they run, and when they execute. It is a simple, naturally parallel software architecture for the entire system.
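The global data space can be sketched in a few lines of Python. This is a toy in-process model, not the DDS API: the class name, topic strings, and sample dictionaries are all invented for illustration. It captures the key property that applications write to and read from named topics in a shared space and never address each other directly.

```python
# Toy in-process model of the DDS "global data space" idea (NOT the DDS API).
from collections import defaultdict

class DataSpace:
    def __init__(self):
        self._history = defaultdict(list)  # topic name -> retained samples
        self._readers = defaultdict(list)  # topic name -> subscriber caches

    def create_reader(self, topic):
        """Register interest in a topic; returns the reader's 'local memory'."""
        cache = []
        self._readers[topic].append(cache)
        return cache

    def write(self, topic, sample):
        """Publish a sample; it appears only in interested readers' caches."""
        self._history[topic].append(sample)
        for cache in self._readers[topic]:
            cache.append(sample)

space = DataSpace()
temps = space.create_reader("MotorTemperature")       # a monitoring app
space.write("MotorTemperature", {"motor": 1, "celsius": 61.2})
space.write("ValvePosition", {"valve": 3, "open": True})  # not delivered above
```

Only the data an application declared interest in lands in its local cache, which is the "just in time, as if local" behavior described above.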
DDS implements a series of functions to support software driven distributed control, including:
·Quality-of-service (QoS) control to decouple software modules
·Redundancy management to improve availability
·Built-in discovery to find the right data sources and sinks
·Type compatibility checking and extensibility to support system evolution
·Scoping and bridging to increase scale
·Security designed for a data-centric architecture
·Transparent routing for consistent top-to-bottom data access
·Data persistence so applications can join and leave at any time
·Reader-specified content filtering for efficiency
·Rate control to eliminate rate coupling
Some key differences
OPC UA PubSub does not provide the core data-centric capabilities of DDS. Let's take a closer look.
OPC UA achieves interoperability in hundreds of cases through device models and companion specifications. OPC UA devices pack trade shows, mounted on demo walls as evidence of interoperability. The message is consistent: plant engineers and technicians can use OPC UA to combine devices into work cells without writing code.
By contrast, no device ships with DDS pre-installed today. That is because DDS itself does not integrate devices; DDS integrates software modules. To add a device to a system, DDS users model the device as software.
Rather than writing a specification for every device permutation, DDS integrates everything through a system data model. Device functions are mapped from each vendor's native API into that data model. DDS vendors provide sophisticated bridging and "data routing" technology. The popular "layered databus" architecture thus lets systems scale by connecting data models across layers. It also means DDS systems can connect to devices, web technologies, and even OPC UA, despite their differing interfaces.
Self-driving cars are revolutionizing the automotive industry. Through a data-centric approach, DDS handles both in-vehicle and control-room use cases. Data routing provides a consistent data model throughout the system to build a reliable, large-scale infrastructure. To become software-driven, the IA industry needs a similar systems approach.
Coupling is a measure of the interdependence between software and system components. Coupling can be obvious, as when a client is bound to a server. It can also be subtle, as when software modules must start in a specific order or execute at the same rate.
Because DDS's data-centric design makes all data appear local, an application is not coupled to any other application. DDS controls its interaction with the data through 21 different QoS policies, including deadline, latency budget, update rate, history, liveliness detection, reliability, durability, ownership, ordering, and filtering. It even translates between evolving data types, as long as they remain close enough to be "compatible."
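Two of these QoS behaviors are easy to sketch. The functions below are hedged illustrations, not the DDS API: a deadline-style check that flags when a publisher's samples arrive later than the contracted period, and a time-based-filter-style rate limit that lets a slow reader drop samples a fast writer produces.

```python
# Hedged sketches of two DDS-style QoS behaviors (names and signatures are
# illustrative, not the standard DDS API).

def missed_deadlines(timestamps, deadline):
    """Count gaps between successive samples that exceed the deadline period."""
    return sum(1 for a, b in zip(timestamps, timestamps[1:]) if b - a > deadline)

def time_based_filter(samples, min_separation):
    """Keep only samples at least min_separation apart (reader-side rate limit)."""
    kept, last = [], None
    for t, value in samples:
        if last is None or t - last >= min_separation:
            kept.append((t, value))
            last = t
    return kept

arrivals = [0.0, 0.1, 0.2, 0.9, 1.0]   # sample arrival times in seconds
late = missed_deadlines(arrivals, deadline=0.5)   # the 0.2 -> 0.9 gap is late
slow = time_based_filter([(0.0, "a"), (0.05, "b"), (0.2, "c")],
                         min_separation=0.1)      # "b" arrives too soon
```

The second function is exactly the decoupling point: the writer's rate and the reader's rate no longer have to match.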
DDS runs transparently on hundreds of platforms and dozens of networks, independent of language, operating system, chip architecture, and network type. As a result, data-centric applications work in parallel and share data transparently without interference. Any coupling is only what the design itself imposes.
In contrast, OPC UA applications communicate directly with one another. In PubSub, for example, every subscriber gets exactly the same data from the publisher at the same rate. Every subscriber gets every data stream, so the whole system also depends on having similar network properties and processor speed and load everywhere. All subscribers must share the same understanding of the data being sent, so versions must match exactly. Each added participant adds a dependency that directly couples the system.
Fundamentally, decoupling lets applications and devices run independently. With only a few subscribers in a work cell, coupling may not matter. Coupling can even be desirable, for example in synchronized, minimum-latency feedback control. In larger systems, coupling is a liability. Loosely coupled systems are easier to scale, test, build, deploy, understand, and maintain with distributed teams. Each source of coupling is a practical problem, and several in combination can be debilitating. A good software architecture avoids imposing coupling unless the application truly requires it.
No matter where an application runs, DDS automatically discovers named "topics" throughout the system. Applications need no special action or prior knowledge to find the data they need.
OPC UA does not support discovery in this sense. Subscribers query a server for the configuration of the publisher containing the data they need. They can also introspect a publisher to see what it can publish. Both are active queries; OPC UA performs no system-wide automatic discovery.
OPC UA and DDS take fundamentally different approaches to security. OPC UA secures the underlying transports. The various PubSub middleware options and client-server each require different security implementations (usually certificates). The connection between an OPC UA client and server is secure, but there is no common way to specify which data is allowed to flow to which clients.
DDS can also secure the underlying transports, but its main approach is to secure the data flows themselves, independent of transport. DDS is fundamentally a data-flow technology, so it can protect and control the flows directly by means of a signed permissions document that spells out the allowed read and write access. This requires no code; security can even be added after the entire system is running.
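The permissions idea can be illustrated with a simplified model. In real deployments this is a signed XML permissions document checked by the middleware; the table, identity names, and topic names below are invented for the sketch, which shows only the core rule: each identity gets explicit read and write grants per topic, with everything else denied.

```python
# Simplified illustration of data-flow access control in the spirit of a DDS
# permissions document (the real mechanism is a signed XML file; identities
# and topics here are hypothetical).

PERMISSIONS = {
    "hmi.station7": {"read": {"MotorTemperature", "AlarmState"}, "write": set()},
    "plc.cell2":    {"read": {"Setpoint"}, "write": {"MotorTemperature"}},
}

def allowed(identity, action, topic):
    """Return True only if this identity has an explicit grant for the topic."""
    grants = PERMISSIONS.get(identity)
    return grants is not None and topic in grants[action]
```

Because the rules live in data rather than code, tightening or extending access means editing (and re-signing) the document, not rebuilding applications.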
DDS supports domains that separate systems, partitions within a system, and transparent routing between subsystems and networks. With transparent routing, data sources can be far away. DDS systems can grow to thousands of applications. DDS provides a unified data model, a single security model, and consistent access to data.
The scalability of OPC UA PubSub is unproven, since deployments are few. UDP-based OPC UA PubSub, however, does not scale beyond the devices on a single network. With MQTT or AMQP, publishers can talk to a cloud server, but not to other OPC UA PubSub subscribers. OPC UA client-server is designed to aggregate work cells into larger plants. OPC UA therefore cannot provide unified, system-wide data access the way DDS does.
OPC UA subscribers can select datasets with filters, but only to ensure they receive the correct data. They can also restrict reception to individual publishers.
DDS has extensive filtering capabilities. QoS matching lets subscribers receive information only from qualified sources. Time-based filters decouple the producer's rate from the consumer's. Content filtering examines the data itself and delivers only samples that match a specification. Together, these filters ensure that the right data is delivered to the right place at the right time, with minimal wasted bandwidth.
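Content filtering can be sketched as follows. Real DDS readers express the filter as an SQL-like expression string; in this hedged illustration a plain Python predicate stands in for that expression, and the reading data is invented.

```python
# Sketch of reader-side content filtering (real DDS uses an SQL-like filter
# expression; a plain predicate stands in for it here).

def content_filter(samples, predicate):
    """Deliver only the samples for which the filter predicate holds."""
    return [s for s in samples if predicate(s)]

readings = [
    {"motor": 1, "celsius": 48.0},
    {"motor": 2, "celsius": 71.5},
    {"motor": 3, "celsius": 66.2},
]
# A subscriber interested only in overheating motors:
hot = content_filter(readings, lambda s: s["celsius"] > 60.0)
```

Because the filter runs on behalf of the reader, samples it rejects need never cross the network to that reader, which is where the bandwidth savings come from.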
Time-Sensitive Networking, or TSN (IEEE 802.1), is a set of standards under development that builds on existing Ethernet designs to deliver data within bounded time (a "synchronous" network).
With TSN, OPC UA PubSub can offer better real-time performance. TSN can also be reliable, so OPC UA's lack of UDP-level reliability becomes less critical. However, because TSN is a form of Ethernet and needs centralized configuration, it is limited to small, single-subnet systems.
DDS was originally developed for real-time control over networks. Its QoS settings optimize use of the underlying network and support everything from slow, lossy links (such as satellite connections) to synchronous transports (such as backplane buses and switch fabrics). On capable hardware, it provides predictable, fault-tolerant, one-to-many delivery with bounded latency.
Combining DDS with TSN enables even more distributed determinism. A DDS system over TSN can pair advanced data access with hard real-time performance. OMG's DDS-over-TSN standard is in progress, scheduled for release in the fourth quarter of 2020.
When should we combine capabilities?
The main OPC UA use case is helping manufacturing engineers build work cells without writing software. It is used to make "things"; it is not the software inside the "things" being made. DDS users, by contrast, are software engineers building applications; DDS is the software that operates things. So, OPC UA is about making things. DDS is about making things work.
The overlap is in intelligent things that make things, also known as smart manufacturing systems. These will soon require sophisticated system software and custom programming.
OMG recently adopted a standard for an OPC UA/DDS gateway. At its most basic level, the gateway takes an OPC UA information model and makes it available in the DDS global data space. The main use case is turning OPC UA-enabled devices into DDS devices: an OPC UA device can simply join a DDS network.
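A sketch conveys the gateway idea, though it is not the OMG standard's actual mapping: the node paths, the topic-naming convention, and the write callback below are all hypothetical. The point is only that UA information-model values can be mirrored into the DDS data space as ordinary topic samples.

```python
# Illustrative sketch of the OPC UA/DDS gateway idea (NOT the OMG standard's
# mapping): UA information-model nodes are mirrored into the DDS global data
# space as topic samples. Paths, topic names, and the callback are invented.

def ua_to_dds(ua_nodes, write):
    """Publish each UA node (browse path -> value) as a sample on a DDS topic."""
    for path, value in ua_nodes.items():
        topic = path.replace("/", ".")   # hypothetical topic-naming convention
        write(topic, value)

published = {}                           # stand-in for the DDS data space
ua_nodes = {"Line1/Drive4/Speed": 1450.0, "Line1/Drive4/Torque": 12.7}
ua_to_dds(ua_nodes, lambda topic, value: published.update({topic: value}))
```

Once mirrored this way, DDS applications see the UA device's data like any other topic, with no knowledge of OPC UA at all.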
Thus, in large intelligent systems, a DDS-based software environment can work with OPC UA devices, gaining both sophisticated software and interoperable devices.
Note that this design offers many opportunities to leverage AI and intelligent software. At higher levels, DDS interfaces with cloud-based intelligence. Next-generation versions of this design could even apply AI or intelligent algorithms to achieve device interoperability.
The future of intelligent machines
Historians will look back on our time and wonder how we managed without intelligent machines. The transition will not be smooth; product lines, companies, and entire national economies are at stake. Brandl's "harsh fact" may be the most dangerous part of automation. That truth will not survive the age of intelligent machines.
Excellent software must therefore become part of the manufacturing technology portfolio. Over the next 20 years, manufacturing system performance will not improve 10,000-fold. Interoperability will not become 10,000 times more valuable. But software will become 10,000 times more capable. That is the inescapable consequence of exponential growth. Any architecture that does not focus on leveraging compute power will be obsolete.
Does industrial automation need software development? Will future manufacturing systems compete primarily on unique user code? The software-driven future is certain.