When it comes to increasing the efficiency of server technologies or data center applications, the main approach is to offload specific tasks from the CPU, freeing it to focus on core compute work.
Something similar happened in the 2010s, prior to DPUs, when the industry started using GPUs, which until then had been used only for rendering graphics, to benefit from their parallel processing capabilities in high-performance computing and supercomputing. GPUs have since taken a central role in assisting CPUs with artificial intelligence, deep learning, and big data analytics applications. Now, DPUs are becoming one of the three major pillars of computing, as Nvidia CEO Jensen Huang has said, by offloading networking and communication workloads from the CPU.
The DPU is a relatively new technology, but it is already considered one of the three pillars of computing and is expected to be crucial for data centers. Thus, chip manufacturers are working hard to develop their own solutions. Currently, Nvidia is the most popular DPU manufacturer, but Intel, AMD, and Marvell Technology are also developing their own DPUs. Microsoft also recently announced its acquisition of Fungible, a DPU manufacturer. Alongside DPU vendors, software vendors utilizing DPUs include Cloudflare, Fortinet, Palo Alto Networks, and VMware.
What is a DPU?
In basic terms, a DPU (data processing unit) is a programmable processor that specializes in offloading networking and communication tasks from the CPU. DPUs comprise a multi-core software-programmable CPU, a high-performance network interface, and flexible, programmable acceleration engines. The multi-core CPU is typically based on the Arm architecture and is coupled to the other system-on-chip components. The network interface focuses on efficiently parsing, processing, and transferring data to GPUs and CPUs at line rate, the speed of the rest of the network. Finally, the acceleration engines offload work and improve application performance for AI and machine learning, zero-trust security, telecommunications, and storage.
Although DPUs can be used as stand-alone solutions, they are mostly integrated into SmartNICs (smart network interface controllers). In such a system, the CPU handles control-path initialization and exception processing, while the network interface handles all network data-path processing.
What are the common uses of a DPU?
Large-scale projects are using DPUs in their computing clusters to improve performance. In a large server farm, delegating tasks like data transfers, data compression, data storage, data security, and data analytics to DPUs, which are specialized for exactly this work, frees up the CPUs for other workloads.
Especially in data-intensive tasks, such as big data, artificial intelligence, machine learning, and deep learning, DPUs can make a huge difference in performance. Data centers use DPUs to move data between processors, improving computing speed, availability, security, and shareability. Just like GPUs, DPUs plug into the server's PCIe slots. Experts claim that around 30% of CPU processing power is spent handling network and storage functions; offloading that work to DPUs allows CPUs to focus solely on the operating system and system applications.
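As a rough illustration of that claim, the back-of-the-envelope sketch below shows how many cores offloading could effectively free up. The 64-core server is an assumption chosen for illustration; the 30% figure is the experts' estimate quoted above, not a measured value.

```python
# Back-of-the-envelope estimate of CPU capacity reclaimed by a DPU.
# Assumptions (illustrative only): a 64-core server, and ~30% of CPU
# cycles spent on network/storage functions, per the estimate above.
TOTAL_CORES = 64
INFRA_SHARE = 0.30  # fraction of CPU work that a DPU could offload

cores_freed = TOTAL_CORES * INFRA_SHARE
remaining_for_apps = TOTAL_CORES - cores_freed

print(f"Cores effectively freed by offloading: {cores_freed:.1f}")
print(f"Cores already available to applications: {remaining_for_apps:.1f}")
```

Under these assumptions, offloading reclaims the equivalent of roughly 19 cores' worth of compute, which is the kind of headroom that motivates DPU adoption at data center scale.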
What are the most common features of DPUs?
Since DPUs focus on offloading networking tasks from CPUs, their most common features center on those aspects. Most DPUs use PCIe Gen 4 to connect to the server. DPUs also support high-speed connectivity, with at least one 100 Gigabit connection. Most DPUs come with their own custom operating systems, separate from the host system's OS. Due to their nature, DPUs also feature high-speed packet processing, memory controllers supporting DDR4 or DDR5 RAM, accelerators, and multi-core processing. Some DPUs offer extra security features as well.
What are the differences between CPU, GPU, and DPU?
CPUs are the main components of a server and are responsible for the main calculations; no server can operate without one. CPUs are designed with the capacity and flexibility to run any kind of calculation, including complicated instruction cycles. But CPUs can be swamped when they run into a high number of simple but time-consuming tasks.
A GPU helps the CPU in these situations. GPUs have smaller caches and simpler ALUs (arithmetic logic units) and control units, but offer far more cores and higher throughput. GPUs can complete simple but repetitive calculations quickly. That's why GPUs are perfect teammates for CPUs in tasks like big data analysis, machine learning, and AI development.
DPUs, on the other hand, focus only on networking and communication tasks for the CPU. With hardware acceleration and fast network interfaces, they specialize in data analytics, data compression, data security, data storage, and data transfers. Additionally, on larger projects with heavy networking and storage workloads, DPUs can reduce the total cost of the system.