Inference Accelerator Card R100
Inference Accelerator Card R200
Inference Accelerator Card R420
PCIe
The R100 is built on the XPU-R architecture developed by the GPU customer and inherits the mature software stack of the customer's second-generation chip, greatly reducing users' adaptation and deployment costs. As a low-power, small-form-factor acceleration card, the R100 can be widely used in edge inference scenarios such as smart retail, intelligent transportation, smart parks, and intelligent manufacturing to deliver more efficient AI inference computing.
The GPU customer's second-generation AI chip is suited to high-performance inference scenarios in data centers and fully supports artificial intelligence tasks such as natural language processing, computer vision, speech, and traditional machine learning.
Inference Accelerator Card RG800
Inference Accelerator Card PT200-M1
Inference Accelerator Card 106B
PCIe
Based on the GPU customer's self-developed XPU-R architecture, the RG800 is an AI acceleration card for data center application scenarios. It can be used both for routine model training and for multi-service concurrent, high-performance AI inference applications, helping industries of all kinds reduce costs, improve efficiency, and advance intelligent industrial upgrading.
The PT200-M1 is an AI accelerator module based on the GPU customer's second-generation chip and supports PCIe 5.0. As an AI acceleration card positioned for data center application scenarios, it can be used for training conventional models.
The 106B is a PCIe board with a peak power consumption of 300 W. It brings flexible deployment of powerful general-purpose computing power to the PCIe GPU servers widely used in data centers.