Next-generation standard high-end compact edge server "EDG-INT4-G2"
An edge AI computing platform that packs two NVIDIA T4 GPUs into a compact chassis to deliver advanced inference performance.
A box-type computer equipped with two NVIDIA T4 GPUs in a compact chassis measuring 225mm (W) x 292mm (D) x 140mm (H), each GPU carrying 16GB of GDDR6 memory. Each GPU board consumes only 70W, providing excellent performance per watt and the low-latency, high-speed processing required for edge computing. As an edge AI computing platform, the system performs primary processing on data exchanged between the cloud and industrial control system networks and transmits only the necessary data, reducing communication costs, while its rugged chassis allows installation in environments such as factories. Improved latency enables real-time processing. The inference engine, the "NVIDIA T4 Tensor Core GPU," supports FP32, FP16, INT8, and even INT4 precision and delivers up to 40 times the inference performance of a CPU, making the system well suited to AI applications that demand advanced inference, such as security monitoring, digital transformation (DX), and appearance inspection devices/systems.
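As an illustration of how the reduced-precision modes above are typically exercised on a T4, the following is a minimal sketch of building a TensorRT engine from an ONNX model with FP16 enabled (and INT8 optionally). It is not part of the product documentation: "model.onnx" is a placeholder file, and the exact API calls assume a TensorRT 8.x Python environment; details vary by version.

```python
# Hypothetical sketch (TensorRT 8.x assumed): build an inference engine that
# uses the T4's FP16 Tensor Core kernels. "model.onnx" is a placeholder.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:          # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)        # enable FP16 kernels
# INT8 additionally requires calibration data or explicit dynamic ranges:
# config.set_flag(trt.BuilderFlag.INT8)
# config.int8_calibrator = my_calibrator     # hypothetical calibrator object

engine_bytes = builder.build_serialized_network(network, config)
with open("model_fp16.engine", "wb") as f:
    f.write(engine_bytes)
```

The serialized engine can then be loaded by a runtime on either of the two T4s; which precision actually runs per layer is chosen by the builder, so the flags express permission rather than a guarantee.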
Basic information
【Basic Specifications】
- Equipped with two Turing-generation "NVIDIA T4" GPUs
- 6 PoE ports
- 3 LAN ports
- 6 USB 3.0 ports
- 2 COM ports
- 19V to 36V DC input
- 280W (24V) AC adapter included
- 16 DIO channels (8 DI, 8 DO)
- Mounting space for 1 x 2.5" removable disk

【Excellent Heat Dissipation Mechanism for Cooling High-End GPUs and Ensuring Stable Operation】
The NVIDIA T4 is a passively cooled card, so housing it in a compact chassis requires sufficient airflow for heat dissipation; if the GPU temperature rises, thermal throttling can occur and processing speed drops. The "EDG-INT4-G2" ensures stable operation with a GPU blower fan and a mesh-structured exhaust port that efficiently vent the heat generated by the NVIDIA T4 to the outside air.
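Since thermal throttling is the failure mode the cooling design guards against, a deployment might verify airflow headroom by polling the GPUs over NVML. The sketch below uses the pynvml Python bindings; the polling interval, the idea of treating the thermal slowdown flags as an alert condition, and the availability of those constants in a given pynvml release are assumptions for illustration, not part of the product.

```python
# Hedged sketch: poll temperature, power, SM clock, and throttle reasons for
# each GPU via NVML (pynvml). Constant names assume a recent nvidia-ml-py.
import time
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()          # expected to be 2 on this system
    while True:                                  # simple monitoring loop; Ctrl+C to stop
        for i in range(count):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            power_w = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0      # mW -> W
            sm_clock = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_SM)
            reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(h)
            thermal = bool(reasons & (pynvml.nvmlClocksThrottleReasonSwThermalSlowdown |
                                      pynvml.nvmlClocksThrottleReasonHwThermalSlowdown))
            print(f"GPU{i}: {temp} C, {power_w:.1f} W, SM {sm_clock} MHz, "
                  f"thermal throttling: {thermal}")
        time.sleep(5)                            # illustrative polling interval
finally:
    pynvml.nvmlShutdown()
```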
Applications / Example results
"Realizing an Edge AI Computing Platform" By performing primary processing on data exchanged between the cloud and industrial control systems and only communicating the necessary data, we reduce communication costs and enable installation in environments such as factories with a highly durable casing. Real-time processing is achieved through improved latency. The inference engine is equipped with the "NVIDIA T4 Tensor Core GPU," which supports performance from FP32 to FP16, INT8, and even INT4 precision, delivering up to 40 times the performance of a CPU. This performance is suitable for AI systems that require advanced inference capabilities, such as security monitoring, digital transformation (DX), and appearance inspection devices/systems.