Applied Co., Ltd. Official Site

We introduce a range of HPC systems tailored to different LLM (Large Language Model) applications.

Introducing models for LLM (Large Language Model) training and models for inference!

For LLM (Large Language Model) training, here it is! APPLIED HPC Deep Type-AS4UX2S8GP-AU019
■ CPU: Xeon Gold 6530 (32 cores / 64 threads / 2.1GHz, Turbo Boost up to 4.0GHz)
■ Memory: 2,048GB (64GB × 32)
■ Storage: 1.92TB SSD + 7.68TB SSD
■ GPU: NVIDIA H100 94GB
■ OS: Ubuntu 22.04 LTS
■ Frameworks: TensorFlow / PyTorch / Chainer (via Docker Desktop)
■ Optical drive: None
■ Power supply: [4 units] 3,000W / 200V, redundant configuration (2+2), 80 PLUS Titanium certified
■ 3-year send-back hardware warranty
16,800,000 yen (tax included)
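As a rough guide to what fits on this system's single H100 94GB, a back-of-the-envelope VRAM estimate can help. The sketch below is a hypothetical rule of thumb, not a vendor sizing tool: it assumes mixed-precision Adam training (roughly 16 bytes per parameter for fp16 weights and gradients plus fp32 master weights, momentum, and variance) and reserves 20% of VRAM for activations and buffers. The function name is illustrative.

```python
def max_trainable_params(vram_gb, bytes_per_param=16, usable_fraction=0.8):
    """Rule-of-thumb ceiling on model size for full fine-tuning.

    Assumes mixed-precision Adam: ~16 bytes/param
    (fp16 weights + fp16 grads + fp32 master weights,
    momentum, and variance). `usable_fraction` reserves
    VRAM for activations and CUDA runtime buffers.
    """
    return vram_gb * 1e9 * usable_fraction / bytes_per_param

# Single H100 94GB: roughly how many parameters fit for full training?
print(f"~{max_trainable_params(94) / 1e9:.1f}B params")  # ~4.7B
```

Techniques such as gradient checkpointing, ZeRO offloading, or LoRA can push well past this naive ceiling, so treat the figure as a conservative starting point.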


If you're looking for a model for LLM (Large Language Model) inference, here it is! APPLIED WST-XW93475XS3Q2TTNVM
■ CPU: Xeon W9-3475X
■ Memory: 256GB (32GB × 8) DDR5-4800
■ Storage: 2TB M.2 NVMe SSD
■ GPU: [2 GPUs] NVIDIA RTX 6000 Ada 48GB GDDR6
■ OS: Ubuntu 22.04 LTS
■ Power supply: [2 PSUs] 1,000W
■ 3-year send-back hardware warranty
5,998,000 yen (tax included)

APPLIED WST-XW73465XS3Q2TTNVM
■ CPU: Xeon W7-3465X
■ Memory: 256GB (32GB × 8) DDR5-4800
■ Storage: 2TB M.2 NVMe SSD
■ GPU: NVIDIA RTX 6000 Ada 48GB GDDR6
■ OS: Ubuntu 22.04 LTS
■ Power supply: [2 PSUs] 1,000W
■ 3-year send-back hardware warranty
4,498,000 yen (tax included)
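For inference, a similar back-of-the-envelope estimate shows what these workstations can serve. The sketch below is a hypothetical rule of thumb, not a vendor sizing tool: it assumes fp16 weights (2 bytes per parameter) and reserves 20% of VRAM for the KV cache and runtime buffers. The function name is illustrative.

```python
def max_inference_params(vram_gb, n_gpus=1, bytes_per_param=2, kv_reserve=0.2):
    """Rule-of-thumb ceiling on model size for fp16 inference.

    fp16 weights cost 2 bytes/param; `kv_reserve` holds back a
    fraction of total VRAM for the KV cache and runtime buffers.
    Assumes the model can be sharded evenly across `n_gpus`.
    """
    usable_bytes = vram_gb * 1e9 * n_gpus * (1 - kv_reserve)
    return usable_bytes / bytes_per_param

# 1x RTX 6000 Ada 48GB vs. the dual-GPU configuration:
print(f"1 GPU: ~{max_inference_params(48) / 1e9:.0f}B params")      # ~19B
print(f"2 GPUs: ~{max_inference_params(48, n_gpus=2) / 1e9:.0f}B")  # ~38B
```

Quantization (e.g. 8-bit or 4-bit weights) raises these ceilings roughly in proportion to the reduction in bytes per parameter, which is one reason the single-GPU model remains practical for mid-size models.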



Applications / Case Studies

Applications of LLM (Large Language Model)

How LLMs (Large Language Models) Are Used in HPC (High-Performance Computing)

PRODUCT

New Product Catalog: Applied HPC 2U Server with the Latest Xeon 6 CPUs

