Aiming to become the global leader in chip-scale photonic solutions by deploying Optical Interposer technology to enable the seamless integration of electronics and photonics for a broad range of vertical market applications

Message: AMD Infinity Architecture & Infinity Fabric™. High Speed Chiplet Interconnect
(at least we know what a Pensando is now)

AMD Infinity Fabric™ Technology

AMD Instinct™ Accelerators

The AMD Instinct™ MI200 accelerators, designed with the AMD CDNA™ 2 Architecture, incorporate AMD Infinity Architecture, offering an advanced platform for tightly connected GPU systems so workloads can share data quickly and efficiently.

  • Advanced P2P connectivity with up to 8 intelligent 3rd Gen AMD Infinity Fabric™ Links
  • Up to 800 GB/s of peak total theoretical P2P I/O bandwidth
  • 2.4x the GPU P2P theoretical bandwidth of previous gen AMD Instinct™ GPU compute products

https://www.amd.com/en/technologies/infinity-architecture
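The quoted peak figures imply some simple per-link arithmetic. A back-of-the-envelope sketch (assuming the 800 GB/s total divides evenly across the eight links, which AMD's footnoted methodology may define differently):

```python
# Back-of-the-envelope math from the figures quoted above; the per-link
# split is an assumption for illustration, not an AMD specification.
TOTAL_P2P_BW_GBS = 800      # peak total theoretical P2P I/O bandwidth
NUM_IF_LINKS = 8            # 3rd Gen Infinity Fabric links per accelerator
PREV_GEN_FACTOR = 2.4       # claimed uplift over previous-gen Instinct GPUs

per_link = TOTAL_P2P_BW_GBS / NUM_IF_LINKS           # ~100 GB/s per link
prev_gen_total = TOTAL_P2P_BW_GBS / PREV_GEN_FACTOR  # ~333 GB/s implied

print(f"~{per_link:.0f} GB/s per link")
print(f"~{prev_gen_total:.0f} GB/s implied previous-gen total")
```

Under that even-split assumption, each link carries about 100 GB/s, and the 2.4x claim implies roughly 333 GB/s of total P2P bandwidth on the previous generation.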

 

AMD Infinity Fabric™ (IF) is a high-speed intra-host interconnect that can be used to connect
multiple AMD CPUs and GPUs. For scale-up, AMD Infinity Fabric will use a global memory pool
for inter-GPU communication. This provides massive bandwidth within smaller domains to carry model
parallelism traffic. For scale-out, the AMD NIC will support multiple modes that can be used to
connect AMD Infinity Fabric nodes and clusters together over Ethernet to build large domains,
which can improve the performance of AI/ML training networks that require both large data
and pipeline parallelism with AMD assets.

https://www.amd.com/content/dam/amd/en/documents/pensando-technical-docs/article/amd-ai-networking-direction-and-strategy.pdf
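The scale-up vs. scale-out split described above is how hybrid-parallel training jobs are typically mapped onto such a topology: bandwidth-hungry model (tensor) parallelism stays inside the Infinity Fabric domain, while pipeline/data parallelism crosses the Ethernet fabric. A minimal rank-grouping sketch, assuming 8 GPUs per scale-up domain (the function and sizes here are illustrative, not AMD software):

```python
# Hypothetical rank-grouping sketch: model-parallel groups stay inside a
# node (scale-up / Infinity Fabric), pipeline/data-parallel groups span
# nodes (scale-out / Ethernet). All names and sizes are illustrative.

def build_groups(world_size, gpus_per_node):
    """Return (scale_up_groups, scale_out_groups) as lists of global ranks."""
    assert world_size % gpus_per_node == 0
    # Scale-up: ranks on the same node share the Infinity Fabric domain.
    scale_up = [list(range(n * gpus_per_node, (n + 1) * gpus_per_node))
                for n in range(world_size // gpus_per_node)]
    # Scale-out: ranks with the same local index communicate across nodes.
    scale_out = [list(range(i, world_size, gpus_per_node))
                 for i in range(gpus_per_node)]
    return scale_up, scale_out

up, out = build_groups(world_size=16, gpus_per_node=8)
print(up[0])   # ranks 0..7 share one Infinity Fabric domain
print(out[0])  # ranks 0 and 8 talk over the scale-out network
```

In a real job these groups would be handed to a collectives library (e.g. RCCL via a framework's process-group API) so that each traffic class lands on the interconnect sized for it.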

 

Bring Hyperscale DPU Technology to Your Data Center

At least a generation ahead of the competition

https://www.amd.com/en/accelerators/pensando

 
