Aiming to become the global leader in chip-scale photonic solutions by deploying Optical Interposer technology to enable the seamless integration of electronics and photonics for a broad range of vertical market applications

Message: Demo to Meta

I'm having trouble connecting POET's relationship to Meta GenAI with the MTIA.

https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/

The closest I found is below, but there's no mention of any kind of PIC integration or of Celestial AI. Anyone want to shed some light?

The chip provides both thread and data level parallelism (TLP and DLP), exploits instruction level parallelism (ILP), and enables abundant amounts of memory-level parallelism (MLP) by allowing numerous memory requests to be outstanding concurrently...

...The servers that host these accelerators use the Yosemite V3 server specification from the Open Compute Project. Each server contains 12 accelerators that are connected to the host CPU and to one another using a hierarchy of PCIe switches. Thus, the communication between different accelerators does not need to involve the host CPU. This topology allows workloads to be distributed over multiple accelerators and run in parallel. The number of accelerators and the server configuration parameters are carefully chosen to be optimal for executing current and future workloads...
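The quoted passage says accelerator-to-accelerator traffic rides the PCIe switch hierarchy and never involves the host CPU. A minimal sketch of what that means: the snippet below models a hypothetical tree (the blog only says "12 accelerators" and "a hierarchy of PCIe switches"; the two-leaf-switch fan-out and all names here are my own illustrative assumptions, not Meta's actual layout) and shows that the route between any two accelerators stays inside the switch fabric.

```python
# Hypothetical model of the Yosemite V3 topology described above:
# 12 accelerators behind a hierarchy of PCIe switches, so peer-to-peer
# transfers between accelerators never traverse the host CPU.
# The 2-level / 6-per-leaf fan-out is an assumption for illustration.
from itertools import combinations

HOST = "host_cpu"
ROOT_SWITCH = "pcie_switch_root"

# parent map: device -> [upstream node]
topology = {ROOT_SWITCH: [HOST]}
accelerators = []
for leaf in ("pcie_switch_a", "pcie_switch_b"):
    topology[leaf] = [ROOT_SWITCH]
    for _ in range(6):
        accel = f"mtia_{len(accelerators)}"
        topology[accel] = [leaf]
        accelerators.append(accel)

def path_to_root(node):
    """Walk up the PCIe tree from a device to the root switch."""
    path = [node]
    while topology.get(node) and topology[node][0] != HOST:
        node = topology[node][0]
        path.append(node)
    return path

def p2p_route(a, b):
    """Route a -> b: up to the lowest common switch, then back down."""
    up_a, up_b = path_to_root(a), path_to_root(b)
    common = next(n for n in up_a if n in up_b)
    down = up_b[:up_b.index(common)][::-1]
    return up_a[:up_a.index(common) + 1] + down

# Every accelerator pair reaches the other without touching the host CPU.
assert all(HOST not in p2p_route(a, b)
           for a, b in combinations(accelerators, 2))
print(p2p_route("mtia_0", "mtia_7"))
```

Running it prints a route like `['mtia_0', 'pcie_switch_a', 'pcie_switch_root', 'pcie_switch_b', 'mtia_7']`, i.e. switch hops only, which is the property the blog is claiming.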
