Message: GPUs and Photonic Fabric

A couple of comments about David Lazovsky's most recent interview:  https://www.youtube.com/watch?v=GMMGtw7FRM8  

He claims that the cost of controlling memory for a current-generation GPU is $480 per gigabyte, and that a GPU using the Photonic Fabric would reduce that cost to $20. That is a saving of $460 per gigabyte.

An H100 DGX system (8 x NVIDIA H100 GPUs) provides 640 GB of total GPU memory. An H200 DGX system (8 x NVIDIA H200 GPUs) provides 1,128 GB of total GPU memory.

640 GB x $460/GB = $294,400 in savings. 1,128 GB x $460/GB = $518,880 in savings. Either of those systems could be replaced by a new generation of GPU + Photonic Fabric combinations. It may not translate exactly this way in real life, but there is a huge financial incentive for both Nvidia and AMD to be actively engaged with CAI and to compete on the design and development of future AI systems.
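As a rough sanity check on that arithmetic, here is a minimal back-of-envelope sketch in Python. The $480 and $20 per-gigabyte figures are the claims from the interview, and the DGX memory totals are the published figures quoted above; nothing else is assumed.

```python
# Back-of-envelope estimate of per-system savings, using the per-gigabyte
# memory-control costs claimed in the interview.
COST_PER_GB_CURRENT = 480   # $/GB, current-generation GPU (claimed)
COST_PER_GB_FABRIC = 20     # $/GB, GPU + Photonic Fabric (claimed)
SAVING_PER_GB = COST_PER_GB_CURRENT - COST_PER_GB_FABRIC  # $460/GB

systems = {
    "DGX H100 (8 x H100)": 640,    # total GPU memory, GB
    "DGX H200 (8 x H200)": 1128,   # total GPU memory, GB
}

for name, gb in systems.items():
    print(f"{name}: {gb} GB x ${SAVING_PER_GB}/GB = ${gb * SAVING_PER_GB:,}")
# DGX H100 (8 x H100): 640 GB x $460/GB = $294,400
# DGX H200 (8 x H200): 1128 GB x $460/GB = $518,880
```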

------------

While I am at it, a recent post listed three candidates (LWLG, Avicena and POET) for an upcoming award with a long title containing the keyword "platform". LWLG has to come third because it offers a material product, not a platform. Avicena has an interesting platform that makes it a platform competitor to POET in some application areas, but only enough for second place in the voting. If the voters get it right, POET should be the winner.
