Jacob VanWagoner, Former Intel yield engineer, now working with silicon x-ray detectors
This is a great question, thanks.

GaAs didn't go away. It just didn't make it into mainstream cheapo electronics; it stuck to the niches where silicon simply can't function. And those niches are quite plentiful, even in consumer devices.

Ultimately the argument in any large-scale manufacture of semiconductors is an economic one. GaAs is expensive, silicon is cheap. If you can make it good enough out of silicon, don't bother trying to do it out of GaAs.

The two biggest advantages of gallium arsenide over silicon in terms of device performance are:

1) Much higher electron mobility -- that is, you can shoot electrons across a transistor faster, get higher gain and higher bandwidth out of transistors, and get better n-type conduction at a lower doping level.

2) Direct band-gap, allowing for direct on-chip generation of light for signaling and detection, which would have effectively allowed for all-optical buses instead of the criss-crossing network of metal lines.
[Image: direct vs. indirect band gap, by Silicon Nanocrystals]
[Image: IBM's new chip using silicon photonics to integrate optical communication technology]
This stuff probably would have come at least a decade ago had GaAs won the semiconductor war.
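To put the mobility advantage of point 1 in rough numbers, here's a low-field transit-time sketch. The mobility values are textbook room-temperature bulk figures for lightly doped material (cm²/V·s), used only for illustration -- short modern devices run near velocity saturation, where the bulk-mobility gap narrows.

```python
# Sketch: low-field transit time t = L / (mu * E) across a channel.
# Mobility numbers below are rough bulk values, not measured device
# data; they're here only to show the scale of the speed advantage.

def transit_time_ps(length_um, mobility_cm2, field_v_per_cm):
    """Transit time in picoseconds (channel length in um, low-field model)."""
    length_cm = length_um * 1e-4
    velocity_cm_per_s = mobility_cm2 * field_v_per_cm  # v = mu * E
    return length_cm / velocity_cm_per_s * 1e12

for name, mu in [("Si (electrons)", 1400), ("GaAs (electrons)", 8500)]:
    t = transit_time_ps(1.0, mu, 1e4)
    print(f"{name}: {t:.2f} ps across 1 um at 10 kV/cm")
```

Roughly a 6x difference in transit time at the same field, which is where the "higher gain and higher bandwidth" claim comes from.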

Apply the right technology, and GaAs can whoop silicon's butt. And in fact there are a number of technologies where GaAs is applied that silicon technology just can't stand up to. For instance, RF/Microwave amplifiers. In consumer electronics, a GaAs-based Heterojunction bipolar transistor is likely at the heart of your high-end WiFi receiver and transmitter, and is almost certainly at the heart of your cell phone's receiver and transmitter.


However, there are a number of disadvantages compared to silicon in terms of device performance and ability to manufacture:

1) GaAs has really poor hole mobility. The key technology in computer chips is CMOS, where an n-type and a p-type transistor together form logic gates. The current through the n and p devices has to be balanced to get a well-functioning device; otherwise you run into difficult timing issues. And balancing the current requires a reasonably close match between electron and hole mobilities, because the gate widths have to be scaled in inverse proportion to the mobilities. In silicon, that ratio is about 2:1 -- that is, the p devices have to be twice as wide as the n devices. That's doable. In GaAs, the ratio is more like 9:1. That's awful.
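A quick sketch of what those width ratios cost: to equalize pull-up and pull-down drive in an inverter, the p-device is widened by the electron/hole mobility ratio. The 2:1 and 9:1 ratios are the rough figures quoted above; exact values depend on process and bias.

```python
# Sketch: area cost of balancing n- and p-device drive in a CMOS inverter.
# The p-device is widened by the electron/hole mobility ratio so that
# pull-up and pull-down currents (and hence rise/fall times) match.
# Ratios are the rough 2:1 (Si) and 9:1 (GaAs) figures from the text.

def balanced_inverter_widths(wn, mobility_ratio):
    """Return (Wn, Wp) for matched drive strength, given the n-device width."""
    return wn, wn * mobility_ratio

for name, ratio in [("Si", 2.0), ("GaAs", 9.0)]:
    wn, wp = balanced_inverter_widths(1.0, ratio)
    print(f"{name}: Wn={wn:g}, Wp={wp:g}, total gate width={wn + wp:g} units")
```

A GaAs inverter sized this way eats over three times the total gate width of its silicon counterpart, before the oxide problems in point 2 even come into play.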

2) GaAs does not have a native insulating oxide. Silicon does. You can literally just stick silicon in a furnace with oxygen flowing over it, and a layer of insulating glass will grow on it with fantastic electrical and optical characteristics, and it will stick really well as the silicon heats up and cools down. GaAs, not so much. You have to perform a chemical vapor deposition to get an oxide or nitride insulating layer onto it, and even then it will be of comparatively poor quality and won't adhere nearly as well, leading to manufacturing defects.

In other words, GaAs makes god-awful MOSFET transistors.
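The thermal-oxide advantage in point 2 is usually modeled with the Deal-Grove relation x² + Ax = B(t + τ). A minimal sketch, with A, B, and τ as illustrative placeholders rather than calibrated process constants -- real values depend on temperature, dry vs. wet O2, and crystal orientation:

```python
import math

# Sketch of the Deal-Grove thermal-oxidation model for silicon:
#   x**2 + A*x = B*(t + tau)
# solved for oxide thickness x at time t. The coefficients below are
# illustrative placeholders, not calibrated process values. GaAs has no
# analogous high-quality thermal oxide, which is the point above.

def oxide_thickness(t_hours, A=0.165, B=0.0117, tau=0.37):
    """Oxide thickness (um) after t hours, via the quadratic solution."""
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t_hours + tau) / A**2) - 1.0)

for hours in (1, 4, 16):
    print(f"{hours:>2} h -> {oxide_thickness(hours):.3f} um of SiO2")
```

Growth is reaction-limited (linear) at first and diffusion-limited (roughly square-root) later, which is why long oxidations give diminishing returns.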

The argument then shifted to using complementary MESFETs instead; a MESFET uses a Schottky barrier rather than a metal-oxide-semiconductor structure to get a low-leakage gate. But anyone who has worked with the manufacture of Schottky devices knows they're not very low-leakage compared to a MOS structure, and the manufacturing consistency on them isn't nearly as good either.
[Image: MESFET cross section, shown by University of Colorado]
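The leakage complaint about Schottky gates can be made concrete with the thermionic-emission model, J_s = A*·T²·exp(−qφ_B/kT). The barrier height and Richardson constant below are illustrative ballpark values, not measured MESFET data:

```python
import math

# Sketch: Schottky-barrier saturation (leakage) current density via the
# thermionic-emission model, J_s = A* T^2 exp(-q*phi_B / kT).
# phi_B and A* below are illustrative ballpark values. The point: gate
# leakage is set by a thermally activated barrier, not by a true
# insulator, so it sits orders of magnitude above MOS gate leakage.

K_B = 8.617e-5  # Boltzmann constant, eV/K

def schottky_js(phi_b_ev, temp_k=300.0, a_star=8.2):
    """Saturation current density in A/cm^2 (A* in A/cm^2/K^2)."""
    return a_star * temp_k**2 * math.exp(-phi_b_ev / (K_B * temp_k))

for phi in (0.7, 0.8, 0.9):
    print(f"phi_B = {phi:.1f} eV -> J_s ~ {schottky_js(phi):.2e} A/cm^2")
```

Note the exponential sensitivity to barrier height -- small process variation in φ_B swings the leakage by orders of magnitude, which is the manufacturing-consistency problem in a nutshell.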

Now that we're down at the nanometer scale in silicon manufacturing, there's yet another argument -- silicon transistors no longer use SiO2 as their primary gate insulator, but rather an exotic high-k dielectric to get enough capacitance for well-performing devices. I suppose that does put a little damper on the "use silicon forever because of the oxide" argument, but the thermal oxide is still used everywhere else on the device.

There's even more to the story. In the effort to make silicon "good enough" for ever faster clock speeds at acceptable power draw limits, many modifications to the silicon MOSFET were made -- more expensive than straight-up silicon MOSFETs but still cheaper than developing GaAs technology over time. This effort gave us:
Uniaxial strain in silicon to push up carrier mobility

High-permittivity (high-k) dielectrics to allow for physically thicker gate insulators without losing capacitance (SiO2 at the same capacitance would be down to a monolayer or two and would get tunneled straight through)

And in the most recent iteration, the use of FinFETs (Intel calls them "Tri-Gate" transistors)
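The high-k item above is usually expressed as equivalent oxide thickness (EOT): a high-k film of physical thickness t matches the gate capacitance of an SiO2 film of thickness t·(k_SiO2/k_high-k). The k = 25 below is an assumed, roughly HfO2-like permittivity, used for illustration only.

```python
# Sketch: equivalent oxide thickness (EOT) of a high-k gate dielectric.
# k = 25 is an assumed HfO2-like permittivity, for illustration only.

K_SIO2 = 3.9  # relative permittivity of SiO2

def eot_nm(t_physical_nm, k_highk):
    """SiO2-equivalent thickness giving the same capacitance per area."""
    return t_physical_nm * K_SIO2 / k_highk

# A ~3 nm high-k film is capacitively equivalent to sub-0.5 nm of SiO2,
# while staying thick enough to suppress direct gate tunneling.
print(f"EOT of a 3 nm high-k film: {eot_nm(3.0, 25.0):.3f} nm")
```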

I'm starting to suspect that straight-up silicon combined with these tricks won't be good enough for 10 nm or 7 nm technology, and that Intel will rely on a silicon substrate with epitaxial layers of GaAs and other III-V semiconductors to make high-electron-mobility transistors (HEMTs). I can't predict the future, but I can tell you that if you look at Intel-sponsored research papers, they'll publish and publish about something, then suddenly stop -- and a few years later it shows up in products.