
Wednesday, February 09, 2005

Nanochips

The First Nanochips
As scientists and engineers continue to push back the limits of chipmaking technology, they have quietly entered the nanometer realm.

Article by: G. Dan Hutcheson
Abstracted by: Dharmesh Tripathi
Overview/Faster, Better Chips

For most people, the notion of harnessing nanotechnology for electronic circuitry suggests something wildly futuristic. In fact, if you have used a personal computer made in the past few years, your work was most likely processed by semiconductors built with nanometer-scale features. These immensely sophisticated microchips, or rather nanochips, are now manufactured by the millions, yet the scientists and engineers responsible for their development receive little recognition.

The recent strides are certainly impressive, but, you might ask, is semiconductor manufacture really nanotechnology? Indeed it is. After all, the most widely accepted definition of that word applies to something with dimensions smaller than 100 nanometers, and the first transistor gates under this mark went into production in 2000. Integrated circuits coming to market now have gates that are a scant 50 nanometers wide. That's 50 billionths of a meter, about a thousandth the width of a human hair.
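A quick back-of-the-envelope check of that comparison, in Python, assuming a typical hair diameter of about 75 micrometers (a figure not given in the article):

```python
# Rough scale comparison: a 50-nanometer transistor gate versus a human hair.
# The hair diameter below (~75 micrometers) is an assumed typical value,
# not a figure from the article.

gate_width_m = 50e-9   # 50 nanometers
hair_width_m = 75e-6   # assumed average human hair diameter

ratio = hair_width_m / gate_width_m
print(f"A hair is roughly {ratio:.0f} times wider than a 50 nm gate")
# Prints ~1500, i.e. the gate is on the order of a thousandth of a hair's width.
```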

Having such minuscule components conveniently allows one to stuff a lot into a compact package, but saving space per se is not the impetus behind the push for extreme miniaturization. The reason to make things small is that it lowers the unit cost for each transistor. As a bonus, this overall miniaturization shrinks the size of the gates, which are the parts of the transistors that switch between blocking electric current and allowing it to pass. The narrower the gates, the faster the transistors can turn on and off, thereby raising the speed limits for the circuits using them. So as microprocessors gain more transistors, they also gain more speed.
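To see the economics concretely, here is a small illustrative sketch. Every number in it (die area, transistor footprint, shrink factor) is a made-up, hypothetical value chosen only to show the trend: shrinking every linear dimension by a factor of about 0.7 roughly doubles the number of transistors that fit on the same piece of silicon.

```python
# Illustrative scaling arithmetic: shrinking linear feature size by a factor k
# increases transistor count per unit area by roughly 1/k**2, so the cost of a
# fixed-size die is spread over many more transistors. All numbers are
# hypothetical, chosen only to show the trend.

def transistors_per_die(die_area_mm2: float, transistor_area_um2: float) -> float:
    """Crude estimate: die area divided by the footprint of one transistor."""
    return die_area_mm2 * 1e6 / transistor_area_um2

die_area = 150.0        # mm^2, hypothetical die
old_footprint = 1.0     # um^2 per transistor at the old node (assumed)
shrink = 0.7            # classic ~0.7x linear shrink per generation

new_footprint = old_footprint * shrink ** 2
old_count = transistors_per_die(die_area, old_footprint)
new_count = transistors_per_die(die_area, new_footprint)

print(f"old node: {old_count:,.0f} transistors per die")
print(f"new node: {new_count:,.0f} transistors per die "
      f"({new_count / old_count:.2f}x as many for the same silicon cost)")
```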

The desire for boosting the number of transistors on a chip and for running it faster explains why the semiconductor industry, just as it crossed into the new millennium, shifted from manufacturing microchips to making nanochips. How it
quietly passed this milestone, and how it continues to advance, is an amazing story of people overcoming some of the greatest engineering challenges of our time.


Straining to Accelerate

The microprocessor that powers the computer on which I typed this text, a Pentium 4, contains some 42 million transistors intricately wired together. How in the world was this marvel of engineering constructed? Let us survey the steps.

Before the chipmaking process even begins, one needs to obtain a large crystal of pure silicon. The traditional method for doing so is to grow it from a small seed crystal that is immersed in a batch of molten silicon. This process yields a
cylindrical ingot--a massive gem-quality crystal--from which many thin wafers are then cut.

It turns out that such single-crystal ingots are no longer good enough for the job: they have too many "defects," dislocations in the atomic lattice that hamper the silicon's ability to conduct and otherwise cause trouble during chip
manufacture. So chipmakers now routinely deposit a thin, defect-free layer of single-crystal silicon on top of each wafer by exposing it to a gas containing silicon. This technique improves the speed of the transistors, but engineers have been pushing hard to do even better using something called silicon-on-insulator technology, which involves putting a thin layer of insulating oxide slightly below the surface of the wafer. Doing so lowers the capacitance (the ability to store electrical charge) between parts of the transistors and the underlying silicon substrate, capacitance that would otherwise sap speed and waste power. Adopting a silicon-on-insulator geometry can boost the rate at which the transistors can be made to switch on and off (or, alternatively, reduce the power needed) by up to 30 percent. The gain is equivalent to what one
gets by moving one generation ahead in feature size.
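The speed argument can be sketched with the standard first-order gate-delay estimate, delay ≈ C·V/I, where C lumps the intrinsic gate capacitance together with the parasitic junction capacitance that silicon-on-insulator largely removes. All of the component values below are assumptions chosen for illustration; only the "up to 30 percent" figure comes from the article.

```python
# First-order CMOS gate-delay sketch: delay ~ C * V / I.
# Silicon-on-insulator shrinks the parasitic (junction-to-substrate) part of C.
# All capacitance, voltage and current values are assumed for illustration.

def gate_delay(c_total_farads: float, vdd_volts: float, i_drive_amps: float) -> float:
    return c_total_farads * vdd_volts / i_drive_amps

c_gate = 1.0e-15            # intrinsic gate capacitance (assumed)
c_junction_bulk = 0.6e-15   # parasitic junction capacitance on bulk silicon (assumed)
c_junction_soi = 0.15e-15   # much smaller parasitic above an insulating layer (assumed)
vdd = 1.2                   # supply voltage in volts (assumed)
i_on = 0.5e-3               # drive current in amps (assumed)

d_bulk = gate_delay(c_gate + c_junction_bulk, vdd, i_on)
d_soi = gate_delay(c_gate + c_junction_soi, vdd, i_on)

print(f"bulk delay: {d_bulk * 1e12:.2f} ps, SOI delay: {d_soi * 1e12:.2f} ps")
print(f"speedup: {(d_bulk - d_soi) / d_bulk:.0%}")  # lands near the ~30% the article cites
```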

IBM pioneered this technology and has been selling integrated circuits made with it for the past five years. The process IBM developed, dubbed SIMOX, short for separation by implantation of oxygen, bombards the silicon with oxygen atoms (or rather, oxygen ions, which have electrical charge and can thus be readily accelerated to high speeds). These ions implant themselves deep down, relatively speaking, where they combine with silicon atoms in the wafer to form a layer of silicon dioxide.

One difficulty with this approach is that the passage of oxygen ions through the silicon creates many defects, so the surface has to be carefully heated afterward to mend disruptions to the crystal lattice. The greater problem is that oxygen implantation is inherently slow, which makes it costly. Hence, IBM reserved its silicon-on-insulator technology for its most expensive chips.

A new, faster method for accomplishing the same thing is, however, gaining ground. The idea is to first form an insulating oxide layer directly on top of a silicon wafer. One then flips the oxidized surface over and attaches it onto another, untreated wafer. After cleverly pruning off most of the silicon above the oxide layer, one ends up with the desired arrangement: a thin stratum of silicon on top of the insulating oxide layer on top of a bulk piece of silicon, which just provides physical support.

The never-ending push to boost the switching speed of transistors has also brought another very basic change to the foundations of chip manufacture, something called strained silicon. It turns out that forcing the crystal lattice of silicon to stretch slightly (by about 1 percent) increases the mobility of electrons passing through it considerably, which in turn allows the transistors built on it to operate faster. Chipmakers induce strain in silicon by bonding it to another
crystalline material--in this case, a silicon-germanium blend--for which the lattice spacing is greater. Although the technical details of how this strategy is being employed remain closely held, it is well known that many manufacturers are adopting this approach. Intel, for example, is using strained silicon in an advanced version of its Pentium 4 processor called Prescott, which began selling late last year.
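That "about 1 percent" can be reproduced from textbook lattice constants (silicon about 5.431 angstroms, germanium about 5.658 angstroms) by assuming the alloy's spacing interpolates linearly between them, as Vegard's law suggests. The germanium fraction in the sketch below is an assumed value for illustration, not a figure any manufacturer has disclosed.

```python
# Estimate the tensile strain in a thin silicon film grown on a relaxed
# silicon-germanium alloy, using a linear (Vegard's-law) interpolation of
# lattice constants. The 25% germanium fraction is an assumed, illustrative value.

A_SI = 5.431   # silicon lattice constant, angstroms
A_GE = 5.658   # germanium lattice constant, angstroms

def strain_on_sige(ge_fraction: float) -> float:
    """Mismatch strain of silicon forced to match a relaxed Si(1-x)Ge(x) lattice."""
    a_alloy = (1.0 - ge_fraction) * A_SI + ge_fraction * A_GE
    return (a_alloy - A_SI) / A_SI

x = 0.25   # assumed germanium content of the underlying alloy
print(f"Ge fraction {x:.0%} -> silicon stretched by about {strain_on_sige(x):.1%}")
# Prints roughly 1%, matching the stretch the article describes.
```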


Honey, I Shrunk the Features

Advances in the engineering of the silicon substrate are only part of the story: the design of the transistors constructed atop the silicon has also improved tremendously in recent years. One of the first steps in the fabrication of transistors on a digital chip is growing a thin layer of silicon dioxide on the surface of a wafer, which is done by exposing it to oxygen and water vapor, allowing the silicon, in a sense, to rust (oxidize). But unlike what happens to the steel body of an old car, the oxide does not crumble away from the surface. Instead it clings firmly, and oxygen atoms required for further oxidization must diffuse through the oxide coating to reach fresh silicon underneath. The regularity of this diffusion provides chipmakers with a way to control the thickness of the oxide layers they create.
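The "regularity" referred to here is usually described by the Deal-Grove model of thermal oxidation, in which the oxide thickness x satisfies x² + Ax = B(t + τ). The sketch below solves that quadratic for a few oxidation times; the coefficients are rough, assumed values for dry oxidation near 1,000 degrees Celsius, not numbers taken from the article.

```python
import math

# Deal-Grove thermal-oxidation model: x**2 + A*x = B*(t + tau).
# Solving the quadratic for x gives oxide thickness as a function of time.
# A and B depend on temperature, ambient and crystal orientation; the values
# below are rough, assumed figures for dry oxidation around 1000 C.

A = 0.165     # micrometers (linear-rate parameter, assumed)
B = 0.0117    # micrometers^2 per hour (parabolic-rate parameter, assumed)
TAU = 0.37    # hours, accounting for any initial oxide (assumed)

def oxide_thickness_um(hours: float) -> float:
    """Oxide thickness after 'hours' of oxidation, from the Deal-Grove quadratic."""
    return (-A + math.sqrt(A * A + 4.0 * B * (hours + TAU))) / 2.0

for t in (0.5, 1, 2, 4):
    print(f"{t:>4} h -> {oxide_thickness_um(t) * 1000:.0f} nm of oxide")
# Growth slows with time because fresh oxygen must diffuse through the
# ever-thicker oxide already in place, which is the behavior the article describes.
```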


Semiconductor manufacturers face daunting challenges as they move to extreme ultraviolet lithography, which reduces the wavelengths (and thus the size of the features that can be printed) by an order of magnitude. The prototype systems built so far are configured for a 13-nanometer wavelength. They are truly marvels of engineering--on both macroscales and nanoscales.
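The order-of-magnitude claim follows from the Rayleigh resolution criterion used throughout lithography: minimum feature ≈ k1 × wavelength / numerical aperture. In the sketch below, the k1 factor and the numerical apertures are assumed, representative values; the 193-nanometer figure is the wavelength of today's deep-ultraviolet tools, and 13 nanometers is the EUV wavelength mentioned above.

```python
# Rayleigh criterion for the smallest printable feature: CD ~ k1 * wavelength / NA.
# The k1 factor and numerical apertures below are assumed, representative values;
# the wavelengths are 193 nm (deep-ultraviolet tools) and 13 nm (the EUV
# prototypes the article mentions).

def min_feature_nm(wavelength_nm: float, k1: float, numerical_aperture: float) -> float:
    return k1 * wavelength_nm / numerical_aperture

duv = min_feature_nm(193.0, k1=0.4, numerical_aperture=0.75)   # assumed k1, NA
euv = min_feature_nm(13.0, k1=0.4, numerical_aperture=0.25)    # assumed k1, NA

print(f"deep-UV limit : ~{duv:.0f} nm")
print(f"EUV limit     : ~{euv:.0f} nm")
# The far shorter wavelength more than offsets the smaller numerical aperture
# of mirror-based EUV optics, allowing much finer features to be printed.
```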

Take, for instance, the equipment needed to project images onto wafers. Because all materials absorb strongly at extreme ultraviolet wavelengths, these cameras cannot employ lenses, which would be essentially opaque. Instead the projectors must use rather sophisticated mirrors. For the same reason, the masks must be quite different from the glass screens used in conventional lithography. Extreme ultraviolet work demands masks that absorb and reflect light. To construct them, dozens of layers of molybdenum and silicon are laid down, each just a few nanometers thick. Doing so produces a highly reflective
surface onto which a patterned layer of chromium is applied to absorb light in just the appropriate places.
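Those few-nanometer layers are set by physics: a molybdenum-silicon stack reflects strongly only when its period satisfies the Bragg condition, roughly half the wavelength at near-normal incidence. A small sketch of that arithmetic follows; the incidence angle is an assumed value.

```python
import math

# First-order Bragg condition for a reflective multilayer mirror:
# wavelength ~ 2 * d * cos(theta), where d is the stack period.
# The incidence angle below is an assumed, illustrative value.

wavelength_nm = 13.0   # EUV wavelength from the article
theta_deg = 6.0        # assumed angle from normal

d = wavelength_nm / (2.0 * math.cos(math.radians(theta_deg)))
print(f"required Mo/Si bilayer period: about {d:.1f} nm")
# Each molybdenum-plus-silicon pair is therefore only a few nanometers thick,
# and dozens of pairs are stacked to build up a usable reflectivity.
```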

As with other aspects of chipmaking, these masks must be free from imperfections. But because the wavelengths are so small, probing for defects proves a considerable challenge. Scientists and engineers from industry, academe and government laboratories from across the U.S. and Europe are collaboratively seeking solutions to this and other technical hurdles that must be overcome before extreme ultraviolet lithography becomes practical.

Once the transistors are completed, millions of capacitors are often added to make dynamic random-access memory, or DRAM. The capacitors used for DRAM have lately become so small that manufacturing engineers are experiencing the same kinds of problems they encounter in fashioning transistor gates. Indeed, here the problems are even more urgent, and the answer appears to be atomic-layer deposition (a technique that builds up a film one atomic layer at a time), which was adopted for the production of the latest generation of DRAM chips.
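A simple parallel-plate estimate, C = ε0 · εr · area / thickness, hints at why the films in these capacitors must be controlled so precisely. Every number below is an assumed illustration; none comes from the article.

```python
# Parallel-plate capacitor estimate, C = eps0 * eps_r * area / thickness,
# applied to a DRAM storage capacitor. All numbers are assumed illustrations.

EPS0 = 8.854e-12   # F/m, permittivity of free space

def capacitance_fF(eps_r: float, area_m2: float, thickness_m: float) -> float:
    return EPS0 * eps_r * area_m2 / thickness_m * 1e15

eps_r = 20.0        # assumed high-permittivity dielectric
area = 0.7e-12      # m^2: effective (folded, three-dimensional) plate area, assumed
thickness = 5e-9    # 5-nanometer dielectric film, assumed

print(f"cell capacitance: about {capacitance_fF(eps_r, area, thickness):.0f} fF")
# A one-nanometer error in film thickness shifts this by roughly 20 percent,
# which is why atomic-layer deposition's layer-by-layer control matters.
```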

New Meets Old

Atomic-layer deposition can also help in the next phase of chip manufacture, hooking everything together. The procedure is to first lay down an insulating layer of glass on which a pattern of lines is printed and etched. The grooves are then filled with metal to form the wires. These steps are repeated to create six to eight layers of crisscrossing interconnections. Although the semiconductor industry has traditionally used aluminum for this bevy of wires, in recent
years it has shifted to copper, which allows the chips to operate faster and helps to maintain signal integrity. The problem is that copper contaminates the junctions, so a thin conductive barrier (one that does not slow the chip down) needs to be placed below it. The solution was atomic-layer deposition.
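Most of copper's speed advantage comes from its lower resistivity, roughly 1.7 microhm-centimeters versus about 2.7 for aluminum, and an interconnect's delay scales with its resistance times its capacitance. The wire geometry in the sketch below is an assumed illustration:

```python
# Compare the resistance of an aluminum and a copper on-chip wire of identical
# geometry. Bulk resistivities are rounded textbook values; the wire dimensions
# are assumed, illustrative numbers.

RHO_AL = 2.7e-8   # ohm-metres, aluminum (rounded textbook value)
RHO_CU = 1.7e-8   # ohm-metres, copper (rounded textbook value)

def wire_resistance_ohms(rho: float, length_um: float, width_nm: float, height_nm: float) -> float:
    area_m2 = (width_nm * 1e-9) * (height_nm * 1e-9)
    return rho * (length_um * 1e-6) / area_m2

length, width, height = 1000.0, 200.0, 400.0   # assumed 1 mm wire, 200 x 400 nm cross-section
r_al = wire_resistance_ohms(RHO_AL, length, width, height)
r_cu = wire_resistance_ohms(RHO_CU, length, width, height)

print(f"aluminum: {r_al:.0f} ohms, copper: {r_cu:.0f} ohms")
print(f"copper cuts wire resistance (and hence RC delay) by {(r_al - r_cu) / r_al:.0%}")
```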

The switch to copper also proved challenging for another reason: laying down copper is inherently tricky. Many high-tech approaches were attempted, but none worked well. Then, out of frustration, engineers at IBM tried an old-fashioned method: electroplating, which leaves an uneven surface and has to be followed with mechanical polishing. At the time, the thought of polishing a wafer--that is, introducing an abrasive grit--was anathema to managers in this industry, which is downright obsessed with cleanliness. Hence, the engineers who originally experimented with this approach at IBM did so without seeking
permission from their supervisor. They were delighted to discover that the polishing made the wafer more amenable to lithographic patterning (because the projection equipment has a limited depth of focus), that it removed troublesome defects from the surface and that it made it easier to deposit films for subsequent processing steps.

The lesson to be learned here is that seemingly antiquated methods can be just as valuable as cutting-edge techniques. Indeed, the semiconductor industry has benefited a great deal in recent years from combinations of old and new. That it has advanced as far as it has is a testament to the ingenious ability of countless scientists and engineers to continually refine the basic method of chip manufacture, which is now more than four decades old.

Will the procedures used for fabricating electronic devices four decades down the road look anything like those currently employed? Although some futurists would argue that exotic forms of nanotechnology will revolutionize electronics by midcentury, I'm betting that the semiconductor industry remains pretty much intact, having by then carried out another dazzling series of incremental technical advances, ones that are today beyond anyone's imagination.
