Wednesday, April 20, 2011

MOORE’S LAW

The future of integrated electronics is the future of electronics itself. The advantages of integration will bring about a proliferation of electronics, pushing this science into many new areas.

Integrated circuits will lead to such wonders as home computers (or at least terminals connected to a central computer), automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today. But the biggest potential lies in the production of large systems. In telephone communications, integrated circuits in digital filters will separate channels on multiplex equipment. Integrated circuits will also switch telephone circuits and perform data processing. Computers will be more powerful, and will be organized in completely different ways. For example, memories built of integrated electronics may be distributed throughout the machine instead of being concentrated in a central unit. In addition, the improved reliability made possible by integrated circuits will allow the construction of larger processing units. Machines similar to those in existence today will be built at lower costs and with faster turn-around.

Present and future

By integrated electronics, I mean all the various technologies which are referred to as microelectronics today as well as any additional ones that result in electronics functions supplied to the user as irreducible units. These technologies were first investigated in the late 1950s. The object was to miniaturize electronics equipment to include increasingly complex electronic functions in limited space with minimum weight. Several approaches evolved, including microassembly techniques for individual components, thin-film structures and semiconductor integrated circuits. Each approach evolved rapidly and converged so that each borrowed techniques from another. Many researchers believe the way of the future to be a combination of the various approaches. The advocates of semiconductor integrated circuitry are already using the improved characteristics of thin-film resistors by applying such films directly to an active semiconductor substrate. Those advocating a technology based upon films are developing sophisticated techniques for the attachment of active semiconductor devices to the passive film arrays.

Both approaches have worked well and are being used in equipment today.

The establishment

Integrated electronics is established today. Its techniques are almost mandatory for new military systems, since the reliability, size and weight required by some of them are achievable only with integration. Such programs as Apollo, for manned moon flight, have demonstrated the reliability of integrated electronics by showing that complete circuit functions are as free from failure as the best individual transistors.

Most companies in the commercial computer field have machines in design or in early production employing integrated electronics. These machines cost less and perform better than those which use "conventional" electronics. Instruments of various sorts, especially the rapidly increasing numbers employing digital techniques, are starting to use integration because it cuts costs of both manufacture and design. The use of linear integrated circuitry is still restricted primarily to the military. Such integrated functions are expensive and not available in the variety required to satisfy a major fraction of linear electronics. But the first applications are beginning to appear in commercial electronics, particularly in equipment which needs low-frequency amplifiers of small size.

Reliability counts

In almost every case, integrated electronics has demonstrated high reliability. Even at the present level of production (low compared to that of discrete components) it offers reduced systems cost, and in many systems improved performance has been realized. Integrated electronics will make electronic techniques more generally available throughout all of society, performing many functions that presently are done inadequately by other techniques or not done at all. The principal advantages will be lower costs and greatly simplified design, payoffs from a ready supply of low-cost functional packages. For most applications, semiconductor integrated circuits will predominate. Semiconductor devices are the only reasonable candidates presently in existence for the active elements of integrated circuits. Passive semiconductor elements look attractive too, because of their potential for low cost and high reliability, but they can be used only if precision is not a prime requisite. Silicon is likely to remain the basic material, although others will be of use in specific applications. For example, gallium arsenide will be important in integrated microwave functions. But silicon will predominate at lower frequencies because of the technology which has already evolved around it and its oxide, and because it is an abundant and relatively inexpensive starting material.

Costs and curves

Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate. For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package containing more components. But as components are added, decreased yields more than compensate for the increased complexity, tending to raise the cost per component. Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below). If we look ahead five years, a plot of costs suggests that the minimum cost per component might be expected in circuits with about 1,000 components per circuit (provided such circuit functions can be produced in moderate quantities). In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost.

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
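
Moore's argument is that cost per component traces a U-shaped curve: spreading the fixed wafer and packaging cost over more components pulls cost down, while falling yield pushes it back up. Here is a minimal sketch of that curve in Python; the wafer cost and defect rate are invented for illustration and are not Moore's data.

# Illustrative model of the cost-per-component minimum.
# WAFER_COST and DEFECT_RATE are assumed numbers, chosen only to
# reproduce the shape of the curve, not Moore's actual figures.
WAFER_COST = 100.0     # fixed cost per packaged die, in dollars (assumed)
DEFECT_RATE = 0.01     # chance any one component is defective (assumed)

def cost_per_component(n):
    """Cost per good component for a circuit of n components."""
    yield_fraction = (1.0 - DEFECT_RATE) ** n  # all n components must work
    return WAFER_COST / (n * yield_fraction)

best_n = min(range(1, 2001), key=cost_per_component)
print(best_n, round(cost_per_component(best_n), 2))
# With these numbers the minimum falls near n = 100. Improving the
# process (lowering DEFECT_RATE) moves the minimum to larger circuits,
# which is the movement Moore extrapolated: 50 components in 1965,
# about 1,000 in 1970, and 65,000 by 1975 (ten yearly doublings from
# 1965 land at that order of magnitude, since 50 x 2^10 = 51,200).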

Two-mil squares

With the dimensional tolerances already being employed in integrated circuits, isolated high-performance transistors can be built on centers two thousandths of an inch apart. Such a two-mil square can also contain several kilohms of resistance or a few diodes. This allows at least 500 components per linear inch or a quarter million per square inch. Thus, 65,000 components need occupy only about one-fourth a square inch. On the silicon wafer currently used, usually an inch or more in diameter, there is ample room for such a structure if the components can be closely packed with no space wasted for interconnection patterns. This is realistic, since efforts to achieve a level of complexity above the presently available integrated circuits are already underway using multilayer metalization patterns separated by dielectric films. Such a density of components can be achieved by present optical techniques and does not require the more exotic techniques, such as electron beam operations, which are being studied to make even smaller structures.
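
Restated as arithmetic (the same numbers as the paragraph above, in LaTeX notation):

\[
\frac{1\,\text{in}}{0.002\,\text{in per center}} = 500 \ \text{components per linear inch}, \qquad
500^2 = 250{,}000 \ \text{per square inch}, \qquad
\frac{65{,}000}{250{,}000} \approx 0.26\,\text{in}^2 .
\]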

Increasing the yield

There is no fundamental obstacle to achieving device yields of 100%. At present, packaging costs so far exceed the cost of the semiconductor structure itself that there is no incentive to improve yields, but they can be raised as high as is economically justified. No barrier exists comparable to the thermodynamic equilibrium considerations that often limit yields in chemical reactions; it is not even necessary to do any fundamental research or to replace present processes. Only the engineering effort is needed.

In the early days of integrated circuitry, when yields were extremely low, there was such incentive. Today ordinary integrated circuits are made with yields comparable with those obtained for individual semiconductor devices. The same pattern will make larger arrays economical, if other considerations make such arrays desirable.

Heat problem

Will it be possible to remove the heat generated by tens of thousands of components in a single silicon chip? If we could shrink the volume of a standard high-speed digital computer to that required for the components themselves, we would expect it to glow brightly with present power dissipation. But it won't happen with integrated circuits. Since integrated electronic structures are two-dimensional, they have a surface available for cooling close to each center of heat generation. In addition, power is needed primarily to drive the various lines and capacitances associated with the system. As long as a function is confined to a small area on a wafer, the amount of capacitance which must be driven is distinctly limited. In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area.
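
Moore's heat argument can be restated with the standard dynamic-power relation for switching circuits (a later formalization, not an equation from the 1965 text): switching power goes roughly as capacitance times voltage squared times frequency, so if shrinking a function lowers the capacitance its lines must drive, the operating frequency can rise while power per unit area stays fixed.

\[ P \approx C V^2 f \quad\Longrightarrow\quad f \propto \frac{P}{C V^2} . \]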

Day of reckoning

Clearly, we will be able to build such component-crammed equipment. Next, we ask under what circumstances we should do it. The total cost of making a particular system function must be minimized. To do so, we could amortize the engineering over several identical items, or evolve flexible techniques for the engineering of large functions so that no disproportionate expense need be borne by a particular array. Perhaps newly devised design automation procedures could translate from logic diagram to technological realization without any special engineering. It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected. The availability of large functions, combined with functional design and construction, should allow the manufacturer of large systems to design and construct a considerable variety of equipment both rapidly and economically.

Linear circuitry

Integration will not change linear systems as radically as digital systems. Still, a considerable degree of integration will be achieved with linear circuits. The lack of large-value capacitors and inductors is the greatest fundamental limitation to integrated electronics in the linear area. By their very nature, such elements require the storage of energy in a volume. For high Q it is necessary that the volume be large. The incompatibility of large volume and integrated electronics is obvious from the terms themselves. Certain resonance phenomena, such as those in piezoelectric crystals, can be expected to have some applications for tuning functions, but inductors and capacitors will be with us for some time.

The integrated r-f amplifier of the future might well consist of integrated stages of gain, giving high performance at minimum cost, interspersed with relatively large tuning elements. Other linear functions will be changed considerably. The matching and tracking of similar components in integrated structures will allow the design of differential amplifiers of greatly improved performance. The use of thermal feedback effects to stabilize integrated structures to a small fraction of a degree will allow the construction of oscillators with crystal stability. Even in the microwave area, structures included in the definition of integrated electronics will become increasingly important. The ability to make and assemble components small compared with the wavelengths involved will allow the use of lumped parameter design, at least at the lower frequencies. It is difficult to predict at the present time just how extensive the invasion of the microwave area by integrated electronics will be. The successful realization of such items as phased-array antennas, for example, using a multiplicity of integrated microwave power sources, could completely revolutionize radar.
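
For reference, the quality factor behind the "high Q needs volume" remark is the standard definition (not spelled out in Moore's text): Q compares the energy a resonator stores to the energy it dissipates each cycle, so a high-Q element must store substantial energy, which takes physical volume.

\[ Q = 2\pi \times \frac{\text{energy stored}}{\text{energy dissipated per cycle}} . \]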

Moore's Law describes a long-term trend in the history of computing hardware, in which the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. Rather than being a naturally occurring "law" that cannot be controlled, however, Moore's Law is effectively a business practice in which the advancement of transistor counts occurs at a fixed rate.

The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. All of these are improving at (roughly) exponential rates as well. This has dramatically increased the usefulness of digital electronics in nearly every segment of the world economy. Moore's law thus describes a driving force of technological and social change in the late 20th and early 21st centuries. The trend has continued for more than half a century and is not expected to stop until 2015 or later.
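
To see what a doubling every two years compounds to, here is a small sketch in Python, using the widely cited figure of 2,300 transistors for the 1971 Intel 4004 as a baseline:

# Project transistor counts under a doubling every two years,
# anchored at the Intel 4004 (2,300 transistors, 1971).
def projected_transistors(year, base_year=1971, base_count=2300, period=2.0):
    doublings = (year - base_year) / period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, int(projected_transistors(year)))
# 1971 -> 2,300 ... 2011 -> about 2.4 billion, which matches the
# order of magnitude of the largest chips actually shipping in 2011.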

The law is named for Intel co-founder Gordon E. Moore, who introduced the concept in a 1965 paper. It has since been used in the semiconductor industry to guide long term planning and to set targets for research and development.

History

The term "Moore's law" was coined around 1970 by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead. Predictions of similar increases in computer power had existed years prior. Alan Turing in a 1950 paper had predicted that by the turn of the millennium, computers would have a billion words of memory. Moore may have heard Douglas Engelbart, a co-inventor of today's mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture. A New York Times article published August 31, 2009, credits Engelbart as having made the prediction in 1959.

Moore's original statement that transistor counts had doubled every year can be found in his publication "Cramming more components onto integrated circuits", Electronics Magazine, 19 April 1965:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

Moore slightly altered the formulation of the law over time, bolstering the perceived accuracy of Moore's Law in retrospect. Most notably, in 1975, Moore altered his projection to a doubling every two years.[17] Despite popular misconception, he is adamant that he did not predict a doubling "every 18 months". However, an Intel colleague had factored in the increasing performance of transistors to conclude that integrated circuits would double in performance every 18 months.

In April 2005, Intel offered $10,000 to purchase a copy of the original Electronics Magazine. David Clark, an engineer living in the UK, was the first to find a copy and offer it to Intel.

Other formulations and similar laws

[Graph: hard drive capacity over time]

Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density and speed of components. Moore himself wrote only about the density of components (or transistors) at minimum cost.

Transistors per integrated circuit. The most popular formulation is of the doubling of the number of transistors on integrated circuits every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. Recent trends show that this rate has been maintained into 2007.

Density at minimum cost per transistor. This is the formulation given in Moore's 1965 paper. It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest. As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".

Cost per transistor. As the size of transistors has decreased, the cost per transistor has decreased as well. However, the manufacturing cost per unit area has only increased over time, since materials and energy expenditures per unit area rise with each successive technology node.

Computing performance per unit cost. Also, as the size of transistors shrinks, the speed at which they operate increases. It is also common to cite Moore's law to refer to the rapidly continuing advance in computing performance per unit cost, because increase in transistor count is also a rough measure of computer processing performance. On this basis, the performance of computers per unit cost—or more colloquially, "bang per buck"—doubles every 24 months (a short sketch of what these doubling periods compound to appears after this list).

Power consumption. The power consumption of computer nodes doubles every 18 months.

Hard disk storage cost per unit of information. A similar law has held for hard disk storage cost per unit of information. The rate of progression in disk storage over the past decades has actually sped up more than once, corresponding to the utilization of error correcting codes, the magnetoresistive effect and the giant magnetoresistive effect. The current rate of increase in hard drive capacity is roughly similar to the rate of increase in transistor count. Recent trends show that this rate has been maintained into 2007.

RAM storage capacity. Another version states that RAM storage capacity increases at the same rate as processing power.

[Graph: pixels per dollar, based on the Australian recommended retail price of Kodak digital cameras]

Network capacity. According to Gerald Butters, the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics, a formulation which deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months. Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called "WDM") increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and DWDM is rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's Law says that the bandwidth available to users increases by 50% annually.

Pixels per dollar. Similarly, Barry Hendy of Kodak Australia has plotted the "pixels per dollar" as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price and resolution.
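
As promised above, a small Python sketch of what the different doubling periods in this list compound to over a decade:

# Growth factor over ten years for the doubling periods cited above.
def growth_factor(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for label, period in [("transistor count (24 months)", 2.0),
                      ("power consumption (18 months)", 1.5),
                      ("fiber capacity (9 months)", 0.75)]:
    print(label, round(growth_factor(10, period)))
# 24-month doubling -> ~32x per decade; 18-month -> ~100x;
# 9-month -> ~10,000x. Small differences in the doubling period
# compound into enormous differences in the trend.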

The Great Moore's Law Compensator (TGMLC), generally referred to as bloat, is the principle that successive generations of computer software acquire enough bloat to offset the performance gains predicted by Moore's Law. In a 2008 article in InfoWorld, Randall C. Kennedy,[30] formerly of Intel, introduces this term using successive versions of Microsoft Office between the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.

As a target for industry and a self-fulfilling prophecy

Although Moore's law was initially made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard, it can be viewed as a self-fulfilling prophecy.[11][31]

Relation to manufacturing costs

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. Rising manufacturing costs are an important consideration for the sustaining of Moore's law. This has led to the formulation of "Moore's second law," which is that the capital cost of a semiconductor fab also increases exponentially over time.

Materials required for advancing technology (e.g., photoresists and other polymers and industrial chemicals) are derived from natural resources such as petroleum and so are affected by the cost and supply of these resources. Nevertheless, photoresist costs are coming down through more efficient delivery, though shortage risks remain.

The cost to tape-out a chip at 90 nm is at least US$1,000,000 and exceeds US$3,000,000 for 65 nm.

Future trends

Computer industry technology "road maps" predict (as of 2001) that Moore's law will continue for several chip generations. Depending on the doubling time used in the calculations, this could mean up to a hundredfold increase in transistor count per chip within a decade. The semiconductor industry technology roadmap uses a three-year doubling time for microprocessors, leading to a tenfold increase in the next decade. Intel was reported in 2005 as stating that the downsizing of silicon chips with good economics can continue during the next decade, and in 2008 as predicting the trend through 2029.
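
The arithmetic behind those two projections, as a quick check in LaTeX notation:

\[ 2^{10/1.5} \approx 100 \ \text{(18-month doubling: a hundredfold per decade)}, \qquad 2^{10/3} \approx 10 \ \text{(three-year doubling: tenfold per decade)}. \]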

Some of the new directions in research that may allow Moore's law to continue are:

§ Researchers from IBM and Georgia Tech created a new speed record when they ran a helium-supercooled silicon/germanium transistor at 500 gigahertz (GHz). The transistor operated above 500 GHz at 4.5 K (−268.65 °C/−451 °F), and simulations showed that it could likely run at 1 THz (1,000 GHz). However, this trial only tested a single transistor.

§ In early 2006, IBM researchers announced that they had developed a technique to print circuitry only 29.9 nm wide using deep-ultraviolet (DUV, 193-nanometer) optical lithography. IBM claims that this technique may allow chipmakers to use current methods for seven years while continuing to achieve results forecast by Moore's law. New methods that can achieve smaller circuits are expected to be substantially more expensive.

§ In April 2008, researchers at HP Labs announced the creation of a working "memristor": a fourth basic passive circuit element whose existence had previously only been theorized. The memristor's unique properties allow for the creation of smaller and better-performing electronic devices. This memristor bears some resemblance to resistive memory (CBRAM or RRAM) developed independently and recently by other groups for non-volatile memory applications.

Ultimate limits of the law

On 13 April 2005, Gordon Moore stated in an interview that the law cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors would eventually reach the limits of miniaturization at atomic levels:

In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far—but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.

In January 1995, the Digital Alpha 21164 microprocessor had 9.3 million transistors. This 64-bit processor was a technological spearhead at the time, even if the circuit’s market share remained average. Six years later, a state of the art microprocessor contained more than 40 million transistors. It is theorised that with further miniaturisation, by 2015 these processors should contain more than 15 billion transistors, and by 2020 will be in molecular scale production, where each molecule can be individually positioned.

In 2003 Intel predicted the end would come between 2013 and 2018 with 16 nanometer manufacturing processes and 5 nanometer gates, due to quantum tunneling, although others suggested chips could just get bigger, or become layered. In 2008 it was noted that for the last 30 years it has been predicted that Moore's law would last at least another decade.

Some see the limits of the law as being far in the distant future. Lawrence Krauss and Glenn D. Starkman announced an ultimate limit of around 600 years in their paper, based on rigorous estimation of total information-processing capacity of any system in the Universe.

Then again, the law has often met obstacles that first appeared insurmountable but were indeed surmounted before long. In that sense, Moore says he now sees his law as more beautiful than he had realized: "Moore's law is a violation of Murphy's law. Everything gets better and better."

Futurists and Moore's law

[Chart: Kurzweil's extension of Moore's law from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers]

Futurists such as Vernor Vinge, Bruce Sterling, and Ray Kurzweil believe that the exponential improvement described by Moore's law will ultimately lead to a technological singularity: a period where progress in technology occurs almost instantly.

Although Kurzweil agrees that by 2019 the current strategy of ever finer photolithography will have run its course, he speculates that this does not mean the end of Moore's law:

Moore's law of Integrated Circuits was not the first, but the fifth paradigm to forecast accelerating price-performance ratios. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to the relay-based machine that cracked the Nazi Enigma code, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

Kurzweil speculates that it is likely that some new type of technology (possibly optical or quantum computers) will replace current integrated-circuit technology, and that Moore's Law will hold true long after 2020.

Lloyd shows how the potential computing capacity of a kilogram of matter is set by its energy divided by Planck's constant. Since the energy is such a large number and Planck's constant is so small, this equation generates an extremely large number: about 5.0 × 10^50 operations per second.
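
Spelled out (this is the bound from Lloyd's 2000 paper "Ultimate physical limits to computation", with E = mc² for one kilogram of matter):

\[ N \le \frac{2E}{\pi\hbar} = \frac{4E}{h} = \frac{4 \times (1\,\text{kg}) \times (3.0\times10^{8}\,\text{m/s})^{2}}{6.63\times10^{-34}\,\text{J·s}} \approx 5.4\times10^{50}\ \text{operations per second}. \]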

He believes that the exponential growth of Moore's law will continue beyond the use of integrated circuits into technologies that will lead to the technological singularity. The Law of Accelerating Returns described by Ray Kurzweil has in many ways altered the public's perception of Moore's Law. It is a common (but mistaken) belief that Moore's Law makes predictions regarding all forms of technology, when it actually only concerns semiconductor circuits. Many futurists still use the term "Moore's law" in this broader sense to describe ideas like those put forth by Kurzweil.

Moore himself, who never intended his eponymous law to be interpreted so broadly, has quipped:

Moore's law has been the name given to everything that changes exponentially. I say, if Gore invented the Internet, I invented the exponential.

Consequences and limitations

Transistor count versus computing performance

The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical computing performance. For example, the higher transistor density in multi-core CPUs doesn't greatly increase speed on many consumer applications that are not parallelized. There are cases where a roughly 45% increase in processor transistors has translated to roughly a 10–20% increase in processing power. Changes in microarchitecture can affect processor speed independently of clock speed and transistor count: for example, early AMD64 processors had better overall performance compared to the late Pentium 4 series, which had more transistors.

Viewed even more broadly, the speed of a system is often limited by factors other than processor speed, such as internal bandwidth and storage speed, and one can evaluate a system's overall performance based on factors other than speed, like reliability, cost-effectiveness, or electricity usage.

Importance of non-CPU bottlenecks

Increasing CPU speeds and memory capacities have caused other bottlenecks, like disk seek times, to become more important by comparison, changing software and system designs.

Not all aspects of computing technology have improved exponentially. Random Access Memory (RAM) speeds and hard drive seek times improve at best a few percentage points each year. Since the capacity of RAM and hard drives is increasing much faster than is their access speed, intelligent use of their capacity becomes more and more important. For example, it now more often makes sense to use more disk space to reduce the number of seeks, such as by copying data into specially laid out indexes: space is getting much cheaper relative to disk seeks.

Similarly, CPU resources are getting much cheaper relative to memory and disk accesses. Thus, it sometimes becomes advantageous to use additional CPU resources to avoid accesses to RAM or (especially) disk. For example, some databases can now compress indexes and data, reducing the amount of data read from disk at the cost of using CPU time for compression and decompression. The increasing relative cost of disk seeks also makes the low seek times provided by solid state disks more attractive for some seek-heavy applications, and makes seek-reduction strategies like intelligent caching and prefetching more important to overall system speed.
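
A minimal sketch of that CPU-for-I/O trade, using Python's standard zlib module (the payload here is invented; real databases use purpose-built formats, but the economics are the same):

# Trade CPU time for disk I/O: store data compressed, decompress on read.
import zlib

record = b"some repetitive row data, " * 1000  # invented example payload

compressed = zlib.compress(record)     # CPU cost paid on the write path
print(len(record), "->", len(compressed), "bytes to store")

restored = zlib.decompress(compressed)  # CPU cost paid on the read path
assert restored == record
# Fewer bytes on disk means fewer slow disk reads per query, paid for
# with comparatively cheap CPU cycles -- the tradeoff described above.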
