Category Archives: Silicon

Moore’s Law: A 50th Anniversary Assessment

See our celebration of the 50th anniversary of Moore’s Law — “Moore’s Law Exceeded Moore’s Expectations.”

And for a longer treatment, see our 19-page paper — “Moore’s Law: A 50th Anniversary Assessment.”

Moore’s Law update

John Markoff surveys the near-future prospects for Moore’s Law . . . and notes the first mention in print of the transistor, on July 1, 1948.

Huge $1.45 billion, a new low

After the EC antitrust authority today leveled a €1.06 billion fine against Intel, the company’s general counsel Bruce Sewell gave an illuminating interview to CNBC:

We had better come up with a way to restrict the EC’s range of motion on these matters. Sewell called the action “arbitrary.” The CNBC reporters called it a “shakedown.” They’re both right.

Meanwhile, EC competition commissioner Neelie Kroes added insult to injury when she blithely noted that Intel is now supporting European taxpayers.

A huge array of legal and economic experts quickly denounced the EU “fine” (can you really call $1.45 billion a fine?) and raised serious questions about arbitrary antitrust becoming the chief protectionist tool of the 21st century.

Scholar Ronald Cass said the EC Competition Directorate acted as

prosecutor, investigator, and judge.

Grant Aldonas of the Center for Strategic and International Studies said,

Given the implications for R&D that drives Intel’s investment in both Europe and the United States, it makes little sense to divert these funds to the European Union’s coffers instead.

And as we attempt to emerge from a brutal economic crisis, with unemployment still rising, my former colleague Ken Ferree made the crucial macro point:

If you love jobs and economic growth, you have to love the companies that drive the economy and create employment demand.

The global economy cannot function if large nations or regions, like the EU, the U.S., or China, engage in over-the-top punitive actions against any company, let alone one of the most inventive firms of our time. Without engaging in the type of tit-for-tat protectionism that leads to destructive trade wars, we need to find a way to roll back what I called in a recent Wall Street Journal article “Europe’s anti-innovation ‘antitrust’ policy.”

Moreover, we should resist letting the EC’s casual intrusiveness seep into our own antitrust jurisprudence, which has, for the most part, fortunately remained more tightly focused on the question of consumer harm. As this excellent article notes, there is some reason to worry we might be sliding in the wrong direction.

Resisting these impulses will promote the global cooperation we need to rebound from the crisis. It will be better for innovative companies. Better for consumers of innovative, life changing products. And . . . better for the citizens, consumers, and entrepreneurs of that too-long underperforming land we call Europe.

Silicon Shift

Take a look at this 40-minute interview with Jen-Hsun Huang, CEO of graphics chip maker Nvidia. It’s a non-technical discussion of a very important topic in the world of computing and the Internet: the rise of the GPU — the graphics processing unit.

Almost 40 years ago the CPU — the central processing unit — burst onto the scene and enabled the PC revolution, which was mostly about word processing (text) and simple spreadsheets (number crunching). But today, as Nvidia and AMD’s ATI division add programmability to their graphics chips, the GPU is becoming the next-generation general-purpose processor. (Huang briefly describes the CUDA programming architecture, which he compares to the x86 architecture of the CPU age.) With its massive parallelism and its ability to render the visual applications most important to today’s consumers — games, photos, movies, art, Photoshop, YouTube, Google Earth, virtual worlds — the GPU rises to match the CPU’s “centrality” in the computing scheme.
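
The CPU-versus-GPU contrast above can be sketched in a few lines. This is a toy Python illustration, not Nvidia’s actual CUDA API: `brighten_kernel` and the four-pixel “image” are made up for the example. The point is the shape of the computation — a CPU walks the data with one serial instruction stream, while a GPU expresses the work as one small kernel applied to every data element, which the hardware can then run across thousands of elements at once.

```python
def brighten_kernel(pixel, gain=1.5):
    """The per-element 'kernel': one tiny function, run once per pixel."""
    return min(255, int(pixel * gain))

image = [10, 100, 200, 250]  # a toy 4-pixel grayscale "image"

# CPU style: one explicit serial loop over the data.
cpu_result = []
for p in image:
    cpu_result.append(brighten_kernel(p))

# GPU style: declare "apply this kernel to every element" and let the
# runtime schedule the elements (on a real GPU, in parallel).
gpu_result = list(map(brighten_kernel, image))

assert cpu_result == gpu_result  # same answer, different execution model
```

In real CUDA code the kernel is launched over a grid of thousands of threads, one per element; the serial loop simply disappears from the programmer’s view.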

Less obviously, the GPU’s attributes also make it useful for all sorts of non-consumer applications, like seismic imaging for energy exploration, high-end military systems, and even quantitative finance.

Perhaps the most exciting shift unleashed by the GPU, however, is in cloud computing. At the January 2009 Consumer Electronics Show in Las Vegas, AMD and a small but revolutionary start-up called LightStage/Otoy announced they are building the world’s fastest petaflops supercomputer at LightStage/Otoy’s Burbank, CA, offices. But this isn’t just any supercomputer. It’s based on GPUs, not CPUs. And it’s not just really, really fast. Designed for the Internet age, this “render cloud” will enable real-time photorealistic 3D gaming and virtual worlds across the Web. It will compress the power of the most advanced motion-picture CGI (computer-generated imagery) techniques, which can consume hours to render one movie frame and months to produce movie sequences, into real time . . . and link this power to the wider world over the Net.

Watch this space. The GPU story is big.

Chang’s Fabless Chips

Not surprising, perhaps, that the Semiconductor Industry Association would give an award to long-time industry veteran Morris Chang. But the founder of Taiwan Semi played an absolutely crucial role in the history of computers, IT, communications, and anything that touches silicon.

TSMC, of course, popularized the idea of manufacturing chips that are designed by others. Such companies, called foundries, became essential partners to design specialists that save money by outsourcing production.

What people tend to overlook is how the Chinese-born engineer, who spent 25 years at Texas Instruments, helped propel a big American comeback. In the 1980s, Japanese chip makers used manufacturing muscle to hammer companies like Intel and TI. The U.S. manufacturers gradually rebounded, but newcomers such as Qualcomm, Broadcom and Nvidia — which might not exist without foundries — were an equally important factor. 

With the publication of Introduction to VLSI Systems, coauthored with Lynn Conway, in the late 1970s, Carver Mead predicted this “fabless” model, splitting the design and manufacturing functions of previously integrated semiconductor firms. Mead had performed the research behind Gordon Moore’s profound 1965 prediction that integrated circuits could — and would — continue doubling in transistor density every 18 months or so for decades into the future. Mead even named this observation-prediction “Moore’s Law.”
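
That doubling cadence compounds faster than intuition suggests. A back-of-envelope sketch, using the 18-month doubling period quoted above (the figure and function name here are just illustrative arithmetic, not a model of any particular process node):

```python
def density_multiplier(years, doubling_period_years=1.5):
    """How many times transistor density multiplies over `years`,
    assuming one doubling every `doubling_period_years` years."""
    return 2 ** (years / doubling_period_years)

# One doubling period doubles density; a decade compounds to roughly 100x.
one_period = density_multiplier(1.5)   # exactly 2.0
one_decade = density_multiplier(10)    # roughly 100x
```

At that pace, two decades of sustained doubling yields a gain on the order of 10,000x — which is why holding the cadence for 50 years counts as one of the great engineering feats.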

Companies that remained integrated all these years — like Intel — have continued to lead in manufacturing technology, finding ingenious ways to sustain Moore’s Law. But the breadth and creativity and economic power of the silicon revolution would not have happened without Morris Chang’s fabless model.