Category Archives: Exaflood

An Exa-Prize for “Masters of Light”

Holy Swedish silica/on. It’s an exa-prize!

Calling them “Masters of Light,” the Royal Swedish Academy awarded the 2009 Nobel Prize in Physics to Charles Kao, for discoveries central to the development of optical fiber, and to Willard Boyle and George Smith of Bell Labs, for the invention of the charge-coupled device (CCD) digital imager.

Perhaps more than any other two discoveries, these technologies are responsible for our current era of dramatically expanding cultural content and commercial opportunities across the Internet. I call this torrent of largely visual data gushing around the Web the “exaflood.” Exa means 10^18, and today monthly Internet traffic in the U.S. tops two exabytes. For all of 2009, global Internet traffic should reach 100 exabytes, equal to the contents of around 5,000,000 Libraries of Congress. By 2015, the U.S. might transmit 1,000 exabytes, the equivalent of two Libraries of Congress every second for the entire year.
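For readers who like to check the arithmetic, here is a quick back-of-the-envelope in Python. The 20-terabyte figure for one Library of Congress is my own assumption (a commonly cited benchmark), not a number from the prize citation:

```python
# Back-of-the-envelope check on the exaflood figures.
# Assumption (mine, not from the post): ~20 TB per Library of Congress.
EXABYTE = 10**18           # bytes
TERABYTE = 10**12          # bytes
LOC_BYTES = 20 * TERABYTE  # assumed size of one Library of Congress

global_2009 = 100 * EXABYTE
print(global_2009 / LOC_BYTES)  # 5,000,000 Libraries of Congress

us_2015 = 1_000 * EXABYTE
seconds_per_year = 365 * 24 * 3600
print(us_2015 / LOC_BYTES / seconds_per_year)  # ~1.6, i.e. roughly two per second
```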

Almost all this content is transmitted via fiber optics, where laser light pulsing billions of times a second carries information thousands of miles through astoundingly pure glass (silica). And much of this content is created using CCD imagers, the silicon microchips that turn photons into electrons in your digital cameras, camcorders, mobile phones, and medical devices. The basic science of the breakthroughs involves mastering the delicate but powerful reflective, refractive, and quantum photoelectric properties of both light and one of the world’s simplest and most abundant materials — sand, known in different forms as silica and silicon.

The innovations derived from Kao, Boyle, and Smith’s discoveries will continue cascading through global society for decades to come.

Can Microsoft Grasp the Internet Cloud?

See my new Forbes.com commentary on the Microsoft-Yahoo search partnership:

Ballmer appears now to get it. “The more searches, the more you learn,” he says. “Scale drives knowledge, which can turn around and drive innovation and relevance.”

Microsoft decided in 2008 to build 20 new data centers at a cost of $1 billion each. This was a dramatic commitment to the cloud. Conceived by Bill Gates’s successor, Ray Ozzie, the global platform would serve up a new generation of Web-based Office applications dubbed Azure. It would connect video gamers on its Xbox Live network. And it would host Microsoft’s Hotmail and search applications.

The new Bing search engine earned quick acclaim for relevant searches and better-than-Google pre-packaged details about popular health, transportation, location and news items. But with just 8.4% of the market, Microsoft’s $20 billion infrastructure commitment would be massively underutilized. Meanwhile, Yahoo, which still leads in news, sports and finance content, could not remotely afford to build a similar new search infrastructure to compete with Google and Microsoft. Thus, the combination. Yahoo and Microsoft can share Ballmer’s new global infrastructure.

Jackson’s traffic spike

Om Malik surveys the Net traffic spike after Michael Jackson’s death:

Around 6:30 p.m. EST, Akamai’s Net Usage Index for News spiked all the way to 4,247,971 global visitors per minute vs. normal traffic of 2,000,000, a 112 percent gain.

Bandwidth Boom: Measuring Communications Capacity

See our new paper estimating the growth of consumer bandwidth – or our capacity to communicate – from 2000 to 2008. We found:

  • a huge 5,400% increase in residential bandwidth;
  • an astounding 54,200% boom in wireless bandwidth; and
  • an almost 100-fold increase in total consumer bandwidth

[Chart: U.S. consumer bandwidth, 2000–2008, residential and wireless]

U.S. consumer bandwidth at the end of 2008 totaled more than 717 terabits per second, yielding, on a per capita basis, almost 2.4 megabits per second of communications power.
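As a quick sanity check on that per-capita figure (the 300-million population is my round-number assumption, not a figure from the paper):

```python
# Sanity check: per-capita U.S. consumer bandwidth at year-end 2008.
# Assumption (mine, not from the paper): population of ~300 million.
total_tbps = 717                 # terabits per second
population = 300_000_000

total_mbps = total_tbps * 10**6  # 1 Tbps = 10^6 Mbps
print(total_mbps / population)   # ~2.39 Mbps per person
```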

Netflix and Web video

We’ve been talking about Netflix’s inevitable move to the Web for a long time now. In our presentations, we show that if the Netflix DVDs that today mostly arrive in the U.S. mail were instead sent in high-def (HD) over the Net, they would total almost eight exabytes per year. That’s almost half of all U.S. Internet traffic in 2008.
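The eight-exabyte figure is easy to reproduce with round-number inputs. Both inputs below are illustrative assumptions of mine; the post does not state them:

```python
# Rough reconstruction of the ~8 EB/year Netflix-over-the-Net estimate.
# Both inputs are illustrative assumptions, not figures from the post.
discs_per_day = 2_000_000       # assumed daily DVD shipment volume
gb_per_hd_title = 11            # assumed size of one HD movie, in gigabytes

gb_per_year = discs_per_day * 365 * gb_per_hd_title
exabytes = gb_per_year / 10**9  # 1 EB = 10^9 GB
print(exabytes)                 # ~8.0 exabytes per year

# Against ~18 EB of total 2008 U.S. traffic (1.5 EB/month), that is
# indeed "almost half":
print(exabytes / 18)            # ~0.45
```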

Well, here’s CEO Reed Hastings in Tuesday’s Wall Street Journal:

Netflix Inc. is a standout in the recession. The DVD-rental company added more subscribers than ever during the first three months of the year. Its stock has more than doubled since October.

But Netflix’s chief executive officer, Reed Hastings, thinks his core business is doomed. As soon as four years from now, he predicts, the business that generates most of Netflix’s revenue today will begin to decline, as DVDs delivered by mail steadily lose ground to movies sent straight over the Internet. So Mr. Hastings, who co-founded the company, is quickly trying to shift Netflix’s business — seeking to make more videos available online and cutting deals with electronics makers so consumers can play those movies on television sets.

His position offers a rare look at how a CEO manages a still-hot business as its time runs out. “Almost no companies succeed at what we’re doing,” he says.

Getting the exapoint. Creating the future.

Lots of commentators continue to misinterpret the research I and others have done on Internet traffic and its interplay with network infrastructure investment and communications policy.

I think that new video applications require lots more bandwidth — and, equally or even more important, that more bandwidth drives creative new applications. Two sides of the innovation coin. And I think investment-friendly policies are necessary both to encourage deployment of new wireline and wireless broadband and also to boost innovative new applications and services for consumers and businesses.

But this article, as one of many examples, mis-summarizes my view. It uses scary words like “apocalypse,” “catastrophe,” and, well, “scare mongering,” to describe my optimistic anticipation of an exaflood of Internet innovations coming our way. I don’t think that

the world will simply run out of bandwidth and we’ll all be weeping over our clogged tubes.

Not unless we block the expansion of new network capacity and capability.

Net traffic update…

See two recent articles (here and here) addressing a topic I’ve done lots of research on: Internet traffic growth, mostly due to Web video, and the technology investment needed to both drive and accommodate it.

Here’s one of my papers:
Estimating the Exaflood – 01.28.08 – by Bret Swanson & George Gilder

Bandwidth and QoS: Much ado about something

The supposed top finding of a new report commissioned by the British telecom regulator Ofcom is that we won’t need any QoS (quality of service) or traffic management to accommodate next generation video services, which are driving Internet traffic at consistently high annual growth rates of between 50% and 60%. TelecomTV One headlined, “Much ado about nothing: Internet CAN take video strain says UK study.” 

But the content of the Analysys Mason (AM) study, entitled “Delivering High Quality Video Services Online,” does not support either (1) the media headline — “Much ado about nothing,” which implies next generation services and brisk traffic growth don’t require much in the way of new technology or new investment to accommodate them — or (2) its own “finding” that QoS and traffic management aren’t needed to deliver this next generation of content and services.

For example, AM acknowledges in one of its five key findings in the Executive Summary: 

innovative business models might be limited by regulation: if the ability to develop and deploy novel approaches was limited by new regulation, this might limit the potential for growth in online video services.

In fact, the very first key finding says:

A delay in the migration to 21CN-based bitstream products may have a negative impact on service providers that use current bitstream products, as growth in consumption of video services could be held back due to the prohibitive costs of backhaul capacity to support them on the legacy core network. We believe that the timely migration to 21CN will be important in enabling significant take-up of online video services at prices that are reasonable for consumers.

So very large investments in new technologies and platforms are needed, and new regulations that discourage this investment could delay crucial innovations on the edge. Sounds like much ado about something, something very big.

Internet traffic update: right on the nose

As nearly every indicator of economic growth plummets, the Net maintains its rise. Given my research on the growth of the Internet, I’m always interested in the latest data. Here are year-end estimates, courtesy of Andrew Odlyzko at the University of Minnesota.

Monthly U.S. traffic by year-end 2008 was about 1.5 exabytes (10^18 bytes) per month, for an annual growth rate of around 50-60%. (An exabyte is a million terabytes, or a billion gigabytes.) My research suggests the Net should continue to grow at an annual compound rate of around 56% through 2015.
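Compounding that rate forward gives a feel for the trajectory; a minimal sketch using only this post’s numbers:

```python
# Project monthly U.S. Internet traffic at a 56% compound annual rate.
traffic_eb_per_month = 1.5  # year-end 2008 baseline from the text
growth = 1.56               # 56% annual growth

for year in range(2009, 2016):
    traffic_eb_per_month *= growth
    print(year, round(traffic_eb_per_month, 1))
# By 2015: roughly 34 exabytes per month under these assumptions.
```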

Silicon Shift

Take a look at this 40-minute interview with Jen-Hsun Huang, CEO of graphics chip maker Nvidia. It’s a non-technical discussion of a very important topic in the large world of computing and the Internet. Namely, the rise of the GPU — the graphics processing unit.

Almost 40 years ago the CPU — or central processing unit — burst onto the scene and enabled the PC revolution, which was mostly about word processing (text) and simple spreadsheets (number crunching). But today, as Nvidia and AMD’s ATI division add programmability to their graphics chips, the GPU is becoming the next-generation general-purpose processor. (Huang briefly describes the CUDA programmability architecture, which he compares to the x86 architecture of the CPU age.) With its massive parallelism and ability to render the visual applications most important to today’s consumers — games, photos, movies, art, Photoshop, YouTube, Google Earth, virtual worlds — the GPU rises to match the CPU’s “centrality” in the computing scheme.

Less obviously, the GPU’s attributes also make it useful for all sorts of non-consumer applications like seismic imaging for energy exploration, high-end military systems, and even quantitative finance.

Perhaps the most exciting shift unleashed by the GPU, however, is in cloud computing. At the January 2009 Consumer Electronics Show in Las Vegas, AMD and a small but revolutionary start-up called LightStage/Otoy announced they are building the world’s fastest petaflops supercomputer at LightStage/Otoy’s Burbank, CA, offices. But this isn’t just any supercomputer. It’s based on GPUs, not CPUs. And it’s not just really, really fast. Designed for the Internet age, this “render cloud” will enable real-time photorealistic 3D gaming and virtual worlds across the Web. It will compress the power of the most advanced motion picture CGI (computer-generated imagery) techniques, which can consume hours to render one movie frame and months to produce movie sequences, into real time . . . and link this power to the wider world over the Net.

Watch this space. The GPU story is big.

Straw Men Can’t Swim

The venerable Economist magazine has made a hash of my research on the growth of the Internet, which examines the rich media technologies now flooding onto the Web and projects Internet traffic over the coming decade. This “exaflood” of new applications and services represents a bounty of new entertainment, education, and business applications that can drive productivity and economic growth across all our industries and the world economy. 

But somehow, The Economist was convinced that my research represents some “gloomy prophecy,” that I am “doom-mongering” about an Internet “overload” that could “crash” the Internet. Where does The Economist find any evidence for these silly charges?

In a series of reports, articles (here and here), and presentations around the globe — and in a long, detailed, nuanced, very pleasant interview with The Economist, in which I thought the reporter grasped the key points — I have consistently said the exaflood is an opportunity, an embarrassment of riches.

I’ve also said it will take a lot of investment in networks (both wired and wireless), data centers, and other cloud infrastructure to both drive and accommodate this exaflood. Some have questioned this rather mundane statement, but for the life of me I can’t figure out why they deny that building this amazingly powerful global Internet might cost a good bit of money.

One critic of mine has said he thinks we might need to spend $5-10 billion on new Net infrastructure over the next five years. What? We already spend some $70 billion a year on all communications infrastructure in the U.S., with an ever-greater portion of that going toward what we might consider the Net. Google invests more than $3 billion a year in its cloud infrastructure, Verizon is building a $25-billion fiber-to-the-home network, and AT&T is investing another $10 billion, just for starters. Over the last 10 years, the cable TV companies invested some $120 billion. And Microsoft just yesterday said its new cloud computing infrastructure will consist of 20 new “super data centers,” at $1 billion apiece.
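For what it’s worth, the figures cited above already support the larger estimate. A rough five-year tally (spreading the annual numbers over five years is my simplification, and the named commitments overlap the industry total):

```python
# Rough five-year arithmetic on the investment figures cited above,
# in billions of dollars. Spreading annual spending across five years
# is my simplification; the named commitments overlap the total.
years = 5
industry_annual = 70            # all U.S. communications infrastructure
print(years * industry_annual)  # $350B over five years

google = 3 * years              # ~$3B/year on cloud infrastructure
verizon_fios = 25               # fiber-to-the-home build
att = 10                        # AT&T broadband investment
microsoft = 20 * 1              # 20 data centers at ~$1B each
print(google + verizon_fios + att + microsoft)  # ~$70B "just for starters"
```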

I’m glad The Economist quoted my line that “today’s networks are not remotely prepared to handle this exaflood.” Which is absolutely, unambiguously, uncontroversially true. Can you get all the HD video you want over your broadband connection today? Do all your remote applications work as fast as you’d like? Is your mobile phone and Wi-Fi access as widespread and flawless as you’d like? Do videos or applications always work instantly, without ever a hint of buffer or delay? Are today’s metro switches prepared for a jump from voice-over-IP to widespread high-resolution video conferencing? No, not even close.

But as we add capacity and robustness to many of these access networks, usage and traffic will surge, and the bottlenecks will shift to other parts of the Net. Core, edge, metro, access, data center — the architecture of the Net is ever-changing, with technologies and upgrades and investment happening in different spots at a varying pace. This is not a debate about whether the Internet will “crash.” It’s a discussion about how the Net will evolve and grow, about what its capabilities and architecture will be, and about how much it will cost and how we will govern it, but mostly about how much it will yield in new innovation and economic growth.

The Economist and the myriad bloggers, who every day try to kill some phantom catastrophe theory I do not recognize, are engaging in the old and very tedious practice of setting up digital straw men, which they then heroically strike down with a bold punch of the delete button. Ignoring the real issues and the real debate doesn’t take much effort, nor much thought.

Clouds are expensive

Microsoft, having a couple weeks ago finally capitulated to the Web with the announcement of Ray Ozzie’s new Net-based strategy, now says it will build 20 new data centers at $1 billion apiece. Google is already investing some $3 billion a year in its cloud infrastructure.

Lots of people have criticized my rough estimates of a couple hundred billion in new Net investment over the next five years, saying it’s closer to $5-10 billion, and I wonder what the heck they are thinking.

Getting Jacked Up in Three Dimensions

I’ve written a lot about what comes after high-definition (HD) video. New concepts like Ultra-HD from the Japanese broadcaster NHK and IMAX-at-Home from Hewlett-Packard could be real by the middle of next decade. But that great TV innovator the National Football League is already ahead of the game.

Next week, a game between the San Diego Chargers and the Oakland Raiders will be broadcast live in 3-D to theaters in Los Angeles, New York and Boston. It is a preliminary step on what is likely a long road to any regular 3-D broadcasts of football games.

Speeding the Cloud

For someone like me who studies cloud computing and Internet traffic, which is measured in tera-, peta-, and exabytes, Google’s terabyte sorting record is interesting:

we were able to sort 1TB (stored on the Google File System as 10 billion 100-byte records in uncompressed text files) on 1,000 computers in 68 seconds. By comparison, the previous 1TB sorting record is 209 seconds on 910 computers.

But I suppose I can see how you might not feel the same way.
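Still, for the curious, here is the throughput those numbers imply, a quick sketch and nothing more:

```python
# Implied throughput of the 1 TB sorting runs quoted above.
TB = 10**12  # bytes

runs = [("2008 record", 1 * TB / 68, 1000),
        ("prior record", 1 * TB / 209, 910)]

for label, rate, machines in runs:
    print(label,
          round(rate / 10**9, 1), "GB/s aggregate,",
          round(rate / machines / 10**6, 1), "MB/s per machine")
# ~14.7 GB/s vs ~4.8 GB/s aggregate; ~14.7 vs ~5.3 MB/s per machine.
```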

The big bad media monopoly

In the context of potential federal media regulation known as “à la carte,” my colleague Adam Thierer comments on new Internet video technologies and content here and here, and expertly reiterates an old theme of mine, namely that the Internet *is* à la carte.
