Category Archives: Cloud Computing

Digital Dynamism

See our new 20-page report — Digital Dynamism: Competition in the Internet Ecosystem:

The Internet is altering the communications landscape even faster than most imagined.

Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.

The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.

The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.

  • Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread Wi-Fi nodes then extend these broadband connections.
  • Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, e-commerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
  • Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.

The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.

Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.

Microsoft Outlines Economics of the Cloud

In a new white paper:

We believe that large clouds could one day deliver computing power at up to 80% lower cost than small clouds. This is due to the combined effects of three factors: supply-side economies of scale, which allow large clouds to purchase and operate infrastructure cheaper; demand-side economies of scale, which allow large clouds to run that infrastructure more efficiently by pooling users; and multi-tenancy, which allows users to share an application, splitting the cost of managing that application.
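
The multi-tenancy arithmetic is easy to make concrete. Here is a toy sketch in Python; the cost and user figures are purely illustrative assumptions of mine, not Microsoft's:

    # Toy model: per-user cost of managing one application.
    # All numbers are illustrative assumptions, not Microsoft's figures.
    mgmt_cost = 100_000  # assumed annual cost to manage one app instance ($)
    users = 1_000        # assumed number of users sharing a multi-tenant instance

    single_tenant_per_user = mgmt_cost          # each user (or firm) runs its own instance
    multi_tenant_per_user = mgmt_cost / users   # one shared instance, cost split across users

    print(f"single-tenant: ${single_tenant_per_user:,.0f} per user per year")
    print(f"multi-tenant:  ${multi_tenant_per_user:,.0f} per user per year")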

Can Microsoft Grasp the Internet Cloud?

See my new Forbes.com commentary on the Microsoft-Yahoo search partnership:

Ballmer appears now to get it. “The more searches, the more you learn,” he says. “Scale drives knowledge, which can turn around and drive innovation and relevance.”

Microsoft decided in 2008 to build 20 new data centers at a cost of $1 billion each. This was a dramatic commitment to the cloud. Conceived by Bill Gates’s successor, Ray Ozzie, the global platform would serve up a new generation of Web-based Office applications dubbed Azure. It would connect video gamers on its Xbox Live network. And it would host Microsoft’s Hotmail and search applications.

The new Bing search engine earned quick acclaim for relevant searches and better-than-Google pre-packaged details about popular health, transportation, location and news items. But with just 8.4% of the market, Microsoft’s $20 billion infrastructure commitment would be massively underutilized. Meanwhile, Yahoo, which still leads in news, sports and finance content, could not remotely afford to build a similar new search infrastructure to compete with Google and Microsoft. Thus, the combination. Yahoo and Microsoft can share Ballmer’s new global infrastructure.

Getting the exapoint. Creating the future.

Lots of commentators continue to misinterpret the research I and others have done on Internet traffic and its interplay with network infrastructure investment and communications policy.

I think that new video applications require lots more bandwidth — and, equally or even more important, that more bandwidth drives creative new applications. Two sides of the innovation coin. And I think investment-friendly policies are necessary both to encourage deployment of new wireline and wireless broadband and to boost innovative new applications and services for consumers and businesses.

But this article, as one of many examples, mis-summarizes my view. It uses scary words like “apocalypse,” “catastrophe,” and, well, “scare mongering,” to describe my optimistic anticipation of an exaflood of Internet innovations coming our way. I don’t think that

the world will simply run out of bandwidth and we’ll all be weeping over our clogged tubes.

Not unless we block the expansion of new network capacity and capability.

Climbing the knowledge automation ladder

Check out Stephen Wolfram’s new project called Alpha, which moves beyond searching for information on the Web and toward the integration of knowledge into more useful, higher-level patterns. I find the prospect of offloading lots of data mining and other drudgery onto Stephen Wolfram immediately appealing for my own research and analysis work. Quicker research would yield more — and, one might hope, better — analysis. One can imagine lots of hiccups in getting to a real product, but this video demo offers a fun and enticing beginning.

Bandwidth and QoS: Much ado about something

The supposed top finding of a new report commissioned by the British telecom regulator Ofcom is that we won’t need any QoS (quality of service) or traffic management to accommodate next-generation video services, which are driving Internet traffic at consistently high annual growth rates of between 50% and 60%. TelecomTV One headlined, “Much ado about nothing: Internet CAN take video strain says UK study.”
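
It is worth pausing on what those growth rates compound to. A quick back-of-envelope calculation in Python, using only the rates quoted above:

    # Compounding the report's quoted 50-60% annual traffic growth rates.
    for rate in (0.50, 0.60):
        for years in (5, 10):
            multiple = (1 + rate) ** years
            print(f"{rate:.0%} per year for {years} years -> {multiple:.0f}x today's traffic")

Even the low end of that range implies roughly 58 times today's traffic within a decade.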

But the content of the Analysys Mason (AM) study, entitled “Delivering High Quality Video Services Online,” supports neither (1) the media headline — “Much ado about nothing,” which implies that next-generation services and brisk traffic growth don’t require much new technology or investment to accommodate them — nor (2) its own “finding” that QoS and traffic management aren’t needed to deliver next-generation content and services.

For example, AM acknowledges in one of its five key findings in the Executive Summary: 

innovative business models might be limited by regulation: if the ability to develop and deploy novel approaches was limited by new regulation, this might limit the potential for growth in online video services.

In fact, the very first key finding says:

A delay in the migration to 21CN-based bitstream products may have a negative impact on service providers that use current bitstream products, as growth in consumption of video services could be held back due to the prohibitive costs of backhaul capacity to support them on the legacy core network. We believe that the timely migration to 21CN will be important in enabling significant take-up of online video services at prices that are reasonable for consumers.

So very large investments in new technologies and platforms are needed, and new regulations that discourage this investment could delay crucial innovations on the edge. Sounds like much ado about something, something very big.

“Innovation isn’t dead.”

See this fun interview with the energetic early Web innovator Marc Andreessen. Andreessen is on the Facebook board, has his own social networking company called Ning, and is just launching a new venture fund. He talks about Kindles, iPhones, social nets, the theory of cascading innovation, and says we should create new “virtual banks” to get past the financial crisis.

Silicon Shift

Take a look at this 40-minute interview with Jen-Hsun Huang, CEO of graphics chip maker Nvidia. It’s a non-technical discussion of a very important topic in the larger world of computing and the Internet: the rise of the GPU, the graphics processing unit.

Almost 40 years ago the CPU — or central processing unit — burst onto the scene and enabled the PC revolution, which was mostly about word processing (text) and simple spreadsheets (number crunching). But today, as Nvidia and AMD’s ATI division add programmability to their graphics chips, the GPU is becoming the next-generation general-purpose processor. (Huang briefly describes the CUDA programming architecture, which he compares to the x86 architecture of the CPU age.) With its massive parallelism and its ability to render the visual applications most important to today’s consumers — games, photos, movies, art, Photoshop, YouTube, Google Earth, virtual worlds — the GPU rises to match the CPU’s “centrality” in the computing scheme.
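
For a flavor of the data-parallel style GPUs are built around, here is a minimal Python sketch. NumPy on a CPU stands in for the GPU here, but the pattern is the same one CUDA kernels express: one operation applied across millions of elements at once, instead of an element-at-a-time loop.

    import numpy as np

    # A million pixel brightness values, the kind of data GPUs chew through.
    pixels = np.random.rand(1_000_000)

    # CPU-style serial approach: touch one element at a time.
    brightened_loop = [min(p * 1.2, 1.0) for p in pixels]

    # Data-parallel approach: one operation over the whole array at once.
    brightened_parallel = np.minimum(pixels * 1.2, 1.0)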

Less obvious, the GPU’s attributes also make it useful for all sorts of non-consumer applications like seismic geographic imaging for energy exploration, high-end military systems, and even quantitative finance.

Perhaps the most exciting shift unleashed by the GPU, however, is in cloud computing. At the January 2009 Consumer Electronics Show in Las Vegas, AMD and a small but revolutionary start-up called LightStage/Otoy announced they are building the world’s fastest petaflops supercomputer at LightStage/Otoy’s Burbank, CA, offices. But this isn’t just any supercomputer. It’s based on GPUs, not CPUs. And it’s not just really, really fast. Designed for the Internet age, this “render cloud” will enable real-time photorealistic 3D gaming and virtual worlds across the Web. It will compress the power of the most advanced motion picture CGI (computer-generated imagery) techniques, which can consume hours to render one movie frame and months to produce movie sequences, into real time . . . and link this power to the wider world over the Net.
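
The jump from offline CGI rendering to real time can be put in rough numbers. The paragraph above says only "hours" per frame; taking two hours as an illustrative assumption, the required speedup looks like this:

    # Illustrative speedup from offline CGI rendering to real-time play.
    hours_per_frame = 2        # assumed offline render time (the text says only "hours")
    fps = 30                   # typical real-time frame rate

    offline_secs = hours_per_frame * 3600   # 7,200 seconds per frame
    realtime_secs = 1 / fps                 # ~0.033 seconds per frame
    print(f"required speedup: {offline_secs / realtime_secs:,.0f}x")  # 216,000x here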

Watch this space. The GPU story is big.

Straw Men Can’t Swim

The venerable Economist magazine has made a hash of my research on the growth of the Internet, which examines the rich media technologies now flooding onto the Web and projects Internet traffic over the coming decade. This “exaflood” of new applications and services represents a bounty of new entertainment, education, and business applications that can drive productivity and economic growth across all our industries and the world economy. 

But somehow, The Economist was convinced that my research represents some “gloomy prophecy,” that I am “doom-mongering” about an Internet “overload” that could “crash” the Internet. Where does The Economist find any evidence for these silly charges?

In a series of reports, articles (here and here), and presentations around the globe — and in a long, detailed, nuanced, very pleasant interview with The Economist, in which I thought the reporter grasped the key points — I have consistently said the exaflood is an opportunity, an embarrassment of riches.

I’ve also said it will take a lot of investment in networks (both wired and wireless), data centers, and other cloud infrastructure both to drive and to accommodate this exaflood. Some have questioned this rather mundane statement, but for the life of me I can’t figure out why they deny that building this amazingly powerful global Internet might cost a good bit of money.

One critic of mine has said he thinks we might need to spend $5-10 billion on new Net infrastructure over the next five years. What? We already spend some $70 billion a year on all communications infrastructure in the U.S., with an ever-greater portion of that going toward what we might consider the Net. Google invests more than $3 billion a year in its cloud infrastructure, Verizon is building a $25-billion fiber-to-the-home network, and AT&T is investing another $10 billion, just for starters. Over the last 10 years, the cable TV companies invested some $120 billion. And Microsoft just yesterday said its new cloud computing infrastructure will consist of 20 new “super data centers,” at $1 billion apiece.
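
Tallying only the figures cited above, and roughly annualizing the multi-year commitments over five years (the annualization is my simplification, not a figure from the companies), the gap with the critic's estimate is stark:

    # Rough annualized tally of the investment figures cited above ($ billions).
    # Spreading multi-year totals over five years is my own simplification.
    annual_billions = {
        "Google cloud infrastructure": 3.0,        # cited: >$3B per year
        "Verizon fiber-to-the-home": 25.0 / 5,     # cited: $25B total
        "AT&T broadband": 10.0 / 5,                # cited: $10B total
        "cable TV upgrades": 120.0 / 10,           # cited: $120B over 10 years
        "Microsoft data centers": 20.0 / 5,        # cited: 20 centers at $1B each
    }
    total = sum(annual_billions.values())
    print(f"~${total:.0f}B per year from these items alone")
    # vs. the $1-2B per year implied by "$5-10 billion over the next five years"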

I’m glad The Economist quoted my line that “today’s networks are not remotely prepared to handle this exaflood.” Which is absolutely, unambiguously, uncontroversially true. Can you get all the HD video you want over your broadband connection today? Do all your remote applications work as fast as you’d like? Is your mobile phone and Wi-Fi access as widespread and flawless as you’d like? Do videos or applications always work instantly, without ever a hint of buffering or delay? Are today’s metro switches prepared for a jump from voice-over-IP to widespread high-resolution video conferencing? No, not even close.

But as we add capacity and robustness to many of these access networks, usage and traffic will surge, and the bottlenecks will shift to other parts of the Net. Core, edge, metro, access, data center — the architecture of the Net is ever-changing, with technologies and upgrades and investment happening in different spots at varying pace. This is not a debate about whether the Internet will “crash.” It’s a discussion about how the Net will evolve and grow, about what its capabilities and architecture will be, and about how much it will cost and how we will govern it, but mostly about how much it will yield in new innovation and economic growth.

The Economist and the myriad bloggers who every day try to kill some phantom catastrophe theory I do not recognize are engaging in the old and very tedious practice of setting up digital straw men, which they then heroically strike down with a bold punch of the delete button. Ignoring the real issues and the real debate doesn’t take much effort, nor much thought.

Clouds are expensive

Microsoft, having finally capitulated to the Web a couple of weeks ago with the announcement of Ray Ozzie’s new Net-based strategy, now says it will build 20 new data centers at $1 billion apiece. Google is already investing some $3 billion a year in its cloud infrastructure.

Lots of people have criticized my rough estimate of a couple hundred billion dollars in new Net investment over the next five years, saying it’s closer to $5-10 billion, and I wonder what the heck they are thinking.

“Googlephobia”: An Unholy Alliance

My colleague Adam Thierer has an excellent post warning of the coming war on Google:

So, here we have Wu raising the specter of search engine bias and Lessig raising the specter of Google-as-panopticon. And this comes on top of groups like EPIC and CDT calling for more regulation of the online advertising marketplace in the name of protecting privacy. Alarm bells must be going off at the Googleplex. But we all have reason to be concerned because greater regulation of Google would mean greater regulation of the entire code/application layer of the Net. It’s bad enough that we likely have greater regulation of the infrastructure layer on the way thanks to Net neutrality mandates. We need to work hard to contain the damage of increased calls for government to get its hands all over every other layer of the Net.

Speeding the Cloud

For someone like me who studies cloud computing and Internet traffic, which is measured in tera-, peta-, and exabytes, Google’s terabyte sorting record is interesting:

we were able to sort 1TB (stored on the Google File System as 10 billion 100-byte records in uncompressed text files) on 1,000 computers in 68 seconds. By comparison, the previous 1TB sorting record is 209 seconds on 910 computers.
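
A quick sanity check on those figures (my own arithmetic, using only the numbers in the quote) shows the per-machine throughput each run implies:

    # Per-machine throughput implied by the quoted 1TB sorting runs.
    TB = 1e12  # bytes: 10 billion records x 100 bytes each

    google_run = TB / 68 / 1_000   # bytes/sec per machine (1,000 machines, 68 s)
    prior_run = TB / 209 / 910     # bytes/sec per machine (910 machines, 209 s)

    print(f"Google run:   {google_run / 1e6:.1f} MB/s per machine")  # ~14.7
    print(f"prior record: {prior_run / 1e6:.1f} MB/s per machine")   # ~5.3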

But I suppose I can see how you might not feel the same way.

Cloudy Forecast

Coincident with the news that Microsoft is embracing the Web even for its longtime PC-centric OS and apps, The Economist has a big special report on “cloud computing,” including articles on:

– “The Evolution of Data Centres”
– “Software as a Service”
– “Connecting to the Cloud”
– “The Economics of the Cloud”
– “The Effect on Business”; and
– “Computers without Borders”