Category Archives: Internet

Can Microsoft Grasp the Internet Cloud?

See my new Forbes.com commentary on the Microsoft-Yahoo search partnership:

Ballmer appears now to get it. “The more searches, the more you learn,” he says. “Scale drives knowledge, which can turn around and drive innovation and relevance.”

Microsoft decided in 2008 to build 20 new data centers at a cost of $1 billion each. This was a dramatic commitment to the cloud. Conceived by Bill Gates’s successor, Ray Ozzie, the global platform, dubbed Azure, would serve up a new generation of Web-based Office applications. It would connect video gamers on its Xbox Live network. And it would host Microsoft’s Hotmail and search applications.

The new Bing search engine earned quick acclaim for relevant searches and better-than-Google pre-packaged details about popular health, transportation, location and news items. But with just 8.4% of the search market, Microsoft would see its $20 billion infrastructure commitment massively underutilized. Meanwhile, Yahoo, which still leads in news, sports and finance content, could not remotely afford to build a similar new search infrastructure to compete with Google and Microsoft. Thus, the combination: Yahoo and Microsoft can share Ballmer’s new global infrastructure.

Broadband benefit = $32 billion

We recently estimated the dramatic gains in “consumer bandwidth” — our ability to communicate and take advantage of the Internet. So we note this new study from the Internet Innovation Alliance, written by economists Mark Dutz, Jonathan Orszag, and Robert Willig, which estimates a consumer surplus from U.S. residential broadband Internet access of $32 billion. “Consumer surplus” is the net benefit consumers enjoy: the difference between the value they place on a product and what they actually pay for it.
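To put the $32 billion figure in rough per-household terms, here is a quick back-of-the-envelope sketch. The ~75 million residential-line count comes from the broadband-growth figures cited later on this page; the per-household breakdown is my own illustration, not a number from the study.

```python
# Rough scale check on the $32 billion consumer-surplus estimate.
# Assumes ~75 million U.S. residential broadband lines (a figure cited
# elsewhere on this page); the per-household breakdown is illustrative,
# not taken from the Dutz/Orszag/Willig study.
total_surplus = 32e9          # $32 billion per year
households = 75e6             # ~75 million residential broadband lines

per_household_yearly = total_surplus / households
per_household_monthly = per_household_yearly / 12

print(f"~${per_household_yearly:.0f} per household per year")    # ~$427
print(f"~${per_household_monthly:.0f} per household per month")  # ~$36
```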

Jackson’s traffic spike

Om Malik surveys the Net traffic spike after Michael Jackson’s death:

Around 6:30 p.m. EST, Akamai’s Net Usage Index for News spiked all the way to 4,247,971 global visitors per minute vs. normal traffic of 2,000,000, a 112 percent gain.

Bandwidth Boom: Measuring Communications Capacity

See our new paper estimating the growth of consumer bandwidth – or our capacity to communicate – from 2000 to 2008. We found:

  • a huge 5,400% increase in residential bandwidth;
  • an astounding 54,200% boom in wireless bandwidth; and
  • an almost 100-fold increase in total consumer bandwidth

[Chart: U.S. consumer bandwidth, 2000-08, residential and wireless]

U.S. consumer bandwidth at the end of 2008 totaled more than 717 terabits per second, yielding, on a per capita basis, almost 2.4 megabits per second of communications power.
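Those totals are easy to sanity-check. A quick sketch, assuming a 2008 U.S. population of roughly 304 million (my round figure, not necessarily the paper’s):

```python
# Sanity check: total consumer bandwidth vs. per-capita bandwidth.
# Assumes a 2008 U.S. population of ~304 million (my round figure).
total_bandwidth_bps = 717e12      # 717 terabits per second
population = 304e6                # ~304 million people

per_capita_bps = total_bandwidth_bps / population
print(f"{per_capita_bps / 1e6:.2f} Mbps per capita")  # ~2.36 Mbps

# The growth multiples implied by the percentage gains above:
print(f"Residential: {1 + 5400 / 100:.0f}x growth")   # 55x
print(f"Wireless:    {1 + 54200 / 100:.0f}x growth")  # 543x
```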

“Code” at 10

Check out Cato Unbound’s symposium on Lawrence Lessig’s 1999 book Code and Other Laws of Cyberspace. Declan McCullagh leads off, with Harvard’s Jonathan Zittrain and my former colleague Adam Thierer next, and then a response from Lessig himself.

Here’s Thierer’s bottom line:

Luckily for us, Lessig’s lugubrious predictions proved largely unwarranted. Code has not become the great regulator of markets or enslaver of man; it has been a liberator of both. Indeed, the story of the past digital decade has been the exact opposite of the one Lessig envisioned in Code. Cyberspace has proven far more difficult to “control” or regulate than any of us ever imagined. More importantly, the volume and pace of technological innovation we have witnessed over the past decade has been nothing short of stunning.

Had there been anything to Lessig’s “code-is-law” theory, AOL’s walled-garden model would still be the dominant web paradigm instead of search, social networking, blogs, and wikis. Instead, AOL — a company Lessig spent a great deal of time fretting over in Code — was forced to tear down those walls years ago in an effort to retain customers, and now Time Warner is spinning it off entirely. Not only are walled gardens dead, but just about every proprietary digital system is quickly cracked open and modified or challenged by open source and free-to-the-world Web 2.0 alternatives. How can this be the case if, as Lessig predicted, unregulated code creates a world of “perfect control”?

Getting the exapoint. Creating the future.

Lots of commentators continue to misinterpret the research I and others have done on Internet traffic and its interplay with network infrastructure investment and communications policy.

I think that new video applications require lots more bandwidth — and, equally or even more important, that more bandwidth drives creative new applications. Two sides of the innovation coin. And I think investment-friendly policies are necessary both to encourage deployment of new wireline and wireless broadband and to boost innovative new applications and services for consumers and businesses.

But this article, as one of many examples, mis-summarizes my view. It uses scary words like “apocalypse,” “catastrophe,” and, well, “scare mongering,” to describe my optimistic anticipation of an exaflood of Internet innovations coming our way. I don’t think that

the world will simply run out of bandwidth and we’ll all be weeping over our clogged tubes.

Not unless we block the expansion of new network capacity and capability.

Climbing the knowledge automation ladder

Check out Stephen Wolfram’s new project called Alpha, which moves beyond searching for information on the Web and toward the integration of knowledge into more useful, higher-level patterns. I find the prospect of offloading lots of data mining and other drudgery onto Stephen Wolfram immediately appealing for my own research and analysis work. Quicker research would yield more — and, one might hope, better — analysis. One can imagine lots of hiccups in getting to a real product, but this video demo offers a fun and enticing beginning.

Bandwidth caps: One hundred and one distractions

When Cablevision of New York announced this week it would begin offering broadband Internet service of 101 megabits per second for $99 per month, lots of people took notice. Which was the point.

Maybe the 101-megabit product is a good experiment. Maybe it will be successful. Maybe not. One hundred megabits per second is a lot, given today’s applications (and especially given cable’s broadcast tree-and-branch shared network topology). A hundred megabits, for example, could accommodate more than five full-rate broadcast high-definition TV streams, or 10 or more compressed HD streams. It’s difficult to imagine too many households finding a way today to consume that much bandwidth. Tomorrow is another question. The bottom line is that in addition to making a statement, Cablevision is probably mostly targeting the small business market with this product.
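The arithmetic behind those stream counts, as a rough sketch: the ~19.4 Mbps figure is the standard ATSC broadcast HD rate, and the ~8 Mbps compressed rate is my assumption for a typical H.264 HD stream; neither number comes from Cablevision.

```python
# How many HD streams fit in 101 Mbps? A sketch with assumed bitrates:
# ~19.4 Mbps is the full ATSC broadcast HD rate; ~8 Mbps is a typical
# H.264-compressed HD stream (my assumption, not Cablevision's spec).
link_mbps = 101
broadcast_hd_mbps = 19.4
compressed_hd_mbps = 8.0

print(f"Full-rate broadcast HD streams: {link_mbps / broadcast_hd_mbps:.1f}")   # ~5.2
print(f"Compressed HD streams:          {link_mbps / compressed_hd_mbps:.1f}")  # ~12.6
```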

Far more perplexing than Cablevision’s strategy, however, was the reaction from groups like the reflexively critical Free Press:

We are encouraged by Cablevision’s plan to set a new high-speed bar of service for the cable industry. . . . this is a long overdue step in the right direction.

Free Press usually blasts virtually any decision by any network or media company. But by praising the 101-megabit experiment, Free Press is acknowledging the perfect legitimacy of charging variable prices for variable products. Pay more, get more. Pay less, get a more affordable service that still meets your needs the vast majority of the time.

Bandwidth and QoS: Much ado about something

The supposed top finding of a new report commissioned by the British telecom regulator Ofcom is that we won’t need any QoS (quality of service) or traffic management to accommodate next generation video services, which are driving Internet traffic at consistently high annual growth rates of between 50% and 60%. TelecomTV One headlined, “Much ado about nothing: Internet CAN take video strain says UK study.” 
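For a sense of how quickly such growth compounds, a quick doubling-time sketch using the quoted 50-60% rates:

```python
# How fast does traffic double at 50-60% annual growth?
import math

for annual_growth in (0.50, 0.60):
    doubling_years = math.log(2) / math.log(1 + annual_growth)
    print(f"{annual_growth:.0%} annual growth -> traffic doubles every "
          f"{doubling_years:.1f} years")
# 50% -> ~1.7 years; 60% -> ~1.5 years
```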

But the content of the Analysys Mason (AM) study, entitled “Delivering High Quality Video Services Online,” supports neither (1) the media headline — “Much ado about nothing,” which implies next generation services and brisk traffic growth don’t require much in the way of new technology or new investment to accommodate them — nor (2) its own “finding” that QoS and traffic management aren’t needed to deliver this next generation of content and services.

For example, AM acknowledges in one of its five key findings in the Executive Summary: 

innovative business models might be limited by regulation: if the ability to develop and deploy novel approaches was limited by new regulation, this might limit the potential for growth in online video services.

In fact, the very first key finding says:

A delay in the migration to 21CN-based bitstream products may have a negative impact on service providers that use current bitstream products, as growth in consumption of video services could be held back due to the prohibitive costs of backhaul capacity to support them on the legacy core network. We believe that the timely migration to 21CN will be important in enabling significant take-up of online video services at prices that are reasonable for consumers.

So very large investments in new technologies and platforms are needed, and new regulations that discourage this investment could delay crucial innovations on the edge. Sounds like much ado about something, something very big.

Apples and Oranges

Saul Hansell has done some good analysis of the broadband market (as I noted here), and I’m generally a big fan of the NYT’s Bits blog. But this item mixes cable TV apples with switched Internet oranges. And beyond that, it just misses the whole concept of products and prices.

Questioning whether Time Warner will be successful in its attempt to cap bandwidth usage on its broadband cable modem service — effectively raising the bandwidth pricing issue — Hansell writes:

I tried to explore the marginal costs with [Time Warner’s] Mr. Hobbs. When someone decides to spend a day doing nothing but downloading every Jerry Lewis movie from BitTorrent, Time Warner doesn’t have to write a bigger check to anyone. Rather, as best as I can figure it, the costs are all about building the network equipment and buying long-haul bandwidth for peak capacity.

If that is true, the question of what is “fair” is somewhat more abstract than just saying someone who uses more should pay more. After all, people who watch more hours of cable television don’t pay more than those who don’t.

It’s also true that a restaurant patron who finishes his meal doesn’t pay more than someone who leaves half of the same menu item on his plate. If he orders two bowls of soup, he gets more soup. He can’t order one bowl of soup and demand that each of his five dining partners also be served for free. Pricing decisions depend upon the product and the granularity at which it is offered.

Flameout, meltdown, starburst

Sun Microsystems was the first to know — and to boldly say — that “the network is the computer.” And yet it couldn’t capitalize on this deep and early insight. Here are six reasons why.

40 years ago today

Some good history on the evolution of the Internet. I especially liked Steve Crocker’s story about how he and his fellow early Internet developers would share ideas — not by email but by mail. Here’s Crocker’s first “Request for Comments” detailing networking protocols. Today there are some 5,000 RFCs.

Broadband bridges to where?

See my new commentary on the $7.2 billion broadband program in the federal stimulus bill. I conclude that if we’re going to spend taxpayer money at all, we should take advantage of local knowledge:

Many states have already pinpointed the areas most in need of broadband infrastructure. Local companies and entrepreneurs are likely to know best where broadband needs to be deployed – and to aggressively deliver it with the most appropriate, cost-effective technology that meets the needs of the particular market. Using the states as smart conduits is also likely to get the money to network builders more quickly.

And that

After falling seriously behind foreign nations in broadband and in our favored measure of “bandwidth-per-capita” in the early 2000s, the U.S. got its act together and is now on the right path. In the last decade, total U.S. broadband lines grew from around 5 million to over 120 million, while residential broadband grew from under 5 million to 75 million. By far the most important broadband policy point is not to discourage or distort the annual $60+ billion that private companies already invest.

Cyber-security: Let’s get serious

As the Internet’s power grows — dominating our public discourse and driving deeper into every industry and commercial realm — it becomes a bigger target. As the key platform of our knowledge economy, it invites mischief, or worse.

Most digital citizens know the hassles and even dangers of viruses, phishing, and all manner of malware. But cybersecurity is a much broader and deeper topic, and it will grow ever more so. The Center for Strategic and International Studies published a good report on December 8 detailing the threats and offering recommendations to “the 44th Presidency.” CSIS suggested a number of specific actions, among them:

(1) crafting a comprehensive national strategy; (2) having the White House lead the effort; (3) “regulat[ing] cyberspace”; (4) authenticating identities; (5) modernizing old laws not suited to the digital networked world; (6) building secure government systems; and (7) not starting over, given what they saw as the previous administration’s productive start.

Yesterday the Senate Commerce Committee moved the ball forward with a hearing on the topic, including the head of the CSIS study, James Lewis; nuclear engineer Joseph Weiss; cyber-guru Ed Amoroso of AT&T; and Eugene Spafford of Purdue University’s Center for Education and Research in Information Assurance and Security — better known by its brilliant acronym, CERIAS.

Wireless wonders

As the wireless world continues to churn out terrific new hardware and software innovations and ever-faster network speeds, Engadget talks about the industry in depth with AT&T’s wireless chief, Ralph de la Vega.

Rare reason in the broadband debate

Calm and reasoned discussion in debates over broadband and Internet policy is rare. But Saul Hansell, in a series of posts at the NYTimes Bits blog, does an admirable job surveying international broadband comparisons. Here are parts I and II, with part III on the way. [Update: Here’s part III. And here’s a good previous post on “broadband stimulus.”]

So far Hansell has asked two basic questions: Why is theirs faster? And why is theirs cheaper? “Theirs” being non-American broadband.

His answers: “Their” broadband is not too much faster than American broadband, at least not anymore. And their broadband is cheaper for a complicated set of reasons, but mostly because of government price controls that could hurt future investment and innovation in the nations that practice them.

Ask America. We already tried it. But more on that later.

Hansell makes several nuanced points: (1) broadband speeds depend heavily on population density. The performance and cost of communications technologies are distance-sensitive. It’s much cheaper to deliver fast speeds in Asia’s big cities and Europe’s crowded plains than across America’s expanse. (2) Hansell also points to studies showing some speed inflation in Europe and Asia. In other words, advertised speeds are often overstated. But most importantly, (3) Hansell echoes my basic point of the last couple of years:

. . . Internet speeds in the United States are getting faster. Verizon is wiring half its territory with its FiOS service, which strings fiber optic cable to people’s homes. FiOS now offers 50 Mbps service and has the capacity to offer much faster speeds. As of the end of 2008, 4.1 million homes in the United States had fiber service, which puts the United States right behind Japan, which has brought fiber directly to 8.2 million homes, according to the Fiber to the Home Council. Much of what is called fiber broadband in Korea, Sweden and until recently Japan, only brings the fiber to the basement of apartment buildings or street-corner switch boxes.

AT&T is building out that sort of network for its U-Verse service, running fiber to small switching stations in neighborhoods, so that it can offer much faster DSL with data speeds of up to 25 Mbps and Internet video as well. And cable systems, which cover more than 90 percent of the country, are starting to deploy the next generation of Internet technology, called Docsis 3.0. It can offer speeds of 50 Mbps. . . .


“Innovation isn’t dead.”

See this fun interview with the energetic early Web innovator Marc Andreessen. Andreessen is on the Facebook board, has his own social networking company called Ning, and is just launching a new venture fund. He talks about Kindles, iPhones, social nets, the theory of cascading innovation, and says we should create new “virtual banks” to get past the financial crisis.

Silicon Shift

Take a look at this 40-minute interview with Jen-Hsun Huang, CEO of graphics chip maker Nvidia. It’s a non-technical discussion of a very important topic in the large world of computing and the Internet: namely, the rise of the GPU — the graphics processing unit.

Almost 40 years ago the CPU — or central processing unit — burst onto the scene and enabled the PC revolution, which was mostly about word processing (text) and simple spreadsheets (number crunching). But today, as Nvidia and AMD’s ATI division add programmability to their graphics chips, the GPU is becoming the next-generation general-purpose processor. (Huang briefly describes the CUDA programmability architecture, which he compares to the x86 architecture of the CPU age.) With its massive parallelism and its ability to render the visual applications most important to today’s consumers — games, photos, movies, art, Photoshop, YouTube, Google Earth, virtual worlds — the GPU rises to match the CPU’s “centrality” in the computing scheme.
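CUDA itself is a C-based environment, but the data-parallel style it exposes (one simple operation applied across millions of elements at once) can be sketched in a few lines. This NumPy snippet illustrates the pattern only; it is not GPU code.

```python
# The data-parallel pattern GPUs excel at: apply one operation across
# millions of elements simultaneously, instead of looping one at a time.
# (A NumPy illustration of the style; real CUDA code would be C-based.)
import numpy as np

pixels = np.random.rand(1920 * 1080, 3)   # one HD frame's worth of RGB data

# CPU-style serial thinking: one pixel at a time (slow in pure Python).
# GPU-style data parallelism: one expression over the whole array;
# on a GPU, thousands of threads would each handle a slice of it.
brightened = np.clip(pixels * 1.2, 0.0, 1.0)

print(brightened.shape)  # (2073600, 3) -- every pixel transformed "at once"
```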

Less obviously, the GPU’s attributes also make it useful for all sorts of non-consumer applications, like seismic imaging for energy exploration, high-end military systems, and even quantitative finance.

Perhaps the most exciting shift unleashed by the GPU, however, is in cloud computing. At the January 2009 Consumer Electronics Show in Las Vegas, AMD and a small but revolutionary start-up called LightStage/Otoy announced they are building the world’s fastest petaflop-scale supercomputer at LightStage/Otoy’s Burbank, CA, offices. But this isn’t just any supercomputer. It’s based on GPUs, not CPUs. And it’s not just really, really fast. Designed for the Internet age, this “render cloud” will enable real-time photorealistic 3D gaming and virtual worlds across the Web. It will compress the power of the most advanced motion picture CGI (computer-generated imagery) techniques, which can consume hours to render one movie frame and months to produce movie sequences, into real time . . . and link this power to the wider world over the Net.

Watch this space. The GPU story is big.

Pulling rabbits out of hats

How has debt-laden Level 3 survived (at least so far) two crashes now? Dennis Berman tells the story.

Since 2001, it has been paying $500 million to $600 million a year in interest. Yet it has never been able to cut its long-term debt load below $5 billion. Even worse, Level 3 hasn’t made a penny of profit since 1999. Its stock has traded below $8 for the past eight years. It closed at 98 cents on Monday.

Level 3 seemed a prime target to get pulled into today’s great credit maw, where decent but otherwise cash-strapped companies go to die. By November, bond investors were seriously doubting the company’s ability to pay off $1.1 billion in debt coming due this year and next, valuing its bonds as low as 30 cents on the dollar.

But the company has convinced large equity holders to bail it out of the crushing debt load, at least for the next year. 2010 and beyond will still be rough. Berman’s article did not mention the angel who saved Level 3 during the tech/telecom crash of 2000-02: Warren Buffett.

The nuts & bolts of the Net

For those who found the Google-net-neutrality-edge-caching story confusing, here’s a terrifically lucid primer by my PFF colleague Adam Marcus explaining “edge caching” and content delivery networks (CDNs) and, even more basically, the concepts of bandwidth and latency.
