Category Archives: Internet

When Nerds Attack!

Yesterday’s Wall Street Journal story on the supposed softening of Google’s “net neutrality” policy stance, which I posted about here, predictably got all the nerds talking. 

Here was my attempt, over at the Technology Liberation Front, to put this topic in perspective:

_______________________ 

Bandwidth, Storewidth, and Net Neutrality

Very happy to see the discussion over The Wall Street Journal‘s Google/net neutrality story. Always good to see holes poked and the truth set free.

But let’s not allow the eruptions, backlashes, recriminations, and “debunkings” — This topic has been debunked. End of story. Over. Sit down! — obscure the still-fundamental issues. This is a terrific starting point for debate, not an end.

Content delivery networks (CDNs) and caching have always been a part of my analysis of the net neutrality debate. Here was testimony that George Gilder and I prepared for a Senate Commerce Committee hearing almost five years ago, in April 2004, where we predicted that a somewhat obscure new MCI “network layers” proposal, as it was then called, would be the next big communications policy issue. (At about the same time, my now-colleague Adam Thierer was also identifying this as an emerging issue/threat.)

Gilder and I tried to make the point that this “layers” — or network neutrality — proposal would, even if attractive in theory, be very difficult to define or implement. Networks are a dynamic realm of ever-shifting bottlenecks, where bandwidth, storage, caching, and peering, in the core, edge, and access, in the data center, on end-user devices, from the heavens and under the seas, constantly require new architectures, upgrades, and investments, thus triggering further cascades of hardware, software, and protocol changes elsewhere in this growing global web. It seemed to us at the time, ill-defined as it was, that this new policy proposal was probably a weapon for one group of Internet companies, with one type of business model, to bludgeon another set of Internet companies with a different business model. 

We wrote extensively about storage, caching, and content delivery networks in the pages of the Gilder Technology Report, first laying out the big conceptual issues in a 1999 article, “The Antediluvian Paradigm.” [Correction: “The Post-Diluvian Paradigm”] Gilder coined a word for this nexus of storage and bandwidth: Storewidth. Gilder and I even hosted a conference, also dubbed “Storewidth,” dedicated to these storage, memory, and content delivery network technologies. See, for instance, this press release for the 2001 conference with all the big players in the field, including Akamai, EMC, Network Appliance, Mirror Image, and one Eric Schmidt, chief executive officer of . . . Novell. In 2002, Google’s Larry Page spoke, as did Jay Adelson, founder of the big data-center-and-network-peering company Equinix, along with Yahoo! and many of the big network and content companies.
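To make the storewidth idea concrete, here is a minimal, purely illustrative sketch of the trade-off at its heart: an edge cache spends storage to avoid repeated trips across the backbone. Every name and number in it is hypothetical.

```python
from collections import OrderedDict

# Purely illustrative sketch of the "storewidth" trade-off: an edge cache
# spends storage to avoid repeated trips across the backbone. All names
# and numbers here are hypothetical, not drawn from any real CDN.

class EdgeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity        # how many objects the edge node can hold
        self.store = OrderedDict()      # LRU order: oldest first
        self.origin_fetches = 0         # backbone trips (the bandwidth cost)

    def get(self, key: str) -> str:
        if key in self.store:
            self.store.move_to_end(key)  # cache hit: refresh LRU position
            return self.store[key]
        self.origin_fetches += 1         # cache miss: pay the backbone cost
        value = f"content-for-{key}"     # stand-in for an origin-server fetch
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently-used object
        return value

cache = EdgeCache(capacity=2)
for url in ["a", "b", "a", "a", "c", "b"]:
    cache.get(url)
print(f"origin fetches: {cache.origin_fetches} of 6 requests")  # 4 of 6
```

Scale the same logic up to thousands of edge nodes and you have the basic economics of a CDN: cheap disks at the edge traded against expensive long-haul bandwidth.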

Hey, Sergey and Larry, thanks

As perhaps the earliest opponent of “net neutrality” regulation, it feels good to know I’m no longer “evil.”

Net Neutrality forever! Wait, never mind…

When you’ve written as much as I have about the weird Web topic known as “network neutrality,” this is big news indeed.

The celebrated openness of the Internet — network providers are not supposed to give preferential treatment to any traffic — is quietly losing powerful defenders.

Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers.

What some innocuously call “equal network access,” others call meddlesome regulation. Net neutrality could potentially provide a platform for Congress and the FCC to micromanage everything on the Net, from wires and switches to applications and services to the bits and bytes themselves. It is a potentially monstrous threat to dynamic innovation on the fast-growing Net, where experimentation still reigns. 

But now Google, a newly powerful force in Washington and Obamaland, may be reversing course. The regulatory threat level may have just dropped from orange to yellow.

Update: Richard Bennett expertly comments here.

Yep, confirmed, it’s a sham

An investigation confirms that FCC Chairman Kevin Martin’s crusade to force à la carte pricing and unbundling of cable TV channels was indeed a sham, as many of us have been saying for years.

My colleague Ken Ferree comments here.

Web Wars

A new report says we’re not prepared for the cyberwars to come and need a White House office to address emerging threats.

Straw Men Can’t Swim

The venerable Economist magazine has made a hash of my research on the growth of the Internet, which examines the rich media technologies now flooding onto the Web and projects Internet traffic over the coming decade. This “exaflood” of new applications and services represents a bounty of new entertainment, education, and business offerings that can drive productivity and economic growth across all our industries and the world economy.
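To see how quickly such decade-long projections compound, consider a minimal sketch; the 50 percent annual growth rate and the starting volume below are hypothetical placeholders, not figures from the research:

```python
# Illustrative only: how a decade of compound traffic growth plays out.
# Both numbers are hypothetical placeholders, not figures from the research:
# a 50% annual growth rate and a 10 EB/month starting volume.
traffic_eb_per_month = 10.0
for year in range(1, 11):
    traffic_eb_per_month *= 1.5
    print(f"year {year:2d}: {traffic_eb_per_month:7.1f} EB/month")
# 1.5 ** 10 is about 57.7, so traffic grows roughly 58x over the decade.
```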

But somehow, The Economist was convinced that my research represents some “gloomy prophecy,” that I am “doom-mongering” about an Internet “overload” that could “crash” the Internet. Where does The Economist find any evidence for these silly charges?

In a series of reports, articles (here and here), and presentations around the globe — and in a long, detailed, nuanced, very pleasant interview with The Economist, in which I thought the reporter grasped the key points — I have consistently said the exaflood is an opportunity, an embarrassment of riches.

I’ve also said it will take a lot of investment in networks (both wired and wireless), data centers, and other cloud infrastructure to both drive and accommodate this exaflood. Some have questioned this rather mundane statement, but for the life of me I can’t figure out why they deny that building this amazingly powerful global Internet might cost a good bit of money.

One critic of mine has said he thinks we might need to spend $5-10 billion on new Net infrastructure over the next five years. What? We already spend some $70 billion a year on all communications infrastructure in the U.S., with an ever-greater portion of that going toward what we might consider the Net. Google invests more than $3 billion a year in its cloud infrastructure, Verizon is building a $25-billion fiber-to-the-home network, and AT&T is investing another $10 billion, just for starters. Over the last 10 years, the cable TV companies invested some $120 billion. And Microsoft just yesterday said its new cloud computing infrastructure will consist of 20 new “super data centers,” at $1 billion apiece.
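Tally just those named figures over a five-year window (the horizon and the steady run-rates are my assumptions) and the arithmetic speaks for itself:

```python
# Back-of-the-envelope arithmetic on just the figures named above, in
# billions of dollars; the five-year horizon and the assumption that the
# Google and cable run-rates hold steady are mine.
named_projects_b = (
    3 * 5        # Google: ~$3B/year on cloud infrastructure, over 5 years
    + 25         # Verizon: fiber-to-the-home build
    + 10         # AT&T: network investment
    + 120 / 2    # cable: ~$120B over 10 years -> roughly 5 years' worth
    + 20 * 1     # Microsoft: 20 "super data centers" at ~$1B apiece
)
print(f"named projects alone: ~${named_projects_b:.0f} billion over five years")
# ~$130 billion -- before counting the rest of the ~$70B/year industry total.
```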

I’m glad The Economist quoted my line that “today’s networks are not remotely prepared to handle this exaflood.” Which is absolutely, unambiguously, uncontroversially true. Can you get all the HD video you want over your broadband connection today? Do all your remote applications work as fast as you’d like? Are your mobile phone and Wi-Fi access as widespread and flawless as you’d like? Do videos or applications always work instantly, without ever a hint of buffering or delay? Are today’s metro switches prepared for a jump from voice-over-IP to widespread high-resolution video conferencing? No, not even close.

But as we add capacity and robustness to many of these access networks, usage and traffic will surge, and the bottlenecks will shift to other parts of the Net. Core, edge, metro, access, data center — the architecture of the Net is ever-changing, with technologies and upgrades and investment happening in different spots at varying pace. This is not a debate about whether the Internet will “crash.” It’s a discussion about how the Net will evolve and grow, about what its capabilities and architecture will be, and about how much it will cost and how we will govern it, but mostly about how much it will yield in new innovation and economic growth.
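A toy model makes the point; the hop names and capacities are invented for illustration:

```python
# Illustrative only: end-to-end throughput is bounded by the slowest hop,
# so upgrading one segment simply shifts the bottleneck elsewhere.
# Capacities are made-up numbers, in Mbps.
path = {"core": 10_000, "metro": 1_000, "access": 20}

def bottleneck(hops: dict) -> tuple:
    name = min(hops, key=hops.get)   # hop with the least capacity
    return name, hops[name]

print(bottleneck(path))   # ('access', 20): today the access network limits us
path["access"] = 2_000    # a fiber-to-the-home upgrade...
print(bottleneck(path))   # ('metro', 1000): ...and the bottleneck moves upstream
```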

The Economist and the myriad bloggers, who every day try to kill some phantom catastrophe theory I do not recognize, are engaging in the old and very tedious practice of setting up digital straw men, which they then heroically strike down with a bold punch of the delete button. Ignoring the real issues and the real debate doesn’t take much effort, nor much thought.

Clouds are expensive

Microsoft, having finally capitulated to the Web a couple of weeks ago with the announcement of Ray Ozzie’s new Net-based strategy, now says it will build 20 new data centers at $1 billion apiece. Google is already investing some $3 billion a year in its cloud infrastructure.

Lots of people have criticized my rough estimates of a couple hundred billion in new Net investment over the next five years, saying it’s closer to $5-10 billion, and I wonder what the heck they are thinking.

“Googlephobia”: An Unholy Alliance

My colleague Adam Thierer with an excellent post warning of the coming war on Google:

So, here we have Wu raising the specter of search engine bias and Lessig raising the specter of Google-as-panopticon. And this comes on top of groups like EPIC and CDT calling for more regulation of the online advertising marketplace in the name of protecting privacy. Alarm bells must be going off at the Googleplex. But we all have reason to be concerned because greater regulation of Google would mean greater regulation of the entire code / application layer of the Net. It’s bad enough that we likely have greater regulation of the infrastructure layer on the way thanks to Net neutrality mandates. We need to work hard to contain the damage of increased calls for government to get its hands all over every other layer of the Net.

Speeding the Cloud

For someone like me who studies cloud computing and Internet traffic, which is measured in tera-, peta-, and exabytes, Google’s terabyte sorting record is interesting:

we were able to sort 1TB (stored on the Google File System as 10 billion 100-byte records in uncompressed text files) on 1,000 computers in 68 seconds. By comparison, the previous 1TB sorting record is 209 seconds on 910 computers.
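A quick back-of-the-envelope calculation on the quoted figures shows just how big the jump is, in both aggregate and per-machine throughput:

```python
# Back-of-the-envelope throughput implied by the quoted benchmark figures.
TB = 10**12  # 1 TB = 10 billion records x 100 bytes each

for label, machines, seconds in [("Google, 2008", 1000, 68),
                                 ("previous record", 910, 209)]:
    aggregate = TB / seconds         # bytes sorted per second, cluster-wide
    per_node = aggregate / machines  # bytes per second per machine
    print(f"{label}: {aggregate / 1e9:.1f} GB/s aggregate, "
          f"{per_node / 1e6:.1f} MB/s per machine")
# Google, 2008: 14.7 GB/s aggregate, 14.7 MB/s per machine
# previous record: 4.8 GB/s aggregate, 5.3 MB/s per machine
```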

But I suppose I can see how you might not feel the same way.

The big bad media monopoly

In the context of potential federal media regulation known as “à la carte,” my colleague Adam Thierer comments on new Internet video technologies and content here and here, and expertly reiterates an old theme of mine, namely that the Internet *is* à la carte.

Cloudy Forecast

Coincident with the news that Microsoft is embracing the Web even for its longtime PC-centric OS and apps, The Economist has a big special report on “cloud computing,” including articles on:

– “The Evolution of Data Centres”
– “Software as a Service”
– “Connecting to the Cloud”
– “The Economics of the Cloud”
– “The Effect on Business”; and
– “Computers without Borders”
