Category Archives: Bandwidth

The Internet Survives, and Thrives, For Now

See my analysis of the FCC’s new “net neutrality” policy at RealClearMarkets:

Despite the Federal Communications Commission’s “net neutrality” announcement this week, the American Internet economy is likely to survive and thrive. That’s because the new proposal offered by FCC chairman Julius Genachowski is lacking almost all the worst ideas considered over the last few years. No one has warned more persistently than I against the dangers of over-regulating the Internet in the name of “net neutrality.”

In a better world, policy makers would heed my friend Andy Kessler’s advice to shutter the FCC. But back on earth this new compromise should, for the near-term at least, cap Washington’s mischief in the digital realm.

. . .

The Level 3-Comcast clash showed what many of us have said all along: “net neutrality” was a purposely ill-defined catch-all for any grievance in the digital realm. No more. With the FCC offering some definition, however imperfect, businesses will now mostly have to slug it out in a dynamic and tumultuous technology arena, instead of running to the press and politicians.

Netflix Boom Leads to Switch

Netflix is moving its content delivery platform from Akamai back to Level 3. Level 3 is adding 2.9 terabits per second of new capacity specifically to support Netflix’s booming movie-streaming business.

International Broadband Comparison, continued

New numbers from Cisco allow us to update our previous comparison of actual Internet usage around the world. We think this is a far more useful metric than the usual “broadband connections per 100 inhabitants” used by the OECD and others to compile the oft-cited world broadband rankings.

What the per capita metric really measures is household size. And because the U.S. has more people in each household than many other nations, we appear worse in those rankings. But as the Phoenix Center has noted, if each OECD nation reached 100% broadband nirvana — i.e., every household in every nation connected — the U.S. would actually fall from 15th to 20th. Residential connections per capita is thus not a very illuminating measure.
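The arithmetic behind this point is easy to check. Here is a minimal sketch in Python (the household sizes are hypothetical, chosen only for illustration): two countries with identical 100% household connectivity still report different “connections per 100 inhabitants” simply because their households are different sizes.

```python
# Hypothetical illustration: two countries where EVERY household has broadband.
# "Connections per 100 inhabitants" still differs, purely due to household size.

def connections_per_100(avg_household_size: float, household_penetration: float = 1.0) -> float:
    """Broadband connections per 100 inhabitants, assuming one line per household."""
    households_per_100_people = 100 / avg_household_size
    return households_per_100_people * household_penetration

country_a = connections_per_100(avg_household_size=2.6)  # larger households
country_b = connections_per_100(avg_household_size=2.0)  # smaller households

print(f"Country A (2.6 people/household): {country_a:.1f} per 100")  # 38.5
print(f"Country B (2.0 people/household): {country_b:.1f} per 100")  # 50.0
```

Both countries are at broadband “nirvana,” yet the per-capita ranking makes Country A look roughly 23% worse. That gap measures household size, not connectivity.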

But look at the actual Internet traffic generated and consumed in the U.S.

The U.S. far outpaces every other region of the world. In the second chart, you can see that in fact only one nation, South Korea, generates significantly more Internet traffic per user than the U.S. This is no surprise. South Korea was the first nation to widely deploy fiber-to-the-x and was also the first to deploy 3G mobile, leading to not only robust infrastructure but also a vibrant Internet culture. The U.S. dwarfs most others.

If the U.S. were so far behind in broadband, we could not generate around twice as much network traffic per user as nations we are told far exceed our broadband capacity and connectivity. The U.S. has far to go in a never-ending buildout of its communications infrastructure. But we invest more than other nations, we’ve got better broadband infrastructure overall, and we use broadband more — and more effectively (see the Connectivity Scorecard and The Economist’s Digital Economy rankings) — than almost any other nation.

The conventional wisdom on this one is just plain wrong.

The Regulatory Threat to Web Video

See our commentary at Forbes.com, responding to Revision3 CEO Jim Louderback’s calls for Internet regulation.

What we have here is “mission creep.” First, Net Neutrality was about an “open Internet” where no websites were blocked or degraded. But as soon as the whole industry agreed to these perfectly reasonable Open Web principles, Net Neutrality became an exercise in micromanagement of network technologies and broadband business plans. Now, Louderback wants to go even further and regulate prices. But there’s still more! He also wants to regulate the products that broadband providers can offer.

Chronically Critical Broadband Country Comparisons

With the release of the FCC’s National Broadband Plan, we continue to hear all sorts of depressing stories about the sorry state of American broadband Internet access. But is it true?

International comparisons in such a fast-moving arena as tech and communications are tough. I don’t pretend it is easy to boil down a hugely complex topic to one right answer, but I did have some critical things to say about a major recent report that got way too many things wrong. A new article by that report’s author singled out France as being especially advanced compared with the U.S. To cut through the clutter of conflicting data and competing interpretations on broadband deployment, access, adoption, prices, and speeds, however, maybe a simple chart will help.

Here we compare network usage. Not advertised speeds, which are suspect. Not prices, which can be distorted by the use of purchasing power parity (PPP). Not “penetration,” which is largely a function of income, urbanization, and geography. No, just simply: how much data traffic do various regions create and consume?

If U.S. networks were so backward — too sparse, too slow, too expensive — would Americans be generating 65% more network traffic per capita than their Western European counterparts?

Washington liabilities vs. innovative assets

Our new article at RealClearMarkets:

As Washington and the states pile up mountainous liabilities — $3 trillion for unfunded state pensions, $10 trillion in new federal deficits through 2019, and $38 trillion (or is it $50 trillion?) in unfunded Medicare promises — the U.S. needs once again to call on its chief strategic asset: radical innovation.

One laboratory of growth will continue to be the Internet. The U.S. began the 2000’s with fewer than five million residential broadband lines and zero mobile broadband. We begin the new decade with 71 million residential lines and 300 million portable and mobile broadband devices. In all, consumer bandwidth grew almost 15,000%.

Even a thriving Internet, however, cannot escape Washington’s eager eye. As the Federal Communications Commission contemplates new “network neutrality” regulation and even a return to “Title II” telephone regulation, we have to wonder where growth will come from in the 2010’s . . . .

Collective vs. Creative: The Yin and Yang of Innovation

Later this week the FCC will accept the first round of comments in its “Open Internet” rulemaking, commonly known as Net Neutrality. Never mind that the Internet is already open and was never strictly neutral. Openness and neutrality are two appealing buzzwords that serve as the basis for potentially far-reaching new regulation of our most dynamic economic and cultural sector – the Internet.

I’ll comment on Net Neutrality from several angles over the coming days. But a terrific essay by Berkeley’s Jaron Lanier impelled me to begin by summarizing some of the big meta-arguments that have been swirling for the last few years and that now broadly define the opposing sides in the Net Neutrality debate. After surveying these broad categories, I’ll get into the weeds on technology, business, and policy.

The thrust behind Net Neutrality is a view that the Internet should conform to a narrow set of technology and business “ideals” – “open,” “neutral,” “non-discriminatory.” Wonderful words. Often virtuous. But these aren’t the only traits important to economic and cultural systems. In fact, Net Neutrality sets up a false dichotomy – a manufactured war – between open and closed, collaborative versus commercial, free versus paid, content versus conduit. I’ve made a long list of the supposed opposing forces. Net Neutrality favors only one side of the table below. It seeks to cement in place one model of business and technology. It is intensely focused on the left-hand column and is either oblivious or hostile to the right-hand column. It thinks the right-hand items are either bad (prices) or assumes they appear magically (bandwidth).

We skeptics of Net Neutrality, on the other hand, do not favor one side or the other. We understand that there are virtues all around. Here’s how I put it on my blog last autumn:

Suggesting we can enjoy Google’s software innovations without the network innovations of AT&T, Verizon, and hundreds of service providers and technology suppliers is like saying that once Microsoft came along we no longer needed Intel.

No, Microsoft and Intel built upon each other in a virtuous interplay. Intel’s microprocessor and memory inventions set the stage for software innovation. Bill Gates exploited Intel’s newly abundant transistors by creating radically new software that empowered average businesspeople and consumers to engage with computers. The vast new PC market, in turn, dramatically expanded Intel’s markets and volumes and thus allowed it to invest in new designs and multi-billion dollar chip factories across the globe, driving Moore’s law and with it the digital revolution in all its manifestations.

Software and hardware. Bits and bandwidth. Content and conduit. These things are complementary. And yes, like yin and yang, often in tension and flux, but ultimately interdependent.

Likewise, we need the ability to charge for products and set prices so that capital can be rationally allocated and the hundreds of billions of dollars in network investment can occur. It is thus these hard prices that yield so many of the “free” consumer surplus advantages we all enjoy on the Web. No company or industry can capture all the value of the Web. Most of it comes to us as consumers. But companies and content creators need at least the ability to pursue business models that capture some portion of this value so they can not only survive but continually reinvest in the future. With a market moving so fast, with so many network and content models so uncertain during this epochal shift in media and communications, these content and conduit companies must be allowed to define their own products and set their own prices. We need to know what works, and what doesn’t.

When the “network layers” regulatory model, as it was then known, was first proposed back in 2003-04, my colleague George Gilder and I prepared testimony for the U.S. Senate. Although the layers model was little more than an academic notion, we thought then this would become the next big battle in Internet policy. We were right. Even though the “layers” proposal was (and is!) an ill-defined concept, the model we used to analyze what Net Neutrality would mean for networks and Web business models still applies. As we wrote in April of 2004:

Layering proponents . . . make a fundamental error. They ignore ever changing trade-offs between integration and modularization that are among the most profound and strategic decisions any company in any industry makes. They disavow Harvard Business professor Clayton Christensen’s theorems that dictate when modularization, or “layering,” is advisable, and when integration is far more likely to yield success. For example, the separation of content and conduit – the notion that bandwidth providers should focus on delivering robust, high-speed connections while allowing hundreds of millions of professionals and amateurs to supply the content—is often a sound strategy. We have supported it from the beginning. But leading edge undershoot products (ones that are not yet good enough for the demands of the marketplace) like video-conferencing often require integration.

Over time, the digital and photonic technologies at the heart of the Internet lead to massive integration – of transistors, features, applications, even wavelengths of light onto fiber optic strands. This integration of computing and communications power flings creative power to the edges of the network. It shifts bottlenecks. Crystalline silicon and flawless fiber form the low-entropy substrate that carries the world’s high-entropy messages – news, opinions, new products, new services. But these feats are not automatic. They cannot be legislated or mandated. And just as innovation in the core of the network unleashes innovation at the edges, so too more content and creativity at the edge create the need for ever more capacity and capability in the core. The bottlenecks shift again. More data centers, better optical transmission and switching, new content delivery optimization, the move from cell towers to femtocell wireless architectures. There is no final state of equilibrium where one side can assume that the other is a stagnant utility, at least not in the foreseeable future.

I’ll be back with more analysis of the Net Neutrality debate, but for now I’ll let Jaron Lanier (whose book You Are Not a Gadget was published today) sum up the argument:

Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.

If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.

There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code—like the page-rank algorithms in the top search engines or Adobe’s Flash—always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.

Actually, Silicon Valley is remarkably good at not making collectivization mistakes when our own fortunes are at stake. If you suggested that, say, Google, Apple and Microsoft should be merged so that all their engineers would be aggregated into a giant wiki-like project—well you’d be laughed out of Silicon Valley so fast you wouldn’t have time to tweet about it. Same would happen if you suggested to one of the big venture-capital firms that all the start-ups they are funding should be merged into a single collective operation.

But this is exactly the kind of mistake that’s happening with some of the most influential projects in our culture, and ultimately in our economy.

Digital Decade: The Pundits

See this fun and quite insightful discussion of the digital 2000’s (and beyond) with Esther Dyson, Jaron Lanier, and Paul Saffo (hat tip: Adam Thierer).

New York and Net Neutrality

This morning, the Technology Committee of the New York City Council convened a large hearing on a resolution urging Congress to pass a robust Net Neutrality law. I was supposed to testify, but our narrowband transportation system prevented me from getting to New York. Here, however, is the testimony I prepared. It focuses on investment, innovation, and the impact Net Neutrality would have on both.

“Net Neutrality’s Impact on Internet Innovation” – by Bret Swanson – 11.20.09

Must Watch Web Debate

If you’re interested in Net Neutrality regulation and have some time on your hands, watch this good debate at the Web 2.0 conference. The resolution was “A Network Neutrality law is necessary,” and the two opposing sides were:

Against

  • James Assey – Executive Vice President, National Cable and Telecommunications Association
  • Robert Quinn – Senior Vice President-Federal Regulatory, AT&T
  • Christopher Yoo – Professor of Law and Communication; Director, Center for Technology, Innovation, and Competition, UPenn Law

For

  • Tim Wu – Coined the term “Network Neutrality”; Professor of Law, Columbia Law
  • Brad Burnham – VC, Union Square Ventures
  • Nicholas Economides – Professor of Economics, Stern School of Business, New York University

I think the side opposing the resolution wins, hands down — no contest really — but see for yourself.

“HD”Tube: YouTube moves toward 1080p

YouTube is moving toward 1080p high-definition video capability, just as we long predicted.

This video may be “1080p,” but the frame rate is slow, and the video motion is thus not very smooth. George Ou estimates the bit rate at 3.7 Mbps, which is not enough for real full-motion HD. But we’re moving quickly in that direction.
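A rough back-of-the-envelope calculation shows why so much rides on compression at that bit rate. The sketch below (Python; the 30 fps and 24-bit color figures are assumptions for illustration, not measurements of the YouTube clip) compares the raw, uncompressed 1080p bitrate to a 3.7 Mbps stream:

```python
# Back-of-the-envelope: raw (uncompressed) 1080p bitrate vs. a 3.7 Mbps stream.
WIDTH, HEIGHT = 1920, 1080   # 1080p frame
BITS_PER_PIXEL = 24          # assumed 24-bit color
FPS = 30                     # assumed full-motion frame rate

raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
stream_bps = 3.7e6           # George Ou's estimate for the clip

print(f"Raw 1080p: {raw_bps / 1e9:.2f} Gbps")                # ~1.49 Gbps
print(f"Implied compression: {raw_bps / stream_bps:.0f}:1")  # ~404:1
```

Squeezing roughly 1.5 Gbps of raw pixels through 3.7 Mbps implies a compression ratio on the order of 400:1, which is why something has to give — in this case, frame rate and motion smoothness.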

Quote of the Day

“I hope that they (government regulators) leave it alone . . . The Internet is working beautifully as it is.”

— Tim Draper, Silicon Valley venture capitalist, who along with many other SV investors and executives signed a letter advocating new Internet regulations, apparently unaware of its true content.

Two-year study finds fast changing Web

See our brief review of Arbor Networks’ new two-year study where they captured and analyzed 264 exabytes of Internet traffic. Highlights:

  • Internet traffic growing at least 45% annually.
  • Web video jumped to 52% of all Internet traffic from 42%.
  • P2P, although still substantial, dropped more than any other application.
  • Google, between 2007 and 2009, jumped from outside the top-ten global ISPs by traffic volume to the number 3 spot.
  • Comcast jumped from outside the top-ten to number 6.
  • Content delivery networks (CDNs) are now responsible for around 10% of global Internet traffic.
  • This fast-changing ecosystem is not amenable to rigid rules imposed from a central authority, as would be the case under “net neutrality” regulation.
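To put the 45% annual growth figure in perspective, here is a quick compounding sketch (Python; it assumes the rate holds steady, which is of course not guaranteed):

```python
import math

RATE = 0.45  # Arbor's "at least 45%" annual traffic growth

# If traffic compounds at 45% per year:
doubling_years = math.log(2) / math.log(1 + RATE)
ten_year_multiple = (1 + RATE) ** 10

print(f"Doubling time: {doubling_years:.1f} years")   # ~1.9 years
print(f"10-year multiple: {ten_year_multiple:.0f}x")  # ~41x
```

At that pace traffic doubles roughly every two years and grows about forty-fold in a decade — a useful reminder of how quickly any static snapshot of this ecosystem goes stale.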

Preparing to Pounce: D.C. angles for another industry

As you’ve no doubt heard, Washington D.C. is angling for a takeover of the . . . U.S. telecom industry?!

That’s right: broadband, routers, switches, data centers, software apps, Web video, mobile phones, the Internet. As if its agenda weren’t full enough, the government is preparing a dramatic centralization of authority over our healthiest, most dynamic, high-growth industry.

Two weeks ago, FCC chairman Julius Genachowski proposed new “net neutrality” regulations, which he will detail on October 22. Then on Friday, Yochai Benkler of Harvard’s Berkman Center published an FCC-commissioned report on international broadband comparisons. The voluminous survey serves up data from around the world on broadband penetration rates, speeds, and prices. But the real purpose of the report is to make a single point: foreign “open access” broadband regulation, good; American broadband competition, bad. These two tracks — “net neutrality” and “open access,” combined with a review of the U.S. wireless industry and other investigations — lead straight to an unprecedented government intrusion into America’s vibrant Internet industry.

Benkler and his team of investigators can be commended for the effort that went into what was no doubt a substantial undertaking. The report, however,

  • misses all kinds of important distinctions among national broadband markets, histories, and evolutions;
  • uses lots of suspect data;
  • underplays caveats and ignores some important statistical problems;
  • focuses too much on some metrics, not enough on others;
  • completely bungles America’s own broadband policy history; and
  • draws broad and overly-certain policy conclusions about a still-young, dynamic, complex Internet ecosystem.

The gaping, jaw-dropping irony of the report was its failure even to mention the chief outcome of America’s previous open-access regime: the telecom/tech crash of 2000-02. We tried this before. And it didn’t work! The Great Telecom Crash of 2000-02 was to that industry what the Great Panic of 2008 was to the financial industry: a deeply painful and historic plunge. In the case of the Great Telecom Crash, U.S. tech and telecom companies lost some $3 trillion in market value and one million jobs. The harsh open access policies (mandated network sharing, price controls) that Benkler lauds in his new report were a main culprit. But in Benkler’s 231-page report on open access policies, there is no mention of the Great Crash.

Did Cisco just blow $2.9 billion?

Cisco better hope wireless “net neutrality” does not happen. It just bought a company called Starent that helps wireless carriers manage the mobile exaflood.

See this partial description of Starent’s top product:

Intelligence at Work

Key to creating and delivering differentiated services—and meeting subscriber demand—is the ST40’s ability to recognize different traffic flows, which allows it to shape and manage bandwidth while interacting with applications to a very fine degree. The system does this through its session intelligence, which utilizes deep packet inspection (DPI) technology, service steering, and intelligent traffic control to dynamically monitor and control sessions on a per-subscriber/per-flow basis.

The ST40 interacts with and understands key elements within the multimedia call—devices, applications, transport mechanisms, policies—and assists in the service creation process by:

  • Providing a greater degree of information granularity and flexibility for billing, network planning, and usage trend analysis
  • Sharing information with external application servers that perform value-added processing
  • Exploiting user-specific attributes to launch unique applications on a per-subscriber basis
  • Extending mobility management information to non-mobility-aware applications
  • Enabling policy, charging, and Quality of Service (QoS) features

Traffic management. QoS. Deep Packet Inspection. Per service billing. Special features and products. Many of these technologies and features could be outlawed or curtailed under net neutrality. And the whole booming wireless arena could suffer.
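The general technique behind that datasheet—classify each flow, then schedule packets by priority—can be sketched in a few lines of Python. This is a toy model, not Starent’s implementation, and the port-to-class mapping is invented for illustration:

```python
import heapq

# Toy flow classifier: map a packet's destination port to a priority class.
# (Mapping invented for illustration; real DPI inspects payloads, not just ports.)
PRIORITY = {5060: 0, 554: 1, 80: 2}   # VoIP signaling > streaming > web; default: bulk

def classify(packet: dict) -> int:
    """Return the priority class for a packet (lower = more urgent)."""
    return PRIORITY.get(packet["dst_port"], 3)

def schedule(packets: list) -> list:
    """Drain packets strictly by priority class, FIFO within a class."""
    queue = [(classify(p), seq, p) for seq, p in enumerate(packets)]
    heapq.heapify(queue)
    return [p for _, _, p in (heapq.heappop(queue) for _ in range(len(queue)))]

arrivals = [
    {"dst_port": 80,   "flow": "web"},
    {"dst_port": 6881, "flow": "p2p"},
    {"dst_port": 5060, "flow": "voip"},
]
print([p["flow"] for p in schedule(arrivals)])  # ['voip', 'web', 'p2p']
```

Even this crude version shows the policy question concretely: the scheduler must treat flows differently to keep latency-sensitive traffic usable, and that differential treatment is exactly what broad “net neutrality” rules could restrict.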

A QoS primer

In case my verses attempting an analysis of Quality-of-Service and “net neutrality” regulation need supplementary explanation, here’s a terrifically lucid seven-minute Internet packet primer — in prose and pictures — from George Ou. Also, a longer white paper on the same topic:

Seven-minute Flash presentation: The need for a smarter prioritized Internet

White paper: Managing Broadband Networks: A Policymaker’s Guide

Leviathan Spam

Send the bits with lasers and chips
See the bytes with LED lights

Wireless, optical, bandwidth boom
A flood of info, a global zoom

Now comes Lessig
Now comes Wu
To tell us what we cannot do

The Net, they say,
Is under attack
Stop!
Before we can’t turn back

They know best
These coder kings
So they prohibit a billion things

What is on their list of don’ts?
Most everything we need the most

To make the Web work
We parse and label
We tag the bits to keep the Net stable

The cloud is not magic
It’s routers and switches
It takes a machine to move exadigits

Now Lessig tells us to route is illegal
To manage Net traffic, Wu’s ultimate evil

A New Leash on the Net?

Today, FCC chairman Julius Genachowski proposed new regulations on communications networks. We were among the very first opponents of these so-called “net neutrality” rules when they were first proposed in concept back in 2004. Here are a number of our relevant articles over the past few years:

What price, broadband?

See this new paper from economists Rob Shapiro and Kevin Hassett showing how artificial limits on varied pricing of broadband could severely forestall broadband adoption.

To the extent that lower-income and middle-income consumers are required to pay a greater share of network costs, we should expect a substantial delay in achieving universal broadband access. Our simulations suggest that spreading the costs equally among all consumers — the minority who use large amounts of bandwidth and the majority who use very little — will significantly slow the rate of adoption at the lower end of the income scale and extend the life of the digital divide.

If costs are shifted more heavily to those who use the most bandwidth and, therefore, are most responsible for driving up the cost of expanding network capabilities, the digital divergence among the races and among income groups can be eliminated much sooner.
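The cost-allocation logic here is simple to illustrate. In the sketch below (Python; all numbers are invented for illustration and are not drawn from the Shapiro-Hassett paper), shifting from flat pricing to usage-based pricing cuts the light user’s bill dramatically:

```python
# Hypothetical: 10 subscribers share $1,000/month of network cost.
# Two heavy users generate 80% of the traffic; eight light users split the rest.
TOTAL_COST = 1000.0
usage_gb = [400, 400] + [25] * 8        # heavy, heavy, then eight light users
total_gb = sum(usage_gb)                # 1,000 GB

flat_bill = TOTAL_COST / len(usage_gb)                        # everyone pays the same
usage_bills = [TOTAL_COST * gb / total_gb for gb in usage_gb]  # pay in proportion to use

print(f"Flat pricing, every user: ${flat_bill:.2f}")      # $100.00
print(f"Usage-based, heavy user:  ${usage_bills[0]:.2f}")  # $400.00
print(f"Usage-based, light user:  ${usage_bills[-1]:.2f}")  # $25.00
```

Under flat pricing the light user pays $100 for $25 worth of usage — a subsidy flowing to the heaviest consumers. Usage-based pricing cuts the light user’s bill by 75%, which is the adoption effect at the low end of the income scale that the paper’s simulations quantify.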

Dept. of Modern Afflictions

Do you suffer from “network deprivation”? I hope so. I do.
