Archive for the ‘bandwidth’ Category

M-Lab: The Real Source of the Web Slow-Down

Wednesday, November 5th, 2014

Last week, M-Lab, a group that monitors select Internet network links, issued a report claiming interconnection disputes caused significant declines in consumer broadband speeds in 2013 and 2014.

This was not news. Everyone knew the disputes between Netflix and Comcast/Verizon/AT&T and others affected consumer speeds. We wrote about the controversy here, here, and here, and our “How the Net Works” report offered broader context.

The M-Lab study, “ISP Interconnection and Its Impact on Consumer Internet Performance,” however, does have some good new data. Although M-Lab says it doesn’t know who was “at fault,” advocates seized on the report as evidence of broadband provider mischief at big interconnection points.

But the M-Lab data actually show just the opposite. As you can see in the three graphs below, Comcast, Time Warner, Verizon, and to a lesser extent AT&T all show sharp drops in performance in May of 2013. Then all four networks, at the three monitoring points in New York, Dallas, and Los Angeles, show sudden improvements in March of 2014.

The simultaneous drops and spikes for all four suggest these firms could not have been the cause. If they had been, it would have required some sort of amazingly precise coordination among the four firms. Rather, the simultaneous action suggests the cause was some outside entity or event. Dan Rayburn of StreamingMedia agrees and offers very useful commentary on the M-Lab study here.

As it happens, Netflix was moving much of its content away from third-party content delivery networks (CDNs) in the spring of 2013 and onto its own OpenConnect platform, which used Cogent and Level 3 to connect to many broadband providers. The smaller cable firms, Cablevision and Cox, meanwhile, had agreed to connect to Netflix for free and unsurprisingly show no degradation.

Nine months of degraded performance ensued. Then, in the spring of 2014, Netflix agreed to connect directly to the networks of the major BSPs, first Comcast, then Verizon, then Time Warner and AT&T. General performance on most networks across the Internet improved. The M-Lab data actually reinforce what we had already pieced together: Netflix moving its traffic onto the networks of Cogent and Level 3 both overwhelmed the capacity of those networks and blew through their peering agreements with the broadband providers.

Network analyst George Ou offered his perspective of what had happened: “Cogent purchased on the order of 1 terabit of capacity from Comcast and got 4 terabits from Comcast. When I say ‘purchased’, I mean bartered. Then they turned around and sold 100 terabits of capacity to [their] own customers. That’s why Cogent customers are suffering slow performance. Then they demanded 8x the bandwidth from Comcast at no extra cost and when Comcast refused, they blamed their slow service on Comcast.”
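
To make the scale of that mismatch concrete, here is a minimal back-of-the-envelope sketch of the arithmetic implied by Ou's figures. The capacity numbers are his (read here as roughly 4 Tbps of Cogent capacity toward Comcast); only the ratios are computed:

```python
# Back-of-the-envelope arithmetic using the figures in George Ou's quote above.
# Reading: Cogent had roughly 4 Tbps of capacity toward Comcast, sold about
# 100 Tbps to its own customers, then asked for 8x its Comcast capacity for free.

comcast_facing_tbps = 4        # capacity toward Comcast (per Ou)
sold_to_customers_tbps = 100   # capacity sold to Cogent's own customers (per Ou)
requested_multiple = 8         # the upgrade Ou says Cogent demanded at no extra cost

oversubscription = sold_to_customers_tbps / comcast_facing_tbps
after_upgrade = sold_to_customers_tbps / (requested_multiple * comcast_facing_tbps)

print(f"Oversubscription before the dispute: {oversubscription:.0f}:1")   # 25:1
print(f"Even after an 8x upgrade: {after_upgrade:.1f}:1")                 # ~3.1:1
```

On those numbers, Cogent was selling roughly 25 times the capacity its Comcast-facing links could carry, and even the demanded eightfold upgrade would have left it oversold by roughly three to one.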

When Netflix and Comcast finally agreed to connect in the late winter of 2014, it improved performance for everyone. Around 30% of all Netflix’s traffic (which itself can account for a third of all peak-hour U.S. traffic) had been removed from the oversold Cogent and Level 3 networks, thus opening up capacity for those backbones’ connections to the other broadband providers. Additional direct connections between Netflix and the other BSPs then improved things further.

A previous analysis by Peter Sevcik of NetForecast last summer showed this same rising-tide phenomenon after the Comcast/Netflix hook-up (see below). M-Lab thus confirmed what many had already shown: the slowdown was not caused by the broadband providers, exactly the opposite of the media interpretation of the M-Lab study.

But wait! As we were about to post this analysis, we got word via Dan Rayburn that Cogent — quite amazingly — just admitted that in February 2014 it instituted a new fast-lane / slow-lane quality-of-service regime on its network. M-Lab’s monitoring equipment was actually given fast-lane status! This is ironic on a number of levels — and we still need to figure out if this was coordinated or coincident with the Comcast-Netflix hookup — but it may help explain why the performance improvement across Cogent-BSP links as shown by M-Lab was even more sudden and dramatic than the Netflix improvement noticed elsewhere. And of course, it reinforces, yet again, the real source of the slow-down.

Interconnection: Arguing for Inefficiency

Monday, October 6th, 2014

Last week Level 3 posted some new data from interconnection points with three large broadband service providers. The first column of the chart, with data from last spring, shows lots of congestion between Level 3 and the three BSPs. You might recall the battles of last winter and early spring when Netflix streaming slowed down and it accused Comcast and other BSPs of purposely “throttling” its video traffic. (We wrote about the incident here, here, here, and here.)

The second column of the Level 3 chart, with data from September, shows that traffic with two of the three BSPs is much less congested today. Level 3 says, reasonably, the cause for the change is Netflix’s on-net transit (or paid peering) agreements with Comcast and (presumably) Verizon, in which Netflix and the broadband firms established direct connections with one another. As Level 3 writes, “You might say that it’s good news overall.” And it is: these on-net transit agreements, which have been around for at least 15 years, and which are used by Google, Amazon, Microsoft, all the content delivery networks (CDNs), and many others, make the Net work better and more efficiently, cutting costs for content providers and delivering better, faster, more robust services to consumers.

But Level 3 says that despite this apparent improvement, the data really show the broadband providers demanding “tolls,” and that this is bad for the Internet overall. It thinks Netflix and the broadband providers should be forced to employ an indirect A–>B–>C architecture even when a direct A–>C architecture is more efficient.

The Level 3 charts make another probably unintended point. Recall that Netflix, starting around two years ago, began building its own CDN called OpenConnect. Its intention was always to connect directly to the broadband providers (A–>C) and to bypass Level 3 and other backbone providers (B). This is exactly what happened. Netflix connected to Comcast, Verizon, and others (although for a small fee, rather than for free, as it had hoped). And it looks like the broadband providers were smart not to build out massive new interconnection capacity with Level 3 to satisfy a peering agreement that was out of balance, and which, as soon as Netflix left, regained balance. It would have been a huge waste (what they used to call stranded investment).

Why the fuss over “sponsored data”?

Monday, January 6th, 2014

Today, at the Consumer Electronics Show in Las Vegas, AT&T said it would begin letting content firms — Google, ESPN, Netflix, Amazon, a new app, etc. — pay for a portion of the mobile data used by consumers of this content. If a mobile user has a 2 GB plan but likes to watch lots of Yahoo! news video clips, which consume a lot of data, Yahoo! can now subsidize that user by paying for that data usage, which won’t count against the user’s data limit.
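
Mechanically, sponsored data just changes which party a byte is billed to. Here is a minimal sketch of that accounting, with a made-up 2 GB plan, made-up usage, and a hypothetical sponsor; none of these figures are AT&T's actual terms:

```python
# Hypothetical sponsored-data accounting: bytes from a sponsoring content
# provider are billed to the sponsor rather than counted against the
# subscriber's cap. Plan size, usage, and sponsor names are made up.

PLAN_CAP_GB = 2.0
SPONSORED_SOURCES = {"yahoo_video"}        # content whose data the sponsor pays for

monthly_usage_gb = [                       # (traffic source, gigabytes used)
    ("yahoo_video", 1.5),
    ("email", 0.3),
    ("web_browsing", 0.8),
]

counts_against_cap = sum(gb for src, gb in monthly_usage_gb if src not in SPONSORED_SOURCES)
billed_to_sponsor = sum(gb for src, gb in monthly_usage_gb if src in SPONSORED_SOURCES)

print(f"Counts against the user's {PLAN_CAP_GB:.0f} GB cap: {counts_against_cap:.1f} GB")  # 1.1 GB
print(f"Billed to the sponsor: {billed_to_sponsor:.1f} GB")                                # 1.5 GB
print(f"User over cap? {counts_against_cap > PLAN_CAP_GB}")                                # False
```

Everything else on the user's bill is computed exactly as before; only the sponsored bytes move to the sponsor's tab.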

Lots of people were surprised — or “surprised” — at the announcement and reacted violently. They charged AT&T with “double dipping,” imposing “taxes,” and of course the all-purpose net neutrality violation.

But this new sponsored data program is typical of multisided markets where a platform provider offers value to two or more parties — think magazines who charge both subscribers and advertisers. We addressed this topic before the idea was a reality. Back in June 2013, we argued that sponsored data would make lots of mobile consumers better off and no one worse off.

Two weeks ago, for example, we got word ESPN had been talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

Sounds like a reasonable deal all around. But not to everyone. “This is what a net neutrality violation looks like,” wrote Public Knowledge, a key backer of Internet regulation.

The idea that ESPN would pay to exempt its bits from data caps offends net neutrality’s abstract notion that all bits must be treated equally. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data — plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I am worse off, especially if I don’t watch ESPN.

The critics’ real worry, then, is that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But is this government’s role — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms? This was our warning. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

What if magazines were barred from carrying advertisements? They’d have to make all their money from subscribers and thus (attempt to) charge much higher prices or change their business model. Consumers would lose, either through higher prices or less diversity of product offerings. And advertisers, deprived of an outlet to reach an audience, would lose. That’s what we call a lose-lose-lose proposition.

Maybe sponsored data will take off. Maybe not. It’s clear, however, that in the highly dynamic mobile Internet business we should allow such voluntary experiments.

Digital Dynamism

Wednesday, November 13th, 2013

See our new 20-page report – Digital Dynamism: Competition in the Internet Ecosystem:

The Internet is altering the communications landscape even faster than most imagined.

Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.

The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.

The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.

  • Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread wireless Wi-Fi nodes then extend these broadband connections.
  • Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, ecommerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
  • Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.

The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.

Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.

U.S. Share of Internet Traffic Grows

Thursday, October 10th, 2013

Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example, the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).
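
The metric itself is just division: total IP traffic (from Cisco's Visual Networking Index) over Internet users (from the ITU), and over population for the per-capita version. A sketch of the computation with placeholder figures, not the actual Cisco or ITU numbers:

```python
# Sketch of the traffic-per-user and per-capita metric described above.
# Traffic, user, and population figures here are placeholders, not the
# actual Cisco VNI or ITU numbers.

regions = {
    # region: (monthly IP traffic in petabytes, Internet users in millions, population in millions)
    "North America":  (13_000, 280, 350),
    "Western Europe": (9_000, 340, 410),
}

for region, (traffic_pb, users_m, population_m) in regions.items():
    gb_per_user = traffic_pb * 1_000_000 / (users_m * 1_000_000)      # 1 PB = 1,000,000 GB
    gb_per_capita = traffic_pb * 1_000_000 / (population_m * 1_000_000)
    print(f"{region}: {gb_per_user:.0f} GB per user, {gb_per_capita:.0f} GB per capita (monthly)")
```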

We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)

If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and around two and a half times that of Western Europe.

Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.

Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.

Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.

A Decade Later, Net Neutrality Goes To Court

Monday, September 9th, 2013

Today the D.C. Federal Appeals Court hears Verizon’s challenge to the Federal Communications Commission’s “Open Internet Order” — better known as “net neutrality.”

Hard to believe, but we’ve been arguing over net neutrality for a decade. I just pulled up some testimony George Gilder and I prepared for a Senate Commerce Committee hearing in April 2004. In it, we asserted that a newish “horizontal layers” regulatory proposal, then circulating among comm-law policy wonks, would become the next big tech policy battlefield. Horizontal layers became net neutrality, the Bush FCC adopted the non-binding Four Principles of an open Internet in 2005, the Obama FCC pushed through actual regulations in 2010, and now comes today’s court challenge, which argues that the FCC has no authority to regulate the Internet and that, in fact, Congress told the FCC not to regulate the Internet.

Over the years we’ve followed the debate, and often weighed in. Here’s a sampling of our articles, reports, reply comments, and even some doggerel:

— Bret Swanson

Net ‘Neutrality’ or Net Dynamism? Easy Choice.

Tuesday, May 14th, 2013

Consumers beware. A big content company wants to help pay for the sports you love to watch.

ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, long time net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”

The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that, had net neutrality existed at the time, the outlawing of pay-for-prioritization would have banned the rise of content delivery networks (CDNs), which have fundamentally improved the user experience for viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that networks will continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.

So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.

So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has given the FCC no authority to regulate the Internet. The FCC invented this job for itself and is now being challenged in court.

Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.

— Bret Swanson

Broadband Bullfeathers

Friday, December 14th, 2012

Several years ago, some American lawyers and policymakers were looking for ways to boost government control of the Internet. So they launched a campaign to portray U.S. broadband as a pathetic patchwork of tin-cans-and-strings from the 1950s. The implication was that broadband could use a good bit of government “help.”

They initially had some success with a gullible press. The favorite tools were several reports that measured, nation by nation, the number of broadband connections per 100 inhabitants. The U.S. emerged from these reports looking very mediocre. How many times did we read, “The U.S. is 16th in the world in broadband”? Upon inspection, however, the reports weren’t very useful. Among other problems, they were better at measuring household size than broadband health. America, with its larger households, would naturally have fewer residential broadband subscriptions (not broadband users) than nations with smaller households (and thus more households per capita). And as the Phoenix Center demonstrated, rather hilariously, if the U.S. and other nations achieved 100% residential broadband penetration, America would actually fall to 20th from 15th.
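
A toy example shows how the household-size effect works; the household sizes and the 80% coverage rate below are illustrative, not actual country data:

```python
# Toy illustration of how household size skews "subscriptions per 100
# inhabitants" even when household broadband coverage is identical.
# Household sizes and the 80% coverage rate are illustrative only.

countries = {
    # country: (average household size, share of households with a broadband subscription)
    "Larger households (U.S.-like)": (2.6, 0.80),
    "Smaller households":            (2.0, 0.80),
}

for country, (household_size, coverage) in countries.items():
    # one subscription per connected household
    subs_per_100 = coverage * 100 / household_size
    print(f"{country}: {subs_per_100:.1f} subscriptions per 100 inhabitants")

# Output: ~30.8 vs 40.0 subscriptions per 100 inhabitants at the exact same 80%
# household coverage -- the ranking gap reflects demographics, not broadband health.
```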

In the fall of 2009, a voluminous report from Harvard’s Berkman Center tried to stitch the supposedly ominous global evidence into a case-closed indictment of U.S. broadband. The Berkman report, however, was a complete bust (see, for example, these thorough critiques: 1, 2, and 3 as well as my brief summary analysis).

Berkman’s statistical analyses had failed on their own terms. Yet it was still important to think about the broadband economy in a larger context. We asked the question, how could U.S. broadband be so backward if so much of the world’s innovation in broadband content, services, and devices was happening here?

To name just a few: cloud computing, YouTube, Twitter, Facebook, Netflix, iPhone, Android, ebooks, app stores, iPad. We also showed that the U.S. generates around 60% more network traffic per capita and per Internet user than Western Europe, the supposed world broadband leader. The examples multiply by the day. As FCC chairman Julius Genachowski likes to remind us, the U.S. now has more 4G LTE wireless subscribers than the rest of the world combined.

Yet here comes a new book with the same general thrust — that the structure of the U.S. communications market is delivering poor information services to American consumers. In several new commentaries summarizing the forthcoming book’s arguments, author Susan Crawford’s key assertion is that U.S. broadband is slow. It’s so bad, she thinks broadband should be a government utility. But is U.S. broadband slow?

According to actual network throughput measured by Akamai, the world’s largest content delivery network, the U.S. ranks in the top 10 or 15 across a range of bandwidth metrics. It is ninth in average connection speed, for instance, and 13th in average peak speed. Looking at proportions of populations who enjoy speeds above a certain threshold, Akamai finds the U.S. is seventh in the percentage of connections exceeding 10 megabits per second (Mbps) and 13th in the percentage exceeding 4 Mbps. (See the State of the Internet report, 2Q 2012.)

You may not be impressed with rankings of seventh or 13th. But did you look at the top nations on the list? Hong Kong, South Korea, Latvia, Switzerland, the Netherlands, Japan, etc.

Each is a relatively small, densely populated country or territory. The national rankings are largely artifacts of geography and the size of the jurisdictions observed. Small jurisdictions with high population densities fare well. It is far more economical to build high-speed communications links in cities and other relatively dense areas. Accounting for this size factor, the U.S. actually looks amazingly good. Only Canada comes close to the U.S. among geographically larger nations.

But let’s look even further into the data. Akamai also supplies speeds for individual U.S. states. If we merge the tables of nations and states, the U.S. begins to look not like a broadband backwater or even a middling performer but an overwhelming success. Here are the two sets of Akamai data combined into tables that directly compare the successful small nations with their more natural counterparts, the U.S. states (shaded in blue).

Average Broadband Connection Speed — Nine of the top 15 entities are U.S. states.

Average Peak Connection Speed — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 10 Megabits per Second — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 4 Megabits per Second — Ten of the top 16 entities are U.S. states.

Among the 61 ranked entities on these four measures of broadband speed, 39, or almost two-thirds, are U.S. states. American broadband is not “pitifully slow.” In fact, if we were to summarize U.S. broadband, we’d have to say, compared to the rest of the world, it is very fast.
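
The exercise behind those tables is straightforward: concatenate Akamai's national and U.S.-state lists for a given metric, sort by speed, and count the states near the top. A sketch of that merge, using placeholder speeds rather than Akamai's published figures:

```python
# Sketch of the merge-and-rank exercise behind the tables above: combine
# Akamai's national and U.S.-state lists for one metric, sort by speed, and
# count the states in the top ranks. Speeds are placeholders, not Akamai data.

nations = {"South Korea": 14.2, "Japan": 10.7, "Hong Kong": 9.0, "Latvia": 8.7, "Switzerland": 8.4}
us_states = {"Delaware": 10.9, "New Hampshire": 9.4, "Vermont": 9.1, "Utah": 8.6, "Massachusetts": 8.3}

combined = [(speed, name, name in us_states) for name, speed in {**nations, **us_states}.items()]
combined.sort(reverse=True)

top = combined[:15]
state_count = sum(1 for _, _, is_state in top if is_state)

for speed, name, is_state in top:
    print(f"{speed:5.1f} Mbps  {name}{'  (U.S. state)' if is_state else ''}")
print(f"U.S. states among the top {len(top)}: {state_count}")
```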

It is true that not every state or region in the U.S. enjoys top speeds. Yes, we need more, better, faster, wider coverage of wired and wireless broadband, in underserved neighborhoods as well as in our already advanced areas. We need constant improvement both to accommodate today’s content and services and to drive tomorrow’s innovations. We should not, however, be making broad policy under the illusion that U.S. broadband, taken as a whole, is deficient. The quickest way to make U.S. broadband deficient is probably to enact policies that discourage investment and innovation — such as trying to turn a pretty successful and healthy industry that invests $60 billion a year into a government utility.

— Bret Swanson

The $66-billion Internet Expansion

Thursday, November 8th, 2012

Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.

The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.

As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network and the transition to all Internet, all the time, everywhere.

The Real Deal on U.S. Broadband

Monday, June 11th, 2012

Is American broadband broken?

Tim Lee thinks so. Where he once leaned against intervention in the broadband marketplace, Lee says four things are leading him to rethink and tilt toward more government control.

First, Lee cites the “voluminous” 2009 Berkman Report. Which is surprising. The report published by Harvard’s Berkman Center may have been voluminous, but it lacked accuracy in its details and persuasiveness in its big-picture take-aways. Berkman used every trick in the book to claim “open access” regulation around the world boosted other nations’ broadband economies and lack of such regulation in the U.S. harmed ours. But the report’s data and methodology were so thoroughly discredited (especially in two detailed reports issued by economists Robert Crandall, Everett Ehrlich, and Jeff Eisenach and Robert Hahn) that the FCC, which commissioned the report, essentially abandoned it. Here was my summary of the economists’ critiques:

The [Berkman] report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland.

. . .

Berkman’s qualitative analysis was, if possible, just as misleading. It passed along faulty data on broadband speeds and prices. It asserted South Korea’s broadband boom was due to open access regulation, but in fact most of South Korea’s surge happened before it instituted any regulation. The study said Japanese broadband, likewise, is a winner because of regulation. But regulated DSL is declining fast even as facilities-based (unshared, proprietary) fiber-to-the-home is surging.

Berkman also enjoyed comparing broadband speeds of tiny European and Asian countries to the whole U.S. But if we examine individual American states — New York or Arizona, for example — we find many of them outrank most European nations and Europe as a whole. In fact, applying the same Speedtest.com data Berkman used, the U.S. as a whole outpaces Europe as a whole! Comparing small islands of excellence to much larger, more diverse populations or geographies is bound to skew your analysis.

The Berkman report twisted itself in pretzels trying to paint a miserable picture of the U.S. Internet economy and a glowing picture of heavy regulation in foreign nations. Berkman, however, ignored the prima facie evidence of a vibrant U.S. broadband marketplace, manifest in the boom in Web video, mobile devices, the App Economy, cloud computing, and on and on.

How could the bulk of the world’s best broadband apps, services, and sites be developed and achieve their highest successes in the U.S. if American broadband were so slow and thinly deployed? We came up with a metric that seemed to refute the notion that U.S. broadband was lagging, namely, how much network traffic Americans generate vis-à-vis the rest of the world. It turned out the U.S. generates more network traffic per capita and per Internet user than any nation but South Korea and generates about two-thirds more per-user traffic than the closest advanced economy of comparable size, Western Europe.

Berkman based its conclusions almost solely on (incorrect) measures of “broadband penetration” — the number of broadband subscriptions per capita — but that metric turned out to be a better indicator of household size than broadband health. Lee acknowledges the faulty analysis but still assumes “broadband penetration” is the sine qua non measure of Internet health. Maybe we’re not awful, as Berkman claimed, Lee seems to be saying, but even if we correct for their methodological mistakes, U.S. broadband penetration is still just OK. “That matters,” Lee writes,

because a key argument for America’s relatively hands-off approach to broadband regulation has been that giving incumbents free rein would give them incentive to invest more in their networks. The United States is practically the only country to pursue this policy, so if the incentive argument was right, its advocates should have been able to point to statistics showing we’re doing much better than the rest of the world. Instead, the argument has been over just how close to the middle of the pack we are.

No, I don’t agree that the argument has consisted of bickering over whether the U.S. is more or less mediocre. Not at all. I do agree that advocates of government regulation have had to adjust their argument – U.S. broadband is no longer “awful,” just mediocre. Yet they still hang their hat on “broadband penetration” because most other evidence on the health of the U.S. digital economy is even less supportive of their case.

In each of the last seven years, U.S. broadband providers have invested between $60 and $70 billion in their networks. Overall, the U.S. leads the world in info-tech investment — totaling nearly $500 billion last year. The U.S. now boasts more than 80 million residential broadband links and 200+ million mobile broadband subscribers. U.S. mobile operators have deployed more 4G mobile network capacity than anyone, and Verizon just announced its FiOS fiber service will offer 300 megabit-per-second residential connections — perhaps the fastest large-scale deployment in the world.

Eisenach and Crandall followed up their critique of the Berkman study with a fresh March 2012 analysis of “open access” regulation around the world (this time with Allan Ingraham). They found:

  • “it is clear that copper loop unbundling did not accelerate the deployment or increase the penetration of first-generation broadband networks, and that it had a depressing effect on network investment”
  • “By contrast, it seems clear that platform competition was very important in promoting broadband deployment and uptake in the earlier era of DSL and cable modem competition.”
  • “to the extent new fiber networks are being deployed in Europe, they are largely being deployed by unregulated, non-ILEC carriers, not by the regulated incumbent telecom companies, and not by entrants that have relied on copper-loop unbundling.”

Lee mentions neither the incisive criticisms of the Berkman study nor the voluminous literature, including this latest example, showing that open access policies are ineffective at best, and more likely harmful.

In coming posts, I’ll address Lee’s three other worries.

— Bret Swanson

World Broadband Update

Tuesday, June 28th, 2011

The OECD published its annual Communications Outlook last week, and the 390 pages offer a wealth of information on all-things-Internet — fixed line, mobile, data traffic, price comparisons, etc. Among other remarkable findings, OECD notes that:

In 1960, only three countries — Canada, Sweden and the United States — had more than one phone for every four inhabitants. For most of what would become OECD countries a year later, the figure was less than 1 for every 10 inhabitants, and less than 1 in 100 in a couple of cases. At that time, the 84 million telephones in OECD countries represented 93% of the global total. Half a century later there are 1.7 billion telephones in OECD countries and a further 4.1 billion around the world. More than two in every three people on Earth now have a mobile phone.

Very useful stuff. But in recent times the report has also served as a chance for some to misrepresent the relative health of international broadband markets. The common refrain the past several years was that the U.S. had fallen way behind many European and Asian nations in broadband. The mantra that the U.S. is “15th in the world in broadband” — or 16th, 21st, 24th, take your pick — became a sort of common lament. Except it wasn’t true.

As we showed here, the second half of the two-thousand-aughts saw an American broadband boom. The Phoenix Center and others showed that the most cited stat in those previous OECD reports — broadband connections per 100 inhabitants — actually told you more about household size than broadband. And we developed metrics to better capture the overall health of a nation’s Internet market — IP traffic per Internet user and per capita.

Below you’ll see an update of the IP traffic per Internet user chart, built upon Cisco’s most recent (June 1, 2011) Visual Networking Index report. The numbers, as they did last year, show the U.S. leads every region of the world in the amount of IP traffic we generate and consume both in per user and per capita terms. Among nations, only South Korea tops the U.S., and only Canada matches the U.S.

Although Asia contains broadband stalwarts like Korea, Japan, and Singapore, it also has many laggards. If we compare the U.S. to the most uniformly advanced region, Western Europe, we find the U.S. generates 62% more traffic per user. (These figures are based on Cisco’s 2010 traffic estimates and the ITU’s 2010 Internet user numbers.)

As we noted last year, it’s not possible for the U.S. to both lead the world by a large margin in Internet usage and lag so far behind in broadband. We think these traffic per user and per capita figures show that our residential, mobile, and business broadband networks are among the world’s most advanced and ubiquitous.

Lots of other quantitative and qualitative evidence — from our smart-phone adoption rates to the breakthrough products and services of world-leading device (Apple), software (Google, Apple), and content companies (Netflix) — reaffirms the fairly obvious fact that the U.S. Internet ecosystem is healthy, vibrant, and growing. Far from lagging, it leads the world in most of the important digital innovation indicators.

— Bret Swanson

Data roaming mischief . . . Another pebble in the digital river?

Thursday, March 17th, 2011

Mobile communications is among the healthiest of U.S. industries. Through a time of economic peril and now merely uncertainty, mobile innovation hasn’t wavered. It’s been a too-rare bright spot. Huge amounts of infrastructure investment, wildly proliferating software apps, too many devices to count. If anything, the industry is moving so fast on so many fronts that we risk not keeping up with needed capacity.

Mobile, perhaps not coincidentally, has also historically been a quite lightly regulated industry. But a slow boil of many small rules, or proposed rules, is emerging that could threaten the sector’s success. I’m thinking of the “bill shock” proceeding, in which the FCC is looking at billing practices and various “remedies.” And the failure to settle the D block public safety spectrum issue in a timely manner. And now we have a group of rural mobile providers who want the FCC to set prices in the data roaming market.

You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.

But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data (video, Facebook, Twitter, app downloads, and the like), want the FCC to mandate access to B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars in spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand.

The FCC has never regulated mobile phone rates, let alone data rates, let alone data roaming rates. And of course mobile voice and data rates have been dropping like rocks. These few rural providers are asking the FCC to step in where it hasn’t before. They are asking the FCC to impose old-time common carrier regulation in a modern competitive market – one in which the FCC has no authority to impose common carrier rules and prices.

In the chart above, we see U.S. info-tech investment in 2010 approached $500 billion. Communications equipment and structures (like cell phone towers) surpassed $105 billion. The fourth generation of mobile networks is just in its infancy. We will need to invest many tens of billions of dollars each year for the foreseeable future both to drive and accommodate Internet innovation, which spreads productivity enhancements and wealth across every sector in the economy.

It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the equally unsurprising result of such a policy could be to reduce the amount that everyone invests in new networks. And this is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”

Economist Michael Mandel has offered a useful analogy:

new regulations [are] like tossing small pebbles into a stream. Each pebble by itself would have very little effect on the flow of the stream. But throw in enough small pebbles and you can make a very effective dam.

Why does this happen? The answer is that each pebble by itself is harmless. But each pebble, by diverting the water into an ever-smaller area, creates a ‘negative externality’ that creates more turbulence and slows the water flow.

Similarly, apparently harmless regulations can create negative externalities that add up over time, by forcing companies to spend time and energy meeting the new requirements. That reduces business flexibility and hurts innovation and growth.

It may be true that none of the proposed new rules for wireless could alone bring down the sector. But keep piling them up, and you can dangerously slow an important economic juggernaut. Price controls for data roaming are a terrible idea.

Mobile traffic grew 159% in 2010 . . . Tablets giving big boost

Thursday, February 3rd, 2011

Among other findings in the latest version of Cisco’s always useful Internet traffic updates:

  • Mobile data traffic was even higher in 2010 than Cisco had projected in last year’s report. Actual growth was 159% (2.6x) versus projected growth of 149% (2.5x).
  • By 2015, we should see one mobile device per capita . . . worldwide. That means around 7.1 billion mobile devices compared to 7.2 billion people.
  • Mobile tablets (e.g., iPads) are likely to generate as much data traffic in 2015 as all mobile devices worldwide did in 2010.
  • Mobile traffic should grow at an annual compound rate of 92% through 2015. That would mean 26-fold growth between 2010 and 2015.
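
A quick check of the compounding arithmetic in the last bullet:

```python
# Check of the compounding in the last bullet: 92% annual growth for five years.
annual_growth = 0.92
years = 5
multiple = (1 + annual_growth) ** years
print(f"{multiple:.1f}x growth from 2010 to 2015")   # ~26.1x, i.e. roughly 26-fold
```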

Did the FCC order get lots worse in last two weeks?

Tuesday, December 21st, 2010

So, here we are. Today the FCC voted 3-2 to issue new rules governing the Internet. I expect the order to be struck down by the courts and/or Congress. Meantime, a few observations:

  • The order appears to be more intrusive on the topic of “paid prioritization” than was Chairman Genachowski’s outline earlier this month. (Keep in mind, we haven’t seen the text. The FCC Commissioners themselves only got access to the text at 11:42 p.m. last night.)
  • If this is true, if the “nondiscrimination” ban goes further than a simple reasonableness test, which itself would be subject to tumultuous legal wrangling, then the Net Neutrality order could cause more problems than I wrote about in this December 7 column.
  • A prohibition or restriction on “paid prioritization” is a silly rule that belies a deep misunderstanding of how our networks operate today and how they will need to operate tomorrow. Here’s how I described it in recent FCC comments:

In September 2010, a new network company that had operated in stealth mode, digging ditches and boring tunnels for the previous 24 months, emerged on the scene. As Forbes magazine described it, this tiny new company, Spread Networks

“spent the last two years secretly digging a gopher hole from Chicago to New York, usurping the erstwhile fastest paths. Spread’s one-inch cable is the latest weapon in the technology arms race among Wall Street houses that use algorithms to make lightning-fast trades. Every day these outfits control bigger stakes of the markets – up to 70% now. “Anybody pinging both markets has to be on this line, or they’re dead,” says Jon A. Najarian, cofounder of OptionMonster, which tracks high-frequency trading.

“Spread’s advantage lies in its route, which makes nearly a straight line from a data center in Chicago’s South Loop to a building across the street from Nasdaq’s servers in Carteret, N.J. Older routes largely follow railroad rights-of-way through Indiana, Ohio and Pennsylvania. At 825 miles and 13.3 milliseconds, Spread’s circuit shaves 100 miles and 3 milliseconds off of the previous route of lowest latency, engineer-talk for length of delay.”

Why spend an estimated $300 million on an apparently duplicative route when numerous seemingly similar networks already exist? Because, Spread says, three milliseconds matters.

Spread offers guaranteed latency on its dark fiber product of no more than 13.33 milliseconds. Its managed wave product is guaranteed at no more than 15.75 milliseconds. It says competitors’ routes between Chicago and New York range from 16 to 20 milliseconds. We don’t know if Spread will succeed financially. But Spread is yet another demonstration that latency is of enormous and increasing importance. From entertainment to finance to medicine, the old saw is truer than ever: time is money. It can even mean life or death.
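
Those figures sit right at the physical floor set by the speed of light in glass fiber, roughly two-thirds of its speed in a vacuum. A rough check, assuming the 13.3 millisecond figure is a Chicago-to-New York round trip over the 825-mile route:

```python
# Rough physics check on the Spread Networks figures quoted above, assuming
# the 13.3 ms figure is a round trip over the 825-mile route and that light
# travels through fiber at roughly two-thirds of its vacuum speed.

route_miles = 825
route_km = route_miles * 1.609

speed_in_fiber_km_per_s = 300_000 / 1.5      # ~200,000 km/s (refractive index ~1.5)

one_way_ms = route_km / speed_in_fiber_km_per_s * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way ~{one_way_ms:.1f} ms, round trip ~{round_trip_ms:.1f} ms")
# ~6.6 ms one way and ~13.3 ms round trip, right at the guaranteed figure,
# which is why a shorter, straighter route is the only way to cut latency further.
```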

A policy implication arises. The Spread service is, of course, a form of “paid prioritization.” Companies are paying “eight to 10 times the going rate” to get their bits where they want them, when they want them. It is not only a demonstration of the heroic technical feats required to increase the power and diversity of our networks. It is also a prime example that numerous network users want to and will pay money to achieve better service.

One way to achieve better service is to deploy more capacity on certain links. But capacity is not always the problem. As Spread shows, another way to achieve better service is to build an entirely new 750-mile fiber route through mountains to minimize laser light delay. Or we might deploy a network of server caches that store non-realtime data closer to the end points of networks, as many Content Delivery Networks (CDNs) have done. But when we can’t build a new fiber route or store data – say, when we need to get real-time packets from point to point over the existing network – yet another option might be to route packets more efficiently with sophisticated QoS technologies. Each of these solutions fits a particular situation. They take advantage of, or submit to, the technological and economic trade-offs of the moment or the era. They are all legitimate options. Policy simply must allow for the diversity and flexibility of technical and economic options – including paid prioritization – needed to manage networks and deliver value to end-users.

Depending on how far the FCC is willing to take these misguided restrictions, it could actually lead to the very outcomes most reviled by “open Internet” fanatics — that is, more industry concentration, more “walled gardens,” more closed networks. Here’s how I described the possible effect of restrictions on the important voluntary network management tools and business partnerships needed to deliver robust multimedia services:

There has also been discussion of an exemption for “specialized services.” Like wireless, it is important that such specialized services avoid the possible innovation-sapping effects of a Net Neutrality regulatory regime. But the Commission should consider several unintended consequences of moving down the path of explicitly defining, and then exempting, particular “specialized” services while choosing to regulate the so-called “basic,” “best-effort,” or “entry level” “open Internet.”

Regulating the “basic” Internet but not “specialized” services will surely push most of the network and application innovation and investment into the unregulated sphere. A “specialized” exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS providers and ISPs to target the “specialized” category and thus shrink the scope of the “open Internet.”

In fact, although specialized services should and will exist, they often will interact with or be based on the “basic” Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not “the Internet” is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing. Moreover, a regime of rigid quarantine would not be good for consumers. If a CAS provider or ISP has to build a new physical or logical network, segregate services and software, or develop new products and marketing for a specifically defined “specialized” service, there would be a very large disincentive to develop and offer simple innovations and new services to customers over the regulated “basic” Internet. Perhaps a consumer does not want to spend the extra money to jump to the next tier of specialized service. Perhaps she only wants the service for a specific event or a brief period of time. Perhaps the CAS provider or ISP can far more economically offer a compelling service over the “basic” Internet with just a small technical tweak, where a leap to a full-blown specialized service would require more time and money, and push the service beyond the reach of the consumer. The transaction costs of imposing a “specialized” quarantine would reduce technical and economic flexibility for both CAS providers and ISPs and, most crucially, for consumers.

Or, as we wrote in our previous Reply Comments about a related circumstance, “A prohibition of the voluntary partnerships that are likely to add so much value to all sides of the market – service provider, content creator, and consumer – would incentivize the service provider to close greater portions of its networks to outside content, acquire more content for internal distribution, create more closely held ‘managed services’ that meet the standards of the government’s ‘exclusions,’ and build a new generation of larger, more exclusive ‘walled gardens’ than would otherwise be the case. The result would be to frustrate the objective of the proceeding. The result would be a less open Internet.”

It is thus possible that a policy seeking to maintain some pure notion of a basic “open Internet” could severely devalue the open Internet the Commission is seeking to preserve.

All this said, the FCC’s legal standing is so tenuous, and this order so rooted in reasoning already rejected by the courts, that I believe today’s Net Neutrality rule will be overturned. Thus despite the numerous substantive and procedural errors committed on this “darkest day of the year,” I still expect the Internet to “survive and thrive.”

The Internet Survives, and Thrives, For Now

Tuesday, December 7th, 2010

See my analysis of the FCC’s new “net neutrality” policy at RealClearMarkets:

Despite the Federal Communications Commission’s “net neutrality” announcement this week, the American Internet economy is likely to survive and thrive. That’s because the new proposal offered by FCC chairman Julius Genachowski is lacking almost all the worst ideas considered over the last few years. No one has warned more persistently than I against the dangers of over-regulating the Internet in the name of “net neutrality.”

In a better world, policy makers would heed my friend Andy Kessler’s advice to shutter the FCC. But back on earth this new compromise should, for the near-term at least, cap Washington’s mischief in the digital realm.

. . .

The Level 3-Comcast clash showed what many of us have said all along: “net neutrality” was a purposely ill-defined catch-all for any grievance in the digital realm. No more. With the FCC offering some definition, however imperfect, businesses will now mostly have to slug it out in a dynamic and tumultuous technology arena, instead of running to the press and politicians.

Netflix Boom Leads to Switch

Friday, November 12th, 2010

Netflix is moving its content delivery platform from Akamai back to Level 3. Level 3 is adding 2.9 terabits per second of new capacity specifically to support Netflix’s booming movie streaming business.

International Broadband Comparison, continued

Thursday, October 14th, 2010

New numbers from Cisco allow us to update our previous comparison of actual Internet usage around the world. We think this is a far more useful metric than the usual “broadband connections per 100 inhabitants” used by the OECD and others to compile the oft-cited world broadband rankings.

What the per capita metric really measures is household size. And because the U.S. has more people in each household than many other nations, we appear worse in those rankings. But as the Phoenix Center has noted, if each OECD nation reached 100% broadband nirvana — i.e., every household in every nation connected — the U.S. would actually fall from 15th to 20th. Residential connections per capita is thus not a very illuminating measure.

But look at the actual Internet traffic generated and consumed in the U.S.

The U.S. far outpaces every other region of the world. In the second chart, you can see that in fact only one nation, South Korea, generates significantly more Internet traffic per user than the U.S. This is no surprise. South Korea was the first nation to widely deploy fiber-to-the-x and was also the first to deploy 3G mobile, leading to not only robust infrastructure but also a vibrant Internet culture. The U.S. dwarfs most others.

If the U.S. were so far behind in broadband, we could not generate around twice as much network traffic per user as nations we are told far exceed our broadband capacity and connectivity. The U.S. has far to go in a never-ending buildout of its communications infrastructure. But we invest more than other nations, we’ve got better broadband infrastructure overall, and we use broadband more — and more effectively (see the Connectivity Scorecard and The Economist’s Digital Economy rankings) — than almost any other nation.

The conventional wisdom on this one is just plain wrong.

The Regulatory Threat to Web Video

Monday, May 17th, 2010

See our commentary at Forbes.com, responding to Revision3 CEO Jim Louderback’s calls for Internet regulation.

What we have here is “mission creep.” First, Net Neutrality was about an “open Internet” where no websites were blocked or degraded. But as soon as the whole industry agreed to these perfectly reasonable Open Web principles, Net Neutrality became an exercise in micromanagement of network technologies and broadband business plans. Now, Louderback wants to go even further and regulate prices. But there’s still more! He also wants to regulate the products that broadband providers can offer.

Chronically Critical Broadband Country Comparisons

Friday, March 26th, 2010

With the release of the FCC’s National Broadband Plan, we continue to hear all sorts of depressing stories about the sorry state of American broadband Internet access. But is it true?

International comparisons in such a fast-moving arena as tech and communications are tough. I don’t pretend it is easy to boil down a hugely complex topic to one right answer, but I did have some critical things to say about a major recent report that got way too many things wrong. A new article by that report’s author singled out France as being far more advanced than the U.S. To cut through all the clutter of conflicting data and competing interpretations on broadband deployment, access, adoption, prices, and speeds, however, maybe a simple chart will help.

Here we compare network usage. Not advertised speeds, which are suspect. Not prices, which can be distorted by the use of purchasing power parity (PPP). Not “penetration,” which is largely a function of income, urbanization, and geography. No, just this: how much data traffic do various regions create and consume?

If U.S. networks were so backward — too sparse, too slow, too expensive — would Americans be generating 65% more network traffic per capita than their Western European counterparts?

Washington liabilities vs. innovative assets

Friday, March 12th, 2010

Our new article at RealClearMarkets:

As Washington and the states pile up mountainous liabilities — $3 trillion for unfunded state pensions, $10 trillion in new federal deficits through 2019, and $38 trillion (or is it $50 trillion?) in unfunded Medicare promises — the U.S. needs once again to call on its chief strategic asset: radical innovation.

One laboratory of growth will continue to be the Internet. The U.S. began the 2000’s with fewer than five million residential broadband lines and zero mobile broadband. We begin the new decade with 71 million residential lines and 300 million portable and mobile broadband devices. In all, consumer bandwidth grew almost 15,000%.

Even a thriving Internet, however, cannot escape Washington’s eager eye. As the Federal Communications Commission contemplates new “network neutrality” regulation and even a return to “Title II” telephone regulation, we have to wonder where growth will come from in the 2010’s . . . .

Collective vs. Creative: The Yin and Yang of Innovation

Tuesday, January 12th, 2010

Later this week the FCC will accept the first round of comments in its “Open Internet” rule making, commonly known as Net Neutrality. Never mind that the Internet is already open and it was never strictly neutral. Openness and neutrality are two appealing buzzwords that serve as the basis for potentially far reaching new regulation of our most dynamic economic and cultural sector — the Internet.

I’ll comment on Net Neutrality from several angles over the coming days. But a terrific essay by Berkeley’s Jaron Lanier impelled me to begin by summarizing some of the big meta-arguments that have been swirling the last few years and which now broadly define the opposing sides in the Net Neutrality debate. After surveying these broad categories, I’ll get into the weeds on technology, business, and policy.

The thrust behind Net Neutrality is a view that the Internet should conform to a narrow set of technology and business “ideals” — “open,” “neutral,” “non-discriminatory.” Wonderful words. Often virtuous. But these aren’t the only traits important to economic and cultural systems. In fact, Net Neutrality sets up a false dichotomy — a manufactured war — between open and closed, collaborative versus commercial, free versus paid, content versus conduit. I’ve made a long list of the supposed opposing forces. Net Neutrality favors only one side of the table below. It seeks to cement in place one model of business and technology. It is intensely focused on the left-hand column and is either oblivious or hostile to the right-hand column. It thinks the right-hand items are either bad (prices) or assumes they appear magically (bandwidth).

We skeptics of Net Neutrality, on the other hand, do not favor one side or the other. We understand that there are virtues all around. Here’s how I put it on my blog last autumn:

Suggesting we can enjoy Google’s software innovations without the network innovations of AT&T, Verizon, and hundreds of service providers and technology suppliers is like saying that once Microsoft came along we no longer needed Intel.

No, Microsoft and Intel built upon each other in a virtuous interplay. Intel’s microprocessor and memory inventions set the stage for software innovation. Bill Gates exploited Intel’s newly abundant transistors by creating radically new software that empowered average businesspeople and consumers to engage with computers. The vast new PC market, in turn, dramatically expanded Intel’s markets and volumes and thus allowed it to invest in new designs and multi-billion dollar chip factories across the globe, driving Moore’s law and with it the digital revolution in all its manifestations.

Software and hardware. Bits and bandwidth. Content and conduit. These things are complementary. And yes, like yin and yang, often in tension and flux, but ultimately interdependent.

Likewise, we need the ability to charge for products and set prices so that capital can be rationally allocated and the hundreds of billions of dollars in network investment can occur. It is thus these hard prices that yield so many of the “free” consumer surplus advantages we all enjoy on the Web. No company or industry can capture all the value of the Web. Most of it comes to us as consumers. But companies and content creators need at least the ability to pursue business models that capture some portion of this value so they can not only survive but continually reinvest in the future. With a market moving so fast, with so many network and content models so uncertain during this epochal shift in media and communications, these content and conduit companies must be allowed to define their own products and set their own prices. We need to know what works, and what doesn’t.

When the “network layers” regulatory model, as it was then known, was first proposed back in 2003-04, my colleague George Gilder and I prepared testimony for the U.S. Senate. Although the layers model was little more than an academic notion, we thought then this would become the next big battle in Internet policy. We were right. Even though the “layers” proposal was (and is!) an ill-defined concept, the model we used to analyze what Net Neutrality would mean for networks and Web business models still applies. As we wrote in April of 2004:

Layering proponents . . . make a fundamental error. They ignore ever changing trade-offs between integration and modularization that are among the most profound and strategic decisions any company in any industry makes. They disavow Harvard Business professor Clayton Christensen’s theorems that dictate when modularization, or “layering,” is advisable, and when integration is far more likely to yield success. For example, the separation of content and conduit—the notion that bandwidth providers should focus on delivering robust, high-speed connections while allowing hundreds of millions of professionals and amateurs to supply the content—is often a sound strategy. We have supported it from the beginning. But leading edge undershoot products (ones that are not yet good enough for the demands of the marketplace) like video-conferencing often require integration.

Over time, the digital and photonic technologies at the heart of the Internet lead to massive integration — of transistors, features, applications, even wavelengths of light onto fiber optic strands. This integration of computing and communications power flings creative power to the edges of the network. It shifts bottlenecks. Crystalline silicon and flawless fiber form the low-entropy substrate that carries the world’s high-entropy messages — news, opinions, new products, new services. But these feats are not automatic. They cannot be legislated or mandated. And just as innovation in the core of the network unleashes innovation at the edges, so too more content and creativity at the edge create the need for ever more capacity and capability in the core. The bottlenecks shift again. More data centers, better optical transmission and switching, new content delivery optimization, the move from cell towers to femtocell wireless architectures. There is no final state of equilibrium where one side can assume that the other is a stagnant utility, at least not in the foreseeable future.

I’ll be back with more analysis of the Net Neutrality debate, but for now I’ll let Jaron Lanier (whose book You Are Not a Gadget was published today) sum up the argument:

Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.

If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.

There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code—like the page-rank algorithms in the top search engines or Adobe’s Flash—always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.

Actually, Silicon Valley is remarkably good at not making collectivization mistakes when our own fortunes are at stake. If you suggested that, say, Google, Apple and Microsoft should be merged so that all their engineers would be aggregated into a giant wiki-like project—well you’d be laughed out of Silicon Valley so fast you wouldn’t have time to tweet about it. Same would happen if you suggested to one of the big venture-capital firms that all the start-ups they are funding should be merged into a single collective operation.

But this is exactly the kind of mistake that’s happening with some of the most influential projects in our culture, and ultimately in our economy.

Digital Decade: The Pundits

Thursday, January 7th, 2010

See this fun and quite insightful discussion of the digital 2000’s (and beyond) with Esther Dyson, Jaron Lanier, and Paul Saffo (hat tip: Adam Thierer).

New York and Net Neutrality

Friday, November 20th, 2009

This morning, the Technology Committee of the New York City Council convened a large hearing on a resolution urging Congress to pass a robust Net Neutrality law. I was supposed to testify, but our narrowband transportation system prevented me from getting to New York. Here, however, is the testimony I prepared. It focuses on investment, innovation, and the impact Net Neutrality would have on both.

“Net Neutrality’s Impact on Internet Innovation” – by Bret Swanson – 11.20.09

Must Watch Web Debate

Thursday, November 19th, 2009

If you’re interested in Net Neutrality regulation and have some time on your hands, watch this good debate at the Web 2.0 conference. The resolution was “A Network Neutrality law is necessary,” and the two opposing sides were:

Against

  • James Assey – Executive Vice President, National Cable and Telecommunications Association
  • Robert Quinn – Senior Vice President-Federal Regulatory, AT&T
  • Christopher Yoo – Professor of Law and Communication; Director, Center for Technology, Innovation, and Competition, UPenn Law

For

  • Tim Wu – Coined the term “Network Neutrality”; Professor of Law, Columbia Law
  • Brad Burnham – VC, Union Square Ventures
  • Nicholas Economides – Professor of Economics, Stern School of Business, New York University.

I think the side opposing the resolution wins, hands down — no contest really — but see for yourself.

“HD”Tube: YouTube moves toward 1080p

Tuesday, November 17th, 2009

YouTube is moving toward a 1080p Hi Def video capability, just as we long predicted.

This video may be “1080p,” but the frame rate is low, and the motion is thus not very smooth. George Ou estimates the bit rate at 3.7 Mbps, which is not enough for real full-motion HD. But we’re moving quickly in that direction.
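For rough context, here is the back-of-the-envelope arithmetic (the frame rate, color depth, and compression ratios below are illustrative assumptions, not measurements of this particular clip):

```python
# Rough 1080p bitrate arithmetic -- illustrative assumptions, not measured data.
width, height = 1920, 1080      # 1080p frame dimensions
fps = 30                        # full-motion frame rate
bits_per_pixel = 24             # uncompressed 8-bit RGB

raw_bps = width * height * fps * bits_per_pixel
print(f"Uncompressed 1080p30: {raw_bps / 1e6:,.0f} Mbps")       # ~1,493 Mbps

# Codecs of this era compress video very roughly 150:1 to 300:1, which puts
# smooth full-motion 1080p in the ballpark of 5-10 Mbps -- above the ~3.7 Mbps
# George Ou estimates for this clip.
for ratio in (150, 300):
    print(f"At {ratio}:1 compression: {raw_bps / ratio / 1e6:.1f} Mbps")
```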

Quote of the Day

Wednesday, October 28th, 2009

“I hope that they (government regulators) leave it alone . . . The Internet is working beautifully as it is.”

— Tim Draper, Silicon Valley venture capitalist, who, along with many other Silicon Valley investors and executives, signed a letter advocating new Internet regulations, apparently unaware of its true content.

Two-year study finds fast changing Web

Wednesday, October 21st, 2009

See our brief review of Arbor Networks’ new two-year study where they captured and analyzed 264 exabytes of Internet traffic. Highlights:

  • Internet traffic growing at least 45% annually.
  • Web video jumped to 52% of all Internet traffic from 42%.
  • P2P, although still substantial, dropped more than any other application.
  • Google, between 2007 and 2009, jumped from outside the top-ten global ISPs by traffic volume to the number 3 spot.
  • Comcast jumped from outside the top-ten to number 6.
  • Content delivery networks (CDNs) are now responsible for around 10% of global Internet traffic.
  • This fast-changing ecosystem is not amenable to rigid rules imposed from a central authority, as would be the case under “net neutrality” regulation.

Preparing to Pounce: D.C. angles for another industry

Monday, October 19th, 2009

As you’ve no doubt heard, Washington D.C. is angling for a takeover of the . . . U.S. telecom industry?!

That’s right: broadband, routers, switches, data centers, software apps, Web video, mobile phones, the Internet. As if its agenda weren’t full enough, the government is preparing a dramatic centralization of authority over our healthiest, most dynamic, high-growth industry.

Two weeks ago, FCC chairman Julius Genachowski proposed new “net neutrality” regulations, which he will detail on October 22. Then on Friday, Yochai Benkler of Harvard’s Berkman Center published an FCC-commissioned report on international broadband comparisons. The voluminous survey serves up data from around the world on broadband penetration rates, speeds, and prices. But the real purpose of the report is to make a single point: foreign “open access” broadband regulation, good; American broadband competition, bad. These two tracks — “net neutrality” and “open access” — combined with a review of the U.S. wireless industry and other investigations, lead straight to an unprecedented government intrusion into America’s vibrant Internet industry.

Benkler and his team of investigators can be commended for the effort that went into what was no doubt a substantial undertaking. The report, however,

  • misses all kinds of important distinctions among national broadband markets, histories, and evolutions;
  • uses lots of suspect data;
  • underplays caveats and ignores some important statistical problems;
  • focuses too much on some metrics, not enough on others;
  • completely bungles America’s own broadband policy history; and
  • draws broad and overly-certain policy conclusions about a still-young, dynamic, complex Internet ecosystem.

The gaping, jaw-dropping irony of the report was its failure even to mention the chief outcome of America’s previous open-access regime: the telecom/tech crash of 2000-02. We tried this before. And it didn’t work! The Great Telecom Crash of 2000-02 was to that industry what the Great Panic of 2008 was to the financial industry: a deeply painful and historic plunge. U.S. tech and telecom companies lost some $3 trillion in market value and one million jobs. The harsh open access policies (mandated network sharing, price controls) that Benkler lauds in his new report were a main culprit. But in Benkler’s 231-page report on open access policies, there is no mention of the Great Crash.

Did Cisco just blow $2.9 billion?

Wednesday, October 14th, 2009

Cisco better hope wireless “net neutrality” does not happen. It just bought a company called Starent that helps wireless carriers manage the mobile exaflood.

See this partial description of Starent’s top product:

Intelligence at Work

Key to creating and delivering differentiated services—and meeting subscriber demand—is the ST40’s ability to recognize different traffic flows, which allows it to shape and manage bandwidth, while interacting with applications to a very fine degree. The system does this through its session intelligence that utilizes deep packet inspection (DPI) technology, service steering, and intelligent traffic control to dynamically monitor and control sessions on a per-subscriber/per-flow basis.

The ST40 interacts with and understands key elements within the multimedia call—devices, applications, transport mechanisms, policies—and assists in the service creation process by:

  • Providing a greater degree of information granularity and flexibility for billing, network planning, and usage trend analysis
  • Sharing information with external application servers that perform value-added processing
  • Exploiting user-specific attributes to launch unique applications on a per-subscriber basis
  • Extending mobility management information to non-mobility aware applications
  • Enabling policy, charging, and Quality of Service (QoS) features

Traffic management. QoS. Deep Packet Inspection. Per service billing. Special features and products. Many of these technologies and features could be outlawed or curtailed under net neutrality. And the whole booming wireless arena could suffer.
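For readers who want a concrete picture of what per-subscriber/per-flow shaping means, here is a minimal sketch of the classic token-bucket technique. The traffic classes, rates, and burst sizes are hypothetical choices of mine, not Starent’s; carrier platforms like the ST40 do this sort of thing at scale, with DPI-based classification.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: a basic building block of
    per-flow traffic shaping. Rates and burst sizes here are illustrative."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # sustained rate, bits per second
        self.capacity = burst_bits    # maximum burst, bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True               # forward the packet
        return False                  # drop or queue it

# A per-subscriber policy table with hypothetical traffic classes.
policies = {
    "voip":        TokenBucket(rate_bps=100e3, burst_bits=20e3),   # low rate, latency-sensitive
    "video":       TokenBucket(rate_bps=4e6,   burst_bits=2e6),    # streaming video
    "best_effort": TokenBucket(rate_bps=1e6,   burst_bits=500e3),  # everything else
}

def handle_packet(flow_class, packet_bits):
    """Classify (real systems use DPI or port/DSCP marking) and shape."""
    bucket = policies.get(flow_class, policies["best_effort"])
    return bucket.allow(packet_bits)

# Example: a 1,500-byte (12,000-bit) packet on the video flow.
print(handle_packet("video", 12000))
```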

A QoS primer

Wednesday, September 23rd, 2009

In case my verses attempting an analysis of Quality-of-Service and “net neutrality” regulation need supplementary explanation, here’s a terrifically lucid seven-minute Internet packet primer — in prose and pictures — from George Ou. Also, a longer white paper on the same topic:

Seven-minute Flash presentation: The need for a smarter prioritized Internet

White paper: Managing Broadband Networks: A Policymaker’s Guide

Leviathan Spam

Wednesday, September 23rd, 2009

Leviathan Spam

Send the bits with lasers and chips
See the bytes with LED lights

Wireless, optical, bandwidth boom
A flood of info, a global zoom

Now comes Lessig
Now comes Wu
To tell us what we cannot do

The Net, they say,
Is under attack
Stop!
Before we can’t turn back

They know best
These coder kings
So they prohibit a billion things

What is on their list of don’ts?
Most everything we need the most

To make the Web work
We parse and label
We tag the bits to keep the Net stable

The cloud is not magic
It’s routers and switches
It takes a machine to move exadigits

Now Lessig tells us to route is illegal
To manage Net traffic, Wu’s ultimate evil

A New Leash on the Net?

Monday, September 21st, 2009

Today, FCC chairman Julius Genachowski proposed new regulations on communications networks. We were among the very first opponents of these so-called “net neutrality” rules when they were first proposed in concept back in 2004. Here are a number of our relevant articles over the past few years:

What price, broadband?

Thursday, September 3rd, 2009

See this new paper from economists Rob Shapiro and Kevin Hassett showing how artificial limits on varied pricing of broadband could severely delay broadband adoption.

To the extent that lower-income and middle-income consumers are required to pay a greater share of network costs, we should expect a substantial delay in achieving universal broadband access. Our simulations suggest that spreading the costs equally among all consumers — the minority who use large amounts of bandwidth and the majority who use very little — will significantly slow the rate of adoption at the lower end of the income scale and extend the life of the digital divide.

If costs are shifted more heavily to those who use the most bandwidth and, therefore, are most responsible for driving up the cost of expanding network capabilities, the digital divergence among the races and among income groups can be eliminated much sooner.
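The intuition is easy to illustrate with a toy example. This is my own sketch with made-up numbers, not the Shapiro-Hassett model: recovering a fixed network cost with a single flat price pushes the bill above what light users are willing to pay, while usage-based recovery keeps the entry price low.

```python
# Toy illustration of flat vs. usage-based cost recovery.
# All numbers are hypothetical; this is not the Shapiro/Hassett simulation.

# (willingness_to_pay_per_month, monthly_gigabytes) for ten hypothetical households
households = [(15, 2), (20, 3), (25, 5), (30, 8), (35, 10),
              (45, 20), (55, 40), (70, 80), (110, 150), (200, 300)]

network_cost = 400                                    # monthly cost to recover
cost_per_gb = network_cost / sum(gb for _, gb in households)

def adopters(bill):
    """Households subscribe when their bill is at or below their willingness to pay."""
    return [h for h in households if bill(h) <= h[0]]

flat_price = network_cost / len(households)           # everyone pays the same
flat = adopters(lambda h: flat_price)
usage = adopters(lambda h: h[1] * cost_per_gb)        # heavy users pay more

print(f"Flat pricing:  {len(flat)}/10 households subscribe at ${flat_price:.0f}/mo")
print(f"Usage pricing: {len(usage)}/10 households subscribe")
```

In this toy setup, flat pricing prices out half the light-usage households, while usage-sensitive pricing keeps all ten on the network.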

Dept. of Modern Afflictions

Tuesday, September 1st, 2009

Do you suffer from “network deprivation”? I hope so. I do.

Agreeing with Kessler

Friday, August 21st, 2009

After challenging Andy Kessler over the Google Voice-Apple-AT&T dustup, I should point out some areas of agreement.

Andy writes:

Some might say it is time to rethink our national communications policy. But even that’s obsolete. I’d start with a simple idea. There is no such thing as voice or text or music or TV shows or video. They are all just data.

Right, all these markets and business models in hardware, software, and content — core network, edge network, data center, storage, content delivery, operating system, browser, local software, software as a service (SaaS), professional content, amateur content, advertising, subscriptions, etc. — are fusing via the Internet. Or at least they overlap in so many areas, and at any moment are on the verge of converging in so many others, that any attempt to parse them into discrete sectors to be regulated is mostly futile. By the time you make up new categories, the categories change.

Which naturally applies to one of the most contentious topics in Net policy:

Competition brings de facto network neutrality and open access (if you don’t like one service blocking apps, use another), thus one less set of artificial rules to be gamed.

Exactly. Net Neutrality could be an unworkably complex and rigid intrusion into this highly dynamic space. Better to let companies compete and evolve.

Kessler concludes:

Data is toxic to old communications and media pipes. Instead, data gains value as it hops around in the packets that make up the Internet structure. New services like Twitter don’t need to file with the FCC.

And new features for apps like Google Voice are only limited by the imagination.

The Internet is disrupting communications companies. Although yesterday I defended the service providers, who are also the key investors in all-important Net infrastructure, it is true that their legacy business models are under assault from the inexorable forces of quantum technologies. Web video assaults the cable companies’ discrete channel line-ups. Big bandwidth banished “long distance” voice and, as Kessler says, will continue disrupting voice calling plans. On the other hand, the strict latency and jitter requirements of voice and video, and the realities of cybersecurity, will continue to qualify the generalized principle that bits are bits.

Even if we can see where things are going — more openness, more modularity, more “bits are bits” — we can’t for the most part mandate these things by law. We have to let them happen. And in many cases, as with the Apple-AT&T iPhone, it was an integrated offering (the exclusive handset arrangement) that yielded an unprecedented unleashing of a new modular mobile phone arena: 100,000 new “apps” and a new, open, Web-based mobile computing model. Integration and modularity are in constant tension and flux, building off one another, pulling and pushing on one another. Neither can claim ultimate virtue. We have to let them slug it out.

As I wrote yesterday, innovation yin and yang.

Innovation Yin and Yang

Thursday, August 20th, 2009

There are two key mistakes in the public policy arena that we don’t talk enough about. They are two apparently opposite sides of the same fallacious coin.

Call the first fallacy “innovation blindness.” In this case, policy makers can’t see the way new technologies or ideas might affect, say, the future cost of health care, or the environment. The result is a narrow focus on today’s problems rather than tomorrow’s opportunities. The orientation toward the problem often exacerbates it by closing off innovations that could transcend the issue altogether.

The second fallacy is “innovation assumption.” Here, the mistake is taking innovation for granted. Assume the new technology will come along even if we block experimentation. Assume the entrepreneur will start the new business, build the new facility, launch the new product, or hire new people even if we make it impossibly expensive or risky for her to do so. Assume the other guy’s business is a utility while you are the one innovating, so he should give you his product at cost — or for free — while you need profits to reinvest and grow.

Reversing these two mistakes yields the more fruitful path. We should base policy on the likely scenario of future innovation and growth. But then we have to actually allow and encourage the innovation to occur.

All this sprang to mind as I read Andy Kessler’s article, “Why AT&T Killed Google Voice.” For one thing, Google Voice isn’t dead . . . but let’s start at the beginning.

Kessler is a successful investor, an insightful author, and a witty columnist. I enjoy seeing him each year at the Gilder/Forbes Telecosm Conference, where he delights the crowd with fast-paced, humorous commentaries on finance and technology. Here, however, Kessler falls prey to the innovation assumption fallacy.

Kessler argues that Google Voice, a new unified messaging application that combines all your phone numbers into one and can do conference calls and call transcripts, is going to overturn the entire world of telecom. Then he argues that Apple and AT&T attempted to kill Google Voice by blocking it as an “app” on Apple’s iPhone App Store. Why? Because Google Voice, according to Kessler, can do everything the telecom companies and Apple can do — better, even. These big, slow, old companies felt threatened to their core and are attempting to stifle an innovation that could put them out of business. We need new regulations to level the playing field.

Whoa. Wait a minute.

Google Voice seems like a nice product, but it is largely a call-forwarding system. I’ve already had call forwarding, simultaneous ring, Web-based voice mail, and other unified messaging features for five years. Good stuff. Maybe Google Voice will be the best of its kind.

There are just all sorts of fun and productive things happening all across the space. It was the very AT&T-Apple-iPhone combo that created “visual voice mail,” which allowed you to see and choose individual messages instead of wading through long queues of unwanted recordings.

But let’s move on to think about much larger issues.

Suggesting we can enjoy Google’s software innovations without the network innovations of AT&T, Verizon, and hundreds of service providers and technology suppliers is like saying that once Microsoft came along we no longer needed Intel.

Can Microsoft Grasp the Internet Cloud?

Saturday, August 1st, 2009

See my new Forbes.com commentary on the Microsoft-Yahoo search partnership:

Ballmer appears now to get it. “The more searches, the more you learn,” he says. “Scale drives knowledge, which can turn around and drive innovation and relevance.”

Microsoft decided in 2008 to build 20 new data centers at a cost of $1 billion each. This was a dramatic commitment to the cloud. Conceived by Bill Gates’s successor, Ray Ozzie, the global platform would serve up a new generation of Web-based Office applications dubbed Azure. It would connect video gamers on its Xbox Live network. And it would host Microsoft’s Hotmail and search applications.

The new Bing search engine earned quick acclaim for relevant searches and better-than-Google pre-packaged details about popular health, transportation, location and news items. But with just 8.4% of the market, Microsoft’s $20 billion infrastructure commitment would be massively underutilized. Meanwhile, Yahoo, which still leads in news, sports and finance content, could not remotely afford to build a similar new search infrastructure to compete with Google and Microsoft. Thus, the combination. Yahoo and Microsoft can share Ballmer’s new global infrastructure.

Doom? Or Boom?

Tuesday, July 28th, 2009

Do we really understand just how fast technology advances over time? And the magnitude of price changes and innovations it yields?

Especially in the realm of public policy, we often obsess over today’s seemingly intractable problems without realizing that technology and economic growth often show us a way out.

In several recent presentations in Atlanta and Seattle, I’ve sought to measure the growth of a key technological input — consumer bandwidth — and to show how the pace of technological change in other arenas is likely to continue remaking our world for the better . . . if we let it.

Bandwidth Boom – NARUC Seattle – Bret Swanson – 07.22.09

Broadband benefit = $32 billion

Tuesday, July 14th, 2009

We recently estimated the dramatic gains in “consumer bandwidth” — our ability to communicate and take advantage of the Internet. So we note this new study from the Internet Innovation Alliance, written by economists Mark Dutz, Jonathan Orszag, and Robert Willig, that estimates a consumer surplus from U.S. residential broadband Internet access of $32 billion. “Consumer surplus” is the net benefit consumers enjoy, basically the additional value they receive from a product compared to what they pay.

Bandwidth Boom: Measuring Communications Capacity

Wednesday, June 24th, 2009

See our new paper estimating the growth of consumer bandwidth – or our capacity to communicate – from 2000 to 2008. We found:

  • a huge 5,400% increase in residential bandwidth;
  • an astounding 54,200% boom in wireless bandwidth; and
  • an almost 100-fold increase in total consumer bandwidth

[Chart: U.S. consumer bandwidth, 2000-2008, residential and wireless]

U.S. consumer bandwidth at the end of 2008 totaled more than 717 terabits per second, yielding, on a per capita basis, almost 2.4 megabits per second of communications power.
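The per capita figure is simple arithmetic (the population number below is an approximation, not necessarily the exact figure used in the paper):

```python
total_bandwidth_bps = 717e12     # 717 terabits per second at the end of 2008
us_population = 305e6            # approximate 2008 U.S. population (assumption)

per_capita_mbps = total_bandwidth_bps / us_population / 1e6
print(f"{per_capita_mbps:.2f} Mbps per person")       # ~2.35 Mbps
```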

Getting the exapoint. Creating the future.

Friday, May 1st, 2009

Lots of commentators continue to misinterpret the research I and others have done on Internet traffic and its interplay with network infrastructure investment and communications policy.

I think that new video applications require lots more bandwidth — and, equally or even more important, that more bandwidth drives creative new applications. Two sides of the innovation coin. And I think investment-friendly policies are necessary both to encourage deployment of new wireline and wireless broadband and to boost innovative new applications and services for consumers and businesses.

But this article, as one of many examples, mis-summarizes my view. It uses scary words like “apocalypse,” “catastrophe,” and, well, “scare mongering,” to describe my optimistic anticipation of an exaflood of Internet innovations coming our way. I don’t think that

the world will simply run out of bandwidth and we’ll all be weeping over our clogged tubes.

Not unless we block the expansion of new network capacity and capability.

Bandwidth caps: One hundred and one distractions

Thursday, April 30th, 2009

When Cablevision of New York announced this week it would begin offering broadband Internet service of 101 megabits per second for $99 per month, lots of people took notice. Which was the point.

Maybe the 101-megabit product is a good experiment. Maybe it will be successful. Maybe not. One hundred megabits per second is a lot, given today’s applications (and especially given cable’s broadcast tree-and-branch shared network topology). A hundred megabits, for example, could accommodate more than five full-bitrate broadcast HD channels, or 10 or more heavily compressed HD streams. It’s difficult to imagine too many households finding a way today to consume that much bandwidth. Tomorrow is another question. The bottom line is that in addition to making a statement, Cablevision is probably mostly targeting the small business market with this product.
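The arithmetic behind those stream counts, using typical bitrates of the era (the per-stream figures are my assumptions, not Cablevision’s):

```python
# How many HD streams fit in a 101 Mbps connection, at assumed 2009-era bitrates.
link_mbps = 101
broadcast_hd_mbps = 19.4    # full-bitrate ATSC/MPEG-2 broadcast HD channel
compressed_hd_mbps = 8      # more heavily compressed (e.g., H.264) HD stream

print(f"Broadcast-quality HD channels: {link_mbps / broadcast_hd_mbps:.1f}")   # ~5.2
print(f"Compressed HD streams:         {link_mbps / compressed_hd_mbps:.1f}")  # ~12.6
```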

Far more perplexing than Cablevision’s strategy, however, was the reaction from groups like the reflexively critical Free Press:

We are encouraged by Cablevision’s plan to set a new high-speed bar of service for the cable industry. . . . this is a long overdue step in the right direction.

Free Press usually blasts virtually any decision by any network or media company. But by praising the 101-megabit experiment, Free Press is acknowledging the perfect legitimacy of charging variable prices for variable products. Pay more, get more. Pay less, get a more affordable tier of service that will still meet your needs the vast majority of the time.