Category Archives: Bandwidth

Wi-Fi and LTE-U: What’s the real story on unlicensed spectrum?

Today The Wall Street Journal highlighted a debate over unlicensed wireless spectrum that’s been brewing for the last few months. On one side, mobile carriers like Verizon and T-Mobile are planning to roll out a new technology known as LTE-U that will make use of the existing unlicensed spectrum most commonly used for Wi-Fi. LTE-U is designed to deliver a capability similar to Wi-Fi’s, namely short-range connectivity to mobile devices. As billions of mobile devices and Web video continue to strain wireless networks and existing spectrum allocations (see “The Immersive Internet Needs More Wireless Spectrum”), mobile service providers (and everyone else) are looking for good sources of spectrum. For the moment, they’ve found it in the 5 GHz unlicensed band. The 5 GHz band is a good place to deploy “small cells” (think miniature cell towers delivering transmissions over a much smaller area), which can greatly enhance the capacity, reach, and overall functionality of wireless services.

Google and the cable companies, such as Comcast, however, are opposed to the use of LTE-U because they say LTE-U could interfere with Wi-Fi. The engineering department at the Federal Communications Commission (FCC) has been looking into the matter for the last few months to see whether the objections are valid, but the agency has not yet reported any firm conclusions.

Is this a technical issue? Or a business dispute?

Until I see some compelling technical evidence that LTE-U interferes with Wi-Fi, this looks like a business dispute. Meaning the FCC probably should not get involved. The 2.4 GHz and 5 GHz spectrum in which Wi-Fi (and Bluetooth and other technologies) operate is governed by just a few basic rules. Most crucially, devices must not exceed certain power thresholds, and they can’t actively interfere with one another. Wi-Fi was designed to share nicely, but as everyone knows, large numbers of devices in one area, or large numbers of Wi-Fi hotspots, can cause interference and thus degrade performance. The developers of LTE-U have spent the last couple of years designing it specifically to play by the rules of the unlicensed spectrum and to play nicely with Wi-Fi.

The early results are encouraging. In real-world tests so far, LTE-U:

  • delivers better performance than Wi-Fi,
  • doesn’t degrade nearby Wi-Fi performance, and
  • may in fact improve the performance of nearby Wi-Fi networks.

For more commentary and technical analysis, see Richard Bennett’s recent posts here and here. Bennett was an early co-inventor of Wi-Fi and thus knows what he’s talking about. Also, Qualcomm has a white paper here and some good technical reports here and here.

Another line of opposition to LTE-U says that the mobile service providers like Verizon and T-Mobile will use LTE-U to deliver services that compete with Wi-Fi and will thus disadvantage competitive service providers. But the mobile service providers already operate lots of Wi-Fi hotspots. They are some of the biggest operators of Wi-Fi hotspots anywhere. In other words, they already compete (if that’s the right word) with Google and cable firms in this unlicensed space. LTE-U is merely a different protocol that makes use of the same unlicensed spectrum, and must abide by the same rules, as Wi-Fi. The mobile providers just think LTE-U can deliver better performance and better integrate with their wide-area LTE-based cellular networks. Consider an analogy: the rental fleets of Hertz and Avis are both made up of Ford vehicles. Hertz then decides to start renting Chevys as well as Fords. The new Chevys don’t push Fords off the road. They are both cars that must obey the rules of the road and the laws of physics. The two types of vehicles can coexist and operate just as they did before. Hertz is not crowding out Avis just because it now rents Chevys too.

I’ll be looking for more real-world tests that either confirm or contradict the initially encouraging evidence. Until then, we shouldn’t prejudge and block a potentially useful new technology.

Permission Slips for Internet Innovation

See my commentary in The Wall Street Journal this weekend — “Permission Slips for Internet Innovation.”

The last refuge of Internet regulators: the theory of the “terminating access monopoly”

Net neutrality activists have deployed a long series of rationales in their quest for government control of the Internet. As each rationale is found wanting, they simply move on to the next, more exotic theory. The debate has gone on so long that they’ve even begun recycling through old theories that were discredited long ago.

In the beginning, the activists argued that there should be no pay for performance anywhere on the Net. We pointed out the most obvious example of a harmful consequence of their proposal: their rules, as originally written, would have banned widely used content delivery networks (CDNs), which speed delivery of packets (for a price).

Then they argued that without strong government rules, broadband service providers would block innovation at the “edges” of the network. But for the last decade, under minimal regulation, we’ve enjoyed an explosion of new technologies, products, and services from content and app firms like YouTube, Facebook, Netflix, Amazon, Twitter, WhatsApp, Etsy, Snapchat, Pinterest, Twitch, and a thousand others. Many of these firms have built businesses worth billions of dollars.

They said we needed new rules because the light-touch regulatory environment had left broadband in the U.S. lagging its international rivals, whose farsighted industrial policies had catapulted them far ahead of America. Oops. Turns out, the U.S. leads the world in broadband. (See my colleague Richard Bennett’s detailed report on global broadband and my own.)

Then they argued that, regardless of how well the U.S. is doing, do you really trust a monopoly to serve consumer needs? We need to stop the broadband monopolist — the cable company. Turns out most Americans have several choices in broadband providers, and the list of options is growing — see FiOS, U-verse, Google Fiber, satellite, broadband wireless from multiple carriers, etc. No, broadband service is not like peanut butter. Because of the massive investments required to build networks, there will never be many dozens of wires running to each home. But neither is broadband a monopoly.

Artificially narrowing the market is the first refuge of nearly all bureaucrats concerned with competition. It’s an easy way to conjure a monopoly in almost any circumstance. My favorite example was the Federal Trade Commission’s initial opposition in 2003 to the merger of Häagen-Dazs (Nestlé) and Godiva (Dreyer’s). The government argued it would harmfully reduce competition in the market for “super premium ice cream.” The relevant market, in the agency’s telling, wasn’t food, or desserts, or sweets, or ice cream, or even premium ice cream, but super premium ice cream.

The U.S. Leads the World in Broadband

See our Wall Street Journal op-ed from December 8, which summarizes our new research on Internet traffic and argues for a continued policy of regulatory humility for the digital economy.


How can U.S. broadband lag if it generates 2-3 times the traffic of other nations?

Is the U.S. broadband market healthy or not? This question is central to the efforts to change the way we regulate the Internet. In a short new paper from the American Enterprise Institute, we look at a simple way to gauge whether the U.S. has in fact fallen behind other nations in coverage, speed, and price . . . and whether consumers enjoy access to content. Here’s a summary:

  • Internet traffic volume is an important indicator of broadband health, as it encapsulates and distills the most important broadband factors, such as access, coverage, speed, price, and content availability.
  • U.S. Internet traffic — a measure of the nation’s “digital output” — is two to three times higher than that of most advanced nations, and the United States generates more Internet traffic per capita and per Internet user than any major nation except South Korea.
  • The U.S. model of broadband investment and innovation — which operates in an environment that is largely free from government interference — has been a dramatic success.
  • Overturning this successful policy by imposing heavy regulation on the Internet puts one of America’s most vital industries at risk.

M-Lab: The Real Source of the Web Slow-Down

Last week, M-Lab, a group that monitors select Internet network links, issued a report claiming interconnection disputes caused significant declines in consumer broadband speeds in 2013 and 2014.

This was not news. Everyone knew the disputes between Netflix and Comcast/Verizon/AT&T and others affected consumer speeds. We wrote about the controversy here, here, and here, and our “How the Net Works” report offered broader context.

The M-Lab study, “ISP Interconnection and Its Impact on Consumer Internet Performance,” however, does have some good new data. Although M-Lab says it doesn’t know who was “at fault,” advocates seized on the report as evidence of broadband provider mischief at big interconnection points.

But the M-Lab data actually show just the opposite. As you can see in the three graphs below, Comcast, Time Warner, Verizon, and to a lesser extent AT&T all show sharp drops in performance in May of 2013. Then the performance of all four networks at the three monitoring points in New York, Dallas, and Los Angeles shows sudden improvement in March of 2014.

The simultaneous drops and spikes for all four suggest these firms could not have been the cause. It would have required some sort of amazingly precise coordination among the four firms. Rather, the simultaneous action suggests the cause was some outside entity or event. Dan Rayburn of StreamingMedia agrees and offers very useful commentary on the M-Lab study here.
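The inference above can be illustrated with a small sketch. The monthly speed figures below are invented for illustration (they are not M-Lab data): if every network’s measured speed falls sharply in the same month, a shared external cause is a simpler explanation than independent, perfectly synchronized decisions by separate firms.

```python
# Hypothetical monthly download speeds (Mbps) for three ISPs at one
# monitoring point. These numbers are invented, not M-Lab measurements.
speeds = {
    "ISP A": [20, 21, 12, 12, 22],
    "ISP B": [18, 19, 10, 11, 19],
    "ISP C": [25, 24, 14, 13, 26],
}

def common_drops(series: dict, threshold: float = 0.25) -> list:
    """Month indices where EVERY ISP's speed fell by more than
    `threshold` (as a fraction) versus the prior month."""
    n_months = len(next(iter(series.values())))
    return [m for m in range(1, n_months)
            if all(s[m] < s[m - 1] * (1 - threshold) for s in series.values())]

# A month where all three series drop at once points to a common,
# external cause rather than three coincidental per-ISP decisions.
print(common_drops(speeds))
```

With these invented figures, the function flags the one month in which all three series fall together, which is the pattern the M-Lab graphs show for May 2013.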

Interconnection: Arguing for Inefficiency

Last week Level 3 posted some new data from interconnection points with three large broadband service providers. The first column of the chart, with data from last spring, shows lots of congestion between Level 3 and the three BSPs. You might recall the battles of last winter and early spring when Netflix streaming slowed down and it accused Comcast and other BSPs of purposely “throttling” its video traffic. (We wrote about the incident here, here, here, and here.)

The second column of the Level 3 chart, with data from September, shows that traffic with two of the three BSPs is much less congested today. Level 3 says, reasonably, the cause for the change is Netflix’s on-net transit (or paid peering) agreements with Comcast and (presumably) Verizon, in which Netflix and the broadband firms established direct connections with one another. As Level 3 writes, “You might say that it’s good news overall.” And it is: these on-net transit agreements, which have been around for at least 15 years, and which are used by Google, Amazon, Microsoft, all the content delivery networks (CDNs), and many others, make the Net work better and more efficiently, cutting costs for content providers and delivering better, faster, more robust services to consumers.

But Level 3 says that despite this apparent improvement, the data really show the broadband providers demanding “tolls” and that this is bad for the Internet overall. It thinks Netflix and the broadband providers should be forced to employ an indirect A→B→C architecture even when a direct A→C architecture is more efficient.

The Level 3 charts make another, probably unintended, point. Recall that Netflix, starting around two years ago, began building its own CDN called OpenConnect. Its intention was always to connect directly to the broadband providers (A→C) and to bypass Level 3 and other backbone providers (B). This is exactly what happened. Netflix connected to Comcast, Verizon, and others (although for a small fee, rather than for free, as it had hoped). And it looks like the broadband providers were smart not to build out massive new interconnection capacity with Level 3 to satisfy a peering agreement that was out of balance, and which, as soon as Netflix left, regained balance. It would have been a huge waste (what they used to call stranded investment).

Why the fuss over “sponsored data”?

Today, at the Consumer Electronics Show in Las Vegas, AT&T said it would begin letting content firms — Google, ESPN, Netflix, Amazon, a new app, etc. — pay for a portion of the mobile data used by consumers of this content. If a mobile user has a 2 GB plan but likes to watch lots of Yahoo! news video clips, which consume a lot of data, Yahoo! can now subsidize that user by paying for that data usage, which won’t count against the user’s data limit.

Lots of people were surprised — or “surprised” — at the announcement and reacted vehemently. They charged AT&T with “double dipping,” with imposing “taxes,” and, of course, with the all-purpose net neutrality violation.

But this new sponsored data program is typical of multisided markets, where a platform provider offers value to two or more parties — think of magazines, which charge both subscribers and advertisers. We addressed this topic before the idea was a reality. Back in June 2013, we argued that sponsored data would make lots of mobile consumers better off and no one worse off.

Two weeks ago, for example, we got word ESPN had been talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

Sounds like a reasonable deal all around. But not to everyone. “This is what a net neutrality violation looks like,” wrote Public Knowledge, a key backer of Internet regulation.

The idea that ESPN would pay to exempt its bits from data caps offends net neutrality’s abstract notion that all bits must be treated equally. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data — plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it might be forced to raise prices for everyone. Then, because ESPN content is popular and bandwidth-hungry, I would be worse off, especially as a non-watcher of ESPN.
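The consumer arithmetic here can be sketched as a toy accounting model. The plan size, usage figures, and overage rate below are hypothetical illustrations, not any carrier’s actual terms:

```python
# Toy model of sponsored-data accounting. All figures (plan size, usage,
# overage rate) are hypothetical, not any carrier's actual pricing.

def overage_charge(total_gb: float, sponsored_gb: float,
                   cap_gb: float, overage_per_gb: float) -> float:
    """Overage fee when sponsored traffic doesn't count against the cap."""
    counted = total_gb - sponsored_gb      # only unsponsored traffic counts
    over = max(0.0, counted - cap_gb)      # usage beyond the plan cap
    return over * overage_per_gb

# A subscriber on a 2 GB plan streams 1.5 GB of sponsored video plus
# 1.8 GB of other traffic: no overage, since only 1.8 GB is counted.
no_fee = overage_charge(3.3, sponsored_gb=1.5, cap_gb=2.0, overage_per_gb=15.0)

# The same total usage with no sponsor exceeds the cap by 1.3 GB
# and triggers an overage fee.
fee = overage_charge(3.3, sponsored_gb=0.0, cap_gb=2.0, overage_per_gb=15.0)
```

The sponsored case is strictly additive for the subscriber: sponsored traffic expands what can be consumed without ever increasing the bill.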

The critics’ real worry, then, is that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But is this government’s role — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms? This was our warning. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

What if magazines were barred from carrying advertisements? They’d have to make all their money from subscribers and thus (attempt to) charge much higher prices or change their business model. Consumers would lose, either through higher prices or less diversity of product offerings. And advertisers, deprived of an outlet to reach an audience, would lose. That’s what we call a lose-lose-lose proposition.

Maybe sponsored data will take off. Maybe not. It’s clear, however, in the highly dynamic mobile Internet business, we should allow such voluntary experiments.

Digital Dynamism

See our new 20-page report — Digital Dynamism: Competition in the Internet Ecosystem:

The Internet is altering the communications landscape even faster than most imagined.

Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.

The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.

The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.

  • Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread wireless Wi-Fi nodes then extend these broadband connections.
  • Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, ecommerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
  • Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.

The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.

Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.

U.S. Share of Internet Traffic Grows

Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example, the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).

We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)

If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and around two and a half times that of Western Europe.

Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.
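The traffic-per-user metric is simple division. A sketch with placeholder figures (the numbers below are illustrative stand-ins, not the actual Cisco VNI or ITU values) shows how the comparison is computed:

```python
# Hypothetical inputs standing in for Cisco VNI traffic estimates and
# ITU Internet-user counts. These are illustrative, not the real figures.
traffic_pb_per_month = {"United States": 18_000, "Western Europe": 9_000}  # petabytes
internet_users_m = {"United States": 245, "Western Europe": 310}           # millions

def traffic_per_user(region: str) -> float:
    """Monthly traffic per Internet user, in gigabytes."""
    pb = traffic_pb_per_month[region]
    users = internet_users_m[region] * 1_000_000
    return pb * 1_000_000 / users  # 1 PB = 1,000,000 GB

for region in traffic_pb_per_month:
    print(f"{region}: {traffic_per_user(region):.0f} GB per user per month")
```

Note that the per-user denominator matters: a region with more users but less total traffic falls further behind on this metric than raw traffic totals would suggest.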

Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.

Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.

A Decade Later, Net Neutrality Goes To Court

Today the U.S. Court of Appeals for the D.C. Circuit hears Verizon’s challenge to the Federal Communications Commission’s “Open Internet Order” — better known as “net neutrality.”

Hard to believe, but we’ve been arguing over net neutrality for a decade. I just pulled up some testimony George Gilder and I prepared for a Senate Commerce Committee hearing in April 2004. In it, we asserted that a newish “horizontal layers” regulatory proposal, then circulating among comm-law policy wonks, would become the next big tech policy battlefield. Horizontal layers became net neutrality, the Bush FCC adopted the non-binding Four Principles of an open Internet in 2005, the Obama FCC pushed through actual regulations in 2010, and now today’s court challenge, which argues that the FCC has no authority to regulate the Internet and that, in fact, Congress told the FCC not to regulate the Internet.

Over the years we’ve followed the debate, and often weighed in. Here’s a sampling of our articles, reports, reply comments, and even some doggerel:

— Bret Swanson

Net ‘Neutrality’ or Net Dynamism? Easy Choice.

Consumers beware. A big content company wants to help pay for the sports you love to watch.

ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, long time net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”

The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that, had net neutrality existed at the time, the outlawing of pay-for-prioritization would have prevented the rise of content delivery networks (CDNs), which have fundamentally improved the user experience for viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that networks would continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.

So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.

So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has given the FCC no authority to regulate the Internet. The FCC invented this job for itself and is now being challenged in court.

Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.

— Bret Swanson

Broadband Bullfeathers

Several years ago, some American lawyers and policymakers were looking for ways to boost government control of the Internet. So they launched a campaign to portray U.S. broadband as a pathetic patchwork of tin-cans-and-strings from the 1950s. The implication was that broadband could use a good bit of government “help.”

They initially had some success with a gullible press. The favorite tools were several reports that measured, nation by nation, the number of broadband connections per 100 inhabitants. The U.S. emerged from these reports looking very mediocre. How many times did we read, “The U.S. is 16th in the world in broadband”? Upon inspection, however, the reports weren’t very useful. Among other problems, they were better at measuring household size than broadband health. America, with its larger households, would naturally have fewer residential broadband subscriptions (not broadband users) than nations with smaller households (and thus more households per capita). And as the Phoenix Center demonstrated, rather hilariously, if the U.S. and other nations achieved 100% residential broadband penetration, America would actually fall to 20th from 15th.
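The household-size effect is easy to see with a little arithmetic. Assuming every household has exactly one broadband subscription (the household sizes below are illustrative round numbers, not census figures), “subscriptions per 100 inhabitants” reduces to 100 divided by average household size:

```python
# If every household in each country has exactly one broadband
# subscription (100% residential penetration), "subscriptions per 100
# inhabitants" collapses to 100 / average household size. The household
# sizes below are illustrative round numbers, not census figures.
avg_household_size = {
    "Small-household country": 2.2,
    "Mid-household country": 2.6,
    "Large-household country": 3.1,
}

for country, size in avg_household_size.items():
    subs_per_100 = 100 / size  # one subscription per household
    print(f"{country}: {subs_per_100:.1f} subscriptions per 100 inhabitants")

# Identical 100% coverage everywhere, yet the smallest-household country
# "leads" the penetration ranking.
```

The metric therefore rewards demographics rather than network deployment, which is the Phoenix Center’s point: even universal coverage cannot lift a larger-household nation to the top of such a table.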

In the fall of 2009, a voluminous report from Harvard’s Berkman Center tried to stitch the supposedly ominous global evidence into a case-closed indictment of U.S. broadband. The Berkman report, however, was a complete bust (see, for example, these thorough critiques: 1, 2, and 3 as well as my brief summary analysis).

Berkman’s statistical analyses had failed on their own terms. Yet it was still important to think about the broadband economy in a larger context. We asked the question, how could U.S. broadband be so backward if so much of the world’s innovation in broadband content, services, and devices was happening here?

To name just a few: cloud computing, YouTube, Twitter, Facebook, Netflix, iPhone, Android, ebooks, app stores, iPad. We also showed that the U.S. generates around 60% more network traffic per capita and per Internet user than Western Europe, the supposed world broadband leader. The examples multiply by the day. As FCC chairman Julius Genachowski likes to remind us, the U.S. now has more 4G LTE wireless subscribers than the rest of the world combined.

Yet here comes a new book with the same general thrust — that the structure of the U.S. communications market is delivering poor information services to American consumers. In several new commentaries summarizing the forthcoming book’s arguments, author Susan Crawford’s key assertion is that U.S. broadband is slow. It’s so bad, she thinks broadband should be a government utility. But is U.S. broadband slow?

According to actual network throughput measured by Akamai, the world’s largest content delivery network, the U.S. ranks in the top 10 or 15 across a range of bandwidth metrics. It is ninth in average connection speed, for instance, and 13th in average peak speed. Looking at proportions of populations who enjoy speeds above a certain threshold, Akamai finds the U.S. is seventh in the percentage of connections exceeding 10 megabits per second (Mbps) and 13th in the percentage exceeding 4 Mbps. (See the State of the Internet report, 2Q 2012.)

You may not be impressed with rankings of seventh or 13th. But did you look at the top nations on the list? Hong Kong, South Korea, Latvia, Switzerland, the Netherlands, Japan, etc.

Each one of them is a relatively small, densely populated country. The national rankings are largely artifacts of geography and the size of the jurisdictions observed. Small nations with high population densities fare well. It is far more economical to build high-speed communications links in cities and other relatively dense populations. Accounting for this size factor, the U.S. actually looks amazingly good. Only Canada comes close to the U.S. among geographically larger nations.

But let’s look even further into the data. Akamai also supplies speeds for individual U.S. states. If we merge the tables of nations and states, the U.S. begins to look not like a broadband backwater or even a middling performer but an overwhelming success. Here are the two sets of Akamai data combined into tables that directly compare the successful small nations with their more natural counterparts, the U.S. states (shaded in blue).

Average Broadband Connection Speed — Nine of the top 15 entities are U.S. states.

Average Peak Connection Speed — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 10 Megabits per Second — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 4 Megabits per Second — Ten of the top 16 entities are U.S. states.

Among the 61 ranked entities on these four measures of broadband speed, 39, or almost two-thirds, are U.S. states. American broadband is not “pitifully slow.” In fact, if we were to summarize U.S. broadband, we’d have to say, compared to the rest of the world, it is very fast.
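The merge-and-rank exercise described above is easy to reproduce. Here is a minimal sketch; the speed tables are small placeholder values for illustration, not Akamai's actual 2Q 2012 figures:

```python
# Illustrative sketch of the nations-vs-states comparison described above.
# Speeds (Mbps) are placeholder values, NOT Akamai's actual 2Q 2012 data.
nations = {
    "South Korea": 14.2, "Japan": 10.7, "Hong Kong": 8.9,
    "Latvia": 8.2, "Switzerland": 8.0, "Netherlands": 7.8,
}
us_states = {
    "Delaware": 10.9, "New Hampshire": 9.3, "Vermont": 9.1,
    "Utah": 8.7, "Massachusetts": 8.6, "New York": 7.9,
}

# Merge the two tables and rank every entity together, fastest first.
combined = {**nations, **us_states}
ranking = sorted(combined, key=combined.get, reverse=True)

top_10 = ranking[:10]
states_in_top_10 = sum(1 for name in top_10 if name in us_states)
print(f"{states_in_top_10} of the top 10 entities are U.S. states")
```

With Akamai's real tables substituted in, the same few lines produce the state counts reported in the four comparisons above.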

It is true that not every state or region in the U.S. enjoys top speeds. Yes, we need more, better, faster, wider coverage of wired and wireless broadband, in underserved neighborhoods as well as in our already advanced areas. We need constant improvement both to accommodate today’s content and services and to drive tomorrow’s innovations. We should not, however, be making broad policy under the illusion that U.S. broadband, taken as a whole, is deficient. The quickest way to make U.S. broadband deficient is probably to enact policies that discourage investment and innovation — such as trying to turn a pretty successful and healthy industry that invests $60 billion a year into a government utility.

— Bret Swanson

The $66-billion Internet Expansion

Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.

The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.

As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network and the transition to all Internet, all the time, everywhere.

The Real Deal on U.S. Broadband

Is American broadband broken?

Tim Lee thinks so. Where he once leaned against intervention in the broadband marketplace, Lee says four things are leading him to rethink and tilt toward more government control.

First, Lee cites the “voluminous” 2009 Berkman Report. Which is surprising. The report published by Harvard’s Berkman Center may have been voluminous, but it lacked accuracy in its details and persuasiveness in its big-picture take-aways. Berkman used every trick in the book to claim “open access” regulation around the world boosted other nations’ broadband economies and lack of such regulation in the U.S. harmed ours. But the report’s data and methodology were so thoroughly discredited (especially in two detailed reports issued by economists Robert Crandall, Everett Ehrlich, and Jeff Eisenach and Robert Hahn) that the FCC, which commissioned the report, essentially abandoned it. Here was my summary of the economists’ critiques:

The [Berkman] report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland.

. . .

Berkman’s qualitative analysis was, if possible, just as misleading. It passed along faulty data on broadband speeds and prices. It asserted South Korea’s broadband boom was due to open access regulation, but in fact most of South Korea’s surge happened before it instituted any regulation. The study said Japanese broadband, likewise, is a winner because of regulation. But regulated DSL is declining fast even as facilities-based (unshared, proprietary) fiber-to-the-home is surging.

Berkman also enjoyed comparing broadband speeds of tiny European and Asian countries to the whole U.S. But if we examine individual American states — New York or Arizona, for example — we find many of them outrank most European nations and Europe as a whole. In fact, applying the same Speedtest.com data Berkman used, the U.S. as a whole outpaces Europe as a whole! Comparing small islands of excellence to much larger, more diverse populations or geographies is bound to skew your analysis.

The Berkman report twisted itself in pretzels trying to paint a miserable picture of the U.S. Internet economy and a glowing picture of heavy regulation in foreign nations. Berkman, however, ignored the prima facie evidence of a vibrant U.S. broadband marketplace, manifest in the boom in Web video, mobile devices, the App Economy, cloud computing, and on and on.

How could the bulk of the world’s best broadband apps, services, and sites be developed and achieve their highest successes in the U.S. if American broadband were so slow and thinly deployed? We came up with a metric that seemed to refute the notion that U.S. broadband was lagging, namely, how much network traffic Americans generate vis-à-vis the rest of the world. It turned out the U.S. generates more network traffic per capita and per Internet user than any nation but South Korea and generates about two-thirds more per-user traffic than the closest advanced economy of comparable size, Western Europe.
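The traffic-per-user metric is simply total IP traffic divided by the number of Internet users. A sketch with round placeholder figures, chosen to illustrate the roughly two-thirds gap described above rather than to reproduce Cisco's or the ITU's actual numbers:

```python
# Sketch of the "traffic per Internet user" metric described above.
# Figures are illustrative placeholders, not Cisco/ITU data.
monthly_traffic_pb = {"United States": 7500.0, "Western Europe": 9000.0}  # petabytes/month
internet_users_m   = {"United States": 240.0,  "Western Europe": 480.0}   # millions of users

def traffic_per_user_gb(region):
    # 1 petabyte = 1,000,000 gigabytes; result is GB per user per month
    return monthly_traffic_pb[region] * 1_000_000 / (internet_users_m[region] * 1_000_000)

us = traffic_per_user_gb("United States")   # 31.25 GB/user/month
we = traffic_per_user_gb("Western Europe")  # 18.75 GB/user/month
print(f"U.S. generates {us / we - 1:.0%} more traffic per user")  # 67% more
```

The point of the metric is that aggregate traffic can be larger in a bigger region while per-user traffic, the better gauge of network health, still favors the U.S.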

Berkman based its conclusions almost solely on (incorrect) measures of “broadband penetration” — the number of broadband subscriptions per capita — but that metric turned out to be a better indicator of household size than broadband health. Lee acknowledges the faulty analysis but still assumes “broadband penetration” is the sine qua non measure of Internet health. Maybe we’re not awful, as Berkman claimed, Lee seems to be saying, but even if we correct for their methodological mistakes, U.S. broadband penetration is still just OK. “That matters,” Lee writes,

because a key argument for America’s relatively hands-off approach to broadband regulation has been that giving incumbents free rein would give them incentive to invest more in their networks. The United States is practically the only country to pursue this policy, so if the incentive argument was right, its advocates should have been able to point to statistics showing we’re doing much better than the rest of the world. Instead, the argument has been over just how close to the middle of the pack we are.

No, I don’t agree that the argument has consisted of bickering over whether the U.S. is more or less mediocre. Not at all. I do agree that advocates of government regulation have had to adjust their argument, from “U.S. broadband is awful” to “U.S. broadband is mediocre.” Yet they still hang their hat on “broadband penetration” because most other evidence on the health of the U.S. digital economy is even less supportive of their case.

In each of the last seven years, U.S. broadband providers have invested between $60 and $70 billion in their networks. Overall, the U.S. leads the world in info-tech investment — totaling nearly $500 billion last year. The U.S. now boasts more than 80 million residential broadband links and 200+ million mobile broadband subscribers. U.S. mobile operators have deployed more 4G mobile network capacity than anyone, and Verizon just announced its FiOS fiber service will offer 300 megabit-per-second residential connections — perhaps the fastest large-scale deployment in the world.

Eisenach and Crandall followed up their critique of the Berkman study with a fresh March 2012 analysis of “open access” regulation around the world (this time with Allan Ingraham). They found:

  • “it is clear that copper loop unbundling did not accelerate the deployment or increase the penetration of first-generation broadband networks, and that it had a depressing effect on network investment”
  • “By contrast, it seems clear that platform competition was very important in promoting broadband deployment and uptake in the earlier era of DSL and cable modem competition.”
  • “to the extent new fiber networks are being deployed in Europe, they are largely being deployed by unregulated, non-ILEC carriers, not by the regulated incumbent telecom companies, and not by entrants that have relied on copper-loop unbundling.”

Lee doesn’t mention the incisive criticisms of the Berkman study or the voluminous literature, including this latest example, showing open access policies are ineffective at best, and more likely harmful.

In coming posts, I’ll address Lee’s three other worries.

— Bret Swanson

World Broadband Update

The OECD published its annual Communications Outlook last week, and the 390 pages offer a wealth of information on all-things-Internet — fixed line, mobile, data traffic, price comparisons, etc. Among other remarkable findings, OECD notes that:

In 1960, only three countries — Canada, Sweden and the United States — had more than one phone for every four inhabitants. For most of what would become OECD countries a year later, the figure was less than 1 for every 10 inhabitants, and less than 1 in 100 in a couple of cases. At that time, the 84 million telephones in OECD countries represented 93% of the global total. Half a century later there are 1.7 billion telephones in OECD countries and a further 4.1 billion around the world. More than two in every three people on Earth now have a mobile phone.

Very useful stuff. But in recent times the report has also served as a chance for some to misrepresent the relative health of international broadband markets. The common refrain the past several years was that the U.S. had fallen way behind many European and Asian nations in broadband. The mantra that the U.S. is “15th in the world in broadband” — or 16th, 21st, 24th, take your pick — became a sort of common lament. Except it wasn’t true.

As we showed here, the second half of the 2000s saw an American broadband boom. The Phoenix Center and others showed that the most cited stat in those previous OECD reports — broadband connections per 100 inhabitants — actually told you more about household size than broadband. And we developed metrics to better capture the overall health of a nation’s Internet market — IP traffic per Internet user and per capita.

Below you’ll see an update of the IP traffic per Internet user chart, built upon Cisco’s most recent (June 1, 2011) Visual Networking Index report. The numbers, as they did last year, show the U.S. leads every region of the world in the amount of IP traffic we generate and consume both in per user and per capita terms. Among nations, only South Korea tops the U.S., and only Canada matches the U.S.

Although Asia contains broadband stalwarts like Korea, Japan, and Singapore, it also has many laggards. If we compare the U.S. to the most uniformly advanced region, Western Europe, we find the U.S. generates 62% more traffic per user. (These figures are based on Cisco’s 2010 traffic estimates and the ITU’s 2010 Internet user numbers.)

As we noted last year, it’s not possible for the U.S. to both lead the world by a large margin in Internet usage and lag so far behind in broadband. We think these traffic per user and per capita figures show that our residential, mobile, and business broadband networks are among the world’s most advanced and ubiquitous.

Lots of other quantitative and qualitative evidence — from our smart-phone adoption rates to the breakthrough products and services of world-leading device (Apple), software (Google, Apple), and content companies (Netflix) — reaffirms the fairly obvious fact that the U.S. Internet ecosystem is in fact healthy, vibrant, and growing. Far from lagging, it leads the world in most of the important digital innovation indicators.

— Bret Swanson

Data roaming mischief . . . Another pebble in the digital river?

Mobile communications is among the healthiest of U.S. industries. Through a time of economic peril and now merely uncertainty, mobile innovation hasn’t wavered. It’s been a too-rare bright spot. Huge amounts of infrastructure investment, wildly proliferating software apps, too many devices to count. If anything, the industry is moving so fast on so many fronts that we risk not keeping up with needed capacity.

Mobile, perhaps not coincidentally, has also historically been a quite lightly regulated industry. But a slow boil of many small rules, or proposed rules, is emerging that could threaten the sector’s success. I’m thinking of the “bill shock” proceeding, in which the FCC is looking at billing practices and various “remedies.” And the failure to settle the D block public safety spectrum issue in a timely manner. And now we have a group of rural mobile providers who want the FCC to set prices in the data roaming market.

You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.

But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data (video, Facebook, Twitter, app downloads, and the like), want the FCC to mandate access to B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars on spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand.

The FCC has never regulated mobile phone rates, let alone data rates, let alone data roaming rates. And of course mobile voice and data rates have been dropping like rocks. These few rural providers are asking the FCC to step in where it hasn’t before. They are asking the FCC to impose old-time common carrier regulation in a modern competitive market – one in which the FCC has no authority to impose common carrier rules and prices.

In the chart above, we see U.S. info-tech investment in 2010 approached $500 billion. Communications equipment and structures (like cell phone towers) surpassed $105 billion. The fourth generation of mobile networks is just in its infancy. We will need to invest many tens of billions of dollars each year for the foreseeable future both to drive and accommodate Internet innovation, which spreads productivity enhancements and wealth across every sector in the economy.

It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the equally unsurprising result of such a policy could be to reduce the amount that everyone invests in new networks. And that is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”

Economist Michael Mandel has offered a useful analogy:

new regulations [are] like tossing small pebbles into a stream. Each pebble by itself would have very little effect on the flow of the stream. But throw in enough small pebbles and you can make a very effective dam.

Why does this happen? The answer is that each pebble by itself is harmless. But each pebble, by diverting the water into an ever-smaller area, creates a ‘negative externality’ that creates more turbulence and slows the water flow.

Similarly, apparently harmless regulations can create negative externalities that add up over time, by forcing companies to spend time and energy meeting the new requirements. That reduces business flexibility and hurts innovation and growth.

It may be true that none of the proposed new rules for wireless could alone bring down the sector. But keep piling them up, and you can dangerously slow an important economic juggernaut. Price controls for data roaming are a terrible idea.

Mobile traffic grew 159% in 2010 . . . Tablets giving big boost

Among other findings in the latest version of Cisco’s always useful Internet traffic updates:

  • Mobile data traffic was even higher in 2010 than Cisco had projected in last year’s report. Actual growth was 159% (2.6x) versus projected growth of 149% (2.5x).
  • By 2015, we should see one mobile device per capita . . . worldwide. That means around 7.1 billion mobile devices compared to 7.2 billion people.
  • Mobile tablets (e.g., iPads) are likely to generate as much data traffic in 2015 as all mobile devices worldwide did in 2010.
  • Mobile traffic should grow at an annual compound rate of 92% through 2015. That would mean 26-fold growth between 2010 and 2015.
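The last two bullets are consistent with each other, as a quick compound-growth check shows:

```python
# Checking the compound-growth arithmetic in the bullets above:
# 92% annual growth compounded over the five years from 2010 to 2015.
annual_growth = 0.92
years = 2015 - 2010  # five compounding periods
multiple = (1 + annual_growth) ** years
print(f"{multiple:.1f}x growth")  # ~26.1x, i.e. the "26-fold" figure
```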

Did the FCC order get lots worse in last two weeks?

So, here we are. Today the FCC voted 3-2 to issue new rules governing the Internet. I expect the order to be struck down by the courts and/or Congress. Meantime, a few observations:

  • The order appears to be more intrusive on the topic of “paid prioritization” than was Chairman Genachowski’s outline earlier this month. (Keep in mind, we haven’t seen the text. The FCC Commissioners themselves only got access to the text at 11:42 p.m. last night.)
  • If this is true, if the “nondiscrimination” ban goes further than a simple reasonableness test, which itself would be subject to tumultuous legal wrangling, then the Net Neutrality order could cause more problems than I wrote about in this December 7 column.
  • A prohibition or restriction on “paid prioritization” is a silly rule that belies a deep misunderstanding of how our networks operate today and how they will need to operate tomorrow. Here’s how I described it in recent FCC comments:

In September 2010, a new network company, which had operated in stealth mode digging ditches and boring tunnels for the previous 24 months, emerged on the scene. As Forbes magazine described it, this tiny new company, Spread Networks

“spent the last two years secretly digging a gopher hole from Chicago to New York, usurping the erstwhile fastest paths. Spread’s one-inch cable is the latest weapon in the technology arms race among Wall Street houses that use algorithms to make lightning-fast trades. Every day these outfits control bigger stakes of the markets – up to 70% now. “Anybody pinging both markets has to be on this line, or they’re dead,” says Jon A. Najarian, cofounder of OptionMonster, which tracks high-frequency trading.

“Spread’s advantage lies in its route, which makes nearly a straight line from a data center in Chicago’s South Loop to a building across the street from Nasdaq’s servers in Carteret, N.J. Older routes largely follow railroad rights-of-way through Indiana, Ohio and Pennsylvania. At 825 miles and 13.3 milliseconds, Spread’s circuit shaves 100 miles and 3 milliseconds off of the previous route of lowest latency, engineer-talk for length of delay.”

Why spend an estimated $300 million on an apparently duplicative route when numerous seemingly similar networks already exist? Because, Spread says, three milliseconds matters.

Spread offers guaranteed latency on its dark fiber product of no more than 13.33 milliseconds. Its managed wave product is guaranteed at no more than 15.75 milliseconds. It says competitors’ routes between Chicago and New York range from 16 to 20 milliseconds. We don’t know if Spread will succeed financially. But Spread is yet another demonstration that latency is of enormous and increasing importance. From entertainment to finance to medicine, the old saw is truer than ever: time is money. It can even mean life or death.
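Spread's figures line up with simple physics. Light in optical fiber travels at roughly two-thirds of its vacuum speed (the glass has a refractive index near 1.5), which suggests the quoted 13.3 milliseconds is a round-trip time over the 825-mile route:

```python
# Back-of-the-envelope check on Spread's latency figures above.
# Assumes light in fiber propagates at ~2/3 the vacuum speed of light;
# the 825-mile route length comes from the Forbes description.
route_miles = 825
route_km = route_miles * 1.609344
speed_of_light_kms = 299_792              # km/s in vacuum
fiber_speed_kms = speed_of_light_kms * 2 / 3

one_way_ms = route_km / fiber_speed_kms * 1000
round_trip_ms = 2 * one_way_ms
print(f"one-way ~{one_way_ms:.1f} ms, round trip ~{round_trip_ms:.1f} ms")
```

Each mile of fiber costs about 16 microseconds round trip before any equipment delays, so a shorter route plus fewer regeneration hops plausibly accounts for the 3 milliseconds Spread shaved off the old path.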

A policy implication arises. The Spread service is, of course, a form of “paid prioritization.” Companies are paying “eight to 10 times the going rate” to get their bits where they want them, when they want them. It is not only a demonstration of the heroic technical feats required to increase the power and diversity of our networks. It is also a prime example that numerous network users want to and will pay money to achieve better service.

One way to achieve better service is to deploy more capacity on certain links. But capacity is not always the problem. As Spread shows, another way to achieve better service is to build an entirely new 825-mile fiber route through mountains to minimize laser light delay. Or we might deploy a network of server caches that store non-realtime data closer to the end points of networks, as many Content Delivery Networks (CDNs) have done. But when we can’t build a new fiber route or store data – say, when we need to get real-time packets from point to point over the existing network – yet another option might be to route packets more efficiently with sophisticated QoS technologies. Each of these solutions fits a particular situation. They take advantage of, or submit to, the technological and economic trade-offs of the moment or the era. They are all legitimate options. Policy simply must allow for the diversity and flexibility of technical and economic options – including paid prioritization – needed to manage networks and deliver value to end-users.

Depending on how far the FCC is willing to take these misguided restrictions, it could actually lead to the very outcomes most reviled by “open Internet” fanatics — that is, more industry concentration, more “walled gardens,” more closed networks. Here’s how I described the possible effect of restrictions on the important voluntary network management tools and business partnerships needed to deliver robust multimedia services:

There has also been discussion of an exemption for “specialized services.” Like wireless, it is important that such specialized services avoid the possible innovation-sapping effects of a Net Neutrality regulatory regime. But the Commission should consider several unintended consequences of moving down the path of explicitly defining, and then exempting, particular “specialized” services while choosing to regulate the so-called “basic,” “best-effort,” or “entry level” “open Internet.”

Regulating the “basic” Internet but not “specialized” services will surely push most of the network and application innovation and investment into the unregulated sphere. A “specialized” exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS providers and ISPs to target the “specialized” category and thus shrink the scope of the “open Internet.”

In fact, although specialized services should and will exist, they often will interact with or be based on the “basic” Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not “the Internet” is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing. Moreover, a regime of rigid quarantine would not be good for consumers. If a CAS provider or ISP has to build a new physical or logical network, segregate services and software, or develop new products and marketing for a specifically defined “specialized” service, there would be a very large disincentive to develop and offer simple innovations and new services to customers over the regulated “basic” Internet. Perhaps a consumer does not want to spend the extra money to jump to the next tier of specialized service. Perhaps she only wants the service for a specific event or a brief period of time. Perhaps the CAS provider or ISP can far more economically offer a compelling service over the “basic” Internet with just a small technical tweak, where a leap to a full-blown specialized service would require more time and money, and push the service beyond the reach of the consumer. The transactions costs of imposing a “specialized” quarantine would reduce technical and economic flexibility on both CAS providers and ISPs and, most crucially, on consumers.

Or, as we wrote in our previous Reply Comments about a related circumstance, “A prohibition of the voluntary partnerships that are likely to add so much value to all sides of the market – service provider, content creator, and consumer – would incentivize the service provider to close greater portions of its networks to outside content, acquire more content for internal distribution, create more closely held ‘managed services’ that meet the standards of the government’s ‘exclusions,’ and build a new generation of larger, more exclusive ‘walled gardens’ than would otherwise be the case. The result would be to frustrate the objective of the proceeding. The result would be a less open Internet.”

It is thus possible that a policy seeking to maintain some pure notion of a basic “open Internet” could severely devalue the open Internet the Commission is seeking to preserve.

All this said, the FCC’s legal standing is so tenuous and this order so rooted in reasoning already rejected by the courts, I believe today’s Net Neutrality rule will be overturned. Thus despite the numerous substantive and procedural errors committed on this “darkest day of the year,” I still expect the Internet to “survive and thrive.”

The Internet Survives, and Thrives, For Now

See my analysis of the FCC’s new “net neutrality” policy at RealClearMarkets:

Despite the Federal Communications Commission’s “net neutrality” announcement this week, the American Internet economy is likely to survive and thrive. That’s because the new proposal offered by FCC chairman Julius Genachowski is lacking almost all the worst ideas considered over the last few years. No one has warned more persistently than I against the dangers of over-regulating the Internet in the name of “net neutrality.”

In a better world, policy makers would heed my friend Andy Kessler’s advice to shutter the FCC. But back on earth this new compromise should, for the near-term at least, cap Washington’s mischief in the digital realm.

. . .

The Level 3-Comcast clash showed what many of us have said all along: “net neutrality” was a purposely ill-defined catch-all for any grievance in the digital realm. No more. With the FCC offering some definition, however imperfect, businesses will now mostly have to slug it out in a dynamic and tumultuous technology arena, instead of running to the press and politicians.
