Category Archives: Broadband

Permission Slips for Internet Innovation

See my commentary in The Wall Street Journal this weekend — “Permission Slips for Internet Innovation.”

Netflix, Mozilla, Google recant on Net Neutrality

Dept. of You Can’t Make This Stuff Up:

Three of the driving forces behind the 10-year effort to regulate the Internet — Netflix, Mozilla, and Google — have, in the last few days and in their own ways, all recanted their zealous support of Net Neutrality. It may have been helpful to have this information . . . before last week, when the FCC plunged the entire Internet industry into a years-long legal war.

First, on Monday, Netflix announced it had entered into a “sponsored data” deal with an Australian ISP, which violates the principles of “strong Net Neutrality,” Netflix’s preferred and especially robust flavor of regulation.

Then on Wednesday, Netflix CFO David Wells, speaking at an investor conference, said:

“Were we pleased it pushed to Title II? Probably not. We were hoping there might be a non-regulated solution.”

At this week’s huge Mobile World Congress in Barcelona, meanwhile, my AEI colleague Jeff Eisenach reported via Twitter that a Mozilla executive had backtracked:

@JeffEisenach (3/4/15, 10:44 AM): “Mozilla’s Dixon-Thayer is latest #netneutrality advocate to backpedal – ‘we don’t necessarily favor regulation’ #repealtitleII #MWC15MP”

Add these to the revelations about Google’s newfound reticence. Several weeks ago, in The Wall Street Journal‘s blockbuster exposé, we found out that Google Chairman Eric Schmidt called the White House to protest President Obama’s surprise endorsement of Title II regulation of the Internet. Then, just days before the February 26 vote at the FCC, Google urgently pleaded that the Commission remove the bizarre new regulatory provision known as broadband subscriber access service (BSAS), which would have created out of thin air a hitherto unknown “service” between websites and ISP consumers — in order to regulate that previously nonexistent service. (Ironic, yes, that this BSAS provision was dreamt up by . . . Mozilla.) Google was successful, just 48 hours before the vote, in excising this menacing regulation of a phantom service. But Google and the others are waking up to the fact that Title II and broad Section 706 authority might contain more than a few nasty surprises.

Fred Campbell examined Netflix’s statements over the last year and concluded: “Netflix bluffed. And everybody lost.”

And Yet . . .

The bottom line of these infuriating reversals may actually be a positive for the Internet. These epiphanies — “Holy bit, we just gave the FCC the power to do what!?!” — may wake serious people from the superficial slumber of substance-free advocacy. The epiphanies may give new life to efforts in Congress to find a legislative compromise that would prohibit clear bad behavior (blocking, throttling, etc.) but which would also circumscribe the FCC’s regulatory ambitions and thus allow the Internet to continue on its mostly free and unregulated — and hugely successful — path.

The last refuge of Internet regulators: the theory of the “terminating access monopoly”

Net neutrality activists have deployed a long series of rationales in their quest for government control of the Internet. As each rationale is found wanting, they simply move on to the next, more exotic theory. The debate has gone on so long that they’ve even begun recycling through old theories that were discredited long ago.

In the beginning, the activists argued that there should be no pay for performance anywhere on the Net. We pointed out the most obvious example of a harmful consequence of their proposal: their rules, as originally written, would have banned widely used content delivery networks (CDNs), which speed delivery of packets (for a price).

Then they argued that without strong government rules, broadband service providers would block innovation at the “edges” of the network. But for the last decade, under minimal regulation, we’ve enjoyed an explosion of new technologies, products, and services from content and app firms like YouTube, Facebook, Netflix, Amazon, Twitter, WhatsApp, Etsy, Snapchat, Pinterest, Twitch, and a thousand others. Many of these firms have built businesses worth billions of dollars.

They said we needed new rules because the light-touch regulatory environment had left broadband in the U.S. lagging its international rivals, whose farsighted industrial policies had catapulted them far ahead of America. Oops. Turns out, the U.S. leads the world in broadband. (See my colleague Richard Bennett’s detailed report on global broadband and my own.)

Then they argued that, regardless of how well the U.S. is doing, do you really trust a monopoly to serve consumer needs? We need to stop the broadband monopolist — the cable company. Turns out most Americans have several choices in broadband providers, and the list of options is growing — see FiOS, U-verse, Google Fiber, satellite, broadband wireless from multiple carriers, etc. No, broadband service is not like peanut butter. Because of the massive investments required to build networks, there will never be many dozens of wires running to each home. But neither is broadband a monopoly.

Artificially narrowing the market is the first refuge of nearly all bureaucrats concerned with competition. It’s an easy way to conjure a monopoly in almost any circumstance. My favorite example was the Federal Trade Commission’s initial opposition in 2003 to the merger of Häagen-Dazs (Nestlé) and Godiva (Dreyer’s). The government argued it would harmfully reduce competition in the market for “super premium ice cream.” The relevant market, in the agency’s telling, wasn’t food, or desserts, or sweets, or ice cream, or even premium ice cream, but super premium ice cream.

The U.S. Leads the World in Broadband

See our Wall Street Journal op-ed from December 8, which summarizes our new research on Internet traffic and argues for a continued policy of regulatory humility for the digital economy.

Continue reading here.

How can U.S. broadband lag if it generates 2-3 times the traffic of other nations?

Is the U.S. broadband market healthy or not? This question is central to the efforts to change the way we regulate the Internet. In a short new paper from the American Enterprise Institute, we look at a simple way to gauge whether the U.S. has in fact fallen behind other nations in coverage, speed, and price . . . and whether consumers enjoy access to content. Here’s a summary:

  • Internet traffic volume is an important indicator of broadband health, as it encapsulates and distills the most important broadband factors, such as access, coverage, speed, price, and content availability.
  • US Internet traffic — a measure of the nation’s “digital output” — is two to three times higher than that of most advanced nations, and the United States generates more Internet traffic per capita and per Internet user than any major nation except for South Korea.
  • The US model of broadband investment and innovation — which operates in an environment that is largely free from government interference — has been a dramatic success.
  • Overturning this successful policy by imposing heavy regulation on the Internet puts one of America’s most vital industries at risk.
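The per-capita and per-user metric behind these bullets is simple division of total traffic by population and by Internet users. A minimal sketch of the calculation, using purely hypothetical figures (not the paper’s actual Cisco/ITU data):

```python
# Illustrative sketch of the traffic-per-capita / traffic-per-user metric.
# All figures below are hypothetical placeholders, not the AEI paper's data.

countries = {
    # name: (annual IP traffic in exabytes, population in millions,
    #        Internet users in millions)
    "Country A": (20.0, 320, 280),
    "Country B": (2.0, 80, 70),
}

for name, (traffic_eb, pop_m, users_m) in countries.items():
    traffic_gb = traffic_eb * 1e9           # exabytes -> gigabytes
    per_capita = traffic_gb / (pop_m * 1e6)
    per_user = traffic_gb / (users_m * 1e6)
    print(f"{name}: {per_capita:.1f} GB/capita, {per_user:.1f} GB/user")
```

With these placeholder numbers, Country A generates 2.5 times Country B’s traffic per capita and per user, even though it has more people, which is the kind of gap the paper describes between the U.S. and other advanced nations.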

Twitch Proves the Net Is Working

Below find our Reply Comments in the Federal Communications Commission’s Open Internet proceeding:

September 15, 2014

Twitch Proves the Net Is Working

On August 25, 2014, Amazon announced its acquisition of Twitch for around $1 billion. Twitch  (twitch.tv) is a young but very large website that streams video games and the gamers who play them. The rise of Twitch demonstrates the Net is working and, we believe, also deals a severe blow to a central theory of the Order and NPRM.

The NPRM repeats the theory of the 2010 Open Internet Order that “providers of broadband Internet access service had multiple incentives to limit Internet openness.” The theory advances a concern that small start-up content providers might be discouraged or blocked from opportunities to grow. Neither the Order nor the current NPRM considers or even acknowledges evidence or arguments to the contrary — that broadband service providers (BSPs) may have substantial incentives to promote Internet openness. Nevertheless, the Commission now helpfully seeks comment “to update the record to reflect marketplace, technical, and other changes since the 2010 Open Internet Order was adopted that may have either exacerbated or mitigated broadband providers’ incentives and ability to limit Internet openness. We seek general comment on the Commission’s approach to analyzing broadband providers’ incentives and ability to engage in practices that would limit the open Internet.”

The continued growth of the Internet, and the general health of the U.S. Web, content, app, device, and Internet services markets — all occurring in the absence of Net Neutrality regulation — more than mitigate the Commission’s theory of BSP incentives. While there is scant evidence for the theory of bad BSP behavior, there is abundant evidence that openness generally benefits all players throughout the Internet value chain. The Commission cannot ignore this evidence.

The rise of Twitch is a perfect example. In three short years, Twitch went from brand new start-up to the fourth largest single source of traffic on the Internet. Google had previously signed a term sheet with Twitch, but so great was the momentum of this young, tiny company, that it could command a more attractive deal from Amazon. At the time of its acquisition by Amazon, Twitch said it had 55 million unique monthly viewers (consumers) and more than one million broadcasters (producers), generating 15 billion minutes of content viewed a month. According to measurements by the network scientist and Deepfield CEO Craig Labovitz, only Netflix, Google’s YouTube, and Apple’s iTunes generate more traffic.

Digital Dynamism

See our new 20-page report — Digital Dynamism: Competition in the Internet Ecosystem:

The Internet is altering the communications landscape even faster than most imagined.

Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.

The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.

The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.

  • Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread wireless Wi-Fi nodes then extend these broadband connections.
  • Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, ecommerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
  • Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.

The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.

Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.

U.S. Share of Internet Traffic Grows

Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of  connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).

We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)

If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and around two and a half times that of Western Europe.

Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.

Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.

Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.

Discussing Broadband and Economic Growth at AEI

On Tuesday this week, the American Enterprise Institute launched an exciting new project — the Center for Internet, Communications, and Technology. I was happy to participate in the inaugural event, which included talks by CEA chairman Jason Furman and Rep. Greg Walden (R-OR). We discussed broadband’s potential to boost economic productivity and focused on the importance and key questions of wireless spectrum policy. See the video below:

A Decade Later, Net Neutrality Goes To Court

Today the D.C. Federal Appeals Court hears Verizon’s challenge to the Federal Communications Commission’s “Open Internet Order” — better known as “net neutrality.”

Hard to believe, but we’ve been arguing over net neutrality for a decade. I just pulled up some testimony George Gilder and I prepared for a Senate Commerce Committee hearing in April 2004. In it, we asserted that a newish “horizontal layers” regulatory proposal, then circulating among comm-law policy wonks, would become the next big tech policy battlefield. Horizontal layers became net neutrality, the Bush FCC adopted the non-binding Four Principles of an open Internet in 2005, the Obama FCC pushed through actual regulations in 2010, and now today’s court challenge, which argues that the FCC has no authority to regulate the Internet and that, in fact, Congress told the FCC not to regulate the Internet.

Over the years we’ve followed the debate, and often weighed in. Here’s a sampling of our articles, reports, reply comments, and even some doggerel:

— Bret Swanson

U.S. Mobile: Effectively competitive? Probably. Positively healthy? Absolutely.

Each year the Federal Communications Commission is required to report on competition in the mobile phone market. Following Congress’s mandate to determine the level of industry competition, the FCC, for many years, labeled the industry “effectively competitive.” Then, starting a few years ago, the FCC declined to make such a determination. Yes, there had been some consolidation, it was acknowledged, yet the industry was healthier than ever — more subscribers, more devices, more services, lots of innovation. The failure to achieve the “effectively competitive” label was thus a point of contention.

This year’s “CMRS” — commercial mobile radio services — report again fails to make a designation, one way or the other. Yet whatever the report lacks in official labels, it more than makes up in impressive data.

For example, it shows that as of October 2012, 97.2% of Americans have access to three or more mobile providers, and 92.8% have access to four or more. As for mobile broadband data services, 97.8% have access to two or more providers, and 91.6% have access to three or more.

Rural America is also doing well. The FCC finds 87% of rural consumers have access to three or more mobile voice providers, and 69.1% have access to four or more. For mobile broadband, 89.9% have access to two or more providers, while 65.4% enjoy access to three or more.

Call this what you will — to most laypeople, these choices count as robust competition. Yet the FCC has a point when it

refrain[s] from providing any single conclusion because such an assessment would be incomplete and possibly misleading in light of the variations and complexities we observe.

The industry has grown so large, with so many interconnected and dynamic players, it may have outgrown Congress’s request for a specific label.

14. Given the Report’s expansive view of mobile wireless services and its examination of competition across the entire mobile wireless ecosystem, we find that the mobile wireless ecosystem is sufficiently complex and multi-faceted that it would not be meaningful to try to make a single, all-inclusive finding regarding effective competition that adequately encompasses the level of competition in the various interrelated segments, types of services, and vast geographic areas of the mobile wireless industry.

Or as economist George Ford of the Phoenix Center put it,

The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”

I’m in good company. Outgoing FCC Chairman Julius Genachowski lists among his proudest achievements that “the U.S. is now the envy of the world in advanced wireless networks, devices, applications, among other areas.”

The report shows that in the last decade, U.S. mobile connections have nearly tripled. The U.S. now has more mobile connections than people.

The report also shows per user data consumption more than doubling year to year.

More important, the proliferation of smartphones, which are powerful mobile computers, is the foundation for a new American software industry widely known as the App Economy. We detailed the short but amazing history of the app and its impact on the economy in our report “Soft Power: Zero to 60 Billion in Four Years.” Likewise, these devices and software applications are changing industries that need changing. Last week, experts testified before Congress about mobile health, or mHealth, and we wrote about the coming health care productivity revolution in “The App-ification of Medicine.”

One factor that still threatens to limit mobile growth is the availability of spectrum. The report details past spectrum allocations that have borne fruit, but the pipeline of future spectrum allocations is uncertain. A more robust commitment to spectrum availability and a free-flowing spectrum market would ensure continued investment in networks, content, and services.

What Congress once called the mobile “phone” industry is now a sprawling global ecosystem and a central driver of economic advance. By most measures, the industry is effectively competitive. By any measure, it’s positively healthy.

— Bret Swanson

The Broadband Rooster

FCC chairman Julius Genachowski opens a new op-ed with a bang:

As Washington continues to wrangle over raising revenue and cutting spending, let’s not forget a crucial third element for reining in the deficit: economic growth. To sustain long-term economic health, America needs growth engines, areas of the economy that hold real promise of major expansion. Few sectors have more job-creating innovation potential than broadband, particularly mobile broadband.

Private-sector innovation in mobile broadband has been extraordinary. But maintaining the creative momentum in wireless networks, devices and apps will need an equally innovative wireless policy, or jobs and growth will be left on the table.

Economic growth is indeed the crucial missing link to employment, opportunity, and healthier government budgets. Technology is the key driver of long term growth, and even during the downturn the broadband economy has delivered. Michael Mandel estimates the “app economy,” for example, has created more than 500,000 jobs in less than five short years of existence.

We emphatically do need policies that will facilitate the next wave of digital innovation and growth. Chairman Genachowski’s top line assessment — that U.S. broadband is a success — is important. It rebuts the many false but persistent claims that U.S. broadband lags the world. Chairman Genachowski’s diagnosis of how we got here and his prescriptions for the future, however, are off the mark.

For example, he suggests U.S. mobile innovation is newer than it really is.

Over the past few years, after trailing Europe and Asia in mobile infrastructure and innovation, the U.S. has regained global leadership in mobile technology.

This American mobile resurgence did not take place in just the last “few years.” It began a little more than a decade ago with smart decisions to:

(1) allow reasonable industry consolidation and relatively free spectrum allocation, after years of forced “competition,” which mandated network duplication and thus underinvestment in coverage and speed (we did in fact trail Europe in some important mobile metrics in the late 1990s and briefly into the 2000s);

(2) refrain from any but the most basic regulation of broadband in general and the mobile market in particular, encouraging experimental innovation; and

(3) finally implement the digital TV / 700 MHz transition in 2007, which put more of the best spectrum into the market.

These policies, among others, encouraged some $165 billion in mobile capital investment between 2001 and 2008 and launched a wave of mobile innovation. Development on the iPhone began in 2004, the iPhone itself arrived in 2007, and the app store in 2008. Google’s Android mobile OS came along in 2009, the year Mr. Genachowski arrived at the FCC. By this time, the American mobile juggernaut had already been in full flight for years, and the foundation was set — the U.S. topped the world in 3G mobile networks and device and software innovation. Wi-Fi, meanwhile, surged from 2003 onward, creating an organic network of tens of millions of wireless nodes in homes, offices, and public spaces. Mr. Genachowski gets some points for not impeding the market as aggressively as some other more zealous regulators might have. But taking credit for America’s mobile miracle smacks of the rooster proudly puffing his chest at sunrise.

More important than who gets the credit, however, is determining what policies led to the current success . . . and which are likely to spur future growth. Chairman Genachowski is right to herald the incentive auctions that could unleash hundreds of megahertz of un- and under-used spectrum from the old TV broadcasters. Yet wrangling over the rules of the auctions could stretch on, delaying the process. Worse, the rules themselves could restrict who can bid on or buy new spectrum, effectively allowing the FCC to favor certain firms, technologies, or friends at the expense of the best spectrum allocation. We’ve seen before that centrally planned spectrum allocations don’t work. The fact that the FCC is contemplating such an approach is worrisome. It runs counter to the policies that led to today’s mobile success.

The FCC also has a bad habit of changing the metrics and the rules in the middle of the game. For example, the FCC has been caught changing its “spectrum screen” to fit its needs. The screen attempts to show how much spectrum mobile operators hold in particular markets. During M&A reviews, however, the FCC has changed its screen procedures to make the data fit its opinion.

In a more recent example, Fred Campbell shows that the FCC alters its count of total available commercial spectrum to fit the argument it wants to make from day to day. We’ve shown that the U.S. trails other nations in the sum of currently available spectrum plus spectrum in the pipeline. Below, see a chart from last year showing how the U.S. compares favorably in existing commercially available spectrum but trails severely in pipeline spectrum. Translation: the U.S. did a pretty good job unleashing spectrum in the 1990s through the mid-2000s. But, contrary to Chairman Genachowski’s implication, it has stalled in the last few years.

When the FCC wants to argue that particular companies shouldn’t be allowed to acquire more spectrum (whether through merger or secondary markets), it adopts this view that the U.S. trails in spectrum allocation. Yet when challenged on the more general point that the U.S. lags other nations, the FCC turns around and includes an extra 139 MHz in spectrum in the 2.5 GHz range to avoid the charge it’s fallen behind the curve.

Next, Chairman Genachowski heralds a new spectrum “sharing” policy where private companies would be allowed to access tiny portions of government-owned airwaves. This really is weak tea. The government, depending on how you measure, controls between 60% and 85% of the best spectrum for wireless broadband. It uses very little of it. Yet it refuses to part with meaningful portions, even though it would still be left with more than enough for its important uses — military and otherwise. If they can make it work (I’m skeptical), sharing may offer a marginal benefit. But it does not remotely fit the scale of the challenge.

Along the way, the FCC has been whittling away at mobile’s incentives for investment and its environment of experimentation. Chairman Genachowski, for example, imposed price controls on “data roaming,” even though it’s highly questionable he had the legal authority to do so. The Commission has also, with varied degrees of “success,” been attempting to impose its extralegal net neutrality framework on wireless. And of course the FCC has blocked, altered, and/or discouraged a number of important wireless mergers and secondary spectrum transactions.

Chairman Genachowski’s big picture is a pretty one: broadband innovation is key to economic growth. Look at the brush strokes, however, and there are reasons to believe sloppy and overanxious regulators are threatening to diminish America’s mobile masterpiece.

— Bret Swanson

Broadband Bullfeathers

Several years ago, some American lawyers and policymakers were looking for ways to boost government control of the Internet. So they launched a campaign to portray U.S. broadband as a pathetic patchwork of tin-cans-and-strings from the 1950s. The implication was that broadband could use a good bit of government “help.”

They initially had some success with a gullible press. The favorite tools were several reports that measured, nation by nation, the number of broadband connections per 100 inhabitants. The U.S. emerged from these reports looking very mediocre. How many times did we read, “The U.S. is 16th in the world in broadband”? Upon inspection, however, the reports weren’t very useful. Among other problems, they were better at measuring household size than broadband health. America, with its larger households, would naturally have fewer residential broadband subscriptions (not broadband users) than nations with smaller households (and thus more households per capita). And as the Phoenix Center demonstrated, rather hilariously, if the U.S. and other nations achieved 100% residential broadband penetration, America would actually fall to 20th from 15th.
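The household-size distortion is easy to see with a toy calculation. Assume, hypothetically, two countries where every household subscribes to broadband, differing only in average household size:

```python
# Why "broadband subscriptions per 100 inhabitants" tracks household size.
# Hypothetical countries, each with 100% of households subscribing to
# one residential broadband line.

def subs_per_100(population, avg_household_size):
    households = population / avg_household_size
    return 100 * households / population  # one subscription per household

us_like = subs_per_100(population=300e6, avg_household_size=2.6)
eu_like = subs_per_100(population=80e6, avg_household_size=2.0)

print(f"Larger households: {us_like:.1f} subscriptions per 100 inhabitants")
print(f"Smaller households: {eu_like:.1f} subscriptions per 100 inhabitants")
```

Both hypothetical countries are at 100% household coverage, yet the smaller-household country scores 50 per 100 inhabitants against roughly 38 for the larger-household one. The ranking measures family size, not broadband health.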

In the fall of 2009, a voluminous report from Harvard’s Berkman Center tried to stitch the supposedly ominous global evidence into a case-closed indictment of U.S. broadband. The Berkman report, however, was a complete bust (see, for example, these thorough critiques: 1, 2, and 3 as well as my brief summary analysis).

Berkman’s statistical analyses had failed on their own terms. Yet it was still important to think about the broadband economy in a larger context. We asked the question, how could U.S. broadband be so backward if so much of the world’s innovation in broadband content, services, and devices was happening here?

To name just a few: cloud computing, YouTube, Twitter, Facebook, Netflix, iPhone, Android, ebooks, app stores, iPad. We also showed that the U.S. generates around 60% more network traffic per capita and per Internet user than Western Europe, the supposed world broadband leader. The examples multiply by the day. As FCC chairman Julius Genachowski likes to remind us, the U.S. now has more 4G LTE wireless subscribers than the rest of the world combined.

Yet here comes a new book with the same general thrust — that the structure of the U.S. communications market is delivering poor information services to American consumers. In several new commentaries summarizing the forthcoming book’s arguments, author Susan Crawford’s key assertion is that U.S. broadband is slow. It’s so bad, she thinks broadband should be a government utility. But is U.S. broadband slow?

According to actual network throughput measured by Akamai, the world’s largest content delivery network, the U.S. ranks in the top ten or 15 across a range of bandwidth metrics. It is ninth in average connection speed, for instance, and 13th in average peak speed. Looking at proportions of populations who enjoy speeds above a certain threshold, Akamai finds the U.S. is seventh in the percentage of connections exceeding 10 megabits per second (Mbps) and 13th in the percentage exceeding 4 Mbps. (See the State of the Internet report, 2Q 2012.)

You may not be impressed with rankings of seventh or 13th. But did you look at the top nations on the list? Hong Kong, South Korea, Latvia, Switzerland, the Netherlands, Japan, etc.

Each one of them is a relatively small, densely populated country. The national rankings are largely artifacts of geography and the size of the jurisdictions observed. Small nations with high population densities fare well because it is far more economical to build high-speed communications links in cities and other densely populated areas. Accounting for this size factor, the U.S. actually looks amazingly good. Only Canada comes close to the U.S. among geographically larger nations.

But let’s look even further into the data. Akamai also supplies speeds for individual U.S. states. If we merge the tables of nations and states, the U.S. begins to look not like a broadband backwater or even a middling performer but an overwhelming success. Here are the two sets of Akamai data combined into tables that directly compare the successful small nations with their more natural counterparts, the U.S. states (shaded in blue).

Average Broadband Connection Speed — Nine of the top 15 entities are U.S. states.

Average Peak Connection Speed — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 10 Megabits per Second — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 4 Megabits per Second — Ten of the top 16 entities are U.S. states.

Among the 61 ranked entities on these four measures of broadband speed, 39, or almost two-thirds, are U.S. states. American broadband is not “pitifully slow.” In fact, if we were to summarize U.S. broadband, we’d have to say, compared to the rest of the world, it is very fast.
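A quick arithmetic check on those tallies (the per-metric counts below are the ones quoted above, not fresh Akamai data):

```python
# U.S. states appearing in each combined top-N list, as tallied above:
# (metric, states_in_list, entities_in_list)
tallies = [
    ("Average connection speed",       9, 15),
    ("Average peak connection speed", 10, 15),
    ("Connections over 10 Mbps",      10, 15),
    ("Connections over 4 Mbps",       10, 16),
]

states = sum(s for _, s, _ in tallies)
entities = sum(e for _, _, e in tallies)

print(states, entities)            # 39 61
print(f"{states / entities:.0%}")  # 64% -- "almost two-thirds"
```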

It is true that not every state or region in the U.S. enjoys top speeds. Yes, we need more, better, faster, wider coverage of wired and wireless broadband, in underserved neighborhoods as well as in already advanced areas. We need constant improvement both to accommodate today’s content and services and to drive tomorrow’s innovations. We should not, however, make broad policy under the illusion that U.S. broadband, taken as a whole, is deficient. The quickest way to make U.S. broadband deficient is probably to enact policies that discourage investment and innovation — such as trying to turn a pretty successful and healthy industry that invests $60 billion a year into a government utility.

— Bret Swanson

The $66-billion Internet Expansion

Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.

The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.

As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network and the transition to all Internet, all the time, everywhere.

FCC’s 706 Broadband Report Does Not Compute

Yesterday the Federal Communications Commission issued 181 pages of metrics demonstrating, to any fair reader, the continuing rapid rise of the U.S. broadband economy — and then concluded, naturally, that “broadband is not yet being deployed to all Americans in a reasonable and timely fashion.” A computer, being fed the data and the conclusion, would, unable to process the logical contradictions, crash.

The report is a response to section 706(b) of the 1996 Telecom Act that asks the FCC to report annually whether broadband “is being deployed . . . in a reasonable and timely fashion.” From 1999 to 2008, the FCC concluded that yes, it was. But now, as more Americans than ever have broadband and use it to an often maniacal extent, the FCC has concluded for the third year in a row that no, broadband deployment is not “reasonable and timely.”

The FCC finds that 19 million Americans, mostly in very rural areas, don’t have access to fixed-line terrestrial broadband. But Congress specifically asked the FCC to analyze broadband deployment using “any technology.”

“Any technology” includes DSL, cable modems, fiber-to-the-x, satellite, and of course fixed wireless and mobile. If we include wireless broadband, the unserved number falls to 5.5 million from the FCC’s headline 19 million. Five and a half million is 1.74% of the U.S. population. Not exactly a headline-grabbing figure.

Even if we stipulate the FCC’s framework, data, and analysis, we’re still left with the FCC’s own admission that between June 2010 and June 2011, an additional 7.4 million Americans gained access to fixed broadband service. That dropped the portion of Americans without access to 6% in 2011 from around 8.55% in 2010 — a 30% drop in the unserved population in one year. Most Americans have had broadband for many years, and the rate of deployment will necessarily slow toward the tail end of any build-out. When most American households are served, there just aren’t many left to reach, and those that have yet to gain access are likely to be in the most difficult-to-serve areas (e.g., “on tops of mountains in the middle of nowhere”). The fact that we still connected 7.4 million Americans in the last year, lowering the unserved population by 30% even using the FCC’s faulty framework, demonstrates in any rational world that broadband “is being deployed” in a “reasonable and timely fashion.”
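The percentages here are easy to verify. A minimal check — the coverage shares and the 5.5 million unserved count are the FCC figures cited above, while the ~316 million U.S. population is my rough assumption, not a number from the report:

```python
# FCC figures cited above: the share of Americans without access to
# fixed broadband fell from ~8.55% (June 2010) to 6% (June 2011).
unserved_2010 = 0.0855
unserved_2011 = 0.06

drop = (unserved_2010 - unserved_2011) / unserved_2010
print(f"{drop:.1%}")  # 29.8% -- the ~30% one-year drop in the unserved population

# Counting wireless, 5.5 million remain unserved. Assuming a U.S.
# population of ~316 million (an assumption, not an FCC figure):
print(f"{5.5e6 / 316e6:.2%}")  # 1.74%
```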

But this is not the rational world — it’s D.C. in the perpetual political silly season.

One might conclude that because the vast majority of these unserved Americans live in very rural areas — Alaska, Montana, West Virginia — the FCC would, if anything, suggest policies tailored to boost infrastructure investment in these hard-to-reach geographies. We could debate whether these are sound investments and whether the government would do a good job expanding access, but if rural deployment is a problem, then presumably policy should attempt to target and remediate the rural underserved. Commissioner McDowell, however, knows the real impetus for the FCC’s tortured no-confidence vote — its regulatory agenda.

McDowell notes that the report repeatedly mentions the FCC’s net neutrality rules (now being contested in court), which are as far from a pro-broadband policy, let alone a targeted one, as you could imagine. If anything, net neutrality is an impediment to broader, faster, better broadband. But the FCC is using its thumbs-down on broadband deployment to prop up its intrusions into a healthy industry. As McDowell concluded, “the majority has used this process as an opportunity to create a pretext to justify more regulation.”

Misunderstanding the Mobile Ecosystem

Mobile communications and computing are among the most innovative and competitive markets in the world. They have created a new world of software and offer dramatic opportunities to improve productivity and creativity across the industrial spectrum.

Last week we published a tech note documenting the rapid growth of mobile and the importance of expanding wireless spectrum availability. More clean spectrum is necessary both to accommodate fast-rising demand and drive future innovations. Expanding spectrum availability might seem uncontroversial. In the report, however, we noted that one obstacle to expanding spectrum availability has been a cramped notion of what constitutes competition in the Internet era. As we wrote:

Opponents of open spectrum auctions and flexible secondary markets often ignore falling prices, expanding choices, and new features available to consumers. Instead they sometimes seek to limit new spectrum availability, or micromanage its allocation or deployment characteristics, charging that a few companies are set to dominate the market. Although the FCC found that 77% of the U.S. population has access to three or more 3G wireless providers, charges of a coming “duopoly” are now common.

This view, however, relies on the old analysis of static utility or commodity markets and ignores the new realities of broadband communications. The new landscape is one of overlapping competitors with overlapping products and services, multi-sided markets, network effects, rapid innovation, falling prices, and unpredictability.

Sure enough, yesterday Sprint CEO Dan Hesse made the duopoly charge and helped show why getting spectrum policy right has been so difficult.

Q: You were a vocal opponent of the AT&T/T-Mobile merger. Are you satisfied you can compete now that the merger did not go through?

A: We’re certainly working very hard. There’s no question that the industry does have an issue with the size of the duopoly of AT&T and Verizon. I believe that over time we’ll see more consolidation in the industry outside of the big two, because the gap in size between two and three is so enormous. Consolidation is healthy for the industry as long as it’s not AT&T and Verizon getting larger.

Hesse goes even further.

Hesse also seemed to portray Sprint’s struggles in competing with AT&T-Rex and Big Red as a fight between good and evil. Sprint wants to wear the white hat, according to Hesse. “At Sprint, we describe it internally as being the good guys, of doing the right thing,” he said.

This type of thinking is always a danger if you’re trying to make sound policy. Picking winners and losers is inevitably — at best — an arbitrary exercise. Doing so based on some notion of corporate morality is plain silly, but even more reasonable sounding metrics and arguments — like those based on market share — are often just as misleading and harmful.

The mobile Internet ecosystem is growing so fast and changing with such rapidity and unpredictability that making policy based on static and narrow market definitions will likely yield poor policy. As we noted in our report:

It is, for example, worth emphasizing: Google and Apple were not in this business just a few short years ago.

Yet by the fourth quarter of 2011 Apple could boast an amazing 75% of the handset market’s profits. Apple’s iPhone business, it was widely noted after Apple’s historic 2011, is larger than all of Microsoft. In fact, Apple’s non-iPhone products are also larger than Microsoft.

Android, the mobile operating system of Google, has been growing even faster than Apple’s iOS. In December 2011, Google was activating 700,000 Android devices a day, and now, in the summer of 2012, it estimates 900,000 activations per day. From a nearly zero share at the beginning of 2009, Android today boasts roughly a 55% share of the global smartphone OS market.

. . .

Apple’s iPhone changed the structure of the industry in several ways, not least the relationships between mobile service providers and handset makers. Mobile operators used to tell handset makers what to make, how to make it, and what software and firmware could be loaded on it. They would then slap their own brand label on someone else’s phone.

Apple’s quick rise to mobile dominance has been matched by Blackberry maker Research In Motion’s fall. RIM dominated the 2000s with its email software, its qwerty keyboard, and its popularity with enterprise IT departments. But it couldn’t match Apple’s or Android’s general purpose computing platforms, with user-friendly operating systems, large, bright touch-screens, and creative and diverse app communities.

Sprinkled among these developments were the rise, fall, and resurgence of Motorola, and then its sale to Google; the rise and fall of Palm; the rise of HTC; and the decline of once dominant Nokia.

Apple, Google, Amazon, Microsoft, and others are building cloud ecosystems, sometimes complemented with consumer devices, often tied to Web apps and services, multimedia content, and retail stores. Many of these products and services compete with each other, but they also compete with broadband service providers. Some of these business models rely primarily on hardware, some software, some subscriptions, some advertising. Each of the companies listed above — a computer company, a search company, an ecommerce company, and a software company — are now major Internet infrastructure companies.

As Jeffrey Eisenach concluded in a pathbreaking analysis of the digital ecosystem (“Theories of Broadband Competition”), there may be market concentration in one (or more) layer(s) of the industry (broadly considered), yet prices are falling, access is expanding, products are proliferating, and innovation is as rapid as in any market we know.

The Real Deal on U.S. Broadband

Is American broadband broken?

Tim Lee thinks so. Where he once leaned against intervention in the broadband marketplace, Lee says four things are leading him to rethink and tilt toward more government control.

First, Lee cites the “voluminous” 2009 Berkman Report. Which is surprising. The report published by Harvard’s Berkman Center may have been voluminous, but it lacked accuracy in its details and persuasiveness in its big-picture takeaways. Berkman used every trick in the book to claim “open access” regulation around the world boosted other nations’ broadband economies and that the lack of such regulation in the U.S. harmed ours. But the report’s data and methodology were so thoroughly discredited (especially in two detailed reports issued by economists Robert Crandall, Everett Ehrlich, and Jeff Eisenach and Robert Hahn) that the FCC, which commissioned the report, essentially abandoned it. Here was my summary of the economists’ critiques:

The [Berkman] report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland.

. . .

Berkman’s qualitative analysis was, if possible, just as misleading. It passed along faulty data on broadband speeds and prices. It asserted South Korea’s broadband boom was due to open access regulation, but in fact most of South Korea’s surge happened before it instituted any regulation. The study said Japanese broadband, likewise, is a winner because of regulation. But regulated DSL is declining fast even as facilities-based (unshared, proprietary) fiber-to-the-home is surging.

Berkman also enjoyed comparing broadband speeds of tiny European and Asian countries to the whole U.S. But if we examine individual American states — New York or Arizona, for example — we find many of them outrank most European nations and Europe as a whole. In fact, applying the same Speedtest.com data Berkman used, the U.S. as a whole outpaces Europe as a whole! Comparing small islands of excellence to much larger, more diverse populations or geographies is bound to skew your analysis.

The Berkman report twisted itself in pretzels trying to paint a miserable picture of the U.S. Internet economy and a glowing picture of heavy regulation in foreign nations. Berkman, however, ignored the prima facie evidence of a vibrant U.S. broadband marketplace, manifest in the boom in Web video, mobile devices, the App Economy, cloud computing, and on and on.

How could the bulk of the world’s best broadband apps, services, and sites be developed and achieve their highest successes in the U.S. if American broadband were so slow and thinly deployed? We came up with a metric that seemed to refute the notion that U.S. broadband was lagging, namely, how much network traffic Americans generate vis-à-vis the rest of the world. It turned out the U.S. generates more network traffic per capita and per Internet user than any nation but South Korea and generates about two-thirds more per-user traffic than the closest advanced economy of comparable size, Western Europe.

Berkman based its conclusions almost solely on (incorrect) measures of “broadband penetration” — the number of broadband subscriptions per capita — but that metric turned out to be a better indicator of household size than broadband health. Lee acknowledges the faulty analysis but still assumes “broadband penetration” is the sine qua non measure of Internet health. Maybe we’re not awful, as Berkman claimed, Lee seems to be saying, but even if we correct for their methodological mistakes, U.S. broadband penetration is still just OK. “That matters,” Lee writes,

because a key argument for America’s relatively hands-off approach to broadband regulation has been that giving incumbents free rein would give them incentive to invest more in their networks. The United States is practically the only country to pursue this policy, so if the incentive argument was right, its advocates should have been able to point to statistics showing we’re doing much better than the rest of the world. Instead, the argument has been over just how close to the middle of the pack we are.

No, I don’t agree that the argument has consisted of bickering over whether the U.S. is more or less mediocre. Not at all. I do agree that advocates of government regulation have had to adjust their argument — U.S. broadband isn’t awful, merely mediocre. Yet they still hang their hat on “broadband penetration” because most other evidence on the health of the U.S. digital economy is even less supportive of their case.

In each of the last seven years, U.S. broadband providers have invested between $60 and $70 billion in their networks. Overall, the U.S. leads the world in info-tech investment — totaling nearly $500 billion last year. The U.S. now boasts more than 80 million residential broadband links and 200+ million mobile broadband subscribers. U.S. mobile operators have deployed more 4G mobile network capacity than anyone, and Verizon just announced its FiOS fiber service will offer 300 megabit-per-second residential connections — perhaps the fastest large-scale deployment in the world.

Eisenach and Crandall followed up their critique of the Berkman study with a fresh March 2012 analysis of “open access” regulation around the world (this time with Allan Ingraham). They found:

  • “it is clear that copper loop unbundling did not accelerate the deployment or increase the penetration of first-generation broadband networks, and that it had a depressing effect on network investment”
  • “By contrast, it seems clear that platform competition was very important in promoting broadband deployment and uptake in the earlier era of DSL and cable modem competition.”
  • “to the extent new fiber networks are being deployed in Europe, they are largely being deployed by unregulated, non-ILEC carriers, not by the regulated incumbent telecom companies, and not by entrants that have relied on copper-loop unbundling.”

Lee mentions neither the incisive criticisms of the Berkman study nor the voluminous literature, including this latest example, showing open access policies are ineffective at best and more likely harmful.

In coming posts, I’ll address Lee’s three other worries.

— Bret Swanson

New iPad, Fellow Bandwidth Monsters Hungry for More Spectrum

Last week Apple unveiled its third-generation iPad. Yesterday the company said the LTE versions of the device, which can connect via Verizon and AT&T mobile broadband networks, are sold out.

It took 15 years for laptops to reach 50 million units sold in a year. It took smartphones seven years. For tablets (not including Microsoft’s clunky attempt a decade ago), just two years. Mobile device volumes are astounding. In each of the last five years, global mobile phone sales topped a billion units. Last year smartphones outsold PCs for the first time – 488 million versus 432 million. This year well over 500 million smartphones and perhaps 100 million tablets could be sold.

Smartphones and tablets represent the first fundamentally new consumer computing platforms since the PC, which arrived in the late ’70s and early ’80s. Unlike mere mobile phones, they’ve got serious processing power inside. But their game-changing potency is really based on their capacity to communicate via the Internet. And this power is, of course, dependent on the cloud infrastructure and wireless networks.

But are wireless networks today prepared for this new surge of bandwidth-hungry mobile devices? Probably not. When we started to build 3G mobile networks in the middle of last decade, many thought it was a huge waste. Mobile phones were used for talking, and some texting. They had small low-res screens and were terrible at browsing the Web. What in the world would we do with all this new wireless capacity? Then the iPhone came, and, boom — in big cities we went from laughable overcapacity to severe shortage seemingly overnight. The iPhone’s brilliant screen, its real Web browsing experience, and the world of apps it helped us discover totally changed the game. Wi-Fi helped supply the burgeoning iPhone with bandwidth, and Wi-Fi will continue to grow and play an important role. Yet Credit Suisse, in a 2011 survey of the industry, found that mobile networks overall were running at 80% of capacity and that many network nodes were tapped out.

Today, we are still expanding 3G networks and launching 4G in most cities. Verizon says it offers 4G LTE in 196 cities, while AT&T says it offers 4G LTE in 28 markets (and combined with its HSPA+ networks offers 4G-like speeds to 200 million people in the U.S.). Lots of things affect how fast we can build new networks — from cell site permitting to the fact that these things are expensive ($20 billion worth of wireless infrastructure in the U.S. last year). But another limiting factor is spectrum availability.

Do we have enough radio waves to efficiently and cost-effectively serve these hundreds of millions of increasingly powerful mobile devices, which generate and consume increasingly rich content, with ever more stringent latency requirements, and which depend upon robust access to cloud storage and computing resources?

Capacity is a function of money, network nodes, technology, and radio waves. But spectrum is grossly misallocated. The U.S. government owns 61% of the best airwaves, while mobile broadband providers — where all the action is — own just 10%. Another portion is controlled by the old TV broadcasters, where much of this beachfront spectrum lies fallow or underused.

The key is allowing spectrum to flow to its most valuable uses. Last month Congress finally authorized the FCC to conduct incentive auctions to free up some unused and underused TV spectrum. Good news. But other recent developments discourage us from too much optimism on this front.

In December the FCC and Justice Department vetoed AT&T’s attempt to augment its spectrum and cell-site position via merger with T-Mobile. Now the FCC and DoJ are questioning Verizon’s announced purchase of SpectrumCo — valuable but unused spectrum owned by a consortium of cable TV companies. The FCC has also threatened to tilt any spectrum auctions so that it decides who can bid, how much bidders can buy, and what buyers may or may not do with their spectrum — pretending Washington knows exactly how this fast-changing industry should be structured. That posture reduces the value of spectrum, will probably delay the availability of new spectrum, and may well slow the sector’s pace of innovation.

It’s very difficult to see how it’s at all productive for the government to block companies who desperately need more spectrum from buying it from those who don’t want it, don’t need it, or can’t make good use of it. The big argument against AT&T and Verizon’s attempted spectrum purchases is “competition.” But T-Mobile wanted to sell to AT&T because it admitted it didn’t have the financial (or spectrum) wherewithal to build a super expensive 4G network. Apparently the same for the cable companies, who chose to sell to Verizon. Last week Dish Network took another step toward entering the 4G market with the FCC’s approval of spectrum transfers from two defunct companies, TerreStar and DBSD.

Some people say the proliferation of Wi-Fi or the increased use of new wireless technologies that economize on spectrum will make more spectrum availability unnecessary. I agree Wi-Fi is terrific and will keep growing and that software radios, cognitive radios, mesh networks and all the other great technologies that increase the flexibility and power of wireless will make big inroads. So fine, let’s stipulate that perhaps these very real complements will reduce the need for more spectrum at the margin. Then the joke is on the big companies that want to overpay for unnecessary spectrum. We still allow big, rich companies to make mistakes, right? Why, then, do proponents of these complementary technologies still oppose allowing spectrum to flow to its highest use?

Free spectrum auctions would allow lots of companies to access spectrum — upstarts, middle tier, and yes, the big boys, who desperately need more capacity to serve the new iPad.

— Bret Swanson

Is the FCC serious about more wireless spectrum? Apparently not.

For the third year in a row, FCC chairman Julius Genachowski used his speech at the Consumer Electronics Show in Las Vegas to push for more wireless spectrum. He wants Congress to pass the incentive auction law that would unleash hundreds of megahertz of spectrum to new and higher uses. Most of Congress agrees: we need lots more wireless capacity and spectrum auctions are a good way to get there.

Genachowski, however, wants overarching control of the new spectrum and, by extension, the mobile broadband ecosystem. The FCC wants the authority to micromanage the newly available spectrum — who can buy it, how much they can buy, how they can use it, what content flows over it, and what business models can be employed with it. But this is an arena that is growing wildly fast, where new technologies appear every day, and where experimentation is paramount to see which business models work. Auctions are supposed to be a way to get more spectrum into the marketplace, where lots of companies and entrepreneurs can find the best ways to use it to deliver new communications services. “Any restrictions” by Congress on the FCC “would be a real mistake,” said Genachowski. In other words, he doesn’t want Congress to restrict his ability to restrict the mobile business. It seems the liberty of regulators to act without restraint is a higher virtue than the liberty of private actors.

At the end of 2011, the FCC and Justice Department vetoed AT&T’s proposed merger with T-Mobile, a deal that would have immediately expanded 3G mobile capacity across the nation and accelerated AT&T’s next-generation 4G rollout by several years. That deal was all about a more effective use of spectrum, more cell towers, more capacity to better serve insatiable smartphone- and tablet-equipped consumers. Now the FCC is holding the spectrum auction bill hostage with its my-way-or-the-highway approach. And one has to ask: Is the FCC really serious about spectrum, mobile capacity, and a healthy broadband Internet?

— Bret Swanson

Why is the FCC playing procedural games?

America is in desperate need of economic growth. But as the U.S. economy limps along, with unemployment stuck at 9%, the Federal Communications Commission is playing procedural tiddlywinks with the nation’s largest infrastructure investor, in the sector of the economy that offers the most promise for innovation and 21st century jobs. In normal times, we might chalk this up to clever Beltway maneuvering. But do we really have the time or money to indulge bureaucratic gamesmanship?

On Thanksgiving Eve, the FCC surprised everyone. It hadn’t yet completed its investigation into the proposed AT&T-T-Mobile wireless merger, and the parties had not had a chance to discuss or rebut the agency’s initial findings. Yet the FCC preempted the normal process by announcing it would send the case to an administrative law judge — essentially a vote of no-confidence in the deal. I say “vote,” but the FCC commissioners hadn’t actually voted on the order.

FCC Chairman Julius Genachowski called AT&T CEO Randall Stephenson, who, on Thanksgiving Day, had to tell investors he was setting aside $4 billion in case Washington blocked the deal.

The deal is already being scrutinized by the Department of Justice, which sued to block the merger last summer. The fact that telecom mergers and acquisitions must negotiate two levels of federal scrutiny, at DoJ and FCC, is already an extra burden on the Internet industry. But when one agency on this dual-track games the system by trying to influence the other track — maybe because the FCC felt AT&T had a good chance of winning its antitrust case — the obstacles to promising economic activity multiply.

After the FCC’s surprise move, AT&T and T-Mobile withdrew their merger application at the FCC. No sense in preparing for an additional hearing before an administrative law judge when they are already deep in preparation for the antitrust trial early next year. Moreover, the terms of the merger agreement are likely to have changed after the companies (perhaps) negotiate conditions with the DoJ. They’d have to refile an updated application anyway. Not so fast, said the FCC. We’re not going to allow AT&T and T-Mobile to withdraw their application. Or if we do allow it, we will do so “with prejudice,” meaning the parties can’t refile a revised application at a later date. On Tuesday the FCC relented — the law is clear: an applicant has the right to withdraw an application without consent from the FCC. But the very fact the FCC initially sought to deny the withdrawal is itself highly unusual. Again, more procedural gamesmanship.

If that weren’t enough, the FCC then said it would release its “findings” in the case — another highly unusual (maybe unprecedented) action. The agency hadn’t completed its process, and there had been no vote on the matter. So the FCC instead released what it calls a “staff report” — a highly critical internal opinion that hadn’t been reviewed by the parties nor approved by the commissioners. We’re eager to analyze the substance of this “staff report,” but the fact the FCC felt the need to shove it out the door was itself remarkable.

It appears the FCC is twisting legal procedure any which way to fit its desired outcome, rather than letting the normal merger process play out. Indeed, “twisting legal procedure” may be too kind. It has now thrown law and procedure out the window and is in full public relations mode. These extralegal PR games tilt the playing field against the companies, against investment and innovation, and against the health of the U.S. economy.

— Bret Swanson
