Category Archives: Mobile

Can Indiana lead in the 5G economy?

“A revival of economic growth in the U.S. and around the world will, to a not insignificant degree, depend on the successful deployment of the next generation of wireless technology.

“The Internet’s first few chapters transformed entertainment, news, telephony, and finance — in other words, the existing electronic industries. Going forward, however, the wireless Internet will increasingly reach out to the rest of the economy and transform every industry, from transportation to education to health care.

“To drive and accommodate this cascading wireless boom, we will need wireless connections that are faster, greater in number, and more robust, widespread, diverse, and flexible. We will need a new fifth generation, or 5G, wireless infrastructure. 5G will be the foundation of not just the digital economy but increasingly of the physical economy as well.”

That’s how I began a recent column summarizing my research on the potential for technology to drive economic growth. 5G networks will not only provide an additional residential broadband option. 5G will also be the basis for the Internet of Things (IoT), connected cars and trucks, mobile and personalized digital health care, and next generation educational content and tools. The good news is that Indiana is already poised to lead in 5G. AT&T, for example, has announced that Indianapolis is one of two sites nationwide that will get a major 5G trial. And Verizon is already deploying “small cells” – a key component of 5G networks – across the metro area, including in my hometown of Zionsville (see photo).

Small cell lamppost in Zionsville, Indiana.

If Indiana is to truly lead in 5G, and all the next generation services, however, it will need to take the next step. That means modest legislation that makes it as easy as possible to deploy these networks. Streamlining the permitting process for small cells will not only encourage investment and construction jobs as we string fiber optics and erect small cells. It will also mean Indiana will be among the first to enjoy the fast and ubiquitous connectivity that will be the foundation of nearly every industry going forward. In many ways, 5G is the economic development opportunity of the next decade.

There is legislation currently moving in the Indiana General Assembly that could propel Indiana along its already favorable 5G path. Sponsored by Sen. Brandt Hershman, SB 213 is a common sense and simple way to encourage investment in these networks, and the multitude of services that will follow.

The great news is that 5G is one of the few economic and Internet policy issues that enjoys widespread bipartisan support. The current FCC chairman Ajit Pai supports these streamlining policies, but so did the former Democratic chairman Tom Wheeler:

The nature of 5G technology doesn’t just mean more antenna sites, it also means that without such sites the benefits of 5G may be sharply diminished. In the pre-5G world, fending off sites from the immediate neighborhood didn’t necessarily mean sacrificing the advantages of obtaining service from a distant cell site. With the anticipated 5G architecture, that would appear to be less feasible, perhaps much less feasible.

I have no doubt other states will copy Indiana, once they see what we’ve done – or leap ahead of us, if we don’t embrace this opportunity.

Here are a few of our reports, articles, and podcasts on 5G:

Imagining the 5G Wireless Future: Apps, Devices, Networks, Spectrum – Entropy Economics report, November 2016

5G Wireless Is a Platform for Economic Revival – summary of report in The Hill, November 2016

5G and the Internet of Everything – podcast with TechFreedom, December 2016

Opening the 5G Wireless Frontier – article in Computerworld, July 2016

– Bret Swanson

 

Should companies pay damages for invalid patents?

That’s one of the questions in the recurring litigation between Apple and Samsung. Next month, the two firms will begin a fourth trial in their multiyear battle over intellectual property for smartphones. Two weeks ago Apple, according to Law360, “asked a judge…to bar Samsung from telling the jury about reexaminations that have tentatively found some of Apple’s patents invalid.”

Then on Friday, in a separate case, another court did in fact invalidate two of the many patents in question, tossing out a May 2014 verdict that had awarded Apple $120 million.

The U.S. Federal Circuit Court of Appeals dismantled a San Jose jury’s findings in the second trial between the two rivals, essentially concluding that the technology at the heart of Apple’s lawsuit was so obvious that Samsung could not be punished for incorporating it into its smartphones. The appeals court added salt to Apple’s wound by upholding a $158,000 judgment against the Cupertino company for infringing a Samsung tech patent involving camera features.

To most casual observers, paying damages for patents that no longer exist and should never have been granted might seem wrong. Denying jurors the knowledge that the patents no longer exist may also seem odd and unfair. But this is patent law, and over the last few decades common sense hasn’t always applied. We thus got an explosion of patents issued for questionable “inventions,” especially for obvious software code, business practices, and even graphic designs. We also suffered a corresponding explosion of patent litigation.

Fortunately, common sense has in the past few years been making something of a comeback. The Supreme Court has reined in some of the worst abuses of trolls and the over-issuance of software patents. The Apple-Samsung cases highlight some of the remaining relics of patent law left over from a pre-digital world. Such as how to handle products that contain tens of thousands of pieces of intellectual property. Or, as in this case, how to clean up after several decades of over-issuance of questionable IP.

If you were a juror, would you want to know if the IP at the heart of the case was highly suspect or nonexistent? The law in this realm may be complicated. But as a matter of right and wrong, it seems pretty straightforward.

Wi-Fi and LTE-U: What’s the real story on unlicensed spectrum?

Today The Wall Street Journal highlighted a debate over unlicensed wireless spectrum that’s been brewing for the last few months. On one side, mobile carriers like Verizon and T-Mobile are planning to roll out a new technology known as LTE-U that will make use of the existing unlicensed spectrum most commonly used for Wi-Fi. LTE-U is designed to deliver a capability similar to Wi-Fi’s, namely short-range connectivity to mobile devices. As billions of mobile devices and Web video continue to strain wireless networks and existing spectrum allocations (see “The Immersive Internet Needs More Wireless Spectrum”), mobile service providers (and everyone else) are looking for good sources of spectrum. For now, they’ve found it in the unlicensed 5 GHz band. The 5 GHz band is a good place in which to deploy “small cells” (think miniature cell towers delivering transmissions over a much smaller area), which can greatly enhance the capacity, reach, and overall functionality of wireless services.

Google and the cable companies, such as Comcast, however, are opposed to the use of LTE-U because they say LTE-U could interfere with Wi-Fi. The engineering department at the Federal Communications Commission (FCC) has been looking into the matter for the last few months to see whether the objections are valid, but the agency has not yet reported any firm conclusions.

Is this a technical issue? Or a business dispute?

Until I see some compelling technical evidence that LTE-U interferes with Wi-Fi, this looks like a business dispute. Meaning the FCC probably should not get involved. The 2.4 GHz and 5 GHz spectrum in which Wi-Fi (and Bluetooth and other technologies) operates is governed by just a few basic rules. Most crucially, devices must not exceed certain power thresholds, and they can’t actively interfere with one another. Wi-Fi was designed to share nicely, but as everyone knows, large numbers of devices in one area, or large numbers of Wi-Fi hotspots, can cause interference and thus degrade performance. The developers of LTE-U have spent the last couple of years designing it specifically to play by the rules of the unlicensed spectrum and to play nicely with Wi-Fi.

The early results are encouraging. In real world tests so far,

  • LTE-U delivers better performance than Wi-Fi,
  • doesn’t degrade nearby Wi-Fi performance, and
  • may in fact improve the performance of nearby Wi-Fi networks.

For more commentary and technical analysis, see Richard Bennett’s recent posts here and here. Bennett was an early co-inventor of Wi-Fi and thus knows what he’s talking about. Also, Qualcomm has a white paper here and some good technical reports here and here.

Another line of opposition to LTE-U says that the mobile service providers like Verizon and T-Mobile will use LTE-U to deliver services that compete with Wi-Fi and will thus disadvantage competitive service providers. But the mobile service providers already operate lots of Wi-Fi hotspots. They are some of the biggest operators of Wi-Fi hotspots anywhere. In other words, they already compete (if that’s the right word) with Google and cable firms in this unlicensed space. LTE-U is merely a different protocol that makes use of the same unlicensed spectrum, and must abide by the same rules, as Wi-Fi.  The mobile providers just think LTE-U can deliver better performance and better integrate with their wide area LTE-based cellular networks. Consider an analogy: the rental fleets of Hertz and Avis are both made up of Ford vehicles. Hertz then decides to start renting Fords and Chevys. The new Chevys don’t push Fords off the road. They are both cars that must obey the rules of the road and the laws of physics. The two types of vehicles can coexist and operate just as they did before. Hertz is not crowding out Avis because it is now using Chevys.

I’ll be looking for more real world tests that either confirm or contradict the initially encouraging evidence. Until then, we shouldn’t prejudge and block a potentially useful new technology.

FCC — the Federal Crony Commission?

Ok, maybe that’s a little harsh. But watch this video of T-Mobile CEO John Legere boasting that he’s “spent a lot of time in Washington lately, with the new chairman of the FCC,” and that “they love T-Mobile.”

Ah, spring. Love is in the air. Great for twenty-somethings, not so great for federal agencies. The FCC, however, is thinking about handing over valuable wireless spectrum to T-Mobile and denying it to T-Mobile’s rivals. This type of industrial policy is partially responsible for the sluggish economy.

From taxpayer subsidies for connected Wall Street banks to favored green energy firms with the right political allies, cronyism prevents the best firms from serving consumers with the best products in the most efficient way. Cronyism is good (at least temporarily) for a few at the top. But it hurts everyone else. Government favors ensure that bad ideas and business models are supported even if they would have proved wanting in a more neutral market. They transfer scarce taxpayer dollars to friends and family. They also hurt firms who aren’t fortunate enough to have the right friends in the right places at the right time. It’s hard to compete against a rival who has the backing of Washington. The specter of arbitrary government then hangs over the economy as firms and investors make decisions not on the merits but on a form of kremlinology — what will Washington do? In the case at hand, cronyism could blow up the whole spectrum auction, an act of wild irresponsibility in the service of a narrow special interest (we’ve written about it here, here, and here).

The U.S. has never been perfectly free of such cronyism, but our system was better than most and over the centuries attracted the world’s financial and human capital because investors and entrepreneurs knew that in the U.S. the best ideas and the hardest work tend to win out. Effort, smarts, and risk capital won’t be snuffed out by some arbitrary bureaucratic decision or favor. That was the stuff of Banana Republics — the reason financial and human capital fled those spots for America, preferring the Rule of Law to the Whim of Man.

The FCC’s prospective auction rules are perplexing in part because the U.S. mobile industry is healthy — world-leading healthy. More usage, faster speeds, plummeting prices, etc. Why risk interrupting that string of success? Economist Hal Singer shows that in the FCC’s voluminous reports on the wireless industry, it has failed to present any evidence of monopoly power that would justify its rigging of the spectrum auctions. On the other hand, an overly complex auction could derail spectrum policy for a decade.

How much would an iPhone have cost in 1991?

Amazing! An iPhone is more capable than 13 distinct electronics gadgets, worth more than $3,000, from a 1991 Radio Shack ad. Buffalo writer Steve Cichon first dug up the old ad and made the point about the seemingly miraculous pace of digital advance, noting that an iPhone incorporates the features of the computer, CD player, phone, “phone answerer,” and video camera, among other items in the ad, all at a lower price. The Washington Post‘s tech blog The Switch picked up the analysis, and lots of people then ran with it on Twitter. Yet the comparison was, unintentionally, a huge dis to the digital economy. It massively underestimates the true pace of technological advance and, despite its humor and good intentions, actually exposes a shortcoming that plagues much economic and policy analysis.

To see why, let’s do a very rough, back-of-the-envelope estimate of what an iPhone would have cost in 1991.

In 1991, a gigabyte of hard disk storage cost around $10,000, perhaps a touch less. (Today, it costs around four cents ($0.04).) Back in 1991, a gigabyte of flash memory, which is what the iPhone uses, would have cost something like $45,000, or more. (Today, it’s around 55 cents ($0.55).)

The mid-level iPhone 5S has 32 GB of flash memory. Thirty-two GB, multiplied by $45,000, equals $1.44 million.

The iPhone 5S uses Apple’s latest A7 processor, a powerful CPU, with an integrated GPU (graphics processing unit), that totals around 1 billion transistors, and runs at a clock speed of 1.3 GHz, producing something like 20,500 MIPS (millions of instructions per second). In 1991, one of Intel’s top microprocessors, the 80486SX, oft used in Dell desktop computers, had 1.185 million transistors and ran at 20 MHz, yielding around 16.5 MIPS. (The Tandy computer in the Radio Shack ad used a processor not nearly as powerful.) A PC using the 80486SX processor at the time might have cost $3,000. The Apple A7, by the very rough measure of MIPS, which probably underestimates the true improvement, outpaces that leading edge desktop PC processor by a factor of 1,242. In 1991, the price per MIPS was something like $30.

So 20,500 MIPS in 1991 would have cost around $620,000.

But there’s more. The 5S also contains the high-resolution display, the touchscreen, Apple’s own M7 motion processing chip, Qualcomm’s LTE broadband modem and its multimode, multiband broadband transceiver, a Broadcom Wi-Fi processor, the Sony 8 megapixel iSight (video) camera, the fingerprint sensor, power amplifiers, and a host of other chips and motion-sensing MEMS devices, like the gyroscope and accelerometer.

In 1991, a mobile phone used the AMPS analog wireless network to deliver kilobit voice connections. A 1.544 megabit T1 line from the telephone company cost around $1,000 per month. Today’s LTE mobile network is delivering speeds in the 15 Mbps range. Wi-Fi delivers speeds up to 100 Mbps (limited, of course, by its wired connection). Safe to say, the iPhone’s communication capacity is at least 10,000 times that of a 1991 mobile phone. Almost the entire cost of a phone back then was dedicated to merely communicating. Say the 1991 cost of mobile communication (only at the device/component level, not considering the network infrastructure or monthly service) was something like $100 per kilobit per second.

Fifteen thousand Kbps (15 Mbps), multiplied by $100, is $1.5 million.

Considering only memory, processing, and broadband communications power, duplicating the iPhone back in 1991 would have (very roughly) cost: $1.44 million + $620,000 + $1.5 million = $3.56 million.
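For readers who want to trace the arithmetic, here is a minimal sketch that reproduces the estimate from the figures quoted above. The per-unit prices are the rough 1991 assumptions stated in the text, not precise historical data.

```python
# Back-of-the-envelope check of the 1991 "iPhone equivalent" estimate above,
# using only the figures quoted in the text. The per-unit prices are the
# article's rough assumptions, not precise historical data.

FLASH_COST_PER_GB_1991 = 45_000   # dollars per gigabyte of flash memory (assumed)
IPHONE_5S_FLASH_GB = 32           # mid-level iPhone 5S capacity

COST_PER_MIPS_1991 = 30           # dollars per MIPS (assumed)
A7_MIPS = 20_500                  # rough MIPS estimate for Apple's A7

COST_PER_KBPS_1991 = 100          # dollars per kbit/s of mobile throughput (assumed)
LTE_KBPS = 15_000                 # ~15 Mbps typical LTE speed cited above

memory_cost = IPHONE_5S_FLASH_GB * FLASH_COST_PER_GB_1991    # $1,440,000
processing_cost = A7_MIPS * COST_PER_MIPS_1991               # $615,000 (~$620,000 in the text)
communications_cost = LTE_KBPS * COST_PER_KBPS_1991          # $1,500,000

total = memory_cost + processing_cost + communications_cost
print(f"Memory:         ${memory_cost:,}")
print(f"Processing:     ${processing_cost:,}")
print(f"Communications: ${communications_cost:,}")
print(f"Total:          ${total:,}")   # roughly $3.56 million
```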

This doesn’t even account for the MEMS motion detectors, the camera, the iOS operating system, the brilliant display, or the endless worlds of the Internet and apps to which the iPhone connects us.

This account also ignores the crucial fact that no matter how much money one spent, it would have been impossible in 1991 to pack that much technological power into a form factor the size of the iPhone, or even a refrigerator.*

Tim Lee at The Switch noted the imprecision of the original analysis and correctly asked how typical analyses of inflation can hope to account for such radical price drops. (Harvard economist Larry Summers recently picked up on this point as well.)

But the fact that so many were so impressed by an assertion that an iPhone possesses the capabilities of $3,000 worth of 1991 electronics products — when the actual figure exceeds $3 million — reveals how fundamentally difficult it is to think in exponential terms.

Innovation blindness, I’ve long argued, is a key obstacle to sound economic and policy thinking. And this is a perfect example. When we make policy based on today’s technology, we don’t just operate mildly sub-optimally. No, we often close off entire pathways to amazing innovation.

Consider the way education policy has mostly enshrined a 150-year-old model, and in recent decades has thrown more money at the same broken system while blocking experimentation. The other day, the venture capitalist Marc Andreessen (@pmarca) noted in a Twitter missive the huge, but largely unforeseen, impact digital technologies are having on this industry that so desperately needs improvement:

“Four biggest K-12 education breakthroughs in last 20 years: (1) Google, (2) Wikipedia, (3) Khan Academy, (4) Wolfram Alpha.”

Maybe the biggest breakthroughs of the last 50 years. Point made, nonetheless. California is now closing down “coding bootcamps” — courses that teach people how to build apps and other software — because many of them are not state certified. This is crazy.

The importance of understanding the power of innovation applies to health care, energy, education, and fiscal policy, but nowhere is it more applicable than in Internet and technology policy, which is, at the moment, the subject of a much needed rethink by the House Energy and Commerce Committee.

— Bret Swanson

* To be fair, we do not account for the fact that back in 1991, had engineers tried to design and build chips and components with faster speeds and greater capacities than the consumer items mentioned, they could have in some cases scaled the technology in a more efficient manner than, for example, simply adding up consumer microprocessors totaling 20,500 MIPS. On the other hand, the extreme volumes of the consumer products in these memory, processing, and broadband communications categories are what make the price drops possible. So this acknowledgment doesn’t change the analysis too much, if at all.

Why the fuss over “sponsored data”?

Today, at the Consumer Electronics Show in Las Vegas, AT&T said it would begin letting content firms — Google, ESPN, Netflix, Amazon, a new app, etc. — pay for a portion of the mobile data used by consumers of this content. If a mobile user has a 2 GB plan but likes to watch lots of Yahoo! news video clips, which consume a lot of data, Yahoo! can now subsidize that user by paying for that data usage, which won’t count against the user’s data limit.
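To make the arrangement concrete, here is a minimal, hypothetical sketch of the accounting: traffic from a sponsoring content firm is billed to the sponsor rather than counted against the subscriber’s cap. The provider names, cap size, and usage figures are invented for illustration, not drawn from any carrier’s actual plan.

```python
# Hypothetical sketch of sponsored-data accounting: usage attributable to a
# sponsoring content firm is billed to the sponsor and does not count against
# the subscriber's monthly cap. All names and numbers are illustrative.

MONTHLY_CAP_GB = 2.0

def bill_month(sessions, sponsors):
    """Split a month's usage into user-billed, sponsor-billed, and overage GB.

    sessions: list of (content_provider, gigabytes) tuples
    sponsors: set of providers that pay for their own traffic
    """
    user_gb = sponsored_gb = 0.0
    for provider, gb in sessions:
        if provider in sponsors:
            sponsored_gb += gb   # paid by the content firm
        else:
            user_gb += gb        # counts against the subscriber's cap
    overage_gb = max(0.0, user_gb - MONTHLY_CAP_GB)
    return round(user_gb, 2), round(sponsored_gb, 2), round(overage_gb, 2)

sessions = [("video service", 1.5), ("web browsing", 1.2), ("email", 0.1)]
print(bill_month(sessions, sponsors={"video service"}))  # (1.3, 1.5, 0.0) -- no overage
print(bill_month(sessions, sponsors=set()))              # (2.8, 0.0, 0.8) -- 0.8 GB overage
```

The point the sketch illustrates is the one made in the text: sponsored traffic is additive for the subscriber, and a non-user of the sponsored content is left no worse off.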

Lots of people were surprised — or “surprised” — at the announcement and reacted violently. They charged AT&T with “double dipping,” imposing “taxes,” and of course the all-purpose net neutrality violation.

But this new sponsored data program is typical of multisided markets where a platform provider offers value to two or more parties — think magazines, which charge both subscribers and advertisers. We addressed this topic before the idea was a reality. Back in June 2013, we argued that sponsored data would make lots of mobile consumers better off and no one worse off.

Two weeks ago, for example, we got word ESPN had been talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

Sounds like a reasonable deal all around. But not to everyone. “This is what a net neutrality violation looks like,” wrote Public Knowledge, a key backer of Internet regulation.

The idea that ESPN would pay to exempt its bits from data caps offends net neutrality’s abstract notion that all bits must be treated equal. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data — plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially if a non-watcher of ESPN, am worse off.

The critics’ real worry, then, is that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But is this the government’s role — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms? This was our warning. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

What if magazines were barred from carrying advertisements? They’d have to make all their money from subscribers and thus (attempt to) charge much higher prices or change their business model. Consumers would lose, either through higher prices or less diversity of product offerings. And advertisers, deprived of an outlet to reach an audience, would lose. That’s what we call a lose-lose-lose proposition.

Maybe sponsored data will take off. Maybe not. It’s clear, however, in the highly dynamic mobile Internet business, we should allow such voluntary experiments.

Discussing Broadband and Economic Growth at AEI

On Tuesday this week, the American Enterprise Institute launched an exciting new project — the Center for Internet, Communications, and Technology. I was happy to participate in the inaugural event, which included talks by CEA chairman Jason Furman and Rep. Greg Walden (R-OR). We discussed broadband’s potential to boost economic productivity and focused on the importance and key questions of wireless spectrum policy. See the video below:

Simple Rules For Spectrum

Washington is getting closer to unleashing more spectrum to fuel the digital economy and stay ahead of capacity constraints that will stymie innovation and raise prices for consumers. Ahead of the July 23 Congressional hearing on spectrum auctions, we should keep a couple things in mind. First and foremost, we need “Simple Rules for a Complex World.” It’s a basic idea that should apply to all policymaking. But especially in the exceedingly complex and fast-moving digital ecosystem.

A number of firms are seeking special rules that would complicate — and possibly undermine — the auctions. They want to exclude some rival firms from bidding in the auctions. They are suggesting exclusions, triggers, “one-third caps,” and other Rube Goldberg mechanisms they hope will tip the auction scales in their favor.

Using examples from the labor markets and capital markets, we showed in a recent paper that complex policies — even though well intended and designed by smart people — often yield perverse results. Laws and regulations should be few, simple, and neutral. Those advocating the special auction rules favor a process that is complex and biased.

They are also using complicated arguments to back their preferred complicated process. Some are asserting a “less is more” theory of auctions — the idea that fewer bidders can yield higher auction revenues. If it seems counterintuitive, it is. Their theory is based on a very specific, hypothetical auction where a dominant monopolist might scare off a potential new market entrant from bidding at all and walk away with the underpriced auction items. This hypothetical does not apply to America’s actual wireless spectrum market.

The U.S. has four national mobile service providers and a number of regional providers. We have lots of existing players, most of whom plan to bid in the auctions. As all the theory and evidence shows, in this situation, an open process with more bidders means a better auction — spectrum flowing to its highest value uses and more overall revenue.

Some studies show a policy excluding the top two bidders in the auction could reduce revenue by up to 40% — or $12 billion. This would not only prevent spectrum from flowing to its best use but could also jeopardize the whole purpose of the incentive auction, because lower prices could discourage TV broadcasters from “selling” their valuable airwaves. If the auction falls short, that means less spectrum, less mobile capacity, slower mobile broadband, and higher consumer prices. (See our recent Forbes article on the topic.)

Fortunately, several Members of Congress are adhering to the Simple Rules idea. They want to keep the spectrum auction open and competitive. They think this will yield the most auction revenues and ensure the maximum amount of underutilized broadcast spectrum is repurposed for wireless broadband.

The argument for simple auction rules is simple. The argument for complex auction rules is very complicated.

Net ‘Neutrality’ or Net Dynamism? Easy Choice.

Consumers beware. A big content company wants to help pay for the sports you love to watch.

ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, long time net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”

The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that, had net neutrality existed at the time, the outlawing of pay-for-prioritization would have banned the rise of content delivery networks (CDNs), which have fundamentally improved the user experience for viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that we would continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.

So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.

So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has given the FCC no authority to regulate the Internet. The FCC invented this job for itself and is now being challenged in court.

Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.

— Bret Swanson

U.S. Mobile: Effectively competitive? Probably. Positively healthy? Absolutely.

Each year the Federal Communications Commission is required to report on competition in the mobile phone market. Following Congress’s mandate to determine the level of industry competition, the FCC, for many years, labeled the industry “effectively competitive.” Then, starting a few years ago, the FCC declined to make such a determination. Yes, there had been some consolidation, it was acknowledged, yet the industry was healthier than ever — more subscribers, more devices, more services, lots of innovation. The failure to achieve the “effectively competitive” label was thus a point of contention.

This year’s “CMRS” — commercial mobile radio services — report again fails to make a designation, one way or the other. Yet whatever the report lacks in official labels, it more than makes up in impressive data.

For example, it shows that as of October 2012, 97.2% of Americans have access to three or more mobile providers, and 92.8% have access to four or more. As for mobile broadband data services, 97.8% have access to two or more providers, and 91.6% have access to three or more.

Rural America is also doing well. The FCC finds 87% of rural consumers have access to three or more mobile voice providers, and 69.1% have access to four or more. For mobile broadband, 89.9% have access to two or more providers, while 65.4% enjoy access to three or more.

Call this what you will — to most laypeople, these choices count as robust competition. Yet the FCC has a point when it

refrain[s] from providing any single conclusion because such an assessment would be incomplete and possibly misleading in light of the variations and complexities we observe.

The industry has grown so large, with so many interconnected and dynamic players, it may have outgrown Congress’s request for a specific label.

14. Given the Report’s expansive view of mobile wireless services and its examination of competition across the entire mobile wireless ecosystem, we find that the mobile wireless ecosystem is sufficiently complex and multi-faceted that it would not be meaningful to try to make a single, all-inclusive finding regarding effective competition that adequately encompasses the level of competition in the various interrelated segments, types of services, and vast geographic areas of the mobile wireless industry.

Or as economist George Ford of the Phoenix Center put it,

The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”

I’m in good company. Outgoing FCC Chairman Julius Genachowski lists among his proudest achievements that “the U.S. is now the envy of the world in advanced wireless networks, devices, applications, among other areas.”

The report shows that in the last decade, U.S. mobile connections have nearly tripled. The U.S. now has more mobile connections than people.

The report also shows per user data consumption more than doubling year to year.

More important, the proliferation of smartphones, which are powerful mobile computers, is the foundation for a new American software industry widely known as the App Economy. We detailed the short but amazing history of the app and its impact on the economy in our report “Soft Power: Zero to 60 Billion in Four Years.” Likewise, these devices and software applications are changing industries that need changing. Last week, experts testified before Congress about mobile health, or mHealth, and we wrote about the coming health care productivity revolution in “The App-ification of Medicine.”

One factor that still threatens to limit mobile growth is the availability of spectrum. The report details past spectrum allocations that have borne fruit, but the pipeline of future spectrum allocations is uncertain. A more robust commitment to spectrum availability and a free-flowing spectrum market would ensure continued investment in networks, content, and services.

What Congress once called the mobile “phone” industry is now a sprawling global ecosystem and a central driver of economic advance. By most measures, the industry is effectively competitive. By any measure, it’s positively healthy.

— Bret Swanson

The Broadband Rooster

FCC chairman Julius Genachowski opens a new op-ed with a bang:

As Washington continues to wrangle over raising revenue and cutting spending, let’s not forget a crucial third element for reining in the deficit: economic growth. To sustain long-term economic health, America needs growth engines, areas of the economy that hold real promise of major expansion. Few sectors have more job-creating innovation potential than broadband, particularly mobile broadband.

Private-sector innovation in mobile broadband has been extraordinary. But maintaining the creative momentum in wireless networks, devices and apps will need an equally innovative wireless policy, or jobs and growth will be left on the table.

Economic growth is indeed the crucial missing link to employment, opportunity, and healthier government budgets. Technology is the key driver of long term growth, and even during the downturn the broadband economy has delivered. Michael Mandel estimates the “app economy,” for example, has created more than 500,000 jobs in less than five short years of existence.

We emphatically do need policies that will facilitate the next wave of digital innovation and growth. Chairman Genachowski’s top line assessment — that U.S. broadband is a success — is important. It rebuts the many false but persistent claims that U.S. broadband lags the world. Chairman Genachowski’s diagnosis of how we got here and his prescriptions for the future, however, are off the mark.

For example, he suggests U.S. mobile innovation is newer than it really is.

Over the past few years, after trailing Europe and Asia in mobile infrastructure and innovation, the U.S. has regained global leadership in mobile technology.

This American mobile resurgence did not take place in just the last “few years.” It began a little more than a decade ago with smart decisions to:

(1) allow reasonable industry consolidation and relatively free spectrum allocation, after years of forced “competition,” which mandated network duplication and thus underinvestment in coverage and speed (we did in fact trail Europe in some important mobile metrics in the late 1990s and briefly into the 2000s);

(2) refrain from any but the most basic regulation of broadband in general and the mobile market in particular, encouraging experimental innovation; and

(3) finally implement the digital TV / 700 MHz transition in 2007, which put more of the best spectrum into the market.

These policies, among others, encouraged some $165 billion in mobile capital investment between 2001 and 2008 and launched a wave of mobile innovation. Development on the iPhone began in 2004, the iPhone itself arrived in 2007, and the App Store in 2008. Google’s Android mobile OS came along in 2009, the year Mr. Genachowski arrived at the FCC. By this time, the American mobile juggernaut had already been in full flight for years, and the foundation was set — the U.S. topped the world in 3G mobile networks and device and software innovation. Wi-Fi, meanwhile, surged from 2003 onward, creating an organic network of tens of millions of wireless nodes in homes, offices, and public spaces. Mr. Genachowski gets some points for not impeding the market as aggressively as some other more zealous regulators might have. But taking credit for America’s mobile miracle smacks of the rooster proudly puffing his chest at sunrise.

More important than who gets the credit, however, is determining what policies led to the current success . . . and which are likely to spur future growth. Chairman Genachowski is right to herald the incentive auctions that could unleash hundreds of megahertz of un- and under-used spectrum from the old TV broadcasters. Yet wrangling over the rules of the auctions could stretch on, delaying the process. Worse, the rules themselves could restrict who can bid on or buy new spectrum, effectively allowing the FCC to favor certain firms, technologies, or friends at the expense of the best spectrum allocation. We’ve seen before that centrally planned spectrum allocations don’t work. The fact that the FCC is contemplating such an approach is worrisome. It runs counter to the policies that led to today’s mobile success.

The FCC also has a bad habit of changing the metrics and the rules in the middle of the game. For example, the FCC has been caught changing its “spectrum screen” to fit its needs. The screen attempts to show how much spectrum mobile operators hold in particular markets. During M&A reviews, however, the FCC has changed its screen procedures to make the data fit its opinion.

In a more recent example, Fred Campbell shows that the FCC alters its count of total available commercial spectrum to fit the argument it wants to make from day to day. We’ve shown that the U.S. trails other nations in the sum of currently available spectrum plus spectrum in the pipeline. Below, see a chart from last year showing how the U.S. compares favorably in existing commercially available spectrum but trails severely in pipeline spectrum. Translation: the U.S. did a pretty good job unleashing spectrum in the 1990s through the mid-2000s. But, contrary to Chairman Genachowski’s implication, it has stalled in the last few years.

When the FCC wants to argue that particular companies shouldn’t be allowed to acquire more spectrum (whether through merger or secondary markets), it adopts this view that the U.S. trails in spectrum allocation. Yet when challenged on the more general point that the U.S. lags other nations, the FCC turns around and includes an extra 139 MHz in spectrum in the 2.5 GHz range to avoid the charge it’s fallen behind the curve.

Next, Chairman Genachowski heralds a new spectrum “sharing” policy where private companies would be allowed to access tiny portions of government-owned airwaves. This really is weak tea. The government, depending on how you measure, controls between 60% and 85% of the best spectrum for wireless broadband. It uses very little of it. Yet it refuses to part with meaningful portions, even though it would still be left with more than enough for its important uses — military and otherwise. If they can make it work (I’m skeptical), sharing may offer a marginal benefit. But it does not remotely fit the scale of the challenge.

Along the way, the FCC has been whittling away at mobile’s incentives for investment and its environment of experimentation. Chairman Genachowski, for example, imposed price controls on “data roaming,” even though it’s highly questionable he had the legal authority to do so. The Commission has also, with varied degrees of “success,” been attempting to impose its extralegal net neutrality framework to wireless. And of course the FCC has blocked, altered, and/or discouraged a number of important wireless mergers and secondary spectrum transactions.

Chairman Genachowski’s big picture is a pretty one: broadband innovation is key to economic growth. Look at the brush strokes, however, and there are reasons to believe sloppy and overanxious regulators are threatening to diminish America’s mobile masterpiece.

— Bret Swanson

The $66-billion Internet Expansion

Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.

The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.

As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network, the transition to all Internet, all the time, everywhere.

FCC’s 706 Broadband Report Does Not Compute

Yesterday the Federal Communications Commission issued 181 pages of metrics demonstrating, to any fair reader, the continuing rapid rise of the U.S. broadband economy — and then concluded, naturally, that “broadband is not yet being deployed to all Americans in a reasonable and timely fashion.” A computer, being fed the data and the conclusion, would, unable to process the logical contradictions, crash.

The report is a response to section 706(b) of the 1996 Telecom Act that asks the FCC to report annually whether broadband “is being deployed . . . in a reasonable and timely fashion.” From 1999 to 2008, the FCC concluded that yes, it was. But now, as more Americans than ever have broadband and use it to an often maniacal extent, the FCC has concluded for the third year in a row that no, broadband deployment is not “reasonable and timely.”

The FCC finds that 19 million Americans, mostly in very rural areas, don’t have access to fixed line terrestrial broadband. But Congress specifically asked the FCC to analyze broadband deployment using “any technology.”

“Any technology” includes DSL, cable modems, fiber-to-the-x, satellite, and of course fixed wireless and mobile. If we include wireless broadband, the unserved number falls to 5.5 million from the FCC’s headline 19 million. Five and a half million is 1.74% of the U.S. population. Not exactly a headline-grabbing figure.

Even if we stipulate the FCC’s framework, data, and analysis, we’re still left with the FCC’s own admission that between June 2010 and June 2011, an additional 7.4 million Americans gained access to fixed broadband service. That dropped the portion of Americans without access to 6% in 2011 from around 8.55% in 2010 — a 30% drop in the unserved population in one year. Most Americans have had broadband for many years, and the rate of deployment will necessarily slow toward the tail-end of any build-out. When most American households are served, there just aren’t very many left to go, and those that have yet to gain access are likely to be in the very most difficult to serve areas (e.g. “on tops of mountains in the middle of nowhere”). The fact that 7.4 million more Americans gained broadband access in the last year, lowering the unserved population by 30%, even using the FCC’s faulty framework, demonstrates in any rational world that broadband “is being deployed” in a “reasonable and timely fashion.”
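As a quick sanity check on these percentages, here is a minimal sketch using the figures quoted above. The population denominator (roughly 315 million, the approximate 2011 level) is an assumption, so treat the outputs as approximate.

```python
# Rough check of the deployment arithmetic above, using the figures quoted in
# the discussion of the FCC report. The U.S. population is an assumption
# (~315 million, roughly the 2011 level), so the outputs are approximate.

US_POPULATION = 315_000_000

unserved_fixed = 19_000_000     # FCC headline: no fixed terrestrial broadband
unserved_any_tech = 5_500_000   # counting wireless broadband as well

print(f"{unserved_fixed / US_POPULATION:.1%} lack fixed terrestrial broadband")     # ~6.0%
print(f"{unserved_any_tech / US_POPULATION:.1%} lack broadband by any technology")  # ~1.7%

# Year-over-year change in the unserved share cited above
unserved_share_2010 = 8.55   # percent without access, mid-2010
unserved_share_2011 = 6.0    # percent without access, mid-2011
drop = (unserved_share_2010 - unserved_share_2011) / unserved_share_2010
print(f"{drop:.0%} reduction in the unserved population")                           # ~30%
```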

But this is not the rational world — it’s D.C. in the perpetual political silly season.

One might conclude that because the vast majority of these unserved Americans live in very rural areas — Alaska, Montana, West Virginia — the FCC would, if anything, suggest policies tailored to boost infrastructure investment in these hard-to-reach geographies. We could debate whether these are sound investments and whether the government would do a good job expanding access, but if rural deployment is a problem, then presumably policy should attempt to target and remediate the rural underserved. Commissioner McDowell, however, knows the real impetus for the FCC’s tortured no-confidence vote — its regulatory agenda.

McDowell notes that the report repeatedly mentions the FCC’s net neutrality rules (now being contested in court), which are as far from a pro-broadband policy, let alone a targeted one, as you could imagine. If anything, net neutrality is an impediment to broader, faster, better broadband. But the FCC is using its thumbs-down on broadband deployment to prop up its intrusions into a healthy industry. As McDowell concluded, “the majority has used this process as an opportunity to create a pretext to justify more regulation.”

Misunderstanding the Mobile Ecosystem

Mobile communications and computing are among the most innovative and competitive markets in the world. They have created a new world of software and offer dramatic opportunities to improve productivity and creativity across the industrial spectrum.

Last week we published a tech note documenting the rapid growth of mobile and the importance of expanding wireless spectrum availability. More clean spectrum is necessary both to accommodate fast-rising demand and drive future innovations. Expanding spectrum availability might seem uncontroversial. In the report, however, we noted that one obstacle to expanding spectrum availability has been a cramped notion of what constitutes competition in the Internet era. As we wrote:

Opponents of open spectrum auctions and flexible secondary markets often ignore falling prices, expanding choices, and new features available to consumers. Instead they sometimes seek to limit new spectrum availability, or micromanage its allocation or deployment characteristics, charging that a few companies are set to dominate the market. Although the FCC found that 77% of the U.S. population has access to three or more 3G wireless providers, charges of a coming “duopoly” are now common.

This view, however, relies on the old analysis of static utility or commodity markets and ignores the new realities of broadband communications. The new landscape is one of overlapping competitors with overlapping products and services, multi-sided markets, network effects, rapid innovation, falling prices, and unpredictability.

Sure enough, yesterday Sprint CEO Dan Hesse made the duopoly charge and helped show why getting spectrum policy right has been so difficult.

Q: You were a vocal opponent of the AT&T/T-Mobile merger. Are you satisfied you can compete now that the merger did not go through?

A: We’re certainly working very hard. There’s no question that the industry does have an issue with the size of the duopoly of AT&T and Verizon. I believe that over time we’ll see more consolidation in the industry outside of the big two, because the gap in size between two and three is so enormous. Consolidation is healthy for the industry as long as it’s not AT&T and Verizon getting larger.

Hesse goes even further.

Hesse also seemed to be likening Sprint’s struggles in competing with AT&T-Rex and Big Red to a fight between good and evil. Sprint wants to wear the white hat, according to Hesse. “At Sprint, we describe it internally as being the good guys, of doing the right thing,” he said.

This type of thinking is always a danger if you’re trying to make sound policy. Picking winners and losers is inevitably — at best — an arbitrary exercise. Doing so based on some notion of corporate morality is plain silly, but even more reasonable sounding metrics and arguments — like those based on market share — are often just as misleading and harmful.

The mobile Internet ecosystem is growing so fast and changing with such rapidity and unpredictability that making policy based on static and narrow market definitions will likely yield poor policy. As we noted in our report:

It is, for example, worth emphasizing: Google and Apple were not in this business just a few short years ago.

Yet by the fourth quarter of 2011 Apple could boast an amazing 75% of the handset market’s profits. Apple’s iPhone business, it was widely noted after Apple’s historic 2011, is larger than all of Microsoft. In fact, Apple’s non-iPhone products are also larger than Microsoft.

Android, the mobile operating system of Google, has been growing even faster than Apple’s iOS. In December 2011, Google was activating 700,000 Android devices a day, and now, in the summer of 2012, it estimates 900,000 activations per day. From a nearly zero share at the beginning of 2009, Android today boasts roughly a 55% share of the global smartphone OS market.

. . .

Apple’s iPhone changed the structure of the industry in several ways, not least the relationships between mobile service providers and handset makers. Mobile operators used to tell handset makers what to make, how to make it, and what software and firmware could be loaded on it. They would then slap their own brand label on someone else’s phone.

Apple’s quick rise to mobile dominance has been matched by Blackberry maker Research In Motion’s fall. RIM dominated the 2000s with its email software, its qwerty keyboard, and its popularity with enterprise IT departments. But it couldn’t match Apple’s or Android’s general purpose computing platforms, with user-friendly operating systems, large, bright touch-screens, and creative and diverse app communities.

Sprinkled among these developments were the rise, fall, and resurgence of Motorola, and then its sale to Google; the rise and fall of Palm; the rise of HTC; and the decline of once dominant Nokia.

Apple, Google, Amazon, Microsoft, and others are building cloud ecosystems, sometimes complemented with consumer devices, often tied to Web apps and services, multimedia content, and retail stores. Many of these products and services compete with each other, but they also compete with broadband service providers. Some of these business models rely primarily on hardware, some software, some subscriptions, some advertising. Each of the companies listed above — a computer company, a search company, an ecommerce company, and a software company — are now major Internet infrastructure companies.

As Jeffrey Eisenach concluded in a pathbreaking analysis of the digital ecosystem (“Theories of Broadband Competition”), there may be market concentration in one (or more) layer(s) of the industry (broadly considered), yet prices are falling, access is expanding, products are proliferating, and innovation is as rapid as in any market we know.

New iPad, Fellow Bandwidth Monsters Hungry for More Spectrum

Last week Apple unveiled its third-generation iPad. Yesterday the company said the LTE versions of the device, which can connect via Verizon and AT&T mobile broadband networks, are sold out.

It took 15 years for laptops to reach 50 million units sold in a year. It took smartphones seven years. For tablets (not including Microsoft’s clunky attempt a decade ago), just two years. Mobile device volumes are astounding. In each of the last five years, global mobile phone sales topped a billion units. Last year smartphones outsold PCs for the first time – 488 million versus 432 million. This year well over 500 million smartphones and perhaps 100 million tablets could be sold.

Smartphones and tablets represent the first fundamentally new consumer computing platforms since the PC, which arrived in the late ’70s and early ’80s. Unlike mere mobile phones, they’ve got serious processing power inside. But their game-changing potency is really based on their capacity to communicate via the Internet. And this power is, of course, dependent on the cloud infrastructure and wireless networks.

But are wireless networks today prepared for this new surge of bandwidth-hungry mobile devices? Probably not. When we started to build 3G mobile networks in the middle of last decade, many thought it was a huge waste. Mobile phones were used for talking, and some texting. They had small low-res screens and were terrible at browsing the Web. What in the world would we do with all this new wireless capacity? Then the iPhone came, and, boom — in big cities we went from laughable overcapacity to severe shortage seemingly overnight. The iPhone’s brilliant screen, its real Web browsing experience, and the world of apps it helped us discover totally changed the game. Wi-Fi helped supply the burgeoning iPhone with bandwidth, and Wi-Fi will continue to grow and play an important role. Yet Credit Suisse, in a 2011 survey of the industry, found that mobile networks overall were running at 80% of capacity and that many network nodes were tapped out.

Today, we are still expanding 3G networks and launching 4G in most cities. Verizon says it offers 4G LTE in 196 cities, while AT&T says it offers 4G LTE in 28 markets (and combined with its HSPA+ networks offers 4G-like speeds to 200 million people in the U.S.). Lots of things affect how fast we can build new networks — from cell site permitting to the fact that these things are expensive ($20 billion worth of wireless infrastructure in the U.S. last year). But another limiting factor is spectrum availability.

Do we have enough radio waves to efficiently and cost-effectively serve these hundreds of millions of increasingly powerful mobile devices, which generate and consume increasingly rich content, with ever more stringent latency requirements, and which depend upon robust access to cloud storage and computing resources?

Capacity is a function of money, network nodes, technology, and radio waves. But spectrum is grossly misallocated. The U.S. government owns 61% of the best airwaves, while mobile broadband providers — where all the action is — own just 10%. Another portion is controlled by the old TV broadcasters, where much of this beachfront spectrum lies fallow or underused.
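To see why the radio-wave ingredient matters so much, recall the textbook Shannon-Hartley limit, C = B log2(1 + S/N): capacity grows linearly with bandwidth (that is, with spectrum) but only logarithmically with signal power. The sketch below is a back-of-the-envelope illustration using purely hypothetical per-sector bandwidth and SNR figures, not a model of any real carrier's network.

```python
from math import log2

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon-Hartley limit C = B * log2(1 + S/N); MHz * (bit/s/Hz) = Mbit/s."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return bandwidth_mhz * log2(1 + snr_linear)

# Hypothetical per-sector figures, for illustration only.
for bw in (5, 10, 20, 40):             # MHz of spectrum available to one cell sector
    print(f"{bw:>3} MHz at 15 dB SNR -> {shannon_capacity_mbps(bw, 15):6.1f} Mbit/s")
```

Doubling the spectrum in this toy calculation roughly doubles the achievable throughput, which is why carriers prize additional airwaves over squeezing marginal gains from power or coding alone.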

The key is allowing spectrum to flow to its most valuable uses. Last month Congress finally authorized the FCC to conduct incentive auctions to free up some unused and underused TV spectrum. Good news. But other recent developments discourage us from too much optimism on this front.

In December the FCC and Justice Department vetoed AT&T’s attempt to augment its spectrum and cell-site position via merger with T-Mobile. Now the FCC and DoJ are questioning Verizon’s announced purchase of Spectrum Co. — valuable but unused spectrum owned by a consortium of cable TV companies. The FCC has also threatened to tilt any spectrum auctions so that it decides who can bid, how much bidders can buy, and what buyers may or may not do with their spectrum — pretending Washington knows exactly how this fast-changing industry should be structured. That posture reduces the value of the spectrum, will probably delay its availability, and could slow the sector’s pace of innovation.

It’s very difficult to see how it’s at all productive for the government to block companies who desperately need more spectrum from buying it from those who don’t want it, don’t need it, or can’t make good use of it. The big argument against AT&T and Verizon’s attempted spectrum purchases is “competition.” But T-Mobile wanted to sell to AT&T because it admitted it didn’t have the financial (or spectrum) wherewithal to build a super expensive 4G network. Apparently the same for the cable companies, who chose to sell to Verizon. Last week Dish Network took another step toward entering the 4G market with the FCC’s approval of spectrum transfers from two defunct companies, TerreStar and DBSD.

Some people say the proliferation of Wi-Fi, or the increased use of new wireless technologies that economize on spectrum, will make additional spectrum unnecessary. I agree Wi-Fi is terrific and will keep growing, and that software radios, cognitive radios, mesh networks, and all the other great technologies that increase the flexibility and power of wireless will make big inroads. So fine, let’s stipulate that perhaps these very real complements will reduce the need for more spectrum at the margin. Then the joke is on the big companies that want to overpay for unnecessary spectrum. We still allow big, rich companies to make mistakes, right? Why, then, do proponents of these complementary technologies still oppose allowing spectrum to flow to its highest use?

Free, unrestricted spectrum auctions would allow all sorts of companies to access spectrum — upstarts, mid-tier players, and, yes, the big boys, who desperately need more capacity to serve the new iPad.

— Bret Swanson

Prof. Krugman misses the App Economy

Steve Jobs designed great products. It’s very, very hard to make the case that he created large numbers of jobs in this country.

— Prof. Paul Krugman, New York Times, January 25, 2012

Turns out, not very hard at all.

The App Economy now is responsible for roughly 466,000 jobs in the United States, up from zero in 2007 when the iPhone was introduced.

— Dr. Michael Mandel, TechNet study, February 7, 2012

See our earlier rough estimate of Apple’s employment effects: “Jobs: Steve vs. the Stimulus.”

— Bret Swanson

Is the FCC serious about more wireless spectrum? Apparently not.

For the third year in a row, FCC chairman Julius Genachowski used his speech at the Consumer Electronics Show in Las Vegas to push for more wireless spectrum. He wants Congress to pass the incentive auction law that would unleash hundreds of megahertz of spectrum to new and higher uses. Most of Congress agrees: we need lots more wireless capacity and spectrum auctions are a good way to get there.

Genachowski, however, wants overarching control of the new spectrum and, by extension, the mobile broadband ecosystem. The FCC wants the authority to micromanage the newly available radio waves — who can buy them, how much they can buy, how they can use them, what content flows over them, what business models can be employed with them. But this is an arena that is growing wildly fast, where new technologies appear every day, and where experimentation is paramount to see which business models work. Auctions are supposed to be a way to get more spectrum into the marketplace, where lots of companies and entrepreneurs can find the best ways to use it to deliver new communications services. “Any restrictions” by Congress on the FCC “would be a real mistake,” said Genachowski. In other words, he doesn’t want Congress to restrict his ability to restrict the mobile business. It seems the liberty of regulators to act without restraint is a higher virtue than the liberty of private actors.

At the end of 2011, the FCC and Justice Department vetoed AT&T’s proposed merger with T-Mobile, a deal that would have immediately expanded 3G mobile capacity across the nation and accelerated AT&T’s next-generation 4G rollout by several years. That deal was all about a more effective use of spectrum, more cell towers, more capacity to better serve insatiable smartphone- and tablet-equipped consumers. Now the FCC is holding the spectrum auction bill hostage with its my-way-or-the-highway approach. And one has to ask: Is the FCC really serious about spectrum, mobile capacity, and a healthy broadband Internet?

— Bret Swanson

FCC wireless mischief: On to the substance

Here’s a critique of the FCC’s new “staff report” from AT&T itself. Obviously, AT&T is an interested party and has a robust point of view. But it’s striking that the FCC was so sloppy in a staff report — failing, for instance, to address the key issue at hand: spectrum — let alone that it released this not-ready-for-prime-time report to the public.

Surely, it is neither fair nor logical for the FCC to trumpet a national spectrum crisis for much of the past year, and then draft a report claiming that two major wireless companies face no such constraints despite sworn declarations demonstrating the opposite.

The report is so off-base and one-sided that the FCC may actually have hurt its own case.

Why is the FCC playing procedural games?

America is in desperate need of economic growth. But as the U.S. economy limps along, with unemployment stuck at 9%, the Federal Communications Commission is playing procedural tiddlywinks with the nation’s largest infrastructure investor, in the sector of the economy that offers the most promise for innovation and 21st century jobs. In normal times, we might chalk this up to clever Beltway maneuvering. But do we really have the time or money to indulge bureaucratic gamesmanship?

On Thanksgiving Eve, the FCC surprised everyone. It hadn’t yet completed its investigation into the proposed AT&T-T-Mobile wireless merger, and the parties had not had a chance to discuss or rebut the agency’s initial findings. Yet the FCC preempted the normal process by announcing it would send the case to an administrative law judge — essentially a vote of no-confidence in the deal. I say “vote,” but the FCC commissioners hadn’t actually voted on the order.

FCC Chairman Julius Genachowski called AT&T CEO Randall Stephenson, who, on Thanksgiving Day, had to tell investors he was setting aside $4 billion in case Washington blocked the deal.

The deal is already being scrutinized by the Department of Justice, which sued to block the merger last summer. The fact that telecom mergers and acquisitions must negotiate two levels of federal scrutiny, at the DoJ and the FCC, is already an extra burden on the Internet industry. But when one agency on this dual track games the system by trying to influence the other — maybe because the FCC felt AT&T had a good chance of winning its antitrust case — the obstacles to promising economic activity multiply.

After the FCC’s surprise move, AT&T and T-Mobile withdrew their merger application at the FCC. No sense in preparing for an additional hearing before an administrative law judge when they are already deep in preparation for the antitrust trial early next year. Moreover, the terms of the merger agreement are likely to have changed after the companies (perhaps) negotiate conditions with the DoJ. They’d have to refile an updated application anyway. Not so fast, said the FCC. We’re not going to allow AT&T and T-Mobile to withdraw their application. Or, if we do allow it, we will do so “with prejudice,” meaning the parties can’t refile a revised application at a later date. On Tuesday the FCC relented — the law is clear: an applicant has the right to withdraw an application without consent from the FCC. But the very fact the FCC initially sought to deny the withdrawal is itself highly unusual. Again, more procedural gamesmanship.

If that weren’t enough, the FCC then said it would release its “findings” in the case — another highly unusual (maybe unprecedented) action. The agency hadn’t completed its process, and there had been no vote on the matter. So the FCC instead released what it calls a “staff report” — a highly critical internal opinion that had been neither reviewed by the parties nor approved by the commissioners. We’re eager to analyze the substance of this “staff report,” but the fact the FCC felt the need to shove it out the door was itself remarkable.

It appears the FCC is twisting legal procedure any which way to fit its desired outcome, rather than letting the normal merger process play out. Indeed, “twisting legal procedure” may be too kind. It has now thrown law and procedure out the window and is in full public relations mode. These extralegal PR games tilt the playing field against the companies, against investment and innovation, and against the health of the U.S. economy.

— Bret Swanson

Up-is-down data roaming vote could mean mobile price controls

Section 332(c)(2) of the Communications Act says that “a private mobile service shall not . . . be treated as a common carrier for any purpose under this Act.”

So of course the Federal Communications Commission on Thursday declared mobile data roaming (which is a private mobile service) a common carrier. Got it? The law says “shall not.” Three FCC commissioners say, We know better.

This up-is-down determination could allow the FCC to impose price controls on the dynamic mobile broadband Internet industry. Up-is-down legal determinations are nothing new for the FCC. After a decade of trying, I’ve still not been able to penetrate the legal realm where “shall not” means “may.” Clearly the FCC operates in some alternate jurisprudential universe.

I do know the decision’s practical effect could be to slow mobile investment and innovation. It takes lots of money and know-how to build the Internet and beam real-time videos from anywhere in the world to an iPad as you sit on your comfy couch or ride a speeding train. Last year the U.S. invested $489 billion in info-tech, which made up 47% of all non-structure capital expenditures. Two decades ago, info-tech comprised just 33% of U.S. non-structure capital investment. This is a healthy, growing sector.

As I noted a couple weeks ago,

You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.

But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data, like video, Facebook, Twitter, and app downloads, etc., want the FCC to mandate access to B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars in spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand. . . .

It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the also not surprising result of such a policy could be to reduce the amount that everyone invests in new networks. And this is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”
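To make the quoted mechanism concrete, here is a minimal back-of-the-envelope sketch of a roaming settlement: provider A pays provider B a per-gigabyte rate for the traffic A’s subscribers put on B’s network. The traffic volume and both rates below are invented purely for illustration; the point is simply that a mandated below-market rate shifts revenue away from the company that built the network.

```python
def roaming_settlement(roaming_gb: float, rate_per_gb: float) -> float:
    """Amount provider A owes provider B for traffic A's customers put on B's network."""
    return roaming_gb * rate_per_gb

# Hypothetical monthly roaming traffic and rates, for illustration only.
traffic_gb = 2_000_000                                            # GB carried on B's network
negotiated = roaming_settlement(traffic_gb, rate_per_gb=0.50)     # privately negotiated rate
regulated  = roaming_settlement(traffic_gb, rate_per_gb=0.20)     # mandated below-market rate

print(f"Negotiated settlement: ${negotiated:,.0f}")
print(f"Regulated settlement:  ${regulated:,.0f}")
print(f"Revenue shifted away from the network builder: ${negotiated - regulated:,.0f}")
```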

But if Washington actually wants more infrastructure investment, it has a funny way of showing it. On Sunday at a Boston conference organized by Free Press, former Obama White House technology advisor Susan Crawford talked about America’s major communications companies. “[R]egulating these guys to within an inch of their life is exactly what needs to happen,” she said. You’d think the topic was tobacco or human trafficking rather than the companies that have pretty successfully brought us the wonders of the Internet.

It’s the view of an academic lawyer who has never visited that exotic place called the real world. Does she think that the management, boards, and investors of these companies will continue to fund massive infrastructure projects in the tens of billions of dollars if Washington dangles them within “an inch of their life”? Investment would dry up long before we ever saw the precipice. This is exactly what’s happened economy-wide over the last few years as every company, every investor, in every industry worried about Washington marching them off the cost cliff. The White House supposedly has a newfound appreciation for the harms of over-regulation and has vowed to rein in the regulators. But in case after case, it continues to toss more regulatory pebbles into the economic river.

Perhaps Nick Schulz of the American Enterprise Institute has it right. Take a look. He calls it the Tommy Boy theory of regulation, and just maybe it explains Washington’s obsession — yes, obsession; when you watch the video, you will note that is the correct word — with managing every nook and cranny of the economy.
