How can U.S. broadband lag if it generates 2-3 times the traffic of other nations?

November 24th, 2014

Is the U.S. broadband market healthy or not? This question is central to the efforts to change the way we regulate the Internet. In a short new paper from the American Enterprise Institute, we look at a simple way to gauge whether the U.S. has in fact fallen behind other nations in coverage, speed, and price . . . and whether consumers enjoy access to content. Here’s a summary:

  • Internet traffic volume is an important indicator of broadband health, as it encapsulates and distills the most important broadband factors, such as access, coverage, speed, price, and content availability.
  • U.S. Internet traffic — a measure of the nation’s “digital output” — is two to three times higher than that of most advanced nations, and the United States generates more Internet traffic per capita and per Internet user than any major nation except South Korea.
  • The U.S. model of broadband investment and innovation — which operates in an environment that is largely free from government interference — has been a dramatic success.
  • Overturning this successful policy by imposing heavy regulation on the Internet puts one of America’s most vital industries at risk.


M-Lab: The Real Source of the Web Slow-Down

November 5th, 2014

Last week, M-Lab, a group that monitors select Internet network links, issued a report claiming interconnection disputes caused significant declines in consumer broadband speeds in 2013 and 2014.

This was not news. Everyone knew the disputes between Netflix and Comcast/Verizon/AT&T and others affected consumer speeds. We wrote about the controversy here, here, and here, and our “How the Net Works” report offered broader context.

The M-Lab study, “ISP Interconnection and Its Impact on Consumer Internet Performance,” however, does have some good new data. Although M-Lab says it doesn’t know who was “at fault,” advocates seized on the report as evidence of broadband provider mischief at big interconnection points.

But the M-Lab data actually show just the opposite. As you can see in the three graphs below, Comcast, Time Warner, Verizon, and to a lesser extent AT&T all show sharp drops in performance in May of 2013. Then the performance of all four networks at the three monitoring points in New York, Dallas, and Los Angeles shows sudden improvement in March of 2014.

The simultaneous drops and spikes for all four suggest these firms could not have been the cause. It would have required some sort of amazingly precise coordination among the four firms. Rather, the simultaneous action suggests the cause was some outside entity or event. Dan Rayburn of StreamingMedia agrees and offers very useful commentary on the M-Lab study here.

As it happens, Netflix was moving much of its content away from third-party content delivery networks (CDNs) in the spring of 2013 and onto its own OpenConnect platform, which used Cogent and Level 3 to connect to many broadband providers. The smaller cable firms, Cablevision and Cox, meanwhile, had agreed to connect to Netflix for free and unsurprisingly show no degradation.

Nine months of degraded performance ensued. Then, in the spring of 2014, Netflix agreed to connect directly to the networks of the major BSPs (broadband service providers), first Comcast, then Verizon, then Time Warner and AT&T. General performance on most networks across the Internet improved. The M-Lab data actually reinforce what we had already pieced together: Netflix moving its traffic onto the networks of Cogent and Level 3 both overwhelmed the capacity of those networks and blew through their peering agreements with the broadband providers.

Network analyst George Ou offered his perspective of what had happened: “Cogent purchased on the order of 1 terabit of capacity from Comcast and got 4 terabits from Comcast. When I say ‘purchased’, I mean bartered. Then they turned around and sold 100 terabits of capacity to [their] own customers. That’s why Cogent customers are suffering slow performance. Then they demanded 8x the bandwidth from Comcast at no extra cost and when Comcast refused, they blamed their slow service on Comcast.”

When Netflix and Comcast finally agreed to connect in the late winter of 2014, it improved performance for everyone. Around 30% of all Netflix’s traffic (which itself can account for a third of all peak-hour U.S. traffic) had been removed from the oversold Cogent and Level 3 networks, thus opening up capacity for those backbones’ connections to the other broadband providers. Additional direct connections between Netflix and the other BSPs then improved things further.
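
For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python using the two shares cited above; the percentages are the approximate figures from the text, not measured values.

```python
# Rough illustration of how much peak-hour U.S. traffic left the Cogent and
# Level 3 backbones once Netflix connected directly to the major broadband
# providers. Both inputs are the approximate shares cited in the text.

netflix_share_of_peak_traffic = 1 / 3   # Netflix ~ a third of peak-hour U.S. traffic
share_moved_to_direct_links = 0.30      # ~30% of Netflix traffic shifted off those backbones

relief = netflix_share_of_peak_traffic * share_moved_to_direct_links
print(f"Peak-hour U.S. traffic removed from the transit backbones: ~{relief:.0%}")
# ~10% of all peak-hour U.S. traffic, freeing capacity on those backbones'
# connections to the other broadband providers
```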

A previous analysis by Peter Sevcik of NetForecast last summer showed this phenomenon of a rising tide after the Comcast/Netflix hook-up (see below). M-Lab thus confirmed what many had already revealed: a non-broadband-provider cause of the slow-down, exactly the opposite of the media’s interpretation of the M-Lab study.

But wait! As we were about to post this analysis, we got word via Dan Rayburn that Cogent — quite amazingly — just admitted that in February 2014 it instituted a new fast-lane / slow-lane quality-of-service regime on its network. M-Lab’s monitoring equipment was actually given fast-lane status! This is ironic on a number of levels — and we still need to figure out if this was coordinated or coincident with the Comcast-Netflix hookup — but it may help explain why the performance improvement across Cogent-BSP links as shown by M-Lab was even more sudden and dramatic than the Netflix improvement noticed elsewhere. And of course, it reinforces, yet again, the real source of the slow-down.


Interconnection: Arguing for Inefficiency

October 6th, 2014

Last week Level 3 posted some new data from interconnection points with three large broadband service providers. The first column of the chart, with data from last spring, shows lots of congestion between Level 3 and the three BSPs. You might recall the battles of last winter and early spring when Netflix streaming slowed down and it accused Comcast and other BSPs of purposely “throttling” its video traffic. (We wrote about the incident here, here, here, and here.)

The second column of the Level 3 chart, with data from September, shows that traffic with two of the three BSPs is much less congested today. Level 3 says, reasonably, the cause for the change is Netflix’s on-net transit (or paid peering) agreements with Comcast and (presumably) Verizon, in which Netflix and the broadband firms established direct connections with one another. As Level 3 writes, “You might say that it’s good news overall.” And it is: these on-net transit agreements, which have been around for at least 15 years, and which are used by Google, Amazon, Microsoft, all the content delivery networks (CDNs), and many others, make the Net work better and more efficiently, cutting costs for content providers and delivering better, faster, more robust services to consumers.

But Level 3 says that, despite this apparent improvement, the data really show the broadband providers demanding “tolls” and that this is bad for the Internet overall. It thinks Netflix and the broadband providers should be forced to employ an indirect A→B→C architecture even when a direct A→C architecture is more efficient.

The Level 3 charts make another probably unintended point. Recall that Netflix, starting around two years ago, began building its own CDN called OpenConnect. Its intention was always to connect directly to the broadband providers (A→C) and to bypass Level 3 and other backbone providers (B). This is exactly what happened. Netflix connected to Comcast, Verizon, and others (although for a small fee, rather than for free, as it had hoped). And it looks like the broadband providers were smart not to build out massive new interconnection capacity with Level 3 to satisfy a peering agreement that was out of balance, and which, as soon as Netflix left, regained balance. It would have been a huge waste (what they used to call stranded investment).
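
A toy sketch, purely illustrative, of why the direct path can be more efficient: the latency numbers below are invented, and the point is simply that traffic forced through a congested intermediate backbone (B) picks up delay that a direct interconnect (A→C) avoids.

```python
# Toy comparison of the two interconnection architectures discussed above.
# All latency figures are invented for illustration only.

# Indirect: content provider (A) -> transit backbone (B) -> broadband ISP (C)
indirect_path = [("A -> B", 10), ("inside B (congested transit link)", 40), ("B -> C", 10)]

# Direct: content provider (A) peers straight with the broadband ISP (C)
direct_path = [("A -> C (direct interconnect)", 12)]

def total_latency_ms(path):
    """Sum the per-hop latencies along a path."""
    return sum(ms for _, ms in path)

print("Indirect A->B->C:", total_latency_ms(indirect_path), "ms")
print("Direct   A->C:   ", total_latency_ms(direct_path), "ms")
```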


Twitch Proves the Net Is Working

October 1st, 2014

Below find our Reply Comments in the Federal Communications Commission’s Open Internet proceeding:

September 15, 2014

Twitch Proves the Net Is Working

On August 25, 2014, Amazon announced its acquisition of Twitch for around $1 billion. Twitch (twitch.tv) is a young but very large website that streams video games and the gamers who play them. The rise of Twitch demonstrates that the Net is working and, we believe, also deals a severe blow to a central theory of the Order and NPRM.

The NPRM repeats the theory of the 2010 Open Internet Order that “providers of broadband Internet access service had multiple incentives to limit Internet openness.” The theory advances a concern that small start-up content providers might be discouraged or blocked from opportunities to grow. Neither the Order nor the current NPRM considers or even acknowledges evidence or arguments to the contrary — that broadband service providers (BSPs) may have substantial incentives to promote Internet openness. Nevertheless, the Commission now helpfully seeks comment “to update the record to reflect marketplace, technical, and other changes since the 2010 Open Internet Order was adopted that may have either exacerbated or mitigated broadband providers’ incentives and ability to limit Internet openness. We seek general comment on the Commission’s approach to analyzing broadband providers’ incentives and ability to engage in practices that would limit the open Internet.”

The continued growth of the Internet, and the general health of the U.S. Web, content, app, device, and Internet services markets — all occurring in the absence of Net Neutrality regulation — more than mitigate the Commission’s theory of BSP incentives. While there is scant evidence for the theory of bad BSP behavior, there is abundant evidence that openness generally benefits all players throughout the Internet value chain. The Commission cannot ignore this evidence.

The rise of Twitch is a perfect example. In three short years, Twitch went from brand-new start-up to the fourth largest single source of traffic on the Internet. Google had previously signed a term sheet with Twitch, but so great was the momentum of this young, tiny company that it could command a more attractive deal from Amazon. At the time of its acquisition by Amazon, Twitch said it had 55 million unique monthly viewers (consumers) and more than one million broadcasters (producers), generating 15 billion minutes of content viewed a month. According to measurements by the network scientist and Deepfield CEO Craig Labovitz, only Netflix, Google’s YouTube, and Apple’s iTunes generate more traffic.

The Commission’s theory said providers of video content, because of the large bandwidth requirements compared to other content types, were especially vulnerable to bad BSP behavior. Twitch is just such an online video player, yet it achieved hyper-growth and spectacular financial success in the absence of Net Neutrality rules. A firm that didn’t exist at the time of the 2010 Order is born and blossoms to become an Internet giant, courted by at least two of the world’s very largest Internet companies — all in the short time that courts, commissions, and companies are haggling over the rules. This is just one of many pieces of evidence demonstrating start-up firms — specifically start-ups that consume massive amounts of bandwidth — are thriving on the Internet.

Another piece of recent evidence bolsters the case that BSPs have incentives to promote, and in fact maintain, openness. In the second quarter of 2014, cable broadband subscribers for the first time ever outnumbered cable TV subscribers. Broadband is now not just the cable industry’s best product; it is its biggest product. It is popular because consumers can access the diverse bounty of the Web and the Net, and subscribers are voting with their feet.

The health of the Internet economy is a major blow to the theory. In an attempted rebuttal, the Commission might argue that although enforceable rules were not in place, BSPs were operating in an environment in which new rules were a possibility. This possibility, the Commission might assert, encouraged good behavior. Perhaps. Yet new rules to combat or discourage anticompetitive or anti-consumer behavior are always on the table. And many general laws and rules already exist to protect competition and consumers no matter the industry. Perhaps the theory is far less powerful than the NPRM assumes.

The theory of future bad behavior continues to be just that. The Commission is grasping at “might be’s.” But the reality of a healthy Internet economy demonstrates the success of the open Internet every day. The Commission should more heavily weight the mountains of accumulating evidence that BSPs have major incentives to promote openness. Similarly, as evidence piles up against it, the Commission should discount its previous theory of BSP behavior. We may argue over the relative incentives for BSPs to constrain or promote Internet openness. But no legitimate rule making can ignore the substantial incentives in favor of openness.

Given the manifest success of the entire value chain, the Chairman’s proposed case-by-case review process, under Section 706, is far preferable to the intrusive omni-regulatory regime of Title II.

Wireless Is Different

The Commission has so far wisely chosen not to apply its heaviest Net Neutrality rules to wireless networks. But it has asked for comment on the proposal that it do so.

A new paper by Jeffrey H. Reed and Nishith D. Tripathi shows just how complex today’s mobile networks are — and how they require even more intensive network management than wired networks. It adds to the overwhelming testimony of the technical community that “wireless is different” and that wireless networks, businesses, and devices would be especially harmed by intrusive Net Neutrality rules.

The number of wireless devices is moving quickly past 10 billion connections. In several years, the Internet of Everything could grow to 30, 50, or even 100 billion devices, nearly all connected wirelessly. The sheer numbers will only exacerbate the existing complexity of wireless networking. “From millisecond to millisecond,” write Reed and Tripathi,

handsets with differing capabilities, consumers with different usage patterns, applications that utilize different aspects and capabilities of both the handset and the network, and content consumption, including video, must be integrated with the network and managed adroitly to deliver a world-class broadband experience for the customer. Now imagine that millisecond to millisecond process happening while the consumer is in motion, while the handsets vary in capability (think flip-phone to smartphone), while the available network changes from 3G to 4G and from one available spectrum band to another, while traffic moves into and out of a cell sector, and while spectrum capacity is limited. This entire process — the integration of all these different variables — is unique to mobile broadband.

Now imagine adding dozens of new types of devices to the network, generating and consuming many types of data, with varied capacity, latency, and jitter requirements. All interacting on and moving between networks using licensed and unlicensed spectrum. All posing increasingly intense challenges of radio interference and data congestion.

Like the example of Twitch, the mobile Internet is a demonstrable success story. It is, however, even more vulnerable to misguided regulation. The burden of proof is on those who would impose regulation to show that new rules would somehow improve wireless from its existing position of strength, and that new rules, contrary to the overwhelming witness of the technical community, would not harm the mobile arena.

Netflix, Mozilla, and Title II

Two of the most prominent and forceful advocates of new Internet regulation are Netflix, the movie and TV-show streaming firm, and Mozilla, maker of the Firefox web browser. Though differing on a few details, each organization has proposed regulating the Internet as a Title II monopoly telephone service.

We admire both organizations for their innovative contributions to the digital universe. Because they are leading the charge for the government to oversee the Internet as never before, however, it is important to understand — and to refute, where warranted — their positions. Here we select and scrutinize just a few of the technical and economic arguments and assertions from their first-round comments.

Mozilla says: the FCC should “recognize a new type of service” — a so-called “remote delivery service,” defined as the connection between an “edge provider” and a broadband ISP’s subscriber. This downstream link would be regulated as a common carrier under Title II.

Mozilla thinks defining a new remote delivery service can both avoid the fraught re-classification of traditional broadband links and also wall off the rest of the Internet from the very real burdens of Title II. It seems to us not just a bad idea substantively, but too clever for its own good. For starters, in the many-to-many world that Mozilla describes, everyone is an edge provider in some sense. That makes it hard to see how, despite Mozilla’s best intentions, every network link would not eventually get swallowed up by Title II. Even Netflix says, correctly, that the “universe of potential edge providers is extremely heterogeneous.”

Mozilla uses an analogy in which a “doorman in a high-end condominium” holds package deliveries for the condo residents. The broadband ISP is the doorman, in Mozilla’s story, and his only job is to forward the packages to the residents. He may not charge the sender of the package to speed the delivery to Mrs. Smith on the 18th floor, nor can he threaten to slow down the package absent payment. But ISPs are not passive doormen or toll booth operators, and their broadband policy statements all commit not to degrade anyone’s service. They invest $60 billion in the U.S. each year to build networks, data centers, software, and services. The analogy isn’t perfect, but an ISP is in reality more like FedEx. It takes a lot of money to build the infrastructure to transport packages, or bits, and customers pay for the service.

One of the motivations behind Mozilla’s “remote delivery service” definition, it says, is to protect everyone else in the ecosystem from the ravages of Title II. Such an admission is a deep self-indictment. It is difficult to see how the proposal is anything more than a tool to regulate one’s business rivals and/or suppliers — a decidedly non-neutral policy.

Mozilla says: a determination that bans prioritization “would not prevent network operators from seeking new revenue models, or enabling services that require higher standards for delivery. It would instead require these services to be separated from the access service and structured as specialized services. So long as such services do not generate congestion or degrade traffic for the access service, they would fall outside the scope of Title II classification proposed in the Mozilla petition.”

The 2010 Open Internet rules addressed this point and made room for specialized or managed services outside the scope of net neutrality. We suppose this is better than not allowing room for special services that might require higher levels of capacity, or lower latency tolerances, or other premium options. We addressed this carve out idea in Reply Comments in November of 2010:

“The Commission should consider several unintended consequences of moving down the path of explicitly defining, and then exempting, particular ‘specialized’ services while choosing to regulate the so-called ‘basic,’ ‘best-effort,’ or ‘entry level’ ‘open Internet.’

“Regulating the ‘basic’ Internet but not ‘specialized’ services will surely push most of the network and application innovation and investment into the unregulated sphere. A ‘specialized’ exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS [content, application, and service] providers and ISP service providers to target the ‘specialized’ category and thus shrink the scope of the ‘open Internet.’

“In fact, although specialized services should and will exist, they often will interact with or be based on the ‘basic’ Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not ‘the Internet’ is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing.

“Moreover, a regime of rigid quarantine would not be good for consumers. If a CAS provider or ISP has to build a new physical or logical network, segregate services and software, or develop new products and marketing for a specifically defined ‘specialized’ service, there would be a very large disincentive to develop and offer simple innovations and new services to customers over the regulated ‘basic’ Internet. Perhaps a consumer does not want to spend the extra money to jump to the next tier of specialized service. Perhaps she only wants the service for a specific event or a brief period of time. Perhaps the CAS provider or ISP can far more economically offer a compelling service over the ‘basic’ Internet with just a small technical tweak, where a leap to a full-blown specialized service would require more time and money, and push the service beyond the reach of the consumer. The transactions costs of imposing a ‘specialized’ quarantine would reduce technical and economic flexibility on both CAS providers and ISPs and, most crucially, on consumers.

“Or, as we wrote in our previous Reply Comments about a related circumstance, ‘A prohibition of the voluntary partnerships that are likely to add so much value to all sides of the market – service provider, content creator, and consumer – would incentivize the service provider to close greater portions of its networks to outside content, acquire more content for internal distribution, create more closely held “managed services” that meet the standards of the government’s “exclusions,” and build a new generation of larger, more exclusive “walled gardens” than would otherwise be the case. The result would be to frustrate the objective of the proceeding. The result would be a less open Internet.’

“It is thus possible that a policy seeking to maintain some pure notion of a basic ‘open Internet’ could severely devalue the open Internet the Commission is seeking to preserve.”

Mozilla says: it urges “the Commission to ban paid prioritization and to apply the same open Internet rules to mobile wireless access services as to fixed services.”

Even technicians who have supported robust net neutrality regulation say applying the rules to wireless would be a mistake. The 2010 Open Internet rules exempted wireless. And for good reason. Wireless is a tricky and constrained environment. Wireless technologies use all sorts of prioritization schemes to ration capacity on what are shared networks. Mozilla says it would allow for reasonable network management techniques. But a host of other technical and commercial arrangements could be put in jeopardy. For example, what about “sponsored data” plans where content firms like ESPN could subsidize a user? In January, AT&T announced a sponsored data template, and in the past month T-Mobile has partnered with several digital music providers. The Mozilla and Netflix proposals could ban such partnerships that provide value to all three parties — consumer, network, and content provider.

Mozilla says: “To contend that edge providers offer nothing of value to access service providers would go against the Commission’s core broadband tenets as well as common sense.”

No one contends this.

Mozilla says: failure to enact its favored policy could produce “an outcry from public interest organizations and technology companies citing promises that were broken.”

This is an odd justification for a push to regulate a healthy industry.

Netflix says: “There can be no doubt that Verizon owns and controls the interconnections that mediate how fast Netflix servers respond to a Verizon Internet access customer’s request.”

This is false. As Netflix correctly notes just paragraphs before, “It is called the Inter-net for a reason. That is, the Inter-net comprises interconnections between many autonomous networks.” An inter-connection between two networks means precisely that the two “autonomous” networks have agreed to terms to connect. By its nature, no single entity “owns and controls the interconnections.” It is a partnership. The journey of an Internet data packet, or stream of many packets, moreover, usually takes place over multiple networks, thus traversing several interconnections. In fact, factors outside the ownership and control of last mile ISPs are often most crucial to the quality and speed of Netflix streams (see “Netflix and the Net Neutrality Promotional Vehicle”).

Netflix says: “ISPs, not online content providers, set the universe of available pathways into their networks.”

This is only partially true. Yes, ISPs determine with whom they interconnect. But the existence of other successful networks sets the universe of possible pathways, and the economics and culture of the Net mean broadband ISPs want their customers to reach as much content as possible, so ISPs in general want to connect to lots of other networks. Regardless, Netflix has often chosen to use congested pathways into the broadband ISPs, even though a large number of other well known, capacious pathways (CDNs, transit providers) were also available. In most of the cases when Netflix’s service seemed slow, it was these poor network architecture choices that caused deterioration in “how fast Netflix servers respond[ed]” to an “Internet access customer’s request.”

Netflix says: “There is still one and only one way to reach Comcast’s subscribers: through Comcast.”

Netflix similarly has a monopoly in the market of Netflix customers.

Netflix says: “Prioritization has value only in a congested network.” The ability to prioritize “creates a perverse incentive for ISPs to forego network upgrades in order to give prioritization value.” And in a similar vein, “Prioritization is inherently a zero-sum practice.”

First, it must be said that paid priority is getting far too much attention. It’s not really the key question. We may use prioritization techniques for some applications in the future — HD video conferencing, gaming, remote medical procedures — but most broadband ISPs do not today prioritize much, if any, traffic on their last mile access links. It’s just not the central point of contention that so many have made it out to be.

Second, priority is a commonplace concept. It’s true, in a world of unlimited supply, priority doesn’t matter. In the real world, it does. We prioritize in every business setting, and in everyday life. We certainly prioritize on the Internet. Voice over IP packets get tagged. Websites and online video providers use content delivery networks (CDNs) for faster delivery. Financial firms build direct fiber links to speed stock market trades. The examples are endless: FedEx’s next morning delivery versus three-day ground. First class versus coach. Airplane versus automobile. Now versus later. It’s crucial that we’re allowed to pay more — and that we’re allowed to pay less when we don’t want or need immediacy.

Third, the argument is a bit circular. And it’s not supported by good economics. The theory is that ISPs will offer an increasingly dilapidated product to consumers so that they can charge content providers for fast lanes. But consumers do have other choices, and dilapidated products aren’t popular. We have multiple wireline choices, and multiple wireless choices that are increasingly robust substitutes. Are broadband service providers really eager to anger their huge customer base in order to make a few extra bucks from a relatively small number of content providers? The math doesn’t look good.

The FCC NPRM, however, asserted, without empirical or theoretical foundation, that ISPs have an incentive to underinvest, congest the network, and degrade service. The FCC did not contemplate, let alone give ample weight to, counterarguments and facts showing incentives working in just the opposite, and much happier, direction.

If we make broadband a highly regulated industry, however, we can expect less market entry, less competition, less investment, less new capacity. (See the experience of Europe today.) A world of artificial scarcity will prompt more stingy prioritization schemes (rationing) than a world of investment and innovation, though some forms of priority will exist in any world this side of heaven.

Priority, price discrimination, product differentiation — these things actually allow us to match consumers with their needs and to create an economically rational system that can support growth.

Contrary to blanket assertions, there are many small start-ups who might value various forms of paid priority, sponsored data, or premium services. Perhaps these tools will help them launch into markets faster than they otherwise would. They may not have the large in-house data centers and CDN networks of a Google or Netflix, so perhaps they utilize third party CDN services or establish partnerships or buy super-fast connections.

Lastly, priority is not zero-sum. To the extent consumers and businesses are allowed to pay for priority (and to save money when they don’t need it), the value of the entire system increases and allows further investment. Don’t force grandma who checks her email once a day to subsidize the affluent round-the-clock video gamer.


FCC — the Federal Crony Commission?

April 29th, 2014

Ok, maybe that’s a little harsh. But watch this video of T-Mobile CEO John Legere boasting that he’s “spent a lot of time in Washington lately, with the new chairman of the FCC,” and that “they love T-Mobile.”

Ah, spring. Love is in the air. Great for twenty-somethings, not so great for federal agencies. The FCC, however, is thinking about handing over valuable wireless spectrum to T-Mobile and denying it to T-Mobile’s rivals. This type of industrial policy is partially responsible for the sluggish economy.

From taxpayer subsidies for connected Wall Street banks to favored green energy firms with the right political allies, cronyism prevents the best firms from serving consumers with the best products in the most efficient way. Cronyism is good (at least temporarily) for a few at the top. But it hurts everyone else. Government favors ensure that bad ideas and business models are supported even if they would have proved wanting in a more neutral market. They transfer scarce taxpayer dollars to friends and family. They also hurt firms who aren’t fortunate enough to have the right friends in the right places at the right time. It’s hard to compete against a rival who has the backing of Washington. The specter of arbitrary government then hangs over the economy as firms and investors make decisions not on the merits but on a form of kremlinology — what will Washington do? In the case at hand, cronyism could blow up the whole spectrum auction, an act of wild irresponsibility in the service of a narrow special interest (we’ve written about it here, here, and here).

The U.S. has never been perfectly free of such cronyism, but our system was better than most and over the centuries attracted the world’s financial and human capital because investors and entrepreneurs knew that in the U.S. the best ideas and the hardest work tend to win out. Effort, smarts, and risk capital won’t be snuffed out by some arbitrary bureaucratic decision or favor. That was the stuff of Banana Republics — the reason financial and human capital fled those spots for America, preferring the Rule of Law to the Whim of Man.

The FCC’s prospective auction rules are perplexing in part because the U.S. mobile industry is healthy — world-leading healthy. More usage, faster speeds, plummeting prices, etc. Why risk interrupting that string of success? Economist Hal Singer shows that in the FCC’s voluminous reports on the wireless industry, it has failed to present any evidence of monopoly power that would justify its rigging of the spectrum auctions. On the other hand, an overly complex auction could derail spectrum policy for a decade.


GDP, Unemployment, and the ‘Quaternary Society’

March 19th, 2014

On February 28, the Bureau of Economic Analysis revised fourth quarter U.S. GDP growth downward to just 2.4% from an initial estimate of 3.2%. For 2013, the economy expanded just 1.9%, nearly a point lower than the lackluster 2.8% growth of 2012. Five years after the sharp downturn of 2008-09, we are still just limping along.

Granted, the stock market keeps making all-time highs. That is not insignificant, and in the past rising stocks often signaled growth ahead. Another important consideration weighing against depressingly slow growth is a critique of our economic measures themselves. Does gross domestic product (GDP), for example, accurately capture output, let alone value, technical progress, and overall wellbeing? A new book GDP: A Brief But Affectionate History, by Diane Coyle, examines some of the shortcomings of GDP-the-measure. And lots of smart commentary has been written on the ways that technologies that improve standards of living often don’t show up on official ledgers — from anesthesia to the massive consumer surpluses afforded by information technology. In addition, although income inequality is said by many to have grown, consumption inequality has, by many measures, substantially fallen. All true and interesting and important, and worthy of much further discussion at a later date.

For now, however, we still must pay the butcher, the baker, and the aircraft carrier maker — with real dollars. And the dollar economy is not growing nearly fast enough. We’ve sliced and diced the poor employment data a thousand ways these last few years, but one of the most striking recent figures is the fall in the portion of American men 25-54 who are working. Looking at this cohort tends to minimize the possible retirement and schooling factors that could skew the analysis. We simply presume that most able-bodied men in this range should be working. And the numbers are bad. As Binyamin Appelbaum of the New York Times Economix blog writes:

In February 2008, 87.4 percent of men in that demographic had jobs. Six years later, only 83.2 percent of men in that bracket are working.

Are these working-age men not working because they are staying home with children? Because they don’t have the right skills for today’s economy? Because the economy is not growing fast enough and creating enough opportunities? Because they are discouraged? Because policies have actively discouraged work in favor of leisure, or at least non-work?

The polymathic thinker Herman Kahn, back in his 1970s book The Next 200 Years, suggested another possibility. Kahn first recounted the standard phases of economic history: a primary economy that focused on extraction — agriculture, mining, forestry; a secondary economy focused on construction and manufacturing; and a tertiary economy, primarily composed of services, management, and knowledge work. But Kahn went further, pointing toward a “quaternary society,” where work would be beside the point and various types of personal fulfillment would rise in importance. Where the primary society conducted games against nature, the secondary society conducted games against materials, and the tertiary society pitted organizations against other organizations, people in the quaternary society would play “games with and against themselves, . . . each other, and . . . communities.” He said much of this activity would range from obsessions with gourmet cooking and interior design to hunting, hiking, and fishing, to exercise, adventures, and public campaigns and causes. He said quaternary activities would look a lot like leisure, or hobbies. He predicted many of us in the future would see this as “stagnation.”

If any of you have checked out twitch.tv, you might think Kahn was on to something. Twitch.tv is a website that broadcasts other people playing and commentating on video games in real-time. It appears to be an entirely “meta” activity. But twitch.tv is no tiny fringe curiosity. It is the fourth largest consumer of bandwidth on the Internet.

Is twitch.tv responsible for millions of American men dropping out of the labor force? No. But the Kahn hypothesis is, nevertheless, provocative and worth thinking about.

The possibility of a quaternary economy, however, depends in some measure on substantial wealth. And here one could make a case either way. Is it possible the large consumer surpluses of the modern knowledge economy allow us to provide for our basic needs quite easily and, if we are not driven by other ambitions or internal drives, live somewhat comfortably without sustained effort in a conventional job? Perhaps some of this is going on. Is America really so wealthy, however, that large portions of society — not merely the super wealthy — can drop out of work and pursue hobbies full time? Unlikely. There is evidence that many Baby Boomers near retirement, or even those who had retired, are working more than they’d planned to make up for lost savings. Kahn’s quaternary economy will have to wait.

I say we won’t know the answers to many of these questions until we remove the shackles around the economy’s neck and see what happens. If we start fresh with a simple tax code, substantially deregulate health, education, energy, and communications, and remove other barriers to work, investment, and entrepreneurship, will just 83% of working-age men continue choosing to work? And will GDP, as imperfect a measure as it is, limp along around 2%? (Charles Murray, presenting a new paper at a recent Hudson Institute roundtable on the future of American innovation, hit us with some seriously pessimistic cultural indicators. More on that next time.)

I doubt it. I don’t think humankind has permanently sloughed off its internal ambition toward improvement, growth, and (indirectly) GDP generation. I think new policy and new optimism could unleash an enormous boom.


Phone Company Screws Everyone: Forces Rural Simpletons and Elderly Into Broadband, Locks Young Suburbanites in Copper Cage

February 28th, 2014

Big companies must often think, damned if we do, damned if we don’t.


Netflix-Comcast Coverage

February 28th, 2014

See our coverage of Comcast-Netflix, which really began before any deal was announced. Two weeks ago we wrote about the stories that Netflix traffic had slowed, and we suggested a more plausible explanation (interconnection disputes and negotiations) than the initial suspicion (so-called “throttling”). Soon after, we released a short paper, long in the works, describing “How the Net Works” — a brief history of interconnection and peering. And this week we wrote about it all at TechPolicyDaily, Forbes, and USNews.

Netflix, Verizon, and the Interconnection Question – TechPolicyDaily.com – February 13, 2014

How the Net Works: A Brief History of Internet Interconnection – Entropy Economics – February 21, 2014

Comcast, Netflix Prove Internet Is Working – TechPolicyDaily.com – February 24, 2014

Netflix, Comcast Hook Up Sparks Web Drama – Forbes.com – February 26, 2014

Comcast, Netflix and the Future of the Internet – U.S. News & World Report – February 27, 2014

— Bret Swanson


How much would an iPhone have cost in 1991?

February 3rd, 2014

Amazing! An iPhone is more capable than 13 distinct electronics gadgets, worth more than $3,000, from a 1991 Radio Shack ad. Buffalo writer Steve Cichon first dug up the old ad and made the point about the seemingly miraculous pace of digital advance, noting that an iPhone incorporates the features of the computer, CD player, phone, “phone answerer,” and video camera, among other items in the ad, all at a lower price. The Washington Post’s tech blog The Switch picked up the analysis, and lots of people then ran with it on Twitter. Yet the comparison was, unintentionally, a huge dis to the digital economy. It massively underestimates the true pace of technological advance and, despite its humor and good intentions, actually exposes a shortcoming that plagues much economic and policy analysis.

To see why, let’s do a very rough, back-of-the-envelope estimate of what an iPhone would have cost in 1991.

In 1991, a gigabyte of hard disk storage cost around $10,000, perhaps a touch less. (Today, it costs around four cents ($0.04).) Back in 1991, a gigabyte of flash memory, which is what the iPhone uses, would have cost something like $45,000, or more. (Today, it’s around 55 cents ($0.55).)

The mid-level iPhone 5S has 32 GB of flash memory. Thirty-two GB, multiplied by $45,000, equals $1.44 million.

The iPhone 5S uses Apple’s latest A7 processor, a powerful CPU with an integrated GPU (graphics processing unit) that totals around 1 billion transistors and runs at a clock speed of 1.3 GHz, producing something like 20,500 MIPS (millions of instructions per second). In 1991, one of Intel’s top microprocessors, the 80486SX, oft used in Dell desktop computers, had 1.185 million transistors and ran at 20 MHz, yielding around 16.5 MIPS. (The Tandy computer in the Radio Shack ad used a processor not nearly as powerful.) A PC using the 80486SX processor at the time might have cost $3,000. The Apple A7, by the very rough measure of MIPS, which probably underestimates the true improvement, outpaces that leading-edge desktop PC processor by a factor of 1,242. In 1991, the price per MIPS was something like $30.

So 20,500 MIPS in 1991 would have cost around $620,000.

But there’s more. The 5S also contains the high-resolution display, the touchscreen, Apple’s own M7 motion processing chip, Qualcomm’s LTE broadband modem and its multimode, multiband broadband transceiver, a Broadcom Wi-Fi processor, the Sony 8 megapixel iSight (video) camera, the fingerprint sensor, power amplifiers, and a host of other chips and motion-sensing MEMS devices, like the gyroscope and accelerometer.

In 1991, a mobile phone used the AMPS analog wireless network to deliver kilobit voice connections. A 1.544 megabit T1 line from the telephone company cost around $1,000 per month. Today’s LTE mobile network is delivering speeds in the 15 Mbps range. Wi-Fi delivers speeds up to 100 Mbps (limited, of course, by its wired connection). Safe to say, the iPhone’s communication capacity is at least 10,000 times that of a 1991 mobile phone. Almost the entire cost of a phone back then was dedicated to merely communicating. Say the 1991 cost of mobile communication (only at the device/component level, not considering the network infrastructure or monthly service) was something like $100 per kilobit per second.

Fifteen thousand Kbps (15 Mbps), multiplied by $100, is $1.5 million.

Considering only memory, processing, and broadband communications power, duplicating the iPhone back in 1991 would have (very roughly) cost: $1.44 million + $620,000 + $1.5 million = $3.56 million.
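
For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the back-of-the-envelope totals above; the unit prices are the rough 1991 figures assumed in this post, not precise historical data.

```python
# Back-of-the-envelope 1991 cost of iPhone 5S-class capability, using the
# rough unit prices assumed in the text above.

FLASH_DOLLARS_PER_GB_1991 = 45_000   # flash memory, ~$45,000 per GB
DOLLARS_PER_MIPS_1991 = 30           # processing, ~$30 per MIPS
DOLLARS_PER_KBPS_1991 = 100          # mobile communication, ~$100 per kbit/s

memory_cost = 32 * FLASH_DOLLARS_PER_GB_1991       # 32 GB of flash
processing_cost = 20_500 * DOLLARS_PER_MIPS_1991   # ~20,500 MIPS
comms_cost = 15_000 * DOLLARS_PER_KBPS_1991        # 15 Mbps = 15,000 kbps

total = memory_cost + processing_cost + comms_cost
print(f"Memory:        ${memory_cost:,}")      # $1,440,000
print(f"Processing:    ${processing_cost:,}")  # $615,000 (~$620,000)
print(f"Communication: ${comms_cost:,}")       # $1,500,000
print(f"Total:         ${total:,}")            # $3,555,000, roughly $3.56 million
```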

This doesn’t even account for the MEMS motion detectors, the camera, the iOS operating system, the brilliant display, or the endless worlds of the Internet and apps to which the iPhone connects us.

This account also ignores the crucial fact that no matter how much money one spent, it would have been impossible in 1991 to pack that much technological power into a form factor the size of the iPhone, or even a refrigerator.*

Tim Lee at The Switch noted the imprecision of the original analysis and correctly asked how typical analyses of inflation can hope to account for such radical price drops. (Harvard economist Larry Summers recently picked up on this point as well.)

But the fact that so many were so impressed by an assertion that an iPhone possesses the capabilities of $3,000 worth of 1991 electronics products — when the actual figure exceeds $3 million — reveals how fundamentally difficult it is to think in exponential terms.

Innovation blindness, I’ve long argued, is a key obstacle to sound economic and policy thinking. And this is a perfect example. When we make policy based on today’s technology, we don’t just operate mildly sub-optimally. No, we often close off entire pathways to amazing innovation.

Consider the way education policy has mostly enshrined a 150-year-old model, and in recent decades has thrown more money at the same broken system while blocking experimentation. The other day, the venture capitalist Marc Andreessen (@pmarca) noted in a Twitter missive the huge, but largely unforeseen, impact digital technologies are having on this industry that so desperately needs improvement:

“Four biggest K-12 education breakthroughs in last 20 years: (1) Google, (2) Wikipedia, (3) Khan Academy, (4) Wolfram Alpha.”

Maybe they are the four biggest breakthroughs of the last 50 years, not just the last 20. Point made, nonetheless. California is now closing down “coding bootcamps” — courses that teach people how to build apps and other software — because many of them are not state certified. This is crazy.

The importance of understanding the power of innovation applies to health care, energy, education, and fiscal policy, but nowhere is it more applicable than in Internet and technology policy, which is, at the moment, the subject of a much-needed rethink by the House Energy and Commerce Committee.

— Bret Swanson

* To be fair, we do not account for the fact that back in 1991, had engineers tried to design and build chips and components with faster speeds and greater capacities than the consumer items mentioned, they could have in some cases scaled the technology in a more efficient manner than, for example, simply adding up consumer microprocessors totaling 20,500 MIPS. On the other hand, the extreme volumes of the consumer products in these memory, processing, and broadband communications categories are what make the price drops possible. So this acknowledgment doesn’t change the analysis too much, if at all.


Reaction to “net neutrality” ruling

January 14th, 2014

My AEI tech policy colleagues and I discussed today’s net neutrality ruling, which upheld the FCC’s basic ability to oversee broadband but vacated the two major, specific regulations.


Federal Court strikes down FCC “net neutrality” order

January 14th, 2014

Today, the D.C. Federal Appeals Court struck down the FCC’s “net neutrality” regulations, ruling that the agency cannot regulate the Internet as a “common carrier” (that is, the way we used to regulate telephones). Here, from a pre-briefing I and several AEI colleagues did for reporters yesterday, is a summary of my statement:

Chairman Wheeler has emphasized the importance of the Open Internet. We agree. The Internet is more open than ever — we’ve got more people, connected via more channels and more devices, to more content and more services than ever. And we will continue to enjoy an Open Internet because it benefits all involved — consumers, BSPs, content companies, software and device makers.

Chairman Wheeler has also emphasized recently that he believes innovation in multi-sided markets is important. At his Ohio State speech, he said we should allow experimentation, and when pressed on this apparent endorsement of multi-sided market innovation, he did not back down.

AT&T’s “sponsored data” program is a good example of such multi-sided market innovation, but one that many Net Neutrality supporters say violates NN. Sponsored Data, in which a content firm might pay for a portion of the data used by a consumer, increases total capacity, expands consumer choice, and would help keep prices lower than they would otherwise be. It also offers content firms a way to reach consumers. And it helps pay for the cost of expensive broadband infrastructure. It is win-win-win.

Firms have already used this method — Amazon, for example, pays for the data downloads of Kindle ebooks.

Across the landscape, allowing technical and business model innovation is important to keep delivering diverse products to consumers at the best prices. Prohibiting “sponsored data” or tiered data plans or content partnerships or quality-of-service-based networking will reduce the flexibility of networks, reduce product differentiation, and reduce consumer choice. A rule that requires only one product or only one price level for a range of products could artificially inflate the price that many consumers pay. Low-level users may end up paying for high-end users. Entire classes of products might not come into being because a rule bans a crucial partnership that would have helped the product at its inception. Network architectures that can deliver better performance at lower prices might not arise.

Common carriage style regulation is not appropriate for the Internet. The Internet is a fast changing, multipurpose network, built and operated by numerous firms, with many types of data, content, products, and services flowing over it, all competing and cooperating in a healthy and dynamic environment. Old telephone style regulation, meant to regulate a monopoly utility that used a single purpose network to deliver one type of service, would be a huge (and possibly catastrophic) step backward for what is today a vibrant Internet economy.

The Court, though not ruling on the wisdom of Net Neutrality, essentially agreed and vacated the old-style common carriage rules. It’s a near-term win for the Internet. The court’s grant to the FCC of regulatory authority over the Internet, save common carriage, is, however, potentially problematic. We don’t know how broad this grant is or what the FCC might do with it. A fundamental rethink of our communications laws and regulations may thus be in order.

Why the fuss over “sponsored data”?

January 6th, 2014

Today, at the Consumer Electronics Show in Las Vegas, AT&T said it would begin letting content firms — Google, ESPN, Netflix, Amazon, a new app, etc. — pay for a portion of the mobile data used by consumers of this content. If a mobile user has a 2 GB plan but likes to watch lots of Yahoo! news video clips, which consume a lot of data, Yahoo! can now subsidize that user by paying for that data usage, which won’t count against the user’s data limit.

Lots of people were surprised — or “surprised” — at the announcement and reacted violently. They charged AT&T with “double dipping,” with imposing “taxes,” and, of course, with the all-purpose net neutrality violation.

But this new sponsored data program is typical of multisided markets where a platform provider offers value to two or more parties — think of magazines that charge both subscribers and advertisers. We addressed this topic before the idea was a reality. Back in June 2013, we argued that sponsored data would make lots of mobile consumers better off and no one worse off.

Two weeks ago, for example, we got word ESPN had been talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

Sounds like a reasonable deal all around. But not to everyone. “This is what a net neutrality violation looks like,” wrote Public Knowledge, a key backer of Internet regulation.

The idea that ESPN would pay to exempt its bits from data caps offends net neutrality’s abstract notion that all bits must be treated equally. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data — plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Then, because ESPN content is popular and bandwidth-hungry, I am worse off, especially if I am not an ESPN watcher.
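
To make the arithmetic concrete, here is a minimal sketch, in Python, of how a sponsored-data exemption changes what counts against a subscriber’s cap; the sponsor list and usage figures are invented for illustration, not drawn from any carrier’s actual billing system.

```python
# Toy model of a data plan with sponsored ("zero-rated") content.
# Usage from sponsored sources does not count against the subscriber's cap.

PLAN_CAP_GB = 2.0
SPONSORED_SOURCES = {"espn.com"}   # hypothetical sponsor list

usage = [
    ("espn.com", 3.5),    # GB streamed from a sponsoring content provider
    ("other",    1.2),    # GB of ordinary browsing and video
]

billable = sum(gb for source, gb in usage if source not in SPONSORED_SOURCES)
sponsored = sum(gb for source, gb in usage if source in SPONSORED_SOURCES)

print(f"Counted against the {PLAN_CAP_GB} GB cap: {billable} GB")
print(f"Paid for by the sponsor instead of the user: {sponsored} GB")
print("Over cap!" if billable > PLAN_CAP_GB else "Within cap.")
```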

The critics’ real worry, then, is that ESPN, by virtue of its size, could gain an advantage over some other sports content provider who chose not to offer a similar uncapped service. But is this the government’s role — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms? This was our warning. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

What if magazines were barred from carrying advertisements? They’d have to make all their money from subscribers and thus (attempt to) charge much higher prices or change their business model. Consumers would lose, either through higher prices or less diversity of product offerings. And advertisers, deprived of an outlet to reach an audience, would lose. That’s what we call a lose-lose-lose proposition.

Maybe sponsored data will take off. Maybe not. It’s clear, however, in the highly dynamic mobile Internet business, we should allow such voluntary experiments.


Crisis of Complexity

December 9th, 2013

[W]e have these big agencies, some of which are outdated, some of which are not designed properly . . . . The White House is just a tiny part of what is a huge, widespread organization with increasingly complex tasks in a complex world.

That was President Obama, last week, explaining Obamacare’s failed launch. We couldn’t have said it better ourselves.

Where Washington thinks this is a reason to give itself more to do, with more resources, however, we see it as a blaring signal of overreach.

The Administration now says Healthcare.gov is operating with “private sector velocity and effectiveness.” But why seek to further governmentalize one-seventh of the economy if the private sector is faster and more effective than government?

Meanwhile, the New York Times notes that

The technology troubles that plagued the HealthCare.gov website rollout may not have come as a shock to people who work for certain agencies of the government — especially those who still use floppy disks, the cutting-edge technology of the 1980s.

Every day, The Federal Register, the daily journal of the United States government, publishes on its website and in a thick booklet around 100 executive orders, proclamations, proposed rule changes and other government notices that federal agencies are mandated to submit for public inspection.

So far, so good.

It turns out, however, that the Federal Register employees who take in the information for publication from across the government still receive some of it on the 3.5-inch plastic storage squares that have become all but obsolete in the United States.

Floppy disks make us chuckle. But the costs of complexity are all too real.

A Bloomberg study found the six largest U.S. banks, between 2008 and August of this year, spent $103 billion on lawyers and related legal expenses. These costs pale compared to the far larger economic distortions imposed by metastasizing financial regulation. Even Barney Frank is questioning whether his signature law, Dodd-Frank, is a good idea. The bureaucracy’s decision to push regulations intended for big banks onto money managers and mutual funds seems to have tipped his thinking.

This is not an aberration. This is what happens with vast, complex, ambiguous laws, which ask “huge, widespread” bureaucracies to implement them.

It is the norm of today’s sprawling Administrative State and of Congress’s penchant for 2,000-page wish lists, which ineluctably empower that Administrative State.

We resist, however, the idea that the problem is merely “outdated” or “inefficient” bureaucracy.

We do not need better people to administer these “laws.” Laws and regulations this extensive and ambiguous are inherently political. Even the best managers, seeking efficient and effective outcomes based on common-sense readings and resisting political tampering, would still face big problems implementing conflicting and economically irrational rules. Regardless, the goal is not effective management — it is political control.

Agency “reform” is not the answer, although in most cases reform is preferable to no reform. Even reformed agencies do not possess the information to manage a “complex world.” Anyway, “competent” management is not what the political branches want. Agencies routinely evade existing controls — such as procurement rules — when convenient. The largest Healthcare.gov contractor, for example, reportedly got the work without any competing bids. That is not an oversight; it is a decision.

The laws and rules are uninterpretable by the courts. Depending on which judges hear the cases, we get dramatically and unpredictably divergent analyses, or the type of baby splitting Chief Justice Roberts gave us on Obamacare. Judges thus end up either making their own law or throwing the question back into the political arena.

Infinite complexity of law means there is no law.

“With great power,” Peter Parker’s (aka Spiderman’s) uncle told us, “comes great responsibility.” For Washington, however, ambiguity and complexity are features, not bugs. Ambiguity and complexity promote control without accountability, power without responsibility.

The only solution to this crisis of complexity is to reform the very laws, rules, scope, and aims of government itself.

In a paper last spring called “Keep It Simple,” we highlighted two instances — one from the labor markets and one from the capital markets — where even the most well-intended rules yielded catastrophic results. We showed how the interactions among these rules and the supporting bureaucracies produced unintended consequences. And we outlined a basic framework for assessing “good rules and bad rules.”

As our motto and objective, we adopted Richard Epstein’s aspiration of “simple rules for a complex world.” Which, you will notice, is just the opposite of the problem so incisively outlined by the President — Washington’s failed attempts to perform “complex tasks in a complex world.”

As we wrote elsewhere,

The private sector is good at mastering complexity and turning it into apparent simplicity — it’s the essence of wealth creation. At its best, the government is a neutral arbiter of basic rules. The Administration says it is ‘discovering’ how these ‘complicated’ things can blow up. We’ll see if government is capable of learning.

Share/Bookmark

Digital Dynamism

November 13th, 2013

See our new 20-page report – Digital Dynamism: Competition in the Internet Ecosystem:

The Internet is altering the communications landscape even faster than most imagined.

Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.

The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.

The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.

  • Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread wireless Wi-Fi nodes then extend these broadband connections.
  • Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, ecommerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
  • Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.

The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.

Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.

Share/Bookmark

U.S. Share of Internet Traffic Grows

October 10th, 2013

Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of  connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).

We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)
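To make the metric concrete, here is a minimal sketch of the calculation. The figures are illustrative placeholders, not the actual Cisco or ITU numbers.

    # Traffic-per-user sketch. Inputs are hypothetical placeholders,
    # not the Cisco VNI or ITU figures used in our reports.
    monthly_traffic_pb = {"United States": 18_000, "Western Europe": 14_000}  # petabytes per month (hypothetical)
    internet_users_m = {"United States": 245, "Western Europe": 340}          # millions of users (hypothetical)

    for region, traffic_pb in monthly_traffic_pb.items():
        users = internet_users_m[region] * 1_000_000
        gb_per_user = traffic_pb * 1_000_000 / users   # 1 petabyte = 1,000,000 gigabytes
        print(f"{region}: {gb_per_user:.0f} GB per user per month")

Per capita figures work the same way, with total population in the denominator instead of Internet users.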

If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and around two and a half times that of Western Europe.

Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.

Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.

Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.

Share/Bookmark

The Need For Speed: How’s U.S. Broadband Doing?

September 29th, 2013

My TechPolicyDaily colleague Roslyn Layton has begun a series comparing the European and U.S. broadband markets.

As a complement to her work, I thought I’d address a common misperception — the notion that American broadband networks are “pathetically slow.” Backers of heavier regulation of the communications market have used this line over the past several years, and for a time it achieved a sort of conventional wisdom. But is it true? I don’t think so.

Real-time speed data collected by the Internet infrastructure firm Akamai shows U.S. broadband is the fastest of any large nation, and trails only a few tiny, densely populated countries. Akamai lists the top 10 nations in categories such as average connection speed; average peak speed; percent of connections with “fast” broadband; and percent of connections with broadband. The U.S., for example, ranks eighth among nations in average connection speed. And this is the number that is oft quoted. (This is a bit better than the no-longer-oft-used broadband penetration figures, which perennially showed the U.S. further down the list, at 15th or 26th place, for example.) Nearly all the nations on these speed lists, however, with the exception of the U.S., are small, densely populated countries where it is far easier and more economical to build high-speed networks.

How to fix this? Well, Akamai also lists the top 10 American states in these categories. Because states are smaller, like the small nations that top the global list, they are a more appropriate basis for comparison. Last winter I combined the national and state figures and compiled a more appropriate comparative list. Using the newest data, I’ve updated the tables, which show that U.S. states (highlighted in green) dominate.

[Tables: Average Connection Speed; Average Peak Connection Speed; Percent Above 10 Mbps; Percent Above 4 Mbps]

Summarizing:

  • Ten of the top 13 entities for “average connection speed” are U.S. states.
  • Ten of the top 15 in “average peak connection speed” are U.S. states.
  • Ten of the top 12 in “percent of connections above 10 megabits per second” are U.S. states.
  • Ten of the top 20 in “percent of connections above 4 megabits per second” are U.S. states.

U.S. states thus account for 40 of the top 60 slots — or two-thirds — in these measures of actual global broadband speeds.
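For anyone who wants to reproduce the method, here is a rough sketch of the merge-and-count step. The speeds are made-up stand-ins, not Akamai’s figures.

    # Merge national and U.S.-state rankings, then count the state entries
    # among the top slots. Speeds below are hypothetical, not Akamai data.
    nations = [("South Korea", 13.3), ("Japan", 11.7), ("Switzerland", 11.0), ("Netherlands", 10.9)]
    states = [("Virginia", 13.7), ("Delaware", 13.1), ("Massachusetts", 12.7), ("Washington", 12.5)]

    combined = [(name, mbps, "U.S. state") for name, mbps in states] + \
               [(name, mbps, "nation") for name, mbps in nations]
    combined.sort(key=lambda row: row[1], reverse=True)

    top = combined[:6]
    for rank, (name, mbps, kind) in enumerate(top, start=1):
        print(f"{rank}. {name} ({kind}): {mbps} Mbps")
    state_count = sum(1 for _, _, kind in top if kind == "U.S. state")
    print(f"{state_count} of the top {len(top)} entries are U.S. states")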

This is not a comprehensive analysis of the entire U.S. Less populated geographic areas, where it is more expensive to build networks, don’t enjoy speeds this high. But the same is true throughout the world.

Share/Bookmark

Discussing Broadband and Economic Growth at AEI

September 22nd, 2013

On Tuesday this week, the American Enterprise Institute launched an exciting new project — the Center for Internet, Communications, and Technology. I was happy to participate in the inaugural event, which included talks by CEA chairman Jason Furman and Rep. Greg Walden (R-OR). We discussed broadband’s potential to boost economic productivity and focused on the importance and key questions of wireless spectrum policy. See the video below:

Share/Bookmark

Risk and Resilience

September 11th, 2013
The U.S. Chamber of Commerce Foundation’s summer edition of the Business Horizon Quarterly is a special issue on the topic of “Resilience.” Our contribution is an essay called “Long Live the Risk Takers.”

Wealth, however, can be a double-edged sword. With wealth comes resilience and thus an increased capacity to take risk. More risk can lead to further riches. Yet greater wealth also increases potential losses. In other words, we have a lot more to gain and a lot more to lose.

Perhaps it is not surprising then that many modern elites and policymakers see danger around every corner—from terrorism to climate change to financial calamity. In one sense, an obsession with risk is a luxury of wealth. It is prudent to identify present shortcomings and contemplate future problems and attempt to avoid them. Preventing hunger, unemployment, bomb plots, wars, and financial panics is a good thing.

What happens, though, when we develop a hyper-focus on shortcomings and potential losses? What happens when we seek a public policy remedy for every perceived problem? This kind of obsession with risk, danger, and downside may be counterproductive. It may exacerbate known problems and unleash dangers never dreamed of.  . . . read the entire article.

Share/Bookmark

A Decade Later, Net Neutrality Goes To Court

September 9th, 2013

Today the D.C. Federal Appeals Court hears Verizon’s challenge to the Federal Communications Commission’s “Open Internet Order” — better known as “net neutrality.”

Hard to believe, but we’ve been arguing over net neutrality for a decade. I just pulled up some testimony George Gilder and I prepared for a Senate Commerce Committee hearing in April 2004. In it, we asserted that a newish “horizontal layers” regulatory proposal, then circulating among comm-law policy wonks, would become the next big tech policy battlefield. Horizontal layers became net neutrality, the Bush FCC adopted the non-binding Four Principles of an open Internet in 2005, the Obama FCC pushed through actual regulations in 2010, and now today’s court challenge, which argues that the FCC has no authority to regulate the Internet and that, in fact, Congress told the FCC not to regulate the Internet.

Over the years we’ve followed the debate, and often weighed in. Here’s a sampling of our articles, reports, reply comments, and even some doggerel:

— Bret Swanson

Share/Bookmark

Simple Rules For Spectrum

July 18th, 2013

Washington is getting closer to unleashing more spectrum to fuel the digital economy and stay ahead of capacity constraints that will stymie innovation and raise prices for consumers. Ahead of the July 23 Congressional hearing on spectrum auctions, we should keep a couple of things in mind. First and foremost, we need “Simple Rules for a Complex World.” It’s a basic idea that should apply to all policymaking, but especially in the exceedingly complex and fast-moving digital ecosystem.

A number of firms are seeking special rules that would complicate — and possibly undermine — the auctions. They want to exclude some rival firms from bidding in the auctions. They are suggesting exclusions, triggers, “one-third caps,” and other Rube Goldberg mechanisms they hope will tip the auction scales in their favor.

Using examples from the labor markets and capital markets, we showed in a recent paper that complex policies — even though well intended and designed by smart people — often yield perverse results. Laws and regulations should be few, simple, and neutral. Those advocating the special auction rules favor a process that is complex and biased.

They are also using complicated arguments to back their preferred complicated process. Some are asserting a “less is more” theory of auctions — the idea that fewer bidders can yield higher auction revenues. If it seems counterintuitive, it is. Their theory is based on a very specific, hypothetical auction where a dominant monopolist might scare off a potential new market entrant from bidding at all and walk away with the underpriced auction items. This hypothetical does not apply to America’s actual wireless spectrum market.

The U.S. has four national mobile service providers and a number of regional providers. We have lots of existing players, most of whom plan to bid in the auctions. As all the theory and evidence shows, in this situation, an open process with more bidders means a better auction — spectrum flowing to its highest value uses and more overall revenue.

Some studies show a policy excluding the top two bidders in the auction could reduce revenue by up to 40% — or $12 billion. This would not only prevent spectrum from flowing to its best use but could also jeopardize the whole purpose of the incentive auction, because lower prices could discourage TV broadcasters from “selling” their valuable airwaves. If the auction falls short, that means less spectrum, less mobile capacity, slower mobile broadband, and higher consumer prices. (See our recent Forbes article on the topic.)
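Those cited figures also imply the scale of the auction at stake; a quick back-of-the-envelope check:

    # Implied auction scale, from the figures cited above (approximate).
    revenue_loss = 12e9    # dollars: the cited potential loss
    loss_share = 0.40      # the cited 40% reduction
    print(f"Implied expected revenue: ${revenue_loss / loss_share / 1e9:.0f} billion")  # ~$30 billion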

Fortunately, several Members of Congress are adhering to the Simple Rules idea. They want to keep the spectrum auction open and competitive. They think this will yield the most auction revenue and ensure the maximum amount of underutilized broadcast spectrum is repurposed for wireless broadband.

The argument for simple auction rules is simple. The argument for complex auction rules is very complicated.

Share/Bookmark

Congress Asking Questions on IP Pools

June 11th, 2013

A couple weeks ago I singled out one case (among many) that showed how important it is to improve our intellectual property institutions. Here’s that Forbes article.

I see Congress is now asking questions, too.

[Embedded document: Congressional letter to Asst. AG Baer, 04.18.13]

Share/Bookmark

Regulatory Complexity Gone Wild

May 30th, 2013

The complexities of the Affordable Care Act (aka Obamacare) are multiple, metastasizing, and increasingly well-known. Less known is an additional layer of health care regulation slated for implementation next year: the system by which doctors and hospitals code for conditions, injuries, and treatments. By way of illustration, in the old system, a broken arm might get the code 156; pneumonia might be 397. The new system is much more advanced. As Ben Domenech notes:

In all, the new system, known as ICD-10, will boast 140,000 codes, a near-eight-fold rise over the mere 18,000 codes in ICD-9. It is a good example of the way bureaucracies grow in size and complexity in an attempt to match the complexity of society and the economy. This temptation, however, is usually perverse.

Complexity in the economy means new technologies, more specialized goods and services, and more consumer and vocational choice. Economic complexity, however, is built upon a foundation of simplicity – clear, basic rules and institutions. Simple rules encourage experimentation, promote long-term investments and entrepreneurial ventures, and allow the flexibility to drive and accommodate diversity.

Complex rules, on the other hand, often lead to just the opposite: less experimentation, investment, entrepreneurship, diversity, and choice.

It is difficult to quantify the effects of the metastasizing Administrative State. It is impossible to calculate, say, the cost of regulations that prohibit, discourage, or delay innovation. Likewise, what is the cost of the regulations that, arguably, helped cause the Financial Panic of 2008 and its policy fallout? No one can say with precision. For twenty years, though, the Competitive Enterprise Institute has catalogued regulatory complexity as well as anyone, and its latest report is astonishing.

Federal regulation, CEI’s latest “10,000 Commandments” survey finds, costs the U.S. economy some $1.8 trillion annually. That’s more than 10% of GDP, or nearly $15,000 per household. These estimates largely predate the implementation of the ACA, Dodd-Frank, and new rounds of EPA intervention. In other words, it’s only getting worse.
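A quick back-of-the-envelope check of those magnitudes, using approximate figures:

    # Rough check of the CEI magnitudes cited above; inputs are approximate.
    regulatory_cost = 1.8e12   # CEI estimate, dollars per year
    us_gdp = 16.2e12           # approximate U.S. GDP, dollars
    us_households = 117e6      # approximate number of U.S. households

    print(f"Share of GDP: {regulatory_cost / us_gdp:.1%}")                  # roughly 11%
    print(f"Cost per household: ${regulatory_cost / us_households:,.0f}")   # roughly $15,000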

The legal scholar Richard Epstein argues that

The dismal performance of the IRS is but a symptom of a much larger disease which has taken root in the charters of many of the major administrative agencies in the United States today: the permit power. Private individuals are not allowed to engage in certain activities or to claim certain benefits without the approval of some major government agency. The standards for approval are nebulous at best, which makes it hard for any outside reviewer to overturn the agency’s decision on a particular application.

That power also gives the agency discretion to drag out its review, since few individuals or groups are foolhardy enough to jump the gun and set up shop without obtaining the necessary approvals first. It takes literally a few minutes for a skilled government administrator to demand information that costs millions of dollars to collect and that can tie up a project for years. That delay becomes even longer for projects that need approval from multiple agencies at the federal or state level, or both.

The beauty of all of this (for the government) is that there is no effective legal remedy. Any lawsuit that protests the improper government delay only delays the matter more. Worse still, it also invites that agency (and other agencies with which it has good relations) to slow down the clock on any other applications that the same party brings to the table. Faced with this unappetizing scenario, most sophisticated applicants prefer quiet diplomacy to frontal assault, especially if their solid connections or campaign contributions might expedite the application process. Every eager applicant may also be stymied by astute competitors intent on slowing the approval process down, in order to protect their own financial profits. So more quiet diplomacy leads to further social waste.

Epstein argues the FDA, EPA, FCC, and other agencies all use this permit power to control firms, industries, and people beyond any reasonable belief they are providing a net benefit to society.

The agencies are guilty of overreach and promoting their own metastasis. Yet without Congress and the President, they would have little or no power. Congresses and Presidents increasingly pass thousand-page laws that ask agencies to produce tens of thousands of pages of attendant rules and regulations. Complexity grows. Accountability is lost. The economy suffers. And the corrective paths, which fix mistakes and promote renewal in the rest of the economy and society, are blocked. We then pile on tomorrow’s complexity to “fix” the flaws created by yesterday’s complexity.

The hyper-regulation of the economy is not merely an annoyance, not merely about inefficient paperwork. It is damaging our innovative and productive capacity — and thus employment, the budget, and our standard of living.

The McKinsey Global Institute helps us understand why this matters from a macro perspective. McKinsey chose a dozen existing and emerging technologies and estimated their potential economic impact in the year 2025. It found industries like the mobile Internet, cloud computing, self-driving cars, and genomics could produce economic benefits of up to $33 trillion worldwide. The operative word, however, is “could.” Innovation is all about what’s new, and regulation is often about disallowing or discouraging what’s new. The growing complexity of the regulatory state is therefore bound to block some number of these innovations and thus likely to leave us with a simpler, and poorer, world.

— Bret Swanson

Share/Bookmark

Net ‘Neutrality’ or Net Dynamism? Easy Choice.

May 14th, 2013

Consumers beware. A big content company wants to help pay for the sports you love to watch.

ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, longtime net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”

The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that, had net neutrality existed at the time, the outlawing of pay-for-prioritization would have banned the rise of content delivery networks (CDNs), which have fundamentally improved the user experience for viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that we would continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.

So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.

So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has given the FCC no authority to regulate the Internet. The FCC invented this job for itself and is now being challenged in court.

Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.

— Bret Swanson

Share/Bookmark

Debt Dynamics and the Growth Imperative

April 23rd, 2013

A critique of Carmen Reinhart and Ken Rogoff’s paper examining debt’s effect on growth dominated the economic news over the last week. Reinhart and Rogoff’s 2010 offering, Growth in a Time of Debt, compiled lots of data on debt-to-GDP ratios from nations around the globe and found that higher debt ratios, especially those at 90% or above, tended to be associated with slower growth. Three UMass-Amherst economists, however, noticed an error in R&R’s spreadsheet and argued that it (along with two other statistical choices) significantly altered the results. R&R acknowledged the spreadsheet error in a reply but defended the thrust of their work and its conclusions.

Champions of government spending jumped on the critique, charging that the R&R paper had given aid and comfort to widespread “austerity” policies and that their now-discredited ideas had sunk the world economy. They dubbed it “The Excel Depression.”

Others who have reviewed all the evidence, however, found R&R’s research holds up rather well. Harvard’s Greg Mankiw basically agrees and thinks

The coding error in Reinhart and Rogoff has gotten a lot more media attention than it deserves.

Then there is the entertaining contrarian Nassim Nicholas Taleb, who, in a tweet, goes further:

The coding error, I agree, is not remotely dispositive in this very big debate. So where does that leave us? We’ve still got these enormous debts, slow growth, and a still-yawning intellectual chasm on all the big public finance and monetary policy issues. As some have pointed out, a problem with this type of research is causation. Even if R&R are correct about the correlation, in other words, does high debt cause slow growth, or does slow growth cause high debt? These questions really get to the heart of economics and, like Taleb, I’m skeptical conventional macro is very enlightening.

We’ve been debating these very topics for centuries, or millennia. In The History of England, for example, Thomas Babington Macaulay reminded us of his nation’s apparently insurmountable debts following the interminable wars of the seventeenth and eighteenth centuries.*

When the great contest with Lewis the Fourteenth was finally terminated by the Peace of Utrecht the nation owed about fifty million; and that debt was considered not merely by the rude multitude, not merely by fox hunting squires and coffee-house orators, but by acute and profound thinkers, as an encumbrance which would permanently cripple the body politic . . . .

Soon war again broke forth; and under the energetic and prodigal administration of the first William Pitt, the debt rapidly swelled to a hundred and forty million. As soon as the first intoxication of victory was over, men of theory and men of business almost unanimously pronounced that the fatal day had now really arrived.

David Hume said the nation’s madness exceeded that of the Crusades. Among the intellectuals, only Edmund Burke demurred. “Adam Smith,” Macaulay continued,

saw a little, and but a little further. He admitted that, immense as the pressure was, the nation did actually sustain it and thrive. . . . But he warned his countrymen even a small increase [in debt] might be fatal.

Thus Britain’s attempt to tax its American colonies to pay down its debts. And thus another war — the Revolutionary — and thus another 100 million in new debts. More wars stemming from the French Revolution pushed Britain’s debts to 800 million, surely beyond any possibility of repayment.

Yet like Addison’s valetudinarian, who continued to whimper that he was dying of consumption till he became so fat that he was shamed into silence, [England] went on complaining that she was sunk in poverty till her wealth showed itself by tokens which made her complaints ridiculous . . . .

The beggared, the bankrupt society not only proved able to meet all its obligations, but while meeting these obligations, grew richer and richer so fast that the growth could almost be discerned by the eye . . . . While shallow politicians were repeating that the energies of the people were borne down by the weight of public burdens, the first journey was performed by steam on a railway. Soon the island was intersected by railways. A sum exceeding the whole amount of the national debt at the end of the American war was, in a few years, voluntarily expended by this ruined people on viaducts, tunnels, embankments, bridges, stations, engines. Meanwhile, taxation was almost constantly becoming lighter and lighter, yet still the Exchequer was full . . . .

Macaulay pinpointed the chief defect in the thinking of the alarmists.

They made no allowance for the effect produced by the incessant progress of every experimental science, and by the incessant effort of every man to get on in life. They saw that the debt grew and they forgot that other things grew as well as the debt.

Does this mean the spendthrifts are right? That we can — indeed, should — spend our way out of our predicaments, without much regard for the growing debt?

No.

A defect of the debt alarmists may be their curmudgeonly suspicion that budget imbalances always drive the economy downward. An even more egregious defect of the debt apologists, however, is their assumption that budget imbalances lift the economy upward and that spending is equal to growth, rather than a result of growth. The debt alarmists too often forget the possibilities of human achievement that are the basis for wealth. The debt apologists, however, assume wealth is inevitable, that it can be redistributed, and that their policies will have no harmful impact on wealth creation. The crucial point in Macaulay is not that any nation can sustain growing debts but that vibrantly growing economies (like that of the scientifically-advanced, exploratory, industrial British Empire) can sustain debts in larger amounts than is commonly assumed.

The debt alarmists, moreover, play into the hands of the spendthrifts. By making budget balance their sine qua non of policy, they equate spending restraint with tax increases. The spendthrifts say “fine, if budget balance is so important, let’s raise taxes.” Never mind the possible negative growth effects of higher tax rates (and regulations and the like). This is what has happened in much of Europe and now to some extent in the U.S. An obsession with debt too often impels policies that slow economic growth — real economic growth, based on productivity and innovation, not spending — thus greatly exacerbating the burden of debt. And make no mistake, the burdens of debt are real. Defaults, inflations, and bankruptcies happen. If interest rates rise several percentage points, the U.S. might be paying hundreds of billions more in interest. And this is why the shortened term structure of our debt is an even bigger concern. We should have been locking in very long terms at these historically low rates.
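To put a rough number on that interest-rate risk, here is a simple sketch. The debt figure is approximate and the rate move is hypothetical.

    # Interest-rate sensitivity sketch; inputs approximate and hypothetical.
    publicly_held_debt = 12e12   # roughly the publicly held federal debt, in dollars
    rate_increase = 0.03         # a hypothetical 3-percentage-point rise in average rates

    extra_annual_interest = publicly_held_debt * rate_increase
    print(f"Added interest per year: ${extra_annual_interest / 1e9:,.0f} billion")  # ~$360 billion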

Like the British Empire, with its pound sterling, the U.S. has a great advantage in the dollar’s status as world reserve currency. We are probably able to sustain higher debts than would otherwise be the case because our debts are denominated in our own currency and because of the safe-haven status of Treasurys. Yet, how did the pound sterling or the dollar achieve reserve status? Through powerful economic growth of the currencies’ issuers.

In the current growth and policy environment, America’s debts are a substantial worry. Yet no policy should focus first on debt. We should ask whether each policy encourages or discourages entrepreneurship and real productivity enhancements. And whether each spending program is legitimate, effective, and efficient. If policy were driven, more often than not, by thoughtful answers to these questions, then the debt question would answer itself. Our debt ratio would likely decline, yet the amount of debt our economy could sustain would rise.

Here is David Malpass concisely making the point on CNBC:


— Bret Swanson

* The Macaulay passages are quoted from George Gilder’s book Wealth & Poverty.

Share/Bookmark

U.S. Mobile: Effectively competitive? Probably. Positively healthy? Absolutely.

March 26th, 2013

Each year the Federal Communications Commission is required to report on competition in the mobile phone market. Following Congress’s mandate to determine the level of industry competition, the FCC, for many years, labeled the industry “effectively competitive.” Then, starting a few years ago, the FCC declined to make such a determination. Yes, there had been some consolidation, it was acknowledged, yet the industry was healthier than ever — more subscribers, more devices, more services, lots of innovation. The failure to achieve the “effectively competitive” label was thus a point of contention.

This year’s “CMRS” — commercial mobile radio services — report again fails to make a designation, one way or the other. Yet whatever the report lacks in official labels, it more than makes up in impressive data.

For example, it shows that as of October 2012, 97.2% of Americans have access to three or more mobile providers, and 92.8% have access to four or more. As for mobile broadband data services, 97.8% have access to two or more providers, and 91.6% have access to three or more.

Rural America is also doing well. The FCC finds 87% of rural consumers have access to three or more mobile voice providers, and 69.1% have access to four or more. For mobile broadband, 89.9% have access to two or more providers, while 65.4% enjoy access to three or more.

Call this what you will — to most laypeople, these choices count as robust competition. Yet the FCC has a point when it

refrain[s] from providing any single conclusion because such an assessment would be incomplete and possibly misleading in light of the variations and complexities we observe.

The industry has grown so large, with so many interconnected and dynamic players, it may have outgrown Congress’s request for a specific label.

14. Given the Report’s expansive view of mobile wireless services and its examination of competition across the entire mobile wireless ecosystem, we find that the mobile wireless ecosystem is sufficiently complex and multi-faceted that it would not be meaningful to try to make a single, all-inclusive finding regarding effective competition that adequately encompasses the level of competition in the various interrelated segments, types of services, and vast geographic areas of the mobile wireless industry.

Or as economist George Ford of the Phoenix Center put it,

The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”

I’m in good company. Outgoing FCC Chairman Julius Genachowski lists among his proudest achievements that “the U.S. is now the envy of the world in advanced wireless networks, devices, applications, among other areas.”

The report shows that in the last decade, U.S. mobile connections have nearly tripled. The U.S. now has more mobile connections than people.

The report also shows per user data consumption more than doubling year to year.

More important, the proliferation of smartphones, which are powerful mobile computers, is the foundation for a new American software industry widely known as the App Economy. We detailed the short but amazing history of the app and its impact on the economy in our report “Soft Power: Zero to 60 Billion in Four Years.” Likewise, these devices and software applications are changing industries that need changing. Last week, experts testified before Congress about mobile health, or mHealth, and we wrote about the coming health care productivity revolution in “The App-ification of Medicine.”

One factor that still threatens to limit mobile growth is the availability of spectrum. The report details past spectrum allocations that have borne fruit, but the pipeline of future spectrum allocations is uncertain. A more robust commitment to spectrum availability and a free-flowing spectrum market would ensure continued investment in networks, content, and services.

What Congress once called the mobile “phone” industry is now a sprawling global ecosystem and a central driver of economic advance. By most measures, the industry is effectively competitive. By any measure, it’s positively healthy.

— Bret Swanson

Share/Bookmark

Quote of the Day

March 25th, 2013

The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means.  Better performance is the goal.  When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious:  the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”

— George Ford, Phoenix Center chief economist, commenting on the FCC’s 16th Wireless Competition report.

Share/Bookmark

The Broadband Rooster

March 12th, 2013

FCC chairman Julius Genachowski opens a new op-ed with a bang:

As Washington continues to wrangle over raising revenue and cutting spending, let’s not forget a crucial third element for reining in the deficit: economic growth. To sustain long-term economic health, America needs growth engines, areas of the economy that hold real promise of major expansion. Few sectors have more job-creating innovation potential than broadband, particularly mobile broadband.

Private-sector innovation in mobile broadband has been extraordinary. But maintaining the creative momentum in wireless networks, devices and apps will need an equally innovative wireless policy, or jobs and growth will be left on the table.

Economic growth is indeed the crucial missing link to employment, opportunity, and healthier government budgets. Technology is the key driver of long term growth, and even during the downturn the broadband economy has delivered. Michael Mandel estimates the “app economy,” for example, has created more than 500,000 jobs in less than five short years of existence.

We emphatically do need policies that will facilitate the next wave of digital innovation and growth. Chairman Genachowski’s top line assessment — that U.S. broadband is a success — is important. It rebuts the many false but persistent claims that U.S. broadband lags the world. Chairman Genachowski’s diagnosis of how we got here and his prescriptions for the future, however, are off the mark.

For example, he suggests U.S. mobile innovation is newer than it really is.

Over the past few years, after trailing Europe and Asia in mobile infrastructure and innovation, the U.S. has regained global leadership in mobile technology.

This American mobile resurgence did not take place in just the last “few years.” It began a little more than a decade ago with smart decisions to:

(1) allow reasonable industry consolidation and relatively free spectrum allocation, after years of forced “competition,” which mandated network duplication and thus underinvestment in coverage and speed (we did in fact trail Europe in some important mobile metrics in the late 1990s and briefly into the 2000s);

(2) refrain from any but the most basic regulation of broadband in general and the mobile market in particular, encouraging experimental innovation; and

(3) finally implement the digital TV / 700 MHz transition in 2007, which put more of the best spectrum into the market.

These policies, among others, encouraged some $165 billion in mobile capital investment between 2001 and 2008 and launched a wave of mobile innovation. Development on the iPhone began in 2004, the iPhone itself arrived in 2007, and the app store in 2008. Google’s Android mobile OS came along in 2009, the year Mr. Genachowski arrived at the FCC. By this time, the American mobile juggernaut had already been in full flight for years, and the foundation was set — the U.S. topped the world in 3G mobile networks and device and software innovation. Wi-Fi, meanwhile, surged from 2003 onward, creating an organic network of tens of millions of wireless nodes in homes, offices, and public spaces. Mr. Genachowski gets some points for not impeding the market as aggressively as some other more zealous regulators might have. But taking credit for America’s mobile miracle smacks of the rooster proudly puffing his chest at sunrise.

More important than who gets the credit, however, is determining what policies led to the current success . . . and which are likely to spur future growth. Chairman Genachowski is right to herald the incentive auctions that could unleash hundreds of megahertz of un- and under-used spectrum from the old TV broadcasters. Yet wrangling over the rules of the auctions could stretch on, delaying the process. Worse, the rules themselves could restrict who can bid on or buy new spectrum, effectively allowing the FCC to favor certain firms, technologies, or friends at the expense of the best spectrum allocation. We’ve seen before that centrally planned spectrum allocations don’t work. The fact that the FCC is contemplating such an approach is worrisome. It runs counter to the policies that led to today’s mobile success.

The FCC also has a bad habit of changing the metrics and the rules in the middle of the game. For example, the FCC has been caught changing its “spectrum screen” to fit its needs. The screen attempts to show how much spectrum mobile operators hold in particular markets. During M&A reviews, however, the FCC has changed its screen procedures to make the data fit its opinion.

In a more recent example, Fred Campbell shows that the FCC alters its count of total available commercial spectrum to fit the argument it wants to make from day to day. We’ve shown that the U.S. trails other nations in the sum of currently available spectrum plus spectrum in the pipeline. Below, see a chart from last year showing how the U.S. compares favorably in existing commercially available spectrum but trails severely in pipeline spectrum. Translation: the U.S. did a pretty good job unleashing spectrum in the 1990s through the mid-2000s. But, contrary to Chairman Genachowski’s implication, it has stalled in the last few years.

When the FCC wants to argue that particular companies shouldn’t be allowed to acquire more spectrum (whether through merger or secondary markets), it adopts this view that the U.S. trails in spectrum allocation. Yet when challenged on the more general point that the U.S. lags other nations, the FCC turns around and includes an extra 139 MHz of spectrum in the 2.5 GHz range to avoid the charge it’s fallen behind the curve.

Next, Chairman Genachowski heralds a new spectrum “sharing” policy where private companies would be allowed to access tiny portions of government-owned airwaves. This really is weak tea. The government, depending on how you measure, controls between 60% and 85% of the best spectrum for wireless broadband. It uses very little of it. Yet it refuses to part with meaningful portions, even though it would still be left with more than enough for its important uses — military and otherwise. If they can make it work (I’m skeptical), sharing may offer a marginal benefit. But it does not remotely fit the scale of the challenge.

Along the way, the FCC has been whittling away at mobile’s incentives for investment and its environment of experimentation. Chairman Genachowski, for example, imposed price controls on “data roaming,” even though it’s highly questionable he had the legal authority to do so. The Commission has also, with varied degrees of “success,” been attempting to impose its extralegal net neutrality framework on wireless. And of course the FCC has blocked, altered, and/or discouraged a number of important wireless mergers and secondary spectrum transactions.

Chairman Genachowski’s big picture is a pretty one: broadband innovation is key to economic growth. Look at the brush strokes, however, and there are reasons to believe sloppy and overanxious regulators are threatening to diminish America’s mobile masterpiece.

— Bret Swanson

Share/Bookmark

Does Economic Growth Help the Middle Class?

February 19th, 2013

That’s the question Jim Tankersley asked in a page one Washington Post story this week.

Here is how he summarized the situation:

“In the past three recoveries from recession, U.S. growth has not produced anywhere close to the job and income gains that previous generations of workers enjoyed. The wealthy have continued to do well. But a percentage point of increased growth today simply delivers fewer jobs across the economy and less money in the pockets of middle-class families than an identical point of growth produced in the 40 years after World War II.

That has been painfully apparent in the current recovery. Even as the Obama administration touts the return of economic growth, millions of Americans are not seeing an accompanying revival of better, higher-paying jobs.

The consequences of this breakdown are only now dawning on many economists and have not gained widespread attention among policymakers in Washington. Many lawmakers have yet to even acknowledge the problem. But repairing this link is arguably the most critical policy challenge for anyone who wants to lift the middle class.”

Tankersley cites the historical heuristic that a percentage point of GDP growth usually delivers about a half-point (0.5-0.6%) of employment growth.

“Three and a half years into the recovery that began in 2001 under President George W. Bush, job intensity was stuck at less than 0.2 percent. The recovery under President Obama is now up to an intensity of 0.3 percent, or about half the historical average.”
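Here is what that heuristic implies in practice, using round, illustrative numbers rather than the article’s data:

    # Job-intensity sketch: employment growth ~ GDP growth x intensity.
    # Inputs are round, illustrative numbers, not the article's data.
    gdp_growth_pct = 2.2      # roughly 2012 real GDP growth, percent
    employed_millions = 143   # approximate U.S. employment, millions

    for label, intensity in [("historical norm", 0.55), ("recent recoveries", 0.3)]:
        emp_growth_pct = gdp_growth_pct * intensity
        jobs_millions = employed_millions * emp_growth_pct / 100
        print(f"{label}: {emp_growth_pct:.1f}% employment growth, about {jobs_millions:.1f} million jobs")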

If we measure incomes, rather than employment, the situation appears even more dire:

“Middle-class income growth looks even worse for those recoveries. From 1992 to 1994, and again from 2002 to 2004, real median household incomes fell — even though the economy grew more than 6 percent, after adjustments for inflation, in both cases. From 2009 to 2011 the economy grew more than 4 percent, but real median incomes grew by 0.5 percent.”

What’s going on? Is the American middle class really in such bad shape? If so, why? And can we do anything about it? If not, why do these data appear to show a fundamental shift in the link between GDP growth and overall prosperity? These are big, complicated questions, for which I don’t have many concrete answers. I would, however, suggest a number of factors that may help us think them through.

First, our economy does look different from the 1950s or 1960s. It is more complex. Back then, during a recession, factories laid off shifts of workers, leading to sharp employment downturns. Coming out of recessions, factories often hired back those same workers to build the same products. It was a simple process.

Today, although American manufacturing output is larger than ever, it employs a much smaller portion of the economy. The service and knowledge economies now dominate employment. And when jobs are not so closely tied to making widgets and the output is more ambiguous, the simple lay-off/hire-back formula disappears. In other words, we have lots more organizational and human capital today, and less “labor.”

This could be one reason the 1990 and 2001 recessions were shallower, but the job bounce-backs were slower.

Another factor, which everyone points out, is education. The United States may dominate many of the high-end professions in technology and finance because we have large cohorts of highly educated people (and immigrants). During the Great Recession and its aftermath, for example, the new App Economy, based on smartphones, broadband, and software, has created an estimated 500,000-600,000 jobs. Perhaps an equally large cohort, however, not nearly as well educated or lacking the necessary knowledge skills, has been caught in a two-decade wave of globalization that quickly reduced the jobs this cohort was used to doing, without the possibility of quick transitions to higher-value industries.

The Great Recession, however, was deeper than the 1990 and 2001 recessions, and its employment rebound has been even slower.

So we look to other factors that appear to be suppressing employment. In his new book The Redistribution Recession, University of Chicago economist Casey Mulligan argues that a host of well-intended safety-net programs are the chief culprit. Unemployment insurance, disability payments, the minimum wage, Medicaid, the earned income tax credit, food stamps and other programs can create deep disincentives to work and/or hire. Mulligan estimated that the average marginal tax rate on the relevant population increased eight percentage points, from 40% to 48%, during the Great Recession. For many individuals and families, the complex effects of these programs conspire to yield 100% marginal tax rates — that is, an extra dollar earned loses a dollar or more in benefits and taxes.
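A stylized example of how such phase-outs can push effective marginal rates to 100 percent or more; the tax and phase-out rates below are hypothetical, not Mulligan’s estimates.

    # Stylized benefit-cliff sketch; tax and phase-out rates are hypothetical.
    def take_home(earnings):
        taxes = 0.25 * earnings                         # combined payroll/income tax (hypothetical)
        benefits = max(0.0, 15_000 - 0.80 * earnings)   # benefits phasing out 80 cents per dollar earned
        return earnings - taxes + benefits

    extra = take_home(18_001) - take_home(18_000)
    print(f"Kept from one extra dollar earned: ${extra:.2f}")  # about -$0.05, a marginal rate above 100%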

I would throw out another possible factor: monetary policy. The Fed’s unorthodox zero-interest-rate-plus-bond-buying policy has created free money for large firms and for government. We see government growing and corporate profits at record highs. But for small and medium-sized firms, credit is being rationed by regulators. Low rates are meaningless if credit is unavailable. The slow recovery for small firms, which are often acknowledged to create most jobs, could be part of the equation.

Switching from employment to income, a few factors are commonly mentioned:

  • Education and globalization may, as with employment, be boosting income for the top but limiting income prospects for the broad middle.
  • Health care and other benefits are rising as a portion of overall compensation, thus limiting the measured portion that we call wages or salaries.
  • Immigration has added millions of low-wage workers who may depress average measured incomes. These particular workers may be much better off than they were in their home countries and, by lowering wages for jobs few Americans want to do, may “harm” only a very small number of Americans.
  • Many income measures do not account for taxes or for the larger transfer payments of recent times, such as the EITC, Medicaid, disability, unemployment, and food stamps. When these are factored in, the numbers look much different.

Alan Reynolds made the case for these underestimates in his 2006 book, Income and Wealth. And now Bruce D. Meyer of the University of Chicago and James X. Sullivan of Notre Dame find that median income growth has not suffered nearly as much as the conventional wisdom says.

“After appropriately accounting for inflation, taxes, and noncash benefits, we show that median income rose by more than 50 percent over the past three decades. This increase is considerably greater than the gains implied by official statistics—official median income rose by only 14 percent between 1980 and 2009. Our improved measure of income increased in each of the past three decades, although the growth has been much slower since 2000. Median consumption also rose at a similar rate over the whole period but at a faster rate than income over the past decade.”

The real income slowdown in the 2000s is not surprising. The decade included two recessions—including the big one. The decade also saw, for the first time since the 1970s, a good whiff of inflation, especially in food, fuel, and housing. Add in spiraling health care and education costs. So, despite spectacular gains in computers, communications, and consumer goods, the middle class squeeze often seems real.

Mark Perry and Don Boudreaux, however, are even more emphatic than Meyer and Sullivan. They say the “trope” of the stagnant middle class is “spectacularly wrong”:

“It is true enough that, when adjusted for inflation using the Consumer Price Index, the average hourly wage of nonsupervisory workers in America has remained about the same. But not just for three decades. The average hourly wage in real dollars has remained largely unchanged from at least 1964—when the Bureau of Labor Statistics (BLS) started reporting it.

“Moreover, there are several problems with this measurement of wages. First, the CPI overestimates inflation by underestimating the value of improvements in product quality and variety. Would you prefer 1980 medical care at 1980 prices, or 2013 care at 2013 prices? Most of us wouldn’t hesitate to choose the latter.

“Second, this wage figure ignores the rise over the past few decades in the portion of worker pay taken as (nontaxable) fringe benefits. This is no small matter—health benefits, pensions, paid leave and the rest now amount to an average of almost 31% of total compensation for all civilian workers according to the BLS.

“Third and most important, the average hourly wage is held down by the great increase of women and immigrants into the workforce over the past three decades. Precisely because the U.S. economy was flexible and strong, it created millions of jobs for the influx of many often lesser-skilled workers who sought employment during these years.”

Perry and Boudreaux go on to say that no income figures—whether the officially stagnant ones or the higher adjusted figures—can account for the dramatic rise in the quantity and quality of consumption that income yields.

“Bill Gates in his private jet flies with more personal space than does Joe Six-Pack when making a similar trip on a commercial jetliner. But unlike his 1970s counterpart, Joe routinely travels the same great distances in roughly the same time as do the world’s wealthiest tycoons.

“What’s true for long-distance travel is also true for food, cars, entertainment, electronics, communications and many other aspects of ‘consumability.’ Today, the quantities and qualities of what ordinary Americans consume are closer to that of rich Americans than they were in decades past. Consider the electronic products that every middle-class teenager can now afford—iPhones, iPads, iPods and laptop computers. They aren’t much inferior to the electronic gadgets now used by the top 1% of American income earners, and often they are exactly the same.”

Despite all the factors in this multifaceted debate, one thing is certain: economic growth is better for the middle class than economic stagnation.


Zero GDP Reading Exposes the Real Deficit – Economic Growth

February 1st, 2013

It is currently in fashion to say, with great contrarian flair, that federal spending growth is the slowest since the Eisenhower Administration. Or, as someone famous recently put it, “We don’t have a spending problem.”

This assertion is, to put it mildly, debatable. Spending jumped 18% in just one year during the Panic of 2008-09. If the government keeps spending at that level, but starts counting after the jump, then the growth rate will appear modest. Spending as a share of GDP is higher than at any time since World War II, and so is the debt-to-GDP ratio. As the OMB chart below shows, it gets much worse.

Nevertheless, does anyone disagree that we have a growth problem, and a serious one? Yesterday’s negative GDP estimate for the fourth quarter of 2012 (-0.1%) should jolt the nation.

Let’s stipulate the GDP reading’s anomalies — lower than expected inventories and defense spending, which could reverse and add a bit to future growth. Yet economists had expected fourth quarter growth of 1.1% — itself an abysmal projection — and actual growth for the entire year was a barely mediocre 2.2%. Consider, too, that lots of economic activity was moved forward into 2012 to beat the Fiscal Cliff taxman. And don’t forget the Federal Reserve’s extraordinary QE programs, which are supposed to boost growth.

Whatever we’re doing, it’s not working. Not nearly well enough to create jobs. And not nearly well enough to help the budget. Because whatever you think about spending or taxes, the key factor in the health of the budget is economic growth.

OMB projects spending will grow (from today’s historically high level) around 2.96% per year through 2050. It projects annual economic growth over the period of 2.5%. That gets us a debt crisis somewhere down the line, and lots of other economic and social problems along the way.

Last year, however, keep in mind, growth was just 2.2%, following 2011’s even worse reading of 1.8%. If we can’t even match the modest 2.5% long-term projection coming out of a severe downturn, our problems may be worse than we think. Economist Robert Gordon of Northwestern asks “Is U.S. Growth Over?” Outlining seven economic headwinds, he projects growth of around 1.5% over the next few decades. In the chart below, you can see what a budget disaster such a slowdown would produce. Deficits quickly grow from a trillion dollars a year today into the many trillions per year.

Perhaps, many are now suggesting, we can tax our way out of the problem. Almost all academic research, however, suggests higher taxes (in terms of rates and as a portion of the economy) hurt economic growth. The Tax Foundation, for example, surveyed the 26 major studies on the topic going back to the early 1980s. Twenty-three of the studies found that taxes hurt economic growth. No study found higher taxes helped growth. Recent experience in Europe tends to confirm these findings.

Today, most of the policy discussion revolves around debt ceilings, sequesters, and the (fading) possibility of a grand-bargain budget deal. Mostly lost in the equation is economic growth. One question should dominate the thinking of policymakers: What policies would encourage more productive economic activity?

The new possibility of a breakthrough on immigration reform is an encouraging example. A more rational immigration policy for both low-skilled and high-skilled workers could boost economic growth significantly. Can we find more such policies? As you can see in the chart below, higher taxes can’t make up the budget shortfall. Faster growth and modest spending restraint can. This chart once again shows the OMB projected spending path (solid black line). The solid blue line shows what would happen to tax receipts if (1) growth remains mediocre and (2) we somehow find a way to dramatically raise the portion of the economy Washington taxes from the historical 18% to 23%.

That’s a major jump in taxation. Yet it doesn’t get us close to a healthy budget.

Faster growth and modest spending restraint, on the other hand, close the budget gap. And they do so without increasing the share Washington historically takes from the economy. The orange dashed line shows tax receipts under an economy growing at 3.5% with the historic 18% tax-to-GDP ratio. (Growth of 3.5% may sound like an ambitious goal. Keep in mind, however, that we are still far below trend — we’ve never really recovered from the Great Recession. Long-term growth of 3.5%, therefore, merely reflects a more rapid recovery to trend over the next several years followed by a resumption of the long-term average of roughly 3%.) In the medium to long term, a faster-growth, lower-tax regime generates more tax revenue than a slow-growth, high-tax regime.

Faster growth alone would stabilize budget deficits at today’s levels. But that is not enough. Trillion-dollar deficits and Washington spending an ever rising share of the economy are not acceptable. Look, however, at the very modest spending restraint that would be required to essentially balance the budget by 2050. If we slowed spending growth from the projected 2.96% annual rate to just 2.7%, we could close the gap.

Does anyone think spending growth of 2.7% per year versus 2.96% is going to tear apart Social Security, Medicare, the military, or other essential government functions? Many of us could imagine responsible ways to reduce projected spending far, far more than that. All this shows is that a little restraint and robust economic growth go a long way.

The slow growth-high tax scenario produces a budget deficit of almost $3.5 trillion in 2050. Under the faster growth-lower tax scenario, with a touch of spending restraint, the 2050 budget deficit would be just $58 billion.
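
A rough way to get the flavor of these scenarios is to compound spending, GDP, and receipts forward and compare. The starting levels and exact growth paths behind the chart in this post are not reproduced here, so the inputs below are assumptions for illustration and the outputs will not match the chart's figures exactly.

```python
# Rough budget-scenario arithmetic. The 2012 spending and GDP baselines
# (in trillions of dollars) and the growth paths are assumptions for
# illustration, not the data behind the post's chart.

def project(start, rate, years):
    return start * (1 + rate) ** years

def deficit_2050(spending_growth, gdp_growth, tax_share,
                 spending_2012=3.5, gdp_2012=15.5):
    years = 2050 - 2012
    spending = project(spending_2012, spending_growth, years)
    receipts = tax_share * project(gdp_2012, gdp_growth, years)
    return spending - receipts

# Slow growth, higher taxes: projected spending path, ~2% growth, 23% of GDP taxed
slow_high_tax = deficit_2050(0.0296, 0.02, 0.23)

# Faster growth, modest restraint: 2.7% spending growth, 3.5% growth, historic 18% share
fast_restraint = deficit_2050(0.027, 0.035, 0.18)

print(f"Slow growth / high tax:    2050 deficit ~ ${slow_high_tax:.1f} trillion")
print(f"Faster growth / restraint: 2050 deficit ~ ${fast_restraint:.1f} trillion (negative = surplus)")
```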

Now, I’m not pretending to know precisely what rate of economic growth a higher tax-to-GDP ratio would produce. The above are just rough scenarios. Lots of factors are in play. And that is precisely the point. Given a complex, uncertain world, we should attempt to align all our policies for economic growth. We know which policies tend to encourage growth and which tend to stunt it.

That means getting immigration policy right — and it appears we may finally be getting somewhere. It means smart, reasonable regulatory policies in energy, health care, education, communications, and intellectual property. It means a healthy division of powers between the federal and state governments. And, yes, it means sweeping tax reform — both individual and corporate.

What we are doing today isn’t working. We are on a dangerous path. Two percent growth won’t get us there, no matter how much we tax ourselves. Only robust growth fueled by entrepreneurship and investment, with a healthy faith in the unknown possibilities of America’s future, will.


Ignorance, the Ultimate Asset

January 24th, 2013

Grab a cup of coffee and check out our new article at The American, the online magazine of the American Enterprise Institute.

“Ignorance, the Ultimate Asset”


Broadband Bullfeathers

December 14th, 2012

Several years ago, some American lawyers and policymakers were looking for ways to boost government control of the Internet. So they launched a campaign to portray U.S. broadband as a pathetic patchwork of tin-cans-and-strings from the 1950s. The implication was that broadband could use a good bit of government “help.”

They initially had some success with a gullible press. The favorite tools were several reports that measured, nation by nation, the number of broadband connections per 100 inhabitants. The U.S. emerged from these reports looking very mediocre. How many times did we read, “The U.S. is 16th in the world in broadband”? Upon inspection, however, the reports weren’t very useful. Among other problems, they were better at measuring household size than broadband health. America, with its larger households, would naturally have fewer residential broadband subscriptions (not broadband users) than nations with smaller households (and thus more households per capita). And as the Phoenix Center demonstrated, rather hilariously, if the U.S. and other nations achieved 100% residential broadband penetration, America would actually fall to 20th from 15th.
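
The Phoenix Center's point is easy to verify with simple arithmetic: even at 100% residential penetration, a country's "subscriptions per 100 inhabitants" is just 100 divided by its average household size. The household sizes below are illustrative round numbers, not the study's data.

```python
# Why "broadband subscriptions per 100 inhabitants" largely tracks household
# size: at 100% residential penetration the metric is simply 100 divided by
# the average household size. Household sizes are illustrative assumptions.

examples = {
    "larger households (U.S.-like, ~2.6 people)": 2.6,
    "smaller households (~2.2 people)": 2.2,
    "smallest households (~2.0 people)": 2.0,
}

for label, household_size in examples.items():
    subscriptions_per_100 = 100 / household_size  # every household subscribes
    print(f"{label}: {subscriptions_per_100:.1f} subscriptions per 100 inhabitants")
```

A country with larger households scores lower on this metric even when literally every household is connected, which is why the ranking says more about demographics than about broadband health.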

In the fall of 2009, a voluminous report from Harvard’s Berkman Center tried to stitch the supposedly ominous global evidence into a case-closed indictment of U.S. broadband. The Berkman report, however, was a complete bust (see, for example, these thorough critiques: 1, 2, and 3 as well as my brief summary analysis).

Berkman’s statistical analyses had failed on their own terms. Yet it was still important to think about the broadband economy in a larger context. We asked the question, how could U.S. broadband be so backward if so much of the world’s innovation in broadband content, services, and devices was happening here?

To name just a few: cloud computing, YouTube, Twitter, Facebook, Netflix, iPhone, Android, ebooks, app stores, iPad. We also showed that the U.S. generates around 60% more network traffic per capita and per Internet user than Western Europe, the supposed world broadband leader. The examples multiply by the day. As FCC chairman Julius Genachowski likes to remind us, the U.S. now has more 4G LTE wireless subscribers than the rest of the world combined.

Yet here comes a new book with the same general thrust — that the structure of the U.S. communications market is delivering poor information services to American consumers. In several new commentaries summarizing the forthcoming book’s arguments, author Susan Crawford’s key assertion is that U.S. broadband is slow. It’s so bad, she argues, that broadband should be a government utility. But is U.S. broadband slow?

According to actual network throughput measured by Akamai, the world’s largest content delivery network, the U.S. ranks in the top ten or 15 across a range of bandwidth metrics. It is ninth in average connection speed, for instance, and 13th in average peak speed. Looking at proportions of populations who enjoy speeds above a certain threshold, Akamai finds the U.S. is seventh in the percentage of connections exceeding 10 megabits per second (Mbps) and 13th in the percentage exceeding 4 Mbps. (See the State of the Internet report, 2Q 2012.)

You may not be impressed with rankings of seventh or 13th. But did you look at the top nations on the list? Hong Kong, South Korea, Latvia, Switzerland, the Netherlands, Japan, etc.

Each one of them is a relatively small, densely populated country or territory. The national rankings are largely artifacts of geography and the size of the jurisdictions observed. Small nations with high population densities fare well. It is far more economical to build high-speed communications links in cities and other relatively dense areas. Accounting for this size factor, the U.S. actually looks amazingly good. Only Canada comes close to the U.S. among geographically larger nations.

But let’s look even further into the data. Akamai also supplies speeds for individual U.S. states. If we merge the tables of nations and states, the U.S. begins to look not like a broadband backwater or even a middling performer but an overwhelming success. Here are the two sets of Akamai data combined into tables that directly compare the successful small nations with their more natural counterparts, the U.S. states (shaded in blue).

Average Broadband Connection Speed — Nine of the top 15 entities are U.S. states.

Average Peak Connection Speed — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 10 Megabits per Second — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 4 Megabits per Second — Ten of the top 16 entities are U.S. states.

Among the 61 ranked entities on these four measures of broadband speed, 39, or almost two-thirds, are U.S. states. American broadband is not “pitifully slow.” In fact, if we were to summarize U.S. broadband, we’d have to say, compared to the rest of the world, it is very fast.
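
Mechanically, the comparison is just a merge-and-rank of two lists. Here is a minimal sketch of that step; the speeds below are placeholders, not the actual Akamai figures.

```python
# Merge the national and U.S.-state tables and count how many of the top
# entries are states. Speeds are placeholder values, not Akamai data.

nations = {"South Korea": 14.2, "Japan": 10.7, "Hong Kong": 8.9, "Latvia": 8.7}
states = {"Delaware": 10.9, "New Hampshire": 10.0, "Vermont": 9.5, "Utah": 9.2}

merged = [(speed, name, "state") for name, speed in states.items()]
merged += [(speed, name, "nation") for name, speed in nations.items()]
merged.sort(reverse=True)  # fastest first

top = merged[:5]
for rank, (speed, name, kind) in enumerate(top, start=1):
    print(f"{rank}. {name} ({kind}): {speed} Mbps")
print(sum(kind == "state" for _, _, kind in top), "of the top", len(top), "entities are U.S. states")
```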

It is true that not every state or region in the U.S. enjoys top speeds. Yes, we need more, better, faster, wider coverage of wired and wireless broadband, in underserved neighborhoods as well as in our already advanced areas. We need constant improvement both to accommodate today’s content and services and to drive tomorrow’s innovations. We should not, however, be making broad policy under the illusion that U.S. broadband, taken as a whole, is deficient. The quickest way to make U.S. broadband deficient is probably to enact policies that discourage investment and innovation — such as trying to turn a pretty successful and healthy industry that invests $60 billion a year into a government utility.

— Bret Swanson


“Deck the Halls with Macro Follies” – featuring Jean-Baptiste Say!

December 7th, 2012


The App Economy, so far

December 5th, 2012

See our new report summarizing the short but amazing life of the mobile app: Soft Power: Zero to 60 Billion in Four Years.


Memos to the Future: 2042

December 1st, 2012

What would “the New Normal” of a mere 1% per capita GDP growth mean for the American economy over the next few decades?  What if it’s even worse, as many are now predicting? Is there anything we can do about it? If so, what? We address these items in our new article for the Business Horizon Quarterly – “Beyond the New Normal, a New Era of Growth.”


The $66-billion Internet Expansion

November 8th, 2012

Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.

The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.

As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network and the transition to all Internet, all the time, everywhere.


This kind of “prosperity” isn’t good enough

November 1st, 2012

Today, Princeton’s Alan Blinder says things are looking up, that we’re finally traveling the road to prosperity, albeit slowly. It’s a rather timid claim:

there are definitely positive signs. The stock market is near a five-year high. Recent data on consumer spending and confidence show improvement, though we need more data before declaring victory. At long last, the housing market is growing rapidly, albeit from a very low base . . . .

On balance, the U.S. economy is healing its wounds—that’s another fact. But none of this puts us on the verge of an exuberant boom. Still, if the fiscal cliff is avoided and the European debt crisis doesn’t explode in our face, both GDP growth and job growth should be higher in 2013 than in 2012—even under current policies. But that’s a forecast, not a fact.

Stanford’s John Taylor counters some of Blinder’s claims:

First, he admits that real GDP growth—the most comprehensive measure we have of the state of the economy—is declining; that’s not an improvement.

Second, he admits that, according to the payroll survey, job growth isn’t faster in 2012 than 2011; that’s not an improvement either.

Third, he mentions that the household survey shows employment growth is faster, but that growth must be measured relative to a growing population. If you look at the employment to population ratio, it is the same (58.5%) in the 12 month period starting in October 2009 (the month he chooses as the low point) as in the past 12 months. That’s not an improvement.

Fourth, he shows that the unemployment rate is coming down. But much of that improvement is due to the decline in the labor force participation rate as people drop out of the labor force. According to the CBO, unemployment would be 9 percent if that unusual and distressing decline–certainly not an improvement–had not occurred.

He then goes on to consider forecasts, saying that there are promising signs, such as the housing market. The problem here, however, is that growth is weakening even as housing is less of a drag, because other components of GDP are flagging.

Meanwhile, there is Northwestern’s Bob Gordon, who is making a much stronger, longer-term forecast — that the next several decades will be pretty awful. Specifically, that real U.S. economic growth is likely to halve — or worse — from its recent and historical trend of about 2% per capita per year.

We’ve been emphasizing just how important it is to get the economy moving again, and how important long term growth is for jobs, incomes, overall opportunity, and for governmental budgets. The Gordon scenario is even worse than the so-called New Normal of around 1% per-capita growth (or 2% overall growth). Gordon projects per-capita growth over the next few decades of around 0.7%. (In non-per-capita terms, the way GDP figures are most often reported, that’s about 1.7%). He thinks growth for the “99%” will be far worse — just 0.2% per-capita.
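
The translation between per-capita and headline GDP growth is just the addition of population growth; the roughly one-point gap in the figures above implies population growth of about 1% a year, which is an assumption on my part rather than a number Gordon states. A quick check:

```python
# Converting per-capita GDP growth to headline GDP growth. The ~1% annual
# population growth figure is an assumption implied by the post's numbers.

population_growth = 0.01  # assumed

scenarios = {"New Normal": 0.010, "Gordon projection": 0.007}
for label, per_capita in scenarios.items():
    headline = (1 + per_capita) * (1 + population_growth) - 1
    print(f"{label}: {per_capita:.1%} per capita is roughly {headline:.1%} overall")
```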

In the chart below, you can see just how devastating a New Normal scenario would be, let alone Gordon’s even more pessimistic projection. It’s urgent that we implement a sweeping new set of pro-growth reforms on taxes, regulation, immigration, trade, education, and monetary policy.


Discussing Broadband Via Broadband

October 29th, 2012

It was nice of Ball State University’s Digital Policy Institute (@DigitalPolicy) to include me last Friday  in a webinar discussion on broadband policy. Joining the virtual discussion were Leslie Marx, of Duke and formerly FCC chief economist; Anna-Maria Kovacs, well-known regulatory analyst and fellow at Georgetown; and Michael Santorelli of New York Law School.

You can find a replay of the webinar here. Our broadband discussion, which begins at 1:55:48, was preceded by a good discussion of consumer online privacy, which you might also enjoy.


Flashback: “there is no inherent shortage of oil”

October 25th, 2012

The energy boom is an apparent surprise to many. I don’t know why. Here’s the photo, caption, and story right now (Wednesday night) on the front page of The Wall Street Journal:

Here was our take in 2006:

there is no inherent shortage of oil. One tiny shale formation right in America’s backyard — the 1,200 square mile Piceance Basin of western Colorado — contains a trillion barrels, more than all the proven reserves in the world. Vast open spaces across the globe remain unexplored or untapped.

Today, it’s North Dakota, Texas, and Pennsylvania shale that is leading the new boom. As a few smart guys wrote, we have a “bottomless well” of energy, if only we allow ourselves to find, refine, and innovate.


A Policy Path to Internet 2020

October 17th, 2012

Optical fiber versus copper, no contest.

Life’s only certainty is change. Nowhere more true than with modern technologies, particularly broadband. Problem is, lots of government rules are not coming along for the ride.

Yesterday the Communications Liberty and Innovation Project (CLIP) hosted regulatory experts to discuss ways the FCC might incent more investment in digital infrastructure.

A fresh voice at the FCC is focusing the agency and the country on such a policy path of abundant wired and wireless broadband. New FCC Commissioner Ajit Pai (@AjitPaiFCC) yesterday called for the creation of an IP Transition Task Force as a way to accelerate the transition from analog networks to faster and more ubiquitous digital networks. Network providers, he said, want to know how IP services will be regulated before making major infrastructure investments. Commissioner Pai also discussed economic growth and job creation, asserting that every $1 billion spent on fiber deployment creates between 15,000 and 20,000 jobs. Therefore, to pave the way for robust private-sector investment in IP infrastructure, the FCC must signal a clear intention not to apply outdated 20th-century regulations to these 21st-century technologies.

The follow-up discussion focused on the need for a regulatory framework that will promote competition and economic growth while also maximizing consumer benefits. Jonathan Banks of US Telecom pointed out that the telecommunications industry is investing $65 billion per year, every year, in broadband infrastructure — a huge boost to current and future economic growth. Whoever occupies the White House after November should make it clear that expanding the nation’s “infostructure” with private investment dollars is a key national priority that will generate huge dividends — digital and otherwise.


Anemic Growth and Why Only Non-Fed Policy Can Boost It

October 15th, 2012

The central economic problem — one that exacerbates all our other serious challenges, from debt to entitlements to persistently low employment — is a sluggish rate of economic growth. Worse than sluggish, really. At less than 2% per annum real growth, the economy is barely limping along. We are growing at perhaps a third or a fourth the speed (or worse!) of previous recoveries from recessions of similar severity.

One school of thought, however, says that there’s not much we can do about it. The nature of the panic — with housing and financial institutions at its core — makes stagnation all but certain. Nonsense, says John Taylor of Stanford, in this new video (part 2 of 3) hosted by the Hoover Institution’s Russ Roberts:

In the next video, Yale’s Robert Shiller reinforces the point about housing. The author of the Case-Shiller Home Price Index questions whether the Fed can reflate home prices with “one button” and whether its zero-rates-forever policy might not do more harm than good. It’s more about “animal spirits,” Shiller says, which means housing is more a function of economic growth than growth is a function of housing.



Possible progress on spectrum expansion

September 25th, 2012

For years we’ve been highlighting the need for policies that encourage communications infrastructure investment. Fiber, cell towers, data centers — these are the foundation of our growing digital economy, the tools of which are increasingly integral components of every business in every industry. One of the most crucial inputs that makes the digital economy go, however, is invisible. It’s wireless spectrum, and today we don’t have the right spectrum allocation to ensure continued wireless growth and innovation.

So it was good news to hear that former FCC commissioner Jonathan Adelstein is the new CEO of the Personal Communications Industry Association, also known as the “Wireless Infrastructure Association.” The companies he will represent are the mobile service providers, cell tower operators, and associated service companies that build these often unseen networks.

“The ultimate goal for consumers and the economy is to accommodate the need for more wireless data,” Adelstein told Communications Daily. “More spectrum is sort of the effective means for getting there . . . As more spectrum comes online it will ultimately require new infrastructure to accomplish the goal of meeting the data crunch.”

This gives a boost to the prospects for better spectrum policy.


Money, Inflation, the Euro – Most of What You Hear Is Wrong

September 12th, 2012

Here’s a good interview with Chicago’s John Cochrane, who offers incisive contrarian views on money, inflation, “stimulus,” Greece, the euro, economic growth, and Milton Friedman’s “Inflation is always and everywhere a monetary phenomenon” meme. I wrote about these topics here.


The Growth Effect on Jobs

September 6th, 2012

Is the persistently high unemployment rate a secular, rather than cyclical, occurrence? Is it, in other words, a basic shift in the labor market that will leave us with semi-permanently higher joblessness for years or decades to come — no matter what we try to do about it?

Ed Lazear of Stanford and James Spletzer of the U.S. Census Bureau dug into the matter and presented their findings over the weekend at the Fed’s Jackson Hole economic gathering. Lazear also summarized the research in The Wall Street Journal. “The unemployment rate has exceeded 8% for more than three years,” wrote Lazear.

This has led some commentators and policy makers to speculate that there has been a fundamental change in the labor market. The view is that today’s economy cannot support unemployment rates below 5%—like the levels that prevailed before the recession and in the late 1990s. Those in government may take some comfort in this view. It lowers expectations and provides a rationale for the dismal labor market.

Lazear and Spletzer looked at what happened in particular industries and specific jobs, asking whether the real problem is that some industries are too old and aren’t coming back, and whether there is a substantial “mismatch” between job requirements and worker skills that prevents jobs from being filled. No doubt the economy is always changing, and few industries or jobs stay the same forever, but they found, for example, that

mismatch increased dramatically from 2007 to 2009. But just as rapidly, it decreased from 2009 to 2012. Like unemployment itself, industrial mismatch rises during recessions and falls as the economy recovers. The measure of mismatch that we use, which is an index of how far out of balance are supply and demand, is already back to 2005 levels.

Whatever mismatch exists today was also present when the labor market was booming. Turning construction workers into nurses might help a little, because some of the shortages in health and other industries are a long-run problem. But high unemployment today is not a result of the job openings being where the appropriately skilled workers are unavailable.

Lazear and Spletzer concluded that no, the jobless problem is not mostly secular, and we shouldn’t accept high unemployment.

The reason for the high level of unemployment is the obvious one: Overall economic growth has been very slow. Since the recession formally ended in June 2009, the economy has grown at 2.2% per year, or 6.6% in total. An empirical rule of thumb is that each percentage point of growth contributes about one-half a percentage point to employment.

The economy has regained about four million jobs since bottoming out in early 2010, which is right around 3% of employment—just the gain that would be predicted from past experience. Things aren’t great, but the failure is a result of weak economic growth, not of a labor market that is not in sync with the rest of the economy.

The evidence suggests that to reduce unemployment, all we need to do is grow the economy. Unfortunately, current policies aren’t doing that. The problems in the economy are not structural and this is not a jobless recovery. A more accurate view is that it is not a recovery at all.

The upside of this dismal situation is that we can do something about it. Think about what a different set of pro-growth policies could mean for American workers. Using Lazear’s very rough rule of “one point growth, half a point employment,” we can get an idea of what faster growth might yield in the labor market.

At today’s feeble 2% growth rate, we might expect to add several tens of thousands, or maybe a hundred thousand or two, of jobs each month. Over the next five years, at 2%, we might add something like seven million jobs. But that’s barely enough to keep up with population growth. Three percent growth, the historic average, meanwhile, would likely yield around 10 million net new jobs, 3 million more than at today’s 2% growth rate.

But three percent growth coming out of a deep recession and slow recovery is itself a slower-than-usual recovery pace. It is certainly not an ambitious objective. Coming out of a slump like today’s we should be able to grow at 4, 5, or 6% for several years, as we did in the mid-1980s. Four percent growth for the next five years could add 14 million net new jobs, and 5% growth could add 17.5 million — meaning that by 2017 something approaching 11 million more Americans would be working than under today’s sclerotic 2% growth path.
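
For those who want to see the arithmetic, here is a minimal sketch of the "one point of growth, half a point of employment" rule applied over five years. The base employment level of roughly 134 million is my assumption, not a figure from the post, and the outputs are rough magnitudes rather than forecasts.

```python
# Rough jobs arithmetic using Lazear's rule of thumb: each percentage point
# of GDP growth adds about half a percentage point to employment per year.
# The base employment level (~134 million) is an assumption.

BASE_EMPLOYMENT_MILLIONS = 134.0  # assumed

def jobs_added(gdp_growth_pct, years=5, base=BASE_EMPLOYMENT_MILLIONS):
    employment_growth = 0.5 * gdp_growth_pct / 100.0  # rule of thumb, per year
    return base * ((1 + employment_growth) ** years - 1)

baseline = jobs_added(2)
for growth in (2, 3, 4, 5):
    added = jobs_added(growth)
    print(f"{growth}% growth for five years: ~{added:4.1f} million jobs "
          f"({added - baseline:+4.1f} million vs. the 2% path)")
```

Under these assumptions the sketch reproduces the magnitudes above: roughly 7 million jobs at 2% growth, 10 million at 3%, 14 million at 4%, and 17.5 million at 5%.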

Keep in mind, these are rough rules of thumb, not forecasts or projections, and we’re leaving out lots of technical dynamics. There’s a lot going on in an economy, and we do not pretend these are precise estimates. The point is to show the magnitudes involved — that faster growth can provide jobs for millions more Americans in a relatively short period of time.

The problem is that U.S. policy before and after the financial panic and recession has not supported growth — I’d argue it has impeded growth. Faster growth is so important, we should be doing everything possible to enact policies that encourage it — or, if we can’t enact them today, then at least pointing the nation in the right direction.

A more efficient tax code that rewards rather than punishes investment and entrepreneurship would make a huge difference. Unfortunately, some in Washington and the states are proposing higher tax rates and new carve-outs and favors that will make real tax reform impossible. We need to ensure that Washington doesn’t keep consuming an ever greater share of the economy. But again, we’ve just seen a huge jump in the government-economy ratio, from 20% to 25%, and the current budget path just makes this ratio worse and worse over time.

Does anyone believe we have a regulatory system that promotes economic growth? In each of the last two years, the Federal Register of government regulations has grown by more than 81,000 pages. We’ve recently seen Washington drape vast new blankets of regulation over finance and health care and interfere at every turn with our energy economy — a sector that is poised to deliver explosive growth in coming years. Other regulatory actions, like FCC interference in broadband and mobile networks, can slow growth at the margins or, depending on how zealous regulators choose to be, severely disrupt an innovation ecosystem.

The economy is too complex to dial up exactly what we want. I am not suggesting a simple flip of a switch can achieve this dramatic improvement. But we should be giving ourselves — and American citizens — as many chances as possible. Given what’s at stake, there’s no excuse for not lining up policy to maximize the opportunities for faster growth.


— Bret Swanson


FCC’s 706 Broadband Report Does Not Compute

August 22nd, 2012

Yesterday the Federal Communications Commission issued 181 pages of metrics demonstrating, to any fair reader, the continuing rapid rise of the U.S. broadband economy — and then concluded, naturally, that “broadband is not yet being deployed to all Americans in a reasonable and timely fashion.” A computer fed the data and the conclusion would crash, unable to process the logical contradiction.

The report is a response to section 706(b) of the 1996 Telecom Act that asks the FCC to report annually whether broadband “is being deployed . . . in a reasonable and timely fashion.” From 1999 to 2008, the FCC concluded that yes, it was. But now, as more Americans than ever have broadband and use it to an often maniacal extent, the FCC has concluded for the third year in a row that no, broadband deployment is not “reasonable and timely.”

The FCC finds that 19 million Americans, mostly in very rural areas, don’t have access to fixed line terrestrial broadband. But Congress specifically asked the FCC to analyze broadband deployment using “any technology.”

“Any technology” includes DSL, cable modems, fiber-to-the-x, satellite, and of course fixed wireless and mobile. If we include wireless broadband, the unserved number falls to 5.5 million from the FCC’s headline 19 million. Five and a half million is 1.74% of the U.S. population. Not exactly a headline-grabbing figure.

Even if we stipulate the FCC’s framework, data, and analysis, we’re still left with the FCC’s own admission that between June 2010 and June 2011, an additional 7.4 million Americans gained access to fixed broadband service. That dropped the portion of Americans without access to 6% in 2011 from around 8.55% in 2010 — a 30% drop in the unserved population in one year. Most Americans have had broadband for many years, and the rate of deployment will necessarily slow toward the tail-end of any build-out. When most American households are served, there just aren’t very many left to go, and those that have yet to gain access are likely to be in the very most difficult-to-serve areas (e.g. “on tops of mountains in the middle of nowhere”). The fact that we still extended broadband to 7.4 million more Americans in the last year, lowering the unserved population by 30%, even using the FCC’s faulty framework, demonstrates in any rational world that broadband “is being deployed” in a “reasonable and timely fashion.”
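
The percentages above follow from simple division. Here is a quick check of the arithmetic; the U.S. population figure is an approximation I am assuming, not a number taken from the FCC report.

```python
# Quick check of the deployment arithmetic. The population figure is an
# approximate assumption; the unserved counts and shares come from the post.

US_POPULATION_MILLIONS = 316.0  # assumed, circa 2011

unserved_incl_wireless = 5.5  # millions, per the post
print(f"Unserved including wireless: {unserved_incl_wireless / US_POPULATION_MILLIONS:.2%}")

unserved_share_2010, unserved_share_2011 = 0.0855, 0.06  # fixed-line shares, per the post
drop = 1 - unserved_share_2011 / unserved_share_2010
print(f"One-year decline in the unserved share: {drop:.0%}")
```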

But this is not the rational world — it’s D.C. in the perpetual political silly season.

One might conclude that because the vast majority of these unserved Americans live in very rural areas — Alaska, Montana, West Virginia — the FCC would, if anything, suggest policies tailored to boost infrastructure investment in these hard-to-reach geographies. We could debate whether these are sound investments and whether the government would do a good job expanding access, but if rural deployment is a problem, then presumably policy should attempt to target and remediate the rural underserved. Commissioner McDowell, however, knows the real impetus for the FCC’s tortured no-confidence vote — its regulatory agenda.

McDowell notes that the report repeatedly mentions the FCC’s net neutrality rules (now being contested in court), which are as far from a pro-broadband policy, let alone a targeted one, as you could imagine. If anything, net neutrality is an impediment to broader, faster, better broadband. But the FCC is using its thumbs-down on broadband deployment to prop up its intrusions into a healthy industry. As McDowell concluded, “the majority has used this process as an opportunity to create a pretext to justify more regulation.”


The $3 trillion opportunity for 2022

August 17th, 2012

There’s more to life than economics, but almost nothing matters more to more people than the rate of long-term economic growth. It completely changes the life possibilities for individuals and families and determines the prospects of nations. It also happens to be the central factor in governmental budgets.

We’ve been saying for the last few years that growth is our biggest problem — but also our biggest opportunity. Faster growth would not only put Americans back to work but also help resolve budget impasses and assist in the long-overdue transformations of our entitlement programs. The current recovery, however, is worse than mediocre. It is dangerously feeble. With every passing day, we fall further behind. Investments aren’t made. Risks aren’t taken. Business ideas are shelved. Joblessness persists, and millions of Americans drop out of the labor force altogether. Continued stagnation would of course exacerbate an already dire long-term unemployment problem. It would also, however, turn America’s unattractive habitual overspending into a possible catastrophe of debt.

John Cochrane of the University of Chicago shows, in the chart below, just how far we’ve slipped from our historical growth path. The red line is the 1965-2007 trend line growth of 3.07%, and the thin black line shows the recession and weak recovery.

Recessions are of course downward deviations from a trend line of growth. Trendlines, however, include recessions, and recoveries thus usually exhibit faster-than-trend growth that catches up to trend. To be sure, trends may not continue forever. Historical performance, as they say, is not a guarantee of future results. Perhaps structural factors in the U.S. and world economies have lowered our “potential” growth rate. This possibility is shown in the blue “CBO Potential” line, which depicts the “new normal” of diminished expectations. Yet the current recovery cannot even catch up to this anemic trend line, which supposedly reflects the downgraded potential of the U.S. economy.

Here is another way to visualize today’s stagnation, from Scott Grannis:

Economies are built on expectations. If the “new normal” of 2.35% growth is correct, then we’ve got problems. All our individual, family, business, and government plans will have to downshift. If growth is even lower than that, tomorrow’s problems will tower over today’s. If, on the other hand, we can reignite the American growth engine, then we’ve got a shot to not only reverse today’s decline but also to open the door to a new era of renewed optimism and, yes, rising expectations.

Faster compounding growth over time makes all the difference. One new paper shows how, with a fundamentally new policy direction on taxes and regulation, real GDP in the U.S. could be “between $2.1 and $3.1 trillion higher in 2022 than it would be under a continuation of current slow growth.” Think of that — an American economy perhaps trillions of dollars larger in a single year a decade from now, with better pro-growth policies. That’s a lot of jobs, a lot of higher incomes, a lot of new businesses, and — whether your preference is more or less government spending — much healthier government budgets . . . summed up in one last chart.
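
The headline figure is compounding at work. Here is a minimal sketch of how a one- to one-and-a-half-point growth differential opens a multi-trillion-dollar gap by 2022; the 2012 real GDP baseline is an assumption, and the growth-rate pairs are illustrative rather than the cited paper's exact scenarios.

```python
# How modest differences in the growth rate compound into a large GDP gap
# by 2022. The baseline and rate pairs are assumptions for illustration,
# not the cited paper's exact scenarios.

GDP_2012_TRILLIONS = 15.8  # assumed real GDP baseline
YEARS = 10                 # 2012 to 2022

def gdp_2022(growth_rate):
    return GDP_2012_TRILLIONS * (1 + growth_rate) ** YEARS

for slow, fast in [(0.022, 0.030), (0.020, 0.035)]:
    gap = gdp_2022(fast) - gdp_2022(slow)
    print(f"{slow:.1%} vs. {fast:.1%} growth: 2022 GDP gap of about ${gap:.1f} trillion")
```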


Misunderstanding the Mobile Ecosystem

August 9th, 2012

Mobile communications and computing are among the most innovative and competitive markets in the world. They have created a new world of software and offer dramatic opportunities to improve productivity and creativity across the industrial spectrum.

Last week we published a tech note documenting the rapid growth of mobile and the importance of expanding wireless spectrum availability. More clean spectrum is necessary both to accommodate fast-rising demand and drive future innovations. Expanding spectrum availability might seem uncontroversial. In the report, however, we noted that one obstacle to expanding spectrum availability has been a cramped notion of what constitutes competition in the Internet era. As we wrote:

Opponents of open spectrum auctions and flexible secondary markets often ignore falling prices, expanding choices, and new features available to consumers. Instead they sometimes seek to limit new spectrum availability, or micromanage its allocation or deployment characteristics, charging that a few companies are set to dominate the market. Although the FCC found that 77% of the U.S. population has access to three or more 3G wireless providers, charges of a coming “duopoly” are now common.

This view, however, relies on the old analysis of static utility or commodity markets and ignores the new realities of broadband communications. The new landscape is one of overlapping competitors with overlapping products and services, multi-sided markets, network effects, rapid innovation, falling prices, and unpredictability.

Sure enough, yesterday Sprint CEO Dan Hesse made the duopoly charge and helped show why getting spectrum policy right has been so difficult.

Q: You were a vocal opponent of the AT&T/T-Mobile merger. Are you satisfied you can compete now that the merger did not go through?

A: We’re certainly working very hard. There’s no question that the industry does have an issue with the size of the duopoly of AT&T and Verizon. I believe that over time we’ll see more consolidation in the industry outside of the big two, because the gap in size between two and three is so enormous. Consolidation is healthy for the industry as long as it’s not AT&T and Verizon getting larger.

Hesse goes even further.

Hesse also seemed to be likening Sprint’s struggles in competing with AT&T-Rex and Big Red to a fight between good and evil. Sprint wants to wear the white hat, according to Hesse. “At Sprint, we describe it internally as being the good guys, of doing the right thing,” he said.

This type of thinking is always a danger if you’re trying to make sound policy. Picking winners and losers is inevitably — at best — an arbitrary exercise. Doing so based on some notion of corporate morality is plain silly, but even more reasonable sounding metrics and arguments — like those based on market share — are often just as misleading and harmful.

The mobile Internet ecosystem is growing so fast and changing with such rapidity and unpredictability that making policy based on static and narrow market definitions will likely yield poor policy. As we noted in our report:

It is, for example, worth emphasizing: Google and Apple were not in this business just a few short years ago.

Yet by the fourth quarter of 2011 Apple could boast an amazing 75% of the handset market’s profits. Apple’s iPhone business, it was widely noted after Apple’s historic 2011, is larger than all of Microsoft. In fact, Apple’s non-iPhone products are also larger than Microsoft.

Android, the mobile operating system of Google, has been growing even faster than Apple’s iOS. In December 2011, Google was activating 700,000 Android devices a day, and now, in the summer of 2012, it estimates 900,000 activations per day. From a nearly zero share at the beginning of 2009, Android today boasts roughly a 55% share of the global smartphone OS market.

. . .

Apple’s iPhone changed the structure of the industry in several ways, not least the relationships between mobile service providers and handset makers. Mobile operators used to tell handset makers what to make, how to make it, and what software and firmware could be loaded on it. They would then slap their own brand label on someone else’s phone.

Apple’s quick rise to mobile dominance has been matched by Blackberry maker Research In Motion’s fall. RIM dominated the 2000s with its email software, its qwerty keyboard, and its popularity with enterprise IT departments. But it  couldn’t match Apple’s or Android’s general purpose computing platforms, with user-friendly operating systems, large, bright touch-screens, and creative and diverse app communities.

Sprinkled among these developments were the rise, fall, and resurgence of Motorola, and then its sale to Google; the rise and fall of Palm; the rise of HTC; and the decline of once dominant Nokia.

Apple, Google, Amazon, Microsoft, and others are building cloud ecosystems, sometimes complemented with consumer devices, often tied to Web apps and services, multimedia content, and retail stores. Many of these products and services compete with each other, but they also compete with broadband service providers. Some of these business models rely primarily on hardware, some software, some subscriptions, some advertising. Each of the companies listed above — a computer company, a search company, an ecommerce company, and a software company — is now a major Internet infrastructure company.

As Jeffrey Eisenach concluded in a pathbreaking analysis of the digital ecosystem (“Theories of Broadband Competition”), there may be market concentration in one (or more) layer(s) of the industry (broadly considered), yet prices are falling, access is expanding, products are proliferating, and innovation is as rapid as in any market we know.


The Who-What-Where-Why-How of Economic Growth

July 24th, 2012

In all the recent debates over deficits, debt, unemployment, entitlements, bond markets, the euro, housing, etc., the absolutely central factor has too often been ignored. A new book, however, deals with nothing but this central factor — economic growth. If we’re going to improve the economic discussion, and the economy itself, The 4% Solution: Unleashing the Economic Growth America Needs is likely to serve as a good foundation.

The book contains chapters by five Nobel economists, including the modern dean of economic growth Robert Lucas, Ed Prescott on marginal tax rates, and Myron Scholes on true innovation; also Bob Litan on “home run” start-up firms, Nick Schulz on intangible assets, David Malpass on monetary policy, and others on entrepreneurs, immigration, debt, and budgets.

I’ve only skimmed many of the chapters, but one thing that jumped out is an important point about the links, and distinctions, between supply and demand. When economic growth has been discussed these last few years, the cause/cure usually cited is a drop in aggregate demand and the “stimulus” measures needed to boost it. It’s of course true that the housing bust and banking troubles caused lots of deleveraging and that government spending and interest rate cuts may help tide over certain consumers and businesses during temporary tough times. Despite substantial Keynesian fiscal and monetary “stimulus,” however — wild deficit spending, four years of zero-interest-rates, and a tripling of the Fed’s balance sheet — businesses, consumers, and the economy-at-large have not responded as hoped. Even if you believe in the efficacy of short term Keynesian growth policies, you ignore at great forecasting peril the array of countervailing anti-growth policies.

Here is how I put it in a Forbes online column last December:

the real problem with demand is supply. Consumption is partly based on current income and needs, sure, but more importantly it is a function of the expected future. Milton Friedman’s version of this idea was the permanent income hypothesis. More generally, we might ask, what are the prospects for prosperity?

We live in a complex, uncertain world. But it’s not unreasonable to believe, even after the Great Recession, that America and the globe still have prodigious potential to create new wealth. It’s also not unreasonable to believe that Washington has severely impaired America’s innovative capacity and our ability to grow.

If you think ObamaCare reinforces and expands many of the worst features of our overpriced, government-heavy health system, then you worry we might not get the productivity revolution we need in one of the largest sectors of our economy. If you think Dodd-Frank and other post-crisis ideas will discourage true financial innovation while preserving “too big to fail,”  then you worry more financial disruptions are in store. If you think tax rates on capital and entrepreneurship are going up, then you might downgrade your estimates of the amount of investment and dynamism — and thus good jobs — America will enjoy.

A downgrade of expected long term growth impairs growth today.

In the new book, Lucas makes a similar argument:

imagine that households and businesses were somehow convinced that the United States would soon move toward a European-level welfare state, financed by a European tax structure. These beliefs would naturally be translated into beliefs that labor costs would soon increase and returns on investment decrease. Beliefs of a future GDP reduction of 30% would be brought forward into the present even before these beliefs could be realized (or refuted).

This is just hypothetical, of course, but it is a hypothesis that is entirely consistent with the way that we know economies work, everyone basing current decisions on expectations about future returns. What I have called recovery growth has happened after previous U.S. recessions and depressions and is certainly a worthy and attainable objective for economic policy today, but it would be foolish to take it as a foregone conclusion.

In the next chapter, Ed Prescott reinforces the point:

what people expect policies to be in the future determines what happens now. Bad policies can and often do depress the economy even before they are implemented. People’s actions now depend on what they think policy will be — not what it was.

. . .

The disturbing fact is that, as of the beginning of 2012, the economy has not even partially recovered from this recession. When it will recover is a political question and not an economic question. Only if the Americans making personal economic decisions knew what future policy would be could economists predict when recovery would occur.

This is one reason long term growth policies are often more important, even in the short term, than most short term “growth” policies.


The Real Deal on U.S. Broadband

June 11th, 2012

Is American broadband broken?

Tim Lee thinks so. Where he once leaned against intervention in the broadband marketplace, Lee says four things are leading him to rethink and tilt toward more government control.

First, Lee cites the “voluminous” 2009 Berkman Report. Which is surprising. The report published by Harvard’s Berkman Center may have been voluminous, but it lacked accuracy in its details and persuasiveness in its big-picture take-aways. Berkman used every trick in the book to claim that “open access” regulation around the world boosted other nations’ broadband economies and that the lack of such regulation in the U.S. harmed ours. But the report’s data and methodology were so thoroughly discredited (especially in two detailed reports issued by economists Robert Crandall, Everett Ehrlich, and Jeff Eisenach and Robert Hahn) that the FCC, which commissioned the report, essentially abandoned it. Here was my summary of the economists’ critiques:

The [Berkman] report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland.

. . .

Berkman’s qualitative analysis was, if possible, just as misleading. It passed along faulty data on broadband speeds and prices. It asserted South Korea’s broadband boom was due to open access regulation, but in fact most of South Korea’s surge happened before it instituted any regulation. The study said Japanese broadband, likewise, is a winner because of regulation. But regulated DSL is declining fast even as facilities-based (unshared, proprietary) fiber-to-the-home is surging.

Berkman also enjoyed comparing broadband speeds of tiny European and Asian countries to the whole U.S. But if we examine individual American states — New York or Arizona, for example — we find many of them outrank most European nations and Europe as a whole. In fact, applying the same Speedtest.com data Berkman used, the U.S. as a whole outpaces Europe as a whole! Comparing small islands of excellence to much larger, more diverse populations or geographies is bound to skew your analysis.

The Berkman report twisted itself in pretzels trying to paint a miserable picture of the U.S. Internet economy and a glowing picture of heavy regulation in foreign nations. Berkman, however, ignored the prima facie evidence of a vibrant U.S. broadband marketplace, manifest in the boom in Web video, mobile devices, the App Economy, cloud computing, and on and on.

How could the bulk of the world’s best broadband apps, services, and sites be developed and achieve their highest successes in the U.S. if American broadband were so slow and thinly deployed? We came up with a metric that seemed to refute the notion that U.S. broadband was lagging, namely, how much network traffic Americans generate vis-à-vis the rest of the world. It turned out the U.S. generates more network traffic per capita and per Internet user than any nation but South Korea and generates about two-thirds more per-user traffic than the closest advanced economy of comparable size, Western Europe.

Berkman based its conclusions almost solely on (incorrect) measures of “broadband penetration” — the number of broadband subscriptions per capita — but that metric turned out to be a better indicator of household size than broadband health. Lee acknowledges the faulty analysis but still assumes “broadband penetration” is the sine qua non measure of Internet health. Maybe we’re not awful, as Berkman claimed, Lee seems to be saying, but even if we correct for their methodological mistakes, U.S. broadband penetration is still just OK. “That matters,” Lee writes,

because a key argument for America’s relatively hands-off approach to broadband regulation has been that giving incumbents free rein would give them incentive to invest more in their networks. The United States is practically the only country to pursue this policy, so if the incentive argument was right, its advocates should have been able to point to statistics showing we’re doing much better than the rest of the world. Instead, the argument has been over just how close to the middle of the pack we are.

No, I don’t agree that the argument has consisted of bickering over whether the U.S. is more or less mediocre. Not at all. I do agree that advocates of government regulation have had to adjust their argument, from “U.S. broadband is awful” to “U.S. broadband is mediocre.” Yet they still hang their hat on “broadband penetration” because most other evidence on the health of the U.S. digital economy is even less supportive of their case.

In each of the last seven years, U.S. broadband providers have invested between $60 and $70 billion in their networks. Overall, the U.S. leads the world in info-tech investment — totaling nearly $500 billion last year. The U.S. now boasts more than 80 million residential broadband links and 200+ million mobile broadband subscribers. U.S. mobile operators have deployed more 4G mobile network capacity than anyone, and Verizon just announced its FiOS fiber service will offer 300 megabit-per-second residential connections — perhaps the fastest large-scale deployment in the world.

Eisenach and Crandall followed up their critique of the Berkman study with a fresh March 2012 analysis of “open access” regulation around the world (this time with Allan Ingraham). They found:

  • “it is clear that copper loop unbundling did not accelerate the deployment or increase the penetration of first-generation broadband networks, and that it had a depressing effect on network investment”
  • “By contrast, it seems clear that platform competition was very important in promoting broadband deployment and uptake in the earlier era of DSL and cable modem competition.”
  • “to the extent new fiber networks are being deployed in Europe, they are largely being deployed by unregulated, non-ILEC carriers, not by the regulated incumbent telecom companies, and not by entrants that have relied on copper-loop unbundling.”

Lee doesn’t mention the incisive criticisms of the Berkman study or the voluminous literature, including this latest example, showing open access policies are ineffective at best and more likely harmful.

In coming posts, I’ll address Lee’s three other worries.

— Bret Swanson

Share/Bookmark

New iPad, Fellow Bandwidth Monsters Hungry for More Spectrum

March 13th, 2012

Last week Apple unveiled its third-generation iPad. Yesterday the company said the LTE versions of the device, which can connect via Verizon and AT&T mobile broadband networks, are sold out.

It took 15 years for laptops to reach 50 million units sold in a year. It took smartphones seven years. For tablets (not including Microsoft’s clunky attempt a decade ago), just two years. Mobile device volumes are astounding. In each of the last five years, global mobile phone sales topped a billion units. Last year smartphones outsold PCs for the first time – 488 million versus 432 million. This year well over 500 million smartphones and perhaps 100 million tablets could be sold.

Smartphones and tablets represent the first fundamentally new consumer computing platforms since the PC, which arrived in the late ’70s and early ’80s. Unlike mere mobile phones, they’ve got serious processing power inside. But their game-changing potency is really based on their capacity to communicate via the Internet. And this power is, of course, dependent on the cloud infrastructure and wireless networks.

But are wireless networks today prepared for this new surge of bandwidth-hungry mobile devices? Probably not. When we started to build 3G mobile networks in the middle of last decade, many thought it was a huge waste. Mobile phones were used for talking, and some texting. They had small low-res screens and were terrible at browsing the Web. What in the world would we do with all this new wireless capacity? Then the iPhone came, and, boom — in big cities we went from laughable overcapacity to severe shortage seemingly overnight. The iPhone’s brilliant screen, its real Web browsing experience, and the world of apps it helped us discover totally changed the game. Wi-Fi helped supply the burgeoning iPhone with bandwidth, and Wi-Fi will continue to grow and play an important role. Yet Credit Suisse, in a 2011 survey of the industry, found that mobile networks overall were running at 80% of capacity and that many network nodes were tapped out.

Today, we are still expanding 3G networks and launching 4G in most cities. Verizon says it offers 4G LTE in 196 cities, while AT&T says it offers 4G LTE in 28 markets (and combined with its HSPA+ networks offers 4G-like speeds to 200 million people in the U.S.). Lots of things affect how fast we can build new networks — from cell site permitting to the fact that these things are expensive ($20 billion worth of wireless infrastructure in the U.S. last year). But another limiting factor is spectrum availability.

Do we have enough radio waves to efficiently and cost-effectively serve these hundreds of millions of increasingly powerful mobile devices, which generate and consume increasingly rich content, with ever more stringent latency requirements, and which depend upon robust access to cloud storage and computing resources?

Capacity is a function of money, network nodes, technology, and radio waves. But spectrum is grossly misallocated. The U.S. government owns 61% of the best airwaves, while mobile broadband providers — where all the action is — own just 10%. Another portion is controlled by the old TV broadcasters, where much of this beachfront spectrum lies fallow or underused.

The key is allowing spectrum to flow to its most valuable uses. Last month Congress finally authorized the FCC to conduct incentive auctions to free up some unused and underused TV spectrum. Good news. But other recent developments discourage us from too much optimism on this front.

In December the FCC and Justice Department vetoed AT&T’s attempt to augment its spectrum and cell-site position via merger with T-Mobile. Now the FCC and DoJ are questioning Verizon’s announced purchase of Spectrum Co. — valuable but unused spectrum owned by a consortium of cable TV companies. The FCC has also threatened to tilt any spectrum auctions so that it decides who can bid, how much bidders can buy, and what buyers may or may not do with their spectrum — pretending Washington knows exactly how this fast-changing industry should be structured, thus reducing the value of spectrum and probably delaying availability of new spectrum and possibly reducing the sector’s pace of innovation.

It’s very difficult to see how it’s at all productive for the government to block companies who desperately need more spectrum from buying it from those who don’t want it, don’t need it, or can’t make good use of it. The big argument against AT&T and Verizon’s attempted spectrum purchases is “competition.” But T-Mobile wanted to sell to AT&T because it admitted it didn’t have the financial (or spectrum) wherewithal to build a super expensive 4G network. Apparently the same for the cable companies, who chose to sell to Verizon. Last week Dish Network took another step toward entering the 4G market with the FCC’s approval of spectrum transfers from two defunct companies, TerreStar and DBSD.

Some people say the proliferation of Wi-Fi or the increased use of new wireless technologies that economize on spectrum will make more spectrum availability unnecessary. I agree Wi-Fi is terrific and will keep growing and that software radios, cognitive radios, mesh networks and all the other great technologies that increase the flexibility and power of wireless will make big inroads. So fine, let’s stipulate that perhaps these very real complements will reduce the need for more spectrum at the margin. Then the joke is on the big companies that want to overpay for unnecessary spectrum. We still allow big, rich companies to make mistakes, right? Why, then, do proponents of these complementary technologies still oppose allowing spectrum to flow to its highest use?

Free spectrum auctions would allow lots of companies to access spectrum — upstarts, middle tier, and yes, the big boys, who desperately need more capacity to serve the new iPad.

— Bret Swanson

Share/Bookmark

U.S. Internet Growth – Another Way to Visualize

February 27th, 2012

We’ve published a lot of linear and log-scale line charts of Internet traffic growth. Here’s just another way to visualize what’s been happening since 1990. The first image shows 1990-2004.

The second image scales down the first to make room for the next period.

The third image, using the same scale as image 2, shows 2005-2011.

These images use data compiled by MINTS, with our own further analysis and estimations. Other estimates from Cisco and Arbor/Labovitz — and our own analysis based on those studies — show even higher traffic levels, though roughly similar growth rates.

Share/Bookmark

Prof. Krugman misses the App Economy

February 7th, 2012

Steve Jobs designed great products. It’s very, very hard to make the case that he created large numbers of jobs in this country.

— Prof. Paul Krugman, New York Times, January 25, 2012

Turns out, not very hard at all.

The App Economy now is responsible for roughly 466,000 jobs in the United States, up from zero in 2007 when the iPhone was introduced.

— Dr. Michael Mandel, TechNet study, February 7, 2012

See our earlier rough estimate of Apple’s employment effects: “Jobs: Steve vs. the Stimulus.”

— Bret Swanson

Share/Bookmark

R.H. Stands for Regulatory Hubris

February 1st, 2012

“It is the single worst telecom bill that I have ever seen.”

— Reed Hundt, Jan. 31, 2012

Isn’t this rich?

One of the most zealous regulators America has known says Congress is overstepping its bounds because it wants to unleash lots of new wireless spectrum but also wants to erect a few guardrails so that FCC regulators don’t run roughshod over the booming mobile broadband market.

At a New America Foundation event yesterday, former FCC chairman Reed Hundt said Congress shouldn’t micromanage the FCC’s ability to micromanage the wireless industry. Mr. Congressman, you don’t know anything about how the FCC should regulate the Internet. But the FCC does know how to build networks, run mobile Internet businesses, and perfectly structure a wildly tumultuous economic sector. It’s just the latest remarkable example of the growing hubris of the regulatory state.

In his book, You Say You Want a Revolution, Hundt famously recounted his staff’s interpretation and implementation of the 1996 Telecom Act.

The passage of the new law placed me on a far more public stage. But I felt Congress — in the constitutional sense — had asked me to exercise the full power of all ideas I could summon. And I believed that I and my team had learned, through many failures, how to succeed. Later, I realized that we knew almost nothing of the complexity and importance of the tasks in front of the FCC.

Meeting in several overlapping groups of about a dozen people each . . . we dedicated almost three weeks to studying the possible readings of each word in the 150-page statute. The conference committee compromises had produced a mountain of ambiguity that was generally tilted toward the local phone companies’ advantage. But under the principles of statutory interpretation, we had broad authority to exercise our discretion in writing the implementing regulations. Indeed, like the modern engineers trying to straighten the Leaning Tower of Pisa, we could aspire to provide the new entrants to the local telephone markets a fairer chance to compete than they might find in any explicit provision of the law. In addition, the law gave almost no guidance about how to treat the Internet, data networks, . . . and many other critical issues. (Three years later, Justice Antonin Scalia agreed, on behalf of the Supreme Court, that the law was profoundly ambiguous.)

The more my team studied the law, the more we realized our decisions could determine the winners and losers of the new economy. We did not want to confer advantage on particular companies; that seemed inequitable. But inevitably

wink, wink,

a decision that promoted entry into the local market would benefit a company that followed such a strategy.

There are so many angles here.

(1) Hundt says he and his team basically stretched the statute to mean whatever they wanted. The law may have been ambiguous — and it was, I’m not going to defend the ‘96 Act — yet the Supreme Court still found in a series of early-2000s cases that Hundt’s FCC had wildly overstepped even these flimsy bounds. That’s how aggressive and unconstrained Hundt was.

(2) Hundt’s rules helped crash the tech and telecom sectors in 2000-2002. His rules were so complex and intrusive that, whatever your views about the CLEC wars, the PCS C block spectrum debacle, and other battles, it’s hard to deny that the paralysis caused by the rules hurt broadband and the nascent Net.

(3) Is it surprising that, given the FCC’s poor record of reaching way past its granted powers, some in Congress want to circumscribe FCC regulators by giving them less-than-omnipotent authority? Is the new view of elite regulators that Congress should pass laws, the full text of which might read: “§1. Congress grants to the Internet Agency the authority to regulate the Internet. Go forth and regulate.”

(4) On the other hand, it’s not clear why Hundt would care particularly what Congress says in any new spectrum statute. He didn’t care much for the words or intent of the ‘96 Act, and he thinks regulators should “aspire” to grand self-appointed projects. Who knows, maybe all those Supreme Court smack downs in the early 2000s made an impression.

(5) Hundt says he and his team later realized, in effect, how naive they were about “the complexity and importance of the tasks in front of the FCC.” So he’s acknowledging after things didn’t go so well that his FCC underestimated the complexity and thus overestimated their own expertise . . . yet he says today’s FCC deserves comprehensive power to structure the mobile Internet as it sees fit?

(6) Hundt admitted his FCC relished its capacity to pick winners and losers. Not particular companies, mind you — that would be improper — merely the types of companies who win and lose. A distinction without very much of a difference.

(7) We don’t argue that Congress, instead of the FCC, should impose intrusive regulation through statute. We don’t advocate long and complex laws. That’s not the point. Laws should be clear and simple, but stating the boundaries of a regulator’s authority is not a controversial act. No one should be imposing intrusive regulation or overdetermining the structure of an industry. And that’s what Congress — perhaps in a rare case! — is protecting against here.

Share/Bookmark

Jobs’ jobs versus “jobs”

January 27th, 2012

On Tuesday afternoon, Apple said it earned $13 billion in the fourth quarter on $46 billion in revenue. Thirty-seven million iPhones and 15 million iPads sold in the quarter helped boost its market cap to $415 billion. A few hours later, Indiana Gov. Mitch Daniels, in his State of the Union response message, contrasted the technology juggernaut with Washington’s impotent jobs efforts: “The late Steve Jobs – what a fitting name he had – created more of them than all those stimulus dollars the President borrowed and blew.”

First thing Wednesday morning, however, Paul Krugman countered with a devastating argument – “Mitch Daniels Doesn’t Read the New York Times.” Prof. Krugman referred to the first of the Times’ multipart series on Apple’s Chinese manufacturing operations.

From Sunday’s Times:

Not long ago, Apple boasted that its products were made in America. Today, few are. Almost all of the 70 million iPhones, 30 million iPads and 59 million other products Apple sold last year were manufactured overseas.

Apple employs 43,000 people in the United States and 20,000 overseas, a small fraction of the over 400,000 American workers at General Motors in the 1950s, or the hundreds of thousands at General Electric in the 1980s. Many more people work for Apple’s contractors: an additional 700,000 people engineer, build and assemble iPads, iPhones and Apple’s other products. But almost none of them work in the United States. Instead, they work for foreign companies in Asia, Europe and elsewhere, at factories that almost all electronics designers rely upon to build their wares.

Steve Jobs designed great products. It’s very, very hard to make the case that he created large numbers of jobs in this country. Obama’s auto bailout, just by itself, saved a lot more jobs than Apple’s US employment.

So the New York Times thinks all those Chinese Foxconn assembly workers are the primary employment effect of Apple. And Prof. Krugman sidesteps the argument by noting the “auto bailout” – not the stimulus – “saved” – not created, mind you – more jobs than Apple’s under-roof American workforce.

CNNMoney jumped in:

Daniels’ math just doesn’t add up, no matter how successful and valuable Apple has become.

Not even close.

This little episode exposes quite a lot about the fundamentally different ways people think about the economy.

The economy is dynamic and complex. It’s cooperative, competitive, and evolutionary. In recent pre-Great Recession history, the U.S. lost around 15 million jobs every year — holy depression! But we created some 17 million a year, netting two million. There’s no way to quantify Jobs’ jobs impact exactly, which is one of the great virtues of capitalism.

An attempt to estimate in a very rough way, however, might be useful:

Apple

Apple has 60,000 total employees, around 43,000 of them in the U.S.

Multiply these numbers by the years these jobs have existed, decades in the case of many. That’s many hundreds of thousands of “job-years.”

Then consider the broad software industry, especially the world of “apps” being developed for iPhone and iPad, and now for Macs. More than 500,000 iOS apps now exist, and 1.2 billion were downloaded in the last week of December 2011. Lots of people are trying to quantify how many jobs this app ecosystem has created. Likely it will mean many tens of thousands of jobs for decades to come, meaning hundreds of thousands of job-years, though even the “app” as we know it won’t look this way forever, or even for long. We’ll see.

Apple computers, iPhones, iPads, and software platforms like OS X, iOS, QuickTime, and WebKit drive the Internet and wireless industries. (WebKit is an open-source browser engine developed by Apple that most people have never heard of. But it’s crucial to Web browsers and how webpages are built and rendered.) These devices allow people and companies to create content. They improve productivity and create new kinds of jobs. How many graphic designers would we have had over the years without the Mac?

Apple devices devour bandwidth and storage and drive new generations of broadband and mobile network build-outs, totaling about $65 billion per year in the U.S. So add some significant portion of networking equipment salesmen and telecom pole-climbers and Verizon and Comcast workers and data center technicians. The iPhone alone completely reinvigorated the U.S. mobile industry and ushered in a new paradigm of computing, moving from PC to mobile device. Apple jolted AT&T back to life when the two companies partnered on the first iPhone. How many jobs across the economy did the iPhone “save” by boosting our digital industries when the PC era had about run its course? A lot.

Jobs created a new digital music industry. It’s impossible to gauge how many jobs were created versus eliminated. But clearly the new jobs are higher value jobs.

Apple is now the largest buyer of microchips in the world. It buys 23% of all the world’s flash memory, for example. Much of that is South Korean. But Apple probably buys something like 20 million Intel microprocessors each year. That’s a huge part of Intel’s business. Intel employs 100,000 people (not all in the U.S.).

The notion that “almost none” of the “additional 700,000” people who contribute to designing and building Apple products work in the U.S. is false. And silly.

Apple’s list of suppliers includes many of America’s leading-edge technology companies: Qualcomm, Intel, Corning, LSI, Broadcom, Seagate, Micron, Analog Devices, Linear, Maxim, Marvell, International Rectifier, Western Digital, ON Semi, Nvidia, AMD, Cypress, Texas Instruments, TriQuint, SanDisk, etc.

Lots of Apple’s foreign suppliers have substantial workforces in the U.S. Oft cited are the two Austin, Texas, Samsung fabs, which employ 3,500 workers who make NAND flash memory and Apple’s A5 chip. But many Asian and European Apple suppliers have sales, marketing, and support staff in America.

And of course no government or stimulus jobs are possible without private wealth creation. During the “stimulus” period — 2009-11 — Apple paid $16.5 billion in corporate income taxes, thus financing about 2% of the entire $821 billion stimulus package and thus 2% of the stimulus “jobs.” One might counter that stimulus was funded with debt, but money is fungible, and issuing debt depends on future claims on wealth. Moreover, because stimulus jobs were so extraordinarily expensive, a different accounting says that Apple’s $16.5 billion in taxes could have paid for 330,000 $50,000-a-year salaries.
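
A quick back-of-envelope check of those figures, for readers who want to see the arithmetic. This is a minimal sketch in Python; the dollar amounts are the ones cited above, while the variable names are mine and purely illustrative.

```python
# Back-of-envelope check of the Apple/stimulus figures cited above.
# All inputs come from the post; the names are illustrative only.

apple_taxes_2009_11 = 16.5e9   # corporate income taxes paid, 2009-11
stimulus_total      = 821e9    # size of the ARRA package cited above
assumed_salary      = 50_000   # annual salary used in the post's accounting

share_of_stimulus = apple_taxes_2009_11 / stimulus_total
salaries_funded   = apple_taxes_2009_11 / assumed_salary

print(f"Share of stimulus financed by Apple's taxes: {share_of_stimulus:.1%}")  # ~2.0%
print(f"$50,000 salaries those taxes could fund:     {salaries_funded:,.0f}")   # 330,000
```

Substituting Disney’s $7 billion in income taxes over the same period yields the roughly 1% and 140,000-salary figures in the Pixar section below.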

Pixar

In 1986, Steve Jobs bought a tiny division of George Lucas’s LucasFilm and created what we know as Pixar, the leading movie animation studio. In 2006, Pixar merged with Walt Disney. Disney has 156,000 employees and $41 billion in sales, a growing portion of which directly or indirectly relate to Pixar properties, film development, characters, licensing, and distribution. Pixar really saved Hollywood during a dark time for film and spawned a whole new animation boom. Pixar developed and inspired many new technologies for film making, video games, and other interactive visual media.

An additional consideration: Over the 2009-11 period, Disney paid $7 billion in income taxes, thus financing just under 1% of the stimulus and 1% of the “jobs.” That $7 billion could have funded 140,000 $50,000-a-year salaries.

Macro

The economy-wide effects of Steve Jobs are of course impossible to measure with precision. But a new study from Robert Shapiro and Kevin Hassett estimates that advances in mobile Internet technologies boosted U.S. employment by around 400,000 per year from 2007 to 2011, or by a total of around 1.2 million over the 2009-11 stimulus period. The Phoenix Center found similar employment effects. What proportion of these can be attributed to Steve Jobs is, again, impossible to say. But it’s clear Apple was the primary innovator in mobile Internet technologies in this period, towering over a multitude of other important technologies. More than any other device, the iPhone exploited the new, larger-capacity 3G mobile networks of the period, and once it proved wildly popular it was the chief impetus for additional 3G mobile capacity.

Stimulus

CBO estimates ARRA (the Stimulus bill) yielded between 1.3 and 3.5 million job-years net, meaning created or saved. But as the stimulus wanes, many of these jobs go away, or at least are not attributable to the stimulus.

Robert Barro of Harvard questions whether ARRA created any jobs at all. He says the question isn’t whether the Keynesian multiplier is greater than 1 (meaning break even; spend $1, get $1 in GDP), let alone whether it’s 1.5 (spend a dollar, get $1.50), but whether the multiplier is greater than zero.

Stanford’s John Taylor also thinks ARRA had no positive effect.

And do stimulus-boosters really want to equate these two activities?

(1) the federal government pays a state worker’s salary for a year instead of the state paying the salary;

(2) a new job derived from an entrepreneur who’s created whole new industries with new kinds of higher value jobs that last for decades, spurring yet more growth and jobs.

In Keynesian macro world those two jobs are equivalent, I guess.

The CNNMoney report acknowledged the 43,000 U.S. employees of Apple and also the 850 employees of Pixar at the time it merged with Disney in 2006. It even allowed that perhaps Pixar could employ twice as many people now. It also grudgingly admitted that maybe some Americans are building apps for the App Store. That’s about it.

This imprecise exercise misses the deeper truths of entrepreneurial capitalism and short-changes the dynamic, as opposed to static, view of the economy. In a new article today, which I saw just as I was finishing this post, Prof. Krugman quite rightly notes the importance of industrial clusters to growth. He cites the Chinese supply chains highlighted in the NYT series. But he entirely ignores the most famous and successful cluster on earth — Silicon Valley. How many jobs in Silicon Valley do we think are dependent on or symbiotic with Apple? It’s incalculable, but it’s a lot.

I asked Gov. Daniels what he thought.

“I won’t be reading Herr Krugman,” Gov. Daniels replied, “but I did read the New York Times, and it changes nothing. Just means Dr. K doesn’t understand the dynamism of innovation, either.”

— Bret Swanson

Share/Bookmark

Roam, roam on the range. Will Washington’s new intrusions discourage wireless expansion?

January 26th, 2012

The U.S. wireless sector has been only mildly regulated over the last decade. We’d argue this is a key reason for its success. But this presumption of mostly unfettered experimentation and dynamism may be changing.

Consider Sprint’s apparent decision to use “roaming” in Oklahoma and Kansas instead of building its own network. Now, roaming is a standard feature of mobile networks worldwide. Company A might not have as much capacity as it would like in some geography, so it pays company B, who does have capacity there, for access. Company A’s customers therefore get wider coverage, and Company B is paid for use of its network.

The problem comes with the FCC’s 2011 “digital roaming” order. Last spring three FCC commissioners decided that private mobile services — which the Communications Act says “shall not . . . be treated as a common carrier” — are a common carrier. Only D.C. lawyers smarter than you and me can figure out how to transfigure “shall not” into “may.” Anyway, the possible effect is to subject mobile data — one of the fastest growing sectors anywhere on earth — to all sorts of forced access mandates and price controls.

We warned here and here that turning competitive broadband infrastructure into a “common carrier” could discourage all players in the market from building more capacity and covering wider geographies. If company A can piggyback on company B’s network at below market rates, why would it build its own expensive network? And if company B’s network capacity is going to company A’s customers, instead of its own customers, do we think company B is likely to build yet more cell sites and purchase more spectrum?

With 37 million iPhones and 15 million iPads sold last quarter, we need more spectrum, more cell towers, more capacity. This isn’t the way to get it. And what we are seeing with Sprint’s decision to roam instead of build in Oklahoma and Kansas may be the tip of this anti-investment iceberg.

Last spring when the data roaming order came down we began wondering about a possible “slow walk to a reregulated communications market.” Among other items, we cited net neutrality, possible new price controls for Special Access links to cell sites, and a host of proposed regulations affecting things like behavioral advertising and intellectual property (see, PIPA/SOPA). Since then we’ve seen the government block the AT&T-T-Mobile merger. And the FCC is now holding up its own important push for more wireless spectrum because it wants the right to micromanage who gets what spectrum and how mobile carriers can use it.

Many of these items can be thoughtfully debated. But the number of new encroachments onto the communications sector threatens to slow its growth. Many of these encroachments, moreover, are taking place outside any basic legislative authority. In the digital roaming and net neutrality cases, for example, the FCC appeared clearly to grant itself extra- if not il-legal authority. These new regulations are now being challenged in court.

We need some restraint across the board on these matters. The Internet is too important. We can’t allow a quiet, gradual reregulation of the sector to slow down our chief engine of economic growth.

— Bret Swanson

Share/Bookmark

Quote of the Day

January 17th, 2012

“One solution is giving back to bank creditors the job of policing bank risk-taking. Roll back deposit insurance, for instance. We may not be able to see the future, but we can incentivize caution as a general matter. And we can improve the odds that, when banks make mistakes, they won’t all make the same mistake at the same time.”

— Holman Jenkins, The Wall Street Journal, January 18, 2011

Share/Bookmark

Is the FCC serious about more wireless spectrum? Apparently not.

January 13th, 2012

For the third year in a row, FCC chairman Julius Genachowski used his speech at the Consumer Electronics Show in Las Vegas to push for more wireless spectrum. He wants Congress to pass the incentive auction law that would unleash hundreds of megahertz of spectrum to new and higher uses. Most of Congress agrees: we need lots more wireless capacity and spectrum auctions are a good way to get there.

Genachowski, however, wants overarching control of the new airwaves and, by extension, the mobile broadband ecosystem. The FCC wants the authority to micromanage the newly available spectrum — who can buy it, how much they can buy, how they can use it, what content flows over it, what business models can be employed with it. But this is an arena that is growing wildly fast, where new technologies appear every day, and where experimentation is paramount to see which business models work. Auctions are supposed to be a way to get more spectrum into the marketplace, where lots of companies and entrepreneurs can find the best ways to use it to deliver new communications services. “Any restrictions” by Congress on the FCC “would be a real mistake,” said Genachowski. In other words, he doesn’t want Congress to restrict his ability to restrict the mobile business. It seems the liberty of regulators to act without restraint is a higher virtue than the liberty of private actors.

At the end of 2011, the FCC and Justice Department vetoed AT&T’s proposed merger with T-Mobile, a deal that would have immediately expanded 3G mobile capacity across the nation and accelerated AT&T’s next generation 4G rollout by several years. That deal was all about a more effective use of spectrum, more cell towers, more capacity to better serve insatiable smartphone- and tablet-equipped consumers. Now the FCC is holding the spectrum auction bill hostage with its my-way-or-the-highway approach. And one has to ask: Is the FCC really serious about spectrum, mobile capacity, and a healthy broadband Internet?

— Bret Swanson

Share/Bookmark

Quote of the Day

December 23rd, 2011

“If the Greeks had skimped on the olive oil in a liter bottle, that wouldn’t threaten the metric system.”

— John Cochrane, Bloomberg View, December 21, 2011

Share/Bookmark

Risk Parity in Indiana

December 21st, 2011

For readers interested in either Indiana or investment strategy, see my letter (subscription) to the Indianapolis Business Journal commenting on the new asset allocation and risk management strategies at INPRS, the state’s $25-billion pension fund.

Ken Skarbeck’s column (Nov. 19) addressed a new strategy the Indiana Public Retirement System is using to diversify its portfolio. The new strategy, known as risk parity, has been around for over 20 years and will eventually compose 10% of INPRS assets.

Since the financial crisis of 2008, INPRS has dedicated significant time and resources to improve its risk management infrastructure. The decision to move a portion of the assets into risk parity – which seeks to diversify risk, rather than merely diversify asset classes – is one direct outcome of the new risk management program.

Risk parity attempts to balance risk across equities, bonds, commodities, and inflation-linked bonds. It recognizes the distinct performance characteristics of these assets during periods of robust or slow growth, for instance, or high or low inflation. For any given rate of return target, risk can be mitigated. Likewise, for a given risk appetite, returns can be improved. Nothing is a sure bet, but risk parity strategies have achieved robust returns while minimizing risk over most time periods.

Mr. Skarbeck makes a good point that historical volatility does not measure all types of risk. We heartily agree.

Mr. Skarbeck thinks stocks are a good bet right now. He may be correct. INPRS owns billions of dollars of equities and works with investment managers who have strong views, perhaps similar to Mr. Skarbeck’s, about the direction of stocks, bonds, and other assets. But as an entity charged with funding the retirements of 500,000 Hoosier workers and retirees, INPRS as a whole should not make overly concentrated bets.

Truly balanced portfolios recognize that neither INPRS, nor anyone else, knows with certainty what the global economy has in store. Committing to a concentrated asset mix because of a particular view on equities would represent the very type of risk Mr. Skarbeck warns against.

Fortunately, risk parity has performed well in all environments – from low inflation, high growth periods where stocks might outperform to high-inflation periods where commodities and TIPS might do better. That’s the point of the strategy: seek healthy returns sufficient to fund the retirements of INPRS members while minimizing downside risk.

Bret T. Swanson
Trustee and Investment Committee Member
Indiana Public Retirement System (INPRS)

Share/Bookmark

Another blow to U.S. economic growth

December 20th, 2011

More bad news for U.S. economic growth. In the face of multiplying obstacles deployed by Washington regulators, AT&T today abandoned its pursuit of T-Mobile. The most important outcome of the merger would have been a quicker and broader roll-out of 4G mobile broadband services. Now AT&T will have to find other paths to the wireless radio spectrum (and cell towers) it needs to meet growing demand and build tomorrow’s networks. T-Mobile is left in purgatory, short of the spectrum and long-term financial wherewithal to effectively compete.

Some say, don’t worry, assuming that another U.S. mobile provider will pick up T-Mobile. Not so fast. If Washington disallowed AT&T, it would do the same for Verizon. Sprint was pursuing T-Mobile before AT&T swooped in, but a Sprint-TMo combo makes much less sense. The spectrum-technology-tower infrastructure positions of AT&T and TMo were almost perfectly complementary. Not so for Sprint, who uses mostly higher frequencies, has always been a CDMA company (as opposed to WCDMA), and is already finding it challenging to raise the funds to build its own LTE network, given rocky times with partner Clearwire.

The U.S. mobile industry has been a shining star in an otherwise dark U.S. economy. But with Washington nixing the AT&T-T-Mobile merger, and given recent struggles at Clearwire and engineering disputes with upstart LightSquared, it’s not clear mobile will continue on its steep ascent. The FCC “staff report” opposing the AT&T-TMo deal didn’t even address the elephant in the room – spectrum. It’s odd. The FCC declared a spectrum crisis two years ago and repeatedly emphasized the urgent need for broadband expansion. Then, poof, hardly a mention of either in its report. Not a good sign when the expert agency has taken its eye off the ball.

The industry is still full of potential, but there will be near-term disruptions as companies sort out new spectrum, business, and technology strategies. And as millions of un- and underemployed Americans know, time is money. Regulatory impediments and foot-dragging are especially harmful – and even infuriating – for an industry that desperately wants to grow. For an industry that is in many ways the bedrock of the 21st century American knowledge economy.

Beyond the disquieting roller-coaster in the mobile industry, one wonders more broadly about the American economy. Just what kind of business are we allowed to conduct? What investments are preferred – by whom? How far will the tilt of decision-making from private entities to public bureaucracies go?

— Bret Swanson

Share/Bookmark

FedEx vs. Broadband: the Big Bio data dilemma

December 1st, 2011

The New York Times reports today that scientists reading human genomes are generating so much data that they must use snail mail instead of the Internet to send the DNA readouts around the globe.

BGI, based in China, is the world’s largest genomics research institute, with 167 DNA sequencers producing the equivalent of 2,000 human genomes a day.

BGI churns out so much data that it often cannot transmit its results to clients or collaborators over the Internet or other communications lines because that would take weeks. Instead, it sends computer disks containing the data, via FedEx.

“It sounds like an analog solution in a digital age,” conceded Sifei He, the head of cloud computing for BGI, formerly known as the Beijing Genomics Institute. But for now, he said, there is no better way.

The field of genomics is caught in a data deluge. DNA sequencing is becoming faster and cheaper at a pace far outstripping Moore’s law, which describes the rate at which computing gets faster and cheaper.

The result is that the ability to determine DNA sequences is starting to outrun the ability of researchers to store, transmit and especially to analyze the data.

We’ve been talking about the oncoming rush of biomedical data for a while. A human genome consists of some 2.9 billion base pairs, easily stored in around 725 megabytes with standard compression techniques. Two thousand genomes a day, times 725 MB, equals 1,450,000 MB, or 1.45 terabytes. That’s a lot of data for one entity to transmit in a day’s time. Some researchers believe a genome can be losslessly compressed to approximately 4 megabytes. In compressed form, 2,000 genomes would total around 8,000 MB, or just 8 gigabytes. Easily doable for a major institution.
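
A rough check of that data-volume arithmetic, sketched in Python. The per-genome sizes are the post’s estimates rather than measurements of real sequencer output, and the variable names are mine.

```python
# Data-volume arithmetic for BGI's reported output, using the post's figures.

genomes_per_day       = 2_000
mb_per_genome         = 725   # ~2.9 billion base pairs, ~2 bits per base
mb_per_genome_minimal = 4     # the aggressive lossless-compression estimate

daily_output_tb = genomes_per_day * mb_per_genome / 1e6                      # MB -> TB
daily_output_gb_compressed = genomes_per_day * mb_per_genome_minimal / 1e3   # MB -> GB

print(f"Daily output, standard encoding:      {daily_output_tb:.2f} TB")             # ~1.45 TB
print(f"Daily output, aggressive compression: {daily_output_gb_compressed:.0f} GB")  # ~8 GB
```

At 1.45 terabytes a day, a sustained transfer works out to roughly 135 megabits per second around the clock, which helps explain the FedEx workaround.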

Interested to know more.

Share/Bookmark

FCC wireless mischief: On to the substance

December 1st, 2011

Here’s a critique of the FCC’s new “staff report” from AT&T itself. Obviously, AT&T is an interested party and has a robust point of view. But it’s striking the FCC was so sloppy in a staff report — for instance not addressing the key issue at hand: spectrum — let alone releasing this not-ready-for-prime-time report to the public.

Surely, it is neither fair nor logical for the FCC to trumpet a national spectrum crisis for much of the past year, and then draft a report claiming that two major wireless companies face no such constraints despite sworn declarations demonstrating the opposite.

The report is so off-base and one-sided that the FCC may actually have hurt its own case.

Share/Bookmark

Why is the FCC playing procedural games?

November 30th, 2011

America is in desperate need of economic growth. But as the U.S. economy limps along, with unemployment stuck at 9%, the Federal Communications Commission is playing procedural tiddlywinks with the nation’s largest infrastructure investor, in the sector of the economy that offers the most promise for innovation and 21st century jobs. In normal times, we might chalk this up to clever Beltway maneuvering. But do we really have the time or money to indulge bureaucratic gamesmanship?

On Thanksgiving Eve, the FCC surprised everyone. It hadn’t yet completed its investigation into the proposed AT&T-T-Mobile wireless merger, and the parties had not had a chance to discuss or rebut the agency’s initial findings. Yet the FCC preempted the normal process by announcing it would send the case to an administrative law judge — essentially a vote of no-confidence in the deal. I say “vote,” but  the FCC commissioners hadn’t actually voted on the order.

FCC Chairman Julius Genachowski called AT&T CEO Randall Stephenson, who, on Thanksgiving Day, had to tell investors he was setting aside $4 billion in case Washington blocked the deal.

The deal is already being scrutinized by the Department of Justice, which sued to block the merger last summer. The fact that telecom mergers and acquisitions must negotiate two levels of federal scrutiny, at DoJ and FCC, is already an extra burden on the Internet industry. But when one agency on this dual-track games the system by trying to influence the other track — maybe because the FCC felt AT&T had a good chance of winning its antitrust case — the obstacles to promising economic activity multiply.

After the FCC’s surprise move, AT&T and T-Mobile withdrew their merger application at the FCC. No sense in preparing for an additional hearing before an administrative law judge when they are already deep in preparation for the antitrust trial early next year. Moreover, the terms of the merger agreement are likely to have changed after the companies (perhaps) negotiate conditions with the DoJ. They’d have to refile an updated application anyway. Not so fast, said the FCC. We’re not going to allow AT&T and T-Mobile to withdraw their application. Or if we do allow it, we will do so “with prejudice,” meaning the parties can’t refile a revised application at a later date. On Tuesday the FCC relented — the law is clear: an applicant has the right to withdraw an application without consent from the FCC. But the very fact the FCC initially sought to deny the withdrawal is itself highly unusual. Again, more procedural gamesmanship.

If that weren’t enough, the FCC then said it would release its “findings” in the case — another highly unusual (maybe unprecedented) action. The agency hadn’t completed its process, and there had been no vote on the matter. So the FCC instead released what it calls a “staff report” — a highly critical internal opinion that hadn’t been reviewed by the parties nor approved by the commissioners. We’re eager to analyze the substance of this “staff report,” but the fact the FCC felt the need to shove it out the door was itself remarkable.

It appears the FCC is twisting legal procedure any which way to fit its desired outcome, rather than letting the normal merger process play out. Indeed, “twisting legal procedure” may be too kind. It has now thrown law and procedure out the window and is in full public relations mode. These extralegal PR games tilt the playing field against the companies, against investment and innovation, and against the health of the U.S. economy.

— Bret Swanson

Share/Bookmark

What Mobile, Video, Big Data, and Cloud mean for network traffic

November 21st, 2011

See our new report “Into the Exacloud” . . . including analysis of:

> Why cloud computing requires a major expansion of wireless spectrum and investment

> An exaflood update: what Mobile, Video, Big Data, and Cloud mean for network traffic

> Plus, a new paradigm for online games, Web video, and cloud software


Share/Bookmark

Stay hungry. Stay foolish.

October 6th, 2011

Share/Bookmark

Damming the Digital River: Netflix, Spectrum, and Info Dynamism

September 20th, 2011

After the decision to separate its online streaming and DVD-in-the-mail services, Wall St. Cheat Sheet asked, “Is Netflix the new Research In Motion?”

Translation: Will Netflix be just the latest technology titan to suffer a parabolic plunge? We don’t know ourselves. Netflix’s streaming-DVD split is a reaction to the overwhelming popularity of its streaming service. CEO Reed Hastings is trying to avoid complacency and stay ahead of the curve. Maybe he is panicking. Maybe he’s a genius. But that is just the point: the digital curve these days is shifting and steepening faster than ever.

Which makes the government’s attempted damming of this digital river all the more harmful. Wireless spectrum is a central resource in the digital economy, and a chief enabler of services like Netflix. Yet Washington hogs the best airwaves – at last count the government owned 61%, the mobile service providers just 10%. So AT&T, its pipes bursting with iPhone and iPad traffic, tries to add capacity by merging with T-Mobile. Nope. The Department of Justice won’t allow that either.

Something, however, has got to give. New data from wireless infrastructure maker Ericsson shows that mobile data traffic jumped 130% in the first quarter of 2011 from a year earlier. Just four years ago, mobile data traffic was perhaps 1/15th of mobile voice traffic. Today, mobile data is likely three times voice. Credit Suisse, meanwhile, reports that U.S. mobile networks are running at 80% of capacity, meaning many network nodes are tapped out.
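
To see how those two ratios imply the compounding, here is a minimal sketch, assuming for simplicity that voice traffic stayed flat over the period (it did not, so treat the output as a rough illustration of the growth curve rather than a precise data-traffic growth rate).

```python
# Implied growth of mobile data relative to voice, using the post's rough ratios.

ratio_four_years_ago = 1 / 15   # data was perhaps 1/15th of voice
ratio_today          = 3.0      # data is likely ~3x voice
years                = 4

total_growth_factor = ratio_today / ratio_four_years_ago       # ~45x, relative to voice
implied_annual_rate = total_growth_factor ** (1 / years) - 1   # compound annual rate

print(f"Data/voice ratio grew about {total_growth_factor:.0f}x in {years} years")
print(f"Implied compound annual growth vs. voice: {implied_annual_rate:.0%}")
```

The post’s ratios are themselves rough estimates, so the point of the sketch is the shape of the curve, not the exact percentage.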

More mobile traffic drivers are on the way, like mass adoption of video chat apps and Apple’s imminent iCloud service. iCloud will create an environment of pervasive computing, where all your computers and devices are in continuous communication, integrating your digital life through a virtual presence in the cloud. No doubt too, software app downloads and the rich content they unleash will only grow. As of July, 425,000 distinct Apple apps had been downloaded 15 billion times on 200 million devices. The Android ecosystem of devices and apps has been growing even faster.

Perhaps the iCloud service in particular won’t succeed, but no doubt others like it will, not to mention all the apps and services we haven’t thought of. We do know that more bandwidth and connectivity will encourage more new ideas, and thus more traffic. In all, IDC estimates that by 2015 we will create or replicate around 8 zettabytes (8,000,000,000,000,000,000,000 bytes) of new data each year.

Big Data, in turn, will yield large economic benefits, from medical research to retail. The McKinsey Global Institute estimates that Big Data – the sophisticated exploitation of large sets of fine-grained information – could boost annual economic value in the U.S. health care sector by $300 billion. McKinsey thinks personal geolocation services could expand annual consumer surplus by $600 billion globally.

The wide array of Big Data techniques and services is crucially dependent on robust and capacious networks.  U.S. service providers invested $26 billion in 2010 – and $232 billion over the last decade – on wireless infrastructure alone. Total info-tech investment in the U.S. last year was $488 billion. We’ll need more of the same to spur and accommodate Big Data, Cloud, Mobile, Netflix, and the rest. But without more spectrum, the whole enterprise of building the digital infrastructure could slow.

Picocells and femtocells – smaller network nodes that cover less area – can effectively expand capacity for some users by reusing existing wireless spectrum. These mini cells work together as HetNets (heterogeneous networks) and will be a central feature in the next decade of wireless expansion. But the new 4G mobile standard, called LTE, gets the biggest bang for the buck in wider spectrum bands. LTE also is by far the most powerful and flexible standard to manage the complexities and unlock the big potential of HetNets. So we see a virtuous complementarity: more, better spectrum will boost spectrum reuse efficiencies. In other words, spectrum reuse and more spectrum are not either-or alternatives but are mutually helpful and reinforcing.

We don’t know whether the new Netflix strategy will fly, whether iCloud will succeed, how HetNets will evolve, or exactly what the mobile ecosystem will look like. But in such an arena, we do know that maximum flexibility – and LOTS more spectrum – will give a beneficial tilt toward innovation and growth.

— Bret Swanson

Share/Bookmark

Gross or Net Jobs on the Mobile Net?

September 1st, 2011

A paper out today challenges the assertion that the AT&T-T-Mobile merger will create jobs. AT&T has said it would invest an additional $8 billion in wireless network infrastructure, above and beyond its usual $8-10 billion per year, and the Economic Policy Institute estimated this would result in between 55,000 and 96,000 job-years. The Communications Workers of America has cited the EPI study as one reason it supports the mobile union.

In a study prepared for Sprint, however, professor David Neumark says the EPI estimate fails to account for the fact that T-Mobile will no longer be investing its normal couple billion dollars per year after it is subsumed by AT&T. He says EPI is only looking at AT&T’s gross increase, not the net industry effect. He thinks the net effect will be negative and will thus cost jobs.

This is a fair point. We should analyze these things in as dynamic and realistic a way as possible. But the Sprint study appears to be relying on its own static, simplistic view of the world. Namely, it assumes an independent T-Mobile would keep investing billions a year in network infrastructure, even though T-Mobile says it has neither the spectrum nor the financial resources from its parent Deutsche Telekom to continue as an effective competitor in the highly dynamic mobile market, where companies must constantly upgrade their networks to exploit all the good stuff offered by Moore’s law. In other words, it’s unlikely T-Mobile would continue investing several billion per year as a stand-alone company.

Another point that needs clarification: Some smart people think the AT&T estimate of $8 billion in additional capex is specific to the merger — connecting the two networks, expanding LTE beyond its previous plans, etc. But if these people are right, it’s still the case that AT&T will have to adopt at least some portion of network upgrades and maintenance that T-Mobile does every day on its own network. So AT&T’s capex spend is likely to go up beyond this additional $8 billion. In a merger scenario, therefore, not all, perhaps not even most, of the existing T-Mobile network investment “goes away.”

A scenario in which a non-AT&T carrier acquired T-Mobile would also result in much the same loss of T-Mobile-specific investment that Sprint claims under the AT&T-T-Mobile scenario. But the study doesn’t account for this possibility either.

So it seems the new Neumark-Sprint analysis also is not really a net estimate, just another form of gross estimate.
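
The gross-versus-net dispute is easy to see with stylized numbers. In the sketch below, the $8 billion is the additional capex AT&T cited, but the employment multiplier and the stand-alone T-Mobile capex are placeholders I made up, not figures from the EPI or Neumark studies; the point is only that the answer hinges on how much a stand-alone T-Mobile would actually keep investing.

```python
# Illustrative gross-vs-net job-year accounting. The multiplier and stand-alone
# capex are hypothetical placeholders, not figures from either study.

job_years_per_billion = 10_000   # hypothetical employment multiplier on capex
att_extra_capex_bn    = 8.0      # AT&T's stated additional merger-related capex
tmo_standalone_capex  = 2.0      # $B/yr a stand-alone T-Mobile might invest
years                 = 3        # horizon over which to compare

gross_job_years = att_extra_capex_bn * job_years_per_billion
forgone         = tmo_standalone_capex * years * job_years_per_billion
net_job_years   = gross_job_years - forgone

print(f"Gross job-years from AT&T's extra capex:            {gross_job_years:,.0f}")
print(f"Net job-years if T-Mobile kept spending on its own:  {net_job_years:,.0f}")
# If a stand-alone T-Mobile would cut its investment -- the post's contention --
# the forgone term shrinks and the net figure moves back toward the gross one.
```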

Ultimately, no one knows exactly what will happen in an ever-changing economy in our ever-changing world. But it is pretty safe to say that a healthy, growing, vibrant mobile industry will support more sustainable jobs than an unhealthy industry. The Sprint paper correctly acknowledges that efficiencies from mergers can result in all sorts of economic welfare gains, both for consumers and for workers who move into higher-value jobs.

A stand-alone T-Mobile is not a healthy company, and without T-Mobile, AT&T, although healthy, doesn’t have the spectrum or cell towers it needs to match current growth and fuel new growth. The proposed merger would result in a major supplier of next gen 4G broadband mobile services across most of the U.S. The benefits of this go far beyond the capex it takes to build the network (though very important) and extend to every citizen and industry that will enjoy ubiquitous go-anywhere broadband. These jobs created across the economy are incalculable but are likely to be substantial.

Share/Bookmark

The DoJ Anti-Jobs Division

August 31st, 2011

Where to begin. The economy is still in the doldrums some three years after an historic crash, the Administration is having a tough time boosting output and job growth, and so its Justice Department thinks it would be a good idea to discourage one of the nation’s biggest investors and employers from building yet more high-tech infrastructure in a sector of the economy that is manifestly healthy and which serves as a productivity platform for the rest of the economy.

It’s hard to believe, but that’s exactly what’s happening with the DoJ’s attempt to block AT&T’s merger with T-Mobile.

AT&T wants T-Mobile’s wireless spectrum and compatible cell-tower infrastructure so it can more quickly roll out next generation 4G mobile broadband services. It can’t wait for much needed spectrum auctions that will hopefully occur over the next several years. Meanwhile, T-Mobile doesn’t have the spectrum or financial wherewithal (through its parent Deutsche Telekom) to build its own 4G network. Perfect fit, right? Join forces to rapidly deploy new network capacity and coverage for the next iteration of iPads, Androids, Thunderbolts, Galaxy Tabs, and broadband everywhere.

The Communications Workers of America union thinks the union is a good idea, estimating the merger will create 96,000 jobs. AT&T even this morning sweetened the pot by announcing – before DoJ’s surprise announcement – that on completion of the merger it would bring back 5,000 call center jobs from overseas and guarantee no job cuts for T-Mobile call center employees.

DoJ says a combination will hurt competition, but T-Mobile itself says it can’t really compete in the next generation of 4G. And DoJ ignores the fact, reported by the FCC, that 90% of the U.S. population has five or more mobile service provider choices, with brand new entrants like Clearwire, LightSquared, and Dish Network coming online and expanding every day. DoJ relies on indirect evidence of current market share to infer that bad things might happen in the future even as it ignores direct evidence of low prices, wild innovation, and widespread consumer choice in networks and devices.

This July 11 paper from economists Gerald Faulhaber, Robert Hahn, and Hal Singer really says it all.

With the economy in crisis, you’d think someone with a bit of business sense would be seeking every way to expand investment and employment, not find creative ways to quash it. Antitrust lawyers imagine themselves guardians of the public good, but there’s a big problem: they usually see the world through a rear-view mirror, wearing blinders, while experiencing tunnel vision.

Was it antitrust that saved the world from big, bad Microsoft? No, the Internet, Google, and Apple, among hundreds of other innovators, diluted Microsoft’s very temporary dominance. Did the AOL-TimeWarner merger kill competition in the online content or broadband markets? No. To remember the alarmism over that merger is to laugh. DoJ did block WorldCom’s bid for Sprint, and of course WorldCom went bankrupt. Did Verizon’s acquisition of Alltel kill innovation in the mobile market? What? Who’s Alltel?

There’s just no way a few attorneys in Washington can decree the proper organization of an industry that is so exceedingly dynamic. Meanwhile, the economy shuffles along slowly, very slowly.

— Bret Swanson

Share/Bookmark

Banning Risk Is Our Biggest Risk

August 30th, 2011

See our new column in Forbes:

As we entered August, a time of family vacations and corporate retreats, a CEO friend, who is a director of several companies, made a darkly humorous observation. “I’m impressed,” he said. “At our upcoming retreat, the CEO is dedicating an entire day to talk about . . . the business.”

This was a break from the new normal, where management is consumed with compliance, legality, accounting, risk mitigation, and political prognostication and manipulation. Carving time out of a business retreat to talk strategy, execution, product, and sales was a welcome novelty. It also revealed a chief challenge of our times – the obsession with and aversion to risk.

Update: Steve Lohr, the excellent New York Times technology reporter, offers his own take on risk-taking through the lens of Steve Jobs. Lohr and I picked the same great quote from Jobs’ Stanford commencement address.

Share/Bookmark

Broadband Bridges to Rural America

July 29th, 2011

A host of telecom and cable companies today announced a new plan to reform the Universal Service Fund and extend broadband further into rural America. I’ve spent years only partially understanding how USF works. Or how it doesn’t work, as seems the case. I think even in the old days, when it may have made some kind of sense, USF probably retarded investment and new technology in the areas it aimed to support. Unsubsidized potential entrants sporting new technologies couldn’t hope to compete with heavily subsidized incumbents. Even incumbents effectively couldn’t deploy newer, more efficient unsubsidized technologies. The result was probably some extension of phone service in the early days but lots of stagnation for decades after that. In today’s communications market, however, where many companies and many technologies supply many wholesale, commercial, and consumer services — and where broadband, Internet cloud, and wireless complement, compete, and overlap — USF has really broken down. Reform is long overdue, and this consensus industry plan should finally help move USF into the Internet age.

The new proposal — called America’s Broadband Connectivity Plan — also reforms the antiquated and broken Inter Carrier Compensation system, which sets the terms for traffic exchange among communications companies. In a broadband-mobile-Internet world, ICC, like USF, no longer works and is often exploited with arbitrage schemes that add no value but shuffle money via clever manipulation of the rules.

For too long, wrangling and indecision between industry and government — and among industry players themselves — have delayed action. We now have a good consensus leap on the road to modernization.

Share/Bookmark

World Broadband Update

June 28th, 2011

The OECD published its annual Communications Outlook last week, and the 390 pages offer a wealth of information on all-things-Internet — fixed line, mobile, data traffic, price comparisons, etc. Among other remarkable findings, OECD notes that:

In 1960, only three countries — Canada, Sweden and the United States — had more than one phone for every four inhabitants. For most of what would become OECD countries a year later, the figure was less than 1 for every 10 inhabitants, and less than 1 in 100 in a couple of cases. At that time, the 84 million telephones in OECD countries represented 93% of the global total. Half a century later there are 1.7 billion telephones in OECD countries and a further 4.1 billion around the world. More than two in every three people on Earth now have a mobile phone.

Very useful stuff. But in recent times the report has also served as a chance for some to misrepresent the relative health of international broadband markets. The common refrain the past several years was that the U.S. had fallen way behind many European and Asian nations in broadband. The mantra that the U.S. is “15th in the world in broadband” — or 16th, 21st, 24th, take your pick — became a sort of common lament. Except it wasn’t true.

As we showed here, the second half of the two-thousand-aughts saw an American broadband boom. The Phoenix Center and others showed that the most cited stat in those previous OECD reports — broadband connections per 100 inhabitants — actually told you more about household size than broadband. And we developed metrics to better capture the overall health of a nation’s Internet market — IP traffic per Internet user and per capita.

Below you’ll see an update of the IP traffic per Internet user chart, built upon Cisco’s most recent (June 1, 2011) Visual Networking Index report. The numbers, as they did last year, show the U.S. leads every region of the world in the amount of IP traffic we generate and consume, in both per-user and per-capita terms. Among nations, only South Korea tops the U.S., and only Canada matches it.

Although Asia contains broadband stalwarts like Korea, Japan, and Singapore, it also has many laggards. If we compare the U.S. to the most uniformly advanced region, Western Europe, we find the U.S. generates 62% more traffic per user. (These figures are based on Cisco’s 2010 traffic estimates and the ITU’s 2010 Internet user numbers.)
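
The metric itself is simple division: a region’s total IP traffic from the Cisco VNI divided by its ITU Internet-user count. Here is a minimal sketch of the comparison; the numbers are placeholders chosen only to illustrate the calculation, not the actual Cisco or ITU figures.

```python
# Hypothetical illustration of the traffic-per-user metric.
# These figures are placeholders chosen only to mirror the roughly 62% gap
# described above; they are NOT the actual Cisco VNI or ITU numbers.
regions = {
    # region: (monthly IP traffic in petabytes, Internet users in millions)
    "United States": (7500, 240),
    "Western Europe": (7300, 380),
}

def gb_per_user(traffic_pb, users_millions):
    """Monthly gigabytes of IP traffic per Internet user."""
    return (traffic_pb * 1_000_000) / (users_millions * 1_000_000)

per_user = {name: gb_per_user(t, u) for name, (t, u) in regions.items()}
lead = per_user["United States"] / per_user["Western Europe"] - 1
print(per_user)                  # GB per user per month
print(f"U.S. lead: {lead:.0%}")  # ~60%+ with these placeholder inputs
```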

As we noted last year, it’s not possible for the U.S. to both lead the world by a large margin in Internet usage and lag so far behind in broadband. We think these traffic per user and per capita figures show that our residential, mobile, and business broadband networks are among the world’s most advanced and ubiquitous.

Lots of other quantitative and qualitative evidence — from our smart-phone adoption rates to the breakthrough products and services of world-leading device (Apple), software (Google, Apple), and content companies (Netflix) — reaffirms the fairly obvious fact that the U.S. Internet ecosystem is in fact healthy, vibrant, and growing. Far from lagging, it leads the world in most of the important digital innovation indicators.

— Bret Swanson

The Growth Imperative

May 26th, 2011

I’m no in-the-weeds budget expert — not even close — but it seemed to me that among all the important debates over deficits, entitlements, and debt ceilings, the biggest factor of all is being mostly ignored. That factor is the compound rate of economic growth, and I made the case for “The Growth Imperative” at a Tuesday meeting of the National Chamber Foundation Fellows. Here’s my column at Forbes. See the slides below:

The Slow Walk to a Reregulated Communications Market

May 24th, 2011

The generally light-touch regulatory approach to America’s Internet industry has been a big success story. Broadband, wireless, digital devices, Internet content and apps — these technology sectors have exploded over the last half-dozen years, even through the Great Recession.

So why are Washington regulators gradually encroaching on the Net’s every nook and cranny? Perhaps the explanation is a paraphrased line about Washington’s upside-down ways: If it fails, subsidize it. If it succeeds, tax it. And if it succeeds wildly, regulate it.

Whatever the reason, we should watch out and speak up, lest D.C. do-gooders slow the growth of our most dynamic economic engine.

Last December, the FCC imposed a watered down version of Net Neutrality. A few weeks ago the FCC asserted authority to regulate prices and terms in the data roaming market for mobile phones. There are endless Washington proposals to regulate digital advertising markets and impose strict new rules to (supposedly) protect consumer privacy. The latest new idea (but surely not the last) is to regulate prices and terms of “special access,” or Internet connectivity in the middle of the network.

Special access refers to high-speed links that connect, say, cell phone towers to the larger network, or an office building to a metro fiber ring. Another common name for these network links is “backhaul.” Washington lobbyists have for years been trying to get the FCC to dictate terms in this market, without success. But now, as part of the proposed AT&T-T-Mobile merger, they are pushing harder than ever to incorporate regulation of these high-speed Internet lines into the government’s prospective approval of  the acquisition.

As the chief opponent of the merger, Sprint especially is lobbying for the new regulations. Sprint claims that just a few companies control most of the available backhaul links to its cell phone towers and wants the FCC to set rates and terms for its backhaul leases. But from the available information, it’s clear that many companies — not just Verizon and AT&T — provide these Special Access backhaul services. It’s not clear why an AT&T-T-Mobile combination should have a big effect on the market, nor why the FCC should use the event to regulate a well-functioning market.

Sprint is a majority owner and major partner of 4G mobile network Clearwire, which uses its own microwave wireless links for 90% of its backhaul capacity. Sprint used Clearwire backhaul for its Xohm Wi-Max network beginning in 2008 and will pay Clearwire around a billion dollars over the next two years to lease backhaul capacity.

T-Mobile, meanwhile, uses mostly non-AT&T, non-Verizon backhaul for its towers. Recent estimates say something like 80% of T-Mobile sites are linked by smaller Special Access providers like Bright House, FiberNet, Zayo Bandwidth, and IP Networks. Lots of other providers exist, from the large cable companies like Comcast, Cox, and Time Warner to smaller specialty firms like FiberTower and TowerCloud to large backbone providers like Level 3. The cable companies all report fast-growing cell-site backhaul sales, accounting for large shares of their wholesale revenue.

One of the rationales for AT&T’s purchase of T-Mobile was that the two companies’ cell sites are complementary, not duplicative, meaning AT&T may not have links to many or most of T-Mobile’s sites. So at least in the short term it’s likely the T-Mobile cells will continue to use their existing backhaul providers, who are, again, mostly not Verizon or AT&T. It’s possible over time AT&T would expand its network and use its own links to serve the sites, but the backhaul business by then will only be more competitive than today.

This is a mostly unseen part of the Internet. Few of us ever think about Special Access or backhaul when we fire up our BlackBerry, Android, or iPhone. But these lines are key components of the mobile ecosystem, essential to delivering the voices and bits to and from our phones, tablets, and laptops. The wireless industry, moreover, is in the midst of a massive upgrade of its backhaul lines to accommodate first 3G and now 4G networks that will carry ever richer multimedia content. This means replacing the old T-1 and T-3 copper phone lines with new fiber optic lines and high-speed radio links. These are big investments in a very competitive market.

Given the Internet industry’s overwhelming contribution to the U.S. economy — not just as an innovative platform but as a leading investor in the capital base of the nation — one might think we wouldn’t lightly trifle with success. The chart below, compiled by economist Michael Mandel, shows that the top two — and three out of the top seven — domestic investors are communications companies. These are huge sums of money supporting hundreds of thousands of jobs directly and many millions indirectly.

via Michael Mandel

We’ve seen the damage micromanagement can cause — in the communications sector no less. The type of regulation of prices and terms on infrastructure leases now proposed for Special Access was, in my view, a key to the 2000 tech/telecom crash. FCC intrusions (remember line sharing, TELRIC, and UNE-P, etc.) discouraged investments in the first generation of broadband. We fell behind nations like Korea. Over the last half-dozen years, however, we righted our communications ship and leapt to the top of the world in broadband and especially mobile services.

I’m not arguing these regulations would crash the sector. But the accumulated costs of these creeping Washington intrusions could disrupt the crucial price mechanisms and investment incentives that are nowhere more important than in the fastest-growing, most dynamic markets, like mobile networks. Time for FCC lawyers to hit the beach — for Memorial Day weekend . . . and beyond. They should sit back and enjoy the stupendous success of the sector they oversee. The market is working.

— Bret Swanson

Mitch Daniels on K-12 Education

May 5th, 2011

I wrote last week about the hugely successful legislative agenda of Indiana Gov. Mitch Daniels — and the possibility he might offer his leadership to all of America. In the video below, the Governor himself outlines the nation’s most far-reaching education reforms.

The Neverending Frontier

April 25th, 2011

Brink Lindsey of the Kauffman Foundation summarizes a new paper on the imperative of constantly exploring the economic frontier:

Up-is-down data roaming vote could mean mobile price controls

April 11th, 2011

Section 332(c)(2) of the Communications Act says that “a private mobile service shall not . . . be treated as a common carrier for any purpose under this Act.”

So of course the Federal Communications Commission on Thursday declared mobile data roaming (which is a private mobile service) a common carrier. Got it? The law says “shall not.” Three FCC commissioners say, We know better.

This up-is-down determination could allow the FCC to impose price controls on the dynamic broadband mobile Internet industry. Up-is-down legal determinations for the FCC are nothing new. After a decade of trying, I’ve still not been able to penetrate the legal realm where “shall not” means “may.” Clearly the FCC operates in some alternate jurisprudential universe.

I do know the decision’s practical effect could be to slow mobile investment and innovation. It takes lots of money and know-how to build the Internet and beam real-time videos from anywhere in the world to an iPad as you sit on your comfy couch or a speeding train. Last year the U.S. invested $489 billion in info-tech, which made up 47% of all non-structure capital expenditures. Two decades ago, info-tech comprised just 33% of U.S. non-structure capital investment. This is a healthy, growing sector.

As I noted a couple weeks ago,

You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.

But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data, like video, Facebook, Twitter, and app downloads, etc., want the FCC to mandate access to B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars in spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand. . . .

It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the also not surprising result of such a policy could be to reduce the amount that everyone invests in new networks. And this is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”

But if Washington actually wants more infrastructure investment, it has a funny way of showing it. On Sunday at a Boston conference organized by Free Press, former Obama White House technology advisor Susan Crawford talked about America’s major communications companies. “[R]egulating these guys within an inch of their life is exactly what needs to happen,” she said. You’d think the topic was tobacco or human trafficking rather than the companies that have pretty successfully brought us the wonders of the Internet.

It’s the view of an academic lawyer who has never visited that exotic place called the real world. Does she think that the management, boards, and investors of these companies will continue to fund massive  infrastructure projects in the tens of billions of dollars if Washington dangles them within “an inch of their life”? Investment would dry up long before we ever saw the precipice. This is exactly what’s happened economy-wide over the last few years as every company, every investor, in every industry worried about Washington marching them off the cost cliff. The White House supposedly has a newfound appreciation for the harms of over-regulation and has vowed to rein in the regulators. But in case after case, it continues to toss more regulatory pebbles into the economic river.

Perhaps Nick Schulz of the American Enterprise Institute has it right. Take a look. He calls it the Tommy Boy theory of regulation, and just maybe it explains Washington’s obsession — yes, obsession; when you watch the video, you will note that is the correct word — with managing every nook and cranny of the economy.

AT&T’s Exaflood Acquisition Good for Mobile Consumers and Internet Growth

March 21st, 2011

AT&T’s announced purchase of T-Mobile is an exaflood acquisition — a response to the overwhelming proliferation of mobile computers and multimedia content and thus network traffic. The iPhone, iPad, and other mobile devices are pushing networks to their limits, and AT&T literally could not build cell sites (and acquire spectrum) fast enough to meet demand for coverage, capacity, and quality. Buying rather than building new capacity improves service today (or nearly today) — not years from now. It’s a home run for the companies — and for consumers.

We’re nearing 300 million mobile subscribers in the U.S., and Strategy Analytics estimates by 2014 we’ll add an additional 60 million connected devices like tablets, kiosks, remote sensors, medical monitors, and cars. All this means more connectivity, more of the time, for more people. Mobile data traffic on AT&T’s network rocketed 8,000% in the last four years. Remember that just a decade ago there was essentially no wireless data traffic. It was all voice traffic. A few rudimentary text applications existed, but not much more. By year-end 2010, AT&T was carrying around 12 petabytes per month of mobile traffic alone. The company expects another 8 to 10-fold rise over the next five years, when its mobile traffic could reach 150 petabytes per month. (We projected this type of growth in a series of reports and articles over the last decade.)
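
In compound-growth terms, those figures imply that AT&T’s mobile traffic has been roughly tripling every year, and that reaching 150 petabytes per month would require a further 60%-plus annual rate. A quick sketch of that arithmetic, using only the numbers cited above:

```python
# Translate the growth figures above into implied compound annual rates.
def cagr(multiple, years):
    """Compound annual growth rate implied by an overall growth multiple."""
    return multiple ** (1 / years) - 1

# "rocketed 8,000%" over four years: ending traffic is 81x the starting level
print(f"Past four years: ~{cagr(81, 4):.0%} per year")        # ~200%/yr, i.e. tripling annually

# from ~12 PB/month today to a possible ~150 PB/month in five years
print(f"Next five years: ~{cagr(150 / 12, 5):.0%} per year")  # ~66%/yr
```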

The two companies’ networks and businesses are so complementary that AT&T thinks it can achieve $40 billion in cost savings. That’s more than the $39-billion deal price. Those huge efficiencies should help keep prices low in a market that already boasts the lowest prices in the world (just $0.04 per voice minute versus, say, $0.16 in Europe).

But those who focus only on the price of existing products (like voice minutes) and traditional metrics of “competition,” like how many national service providers there are, will miss the boat. Pushing voice prices down marginally from already low levels is not the paramount objective. Building fourth generation mobile multimedia networks is. Some wonder whether “consolidation of power could eventually lead to higher prices than consumers would otherwise see.” But “otherwise” assumes a future that isn’t going to happen. T-Mobile doesn’t have the spectrum or financial wherewithal to deploy a full 4G network. So the 4G networks of AT&T, Verizon, and Sprint (in addition to Clearwire and LightSquared) would have been competing against the 3G network of T-Mobile. A 3G network can’t compete on price with a 4G network because it can’t offer the same product. In many markets, inferior products can act as partial substitutes for more costly superior products. But in the digital world, next gen products are so much better and cheaper than the previous versions that older products quickly get left behind. Could T-Mobile have milked its 3G network serving mostly voice customers at bargain basement prices? Perhaps. But we already have a number of low-cost, bare-bones mobile voice providers.

The usual worries from the usual suspects in these merger battles go like this: First, assume a perfect market where all products are commodities, capacity is unlimited yet technology doesn’t change, and competitors are many. Then assume a drastic reduction in the number of competitors with no prospect of new market entrants. Then warn that prices could spike. It’s a story that may resemble some world, but not the one in which we live.

The merger’s boost to cell-site density is hugely important and should not be overlooked. Yes, we will simultaneously be deploying lots of new Wi-Fi nodes and femtocells (little mobile nodes in offices and homes), which help achieve greater coverage and capacity, but we still need more macrocells. AT&T’s acquisition will boost its total number of cell sites by 30%. In major markets like New York, San Francisco, and Chicago, the number of AT&T cell sites will grow by 25%-45%. In many areas, total capacity should double.

It’s not easy to build cell sites. You’ve got to find good locations, get local government approvals, acquire (or lease) the sites, plan the network, build the tower and network base station, connect it to your long-haul network with fiber-optic lines, and of course pay for it. In the last 20 years, the number of U.S. cell sites has grown from 5,000 to more than 250,000, but we still don’t have nearly enough. CEO Randall Stephenson says the T-Mobile purchase will achieve almost immediately a network expansion that would have taken five years through AT&T’s existing organic growth plan. Because of the nature of mobile traffic — i.e., it’s mobile and bandwidth is shared — the combination of the two networks should yield a more-than-linear improvement in quality. The increased cell-site density will give traffic planners much more flexibility to deliver high-capacity services than if the two companies operated separately.

The U.S. today has the most competitive mobile market in the world (second, perhaps, only to tiny Hong Kong). Yes, it’s true, even after the merger, the U.S. will still have a more “competitive” market than most. But “competition” is often not the most — or even a very — important metric in these fast-moving markets. In periods of undershoot, where a technology is not good enough to meet demand on quantity or quality, you often need integration to optimize the interfaces and the overall experience, a la the hand-in-glove pairing of the iPhone’s hardware, software, and network. Streaming a video to a tiny piece of plastic in your pocket moving at 60 miles per hour — with thousands of other devices competing for the same bandwidth — is not a commodity service. It’s very difficult. It requires millions of things across the network to go just right. These services often take heroic efforts and huge sums of capital just to make the systems work at all.

Over time technologies overshoot, markets modularize, and small price differences matter more. Products that seem inferior but which are “good enough” then begin to disrupt state-of-the art offerings. This was what happened to the voice minute market over the last 20 years. Voice-over-IP, which initially was just “good enough,” made voice into a commodity. Competition played a big part, though Moore’s law was the chief driver of falling prices. Now that voice is close to free (though still not good enough on many mobile links) and data is king, we see the need for more integration to meet the new challenges of the multimedia exaflood. It’s a never ending, dynamic cycle. (For much more on this view of technology markets, see Harvard Business School’s Clayton Christensen).

The merger will have its critics, but it seriously accelerates the coming of fourth generation mobile networks and the spread of broadband across America.

— Bret Swanson

Data roaming mischief . . . Another pebble in the digital river?

March 17th, 2011

Mobile communications is among the healthiest of U.S. industries. Through a time of economic peril and now merely uncertainty, mobile innovation hasn’t wavered. It’s been a too-rare bright spot. Huge amounts of infrastructure investment, wildly proliferating software apps, too many devices to count. If anything, the industry is moving so fast on so many fronts that we risk not keeping up with needed capacity.

Mobile, perhaps not coincidentally, has also historically been a quite lightly regulated industry. But a slow boil of many small rules, and proposed rules, is emerging that could threaten the sector’s success. I’m thinking of the “bill shock” proceeding, in which the FCC is looking at billing practices and various “remedies.” And the failure to settle the D block public safety spectrum issue in a timely manner. And now we have a group of rural mobile providers who want the FCC to set prices in the data roaming market.

You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.

But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data, like video, Facebook, Twitter, and app downloads, etc., want the FCC to mandate access to B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars in spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand.

The FCC has never regulated mobile phone rates, let alone data rates, let alone data roaming rates. And of course mobile voice and data rates have been dropping like rocks. These few rural providers are asking the FCC to step in where it hasn’t before. They are asking the FCC to impose old-time common carrier regulation in a modern competitive market – one in which the FCC has no authority to impose common carrier rules and prices.

In the chart above, we see U.S. info-tech investment in 2010 approached $500 billion. Communications equipment and structures (like cell phone towers) surpassed $105 billion. The fourth generation of mobile networks is just in its infancy. We will need to invest many tens of billions of dollars each year for the foreseeable future both to drive and accommodate Internet innovation, which spreads productivity enhancements and wealth across every sector in the economy.

It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the also not surprising result of such a policy could be to reduce the amount that everyone invests in new networks. And this is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”

Economist Michael Mandel has offered a useful analogy:

new regulations [are] like tossing small pebbles into a stream. Each pebble by itself would have very little effect on the flow of the stream. But throw in enough small pebbles and you can make a very effective dam.

Why does this happen? The answer is that each pebble by itself is harmless. But each pebble, by diverting the water into an ever-smaller area, creates a ‘negative externality’ that creates more turbulence and slows the water flow.

Similarly, apparently harmless regulations can create negative externalities that add up over time, by forcing companies to spend time and energy meeting the new requirements. That reduces business flexibility and hurts innovation and growth.

It may be true that none of the proposed new rules for wireless could alone bring down the sector. But keep piling them up, and you can dangerously slow an important economic juggernaut. Price controls for data roaming are a terrible idea.

John Cochrane’s “Unpleasant Fiscal Arithmetic”

March 15th, 2011

Can economic growth stop the coming fiscal inflation?

See my new Forbes column on the puzzling economic outlook and a new way to think about monetary policy . . . .

An Economic Solution to the D Block Dilemma

March 8th, 2011

Last month, Cisco reported that wireless data traffic is growing faster than projected (up 159% in 2010 versus its estimate of 149%). YouTube illustrated the point with its own report that mobile views of its videos grew 3x last year to over 200 million per day. Tablets like the Apple iPad were part of the upside surprise.

The very success of smartphones, tablets, and all the new mobile form-factors fuels frustration. They are never fast enough. We always want more capacity, less latency, fewer dropped calls, and ubiquitous access. In a real sense, these are good problems to have. They reflect a fast-growing sector delivering huge value to consumers and businesses. Rapid growth, however, necessarily strains various nodes in the infrastructure. At some point, a lack of resources could stunt this upward spiral. And one of the most crucial resources is wireless spectrum.

There is broad support for opening vast swaths of underutilized airwaves — 300 megahertz (MHz) by 2015 and 500 MHz overall — but we first must dispose of one spectrum scuffle known as the “D block.” Several years ago in a previous spectrum auction, the FCC offered up 10 MHz for commercial use — with the proviso that the owner would have to share the spectrum with public safety users (police, fire, emergency) nationwide. This “D block” sat next to an additional 10 MHz known as Public Safety Broadband (PSB), which was granted outright to the public safety community. But the D block auction failed. Potential bidders could not reconcile the technical and business complexities of this “encumbered” spectrum. The FCC received just one D block bid, for $472 million, far below the FCC’s minimum acceptable bid of $1.3 billion. So today, three years after the failed auction and almost a decade after 9/11, we still have not resolved the public safety spectrum question.

A History, A Theory, An Exaflood

March 1st, 2011

My friendly UPS guy just dropped off two copies of James Gleick’s new book The Information: A History, A Theory, A Flood. Haven’t anticipated a book this eagerly in a long time.

Will be back with my thoughts after I read this 500-pager. For now, a few of today’s high-profile reviews of the book: Nick Carr in The Daily Beast and John Horgan in The Wall Street Journal. And, of course, Freeman Dyson’s terrific essay in the NY Review of Books.

Cloud Wars Baffle Simmering Cyber Lawyers

February 25th, 2011

My latest column in Forbes – “Cloud Wars Baffle Simmering Cyber Lawyers”:

Like their celestial counterparts, cyber clouds are unpredictable and ever-changing. The Motorola Xoom tablet arrived on Tuesday. The Apple iPad II arrives next week. Just as Verizon finally boasts its own iPhone, AT&T turns the tables with the Motorola Atrix running on the even faster growing Google Android platform. Meanwhile, Nokia declares its once-mighty Symbian platform ablaze and abandons ship for a new mobile partnership with Microsoft.

In the media world, Apple pushes the envelope with publishers who use iPhone and iPad apps to deliver content. Its new subscription service seeks 30% of the price of magazines, newspapers, and, it hopes, games and videos delivered through its App Store and iTunes.

Google quickly counters with OnePass, a program that charges content providers 10% for access to its Android mobile platform. But unlike Apple, said Google CEO Eric Schmidt, “We don’t prevent you from knowing, if you’re a publisher, who your customers are.” Game on.

Netflix, by the way, saw its Web traffic spike 38% in just one month between December 2010 and January 2011 and is, ho hum, upending movies, cable, and TV.

As the cloud wars roar, the cyber lawyers simmer. This wasn’t how it was supposed to be. The technology law triad of Harvard’s Lawrence Lessig and Jonathan Zittrain and Columbia’s Tim Wu had a vision. They saw an arts and crafts commune of cyber-togetherness. Homemade Web pages with flashing sirens and tacky text were more authentic. “Generativity” was Zittrain’s watchword, a vague aesthetic whose only definition came from its opposition to the ominous “perfect control” imposed by corporations dictating “code” and throwing the “master switch.”

In their straw world of “open” heroes and “closed” monsters, AOL’s “walled garden” of the 1990s was the first sign of trouble. Microsoft was an obvious villain. The broadband service providers were of course dangerous gatekeepers, the iPhone was too sleek and integrated, and now even Facebook threatens their ideal of uncurated chaos. These were just a few of the many companies that were supposed to kill the Internet. The triad’s perfect world would be mostly broke organic farmers and struggling artists. Instead, we got Apple’s beautifully beveled apps and Google’s intergalactic ubiquity. Worst of all, the Web started making money.

Read the full column here . . . .

Budget Blow-Out

February 20th, 2011

“Over the 10-year budget window, the president plans for Washington to extract $39 trillion in taxes and spend $46 trillion. The debt limit, currently $14.3 trillion, would have to grow to over $26 trillion.

“Making matters worse, these horrendous spending, taxing and debt numbers would be even grimmer if not for the budget’s rosy assumptions. The budget assumes that real growth will climb from an already wishful 4% in 2012 to 4.5% in 2013 and 4.2% in 2014 — despite plans for sweeping tax increases. The assumed GDP growth is well over any growth rate achieved in the Bush expansion. The budget also reflects the unrealistic assumption that the Federal Reserve will be able to keep interest rates very low and generate $476 billion in profits through highly leveraged financial speculation.”

— David Malpass, The Wall Street Journal, February 16, 2011

More Stagnation

February 14th, 2011

Tyler Cowen talks to Matt Yglesias about The Great Stagnation . . . . Here was my book review – “Tyler Cowen’s Techno Slump.”

World Catches On to the Exaflood

February 11th, 2011

Researchers Martin Hilbert and Priscila Lopez add to the growing literature on the data explosion (what we long ago termed the “exaflood”) with a study of analog and digital information storage, transmission, and computation from 1986 through 2007. They found that in 2007 the world was able to store 290 exabytes, communicate almost 2 zettabytes, and compute around 6.4 exa-instructions per second (EIPS?) on general-purpose computers. The numbers have gotten much, much larger since then. Here’s the Science paper (subscription), which appears alongside an entire special issue, “Dealing With Data,” and here’s a graphic from the Washington Post:

(Thanks to @AdamThierer for flagging the WashPost article.)
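
To get a feel for those magnitudes, here is a back-of-the-envelope conversion; the only added assumption is the standard 4.7 GB capacity of a single-layer DVD.

```python
# Back-of-the-envelope scale check for the 2007 figures cited above.
EXA, ZETTA, GIGA = 10**18, 10**21, 10**9

stored = 290 * EXA            # ~290 exabytes stored worldwide
communicated = 2 * ZETTA      # ~2 zettabytes communicated
dvd = 4.7 * GIGA              # capacity of a standard single-layer DVD, in bytes

print(f"Storage is roughly {stored / dvd:.1e} DVDs' worth of data")    # ~6e10, tens of billions
print(f"Traffic was roughly {communicated / stored:.0f}x the stored total")
```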

The Stagnation Conversation, continued

February 5th, 2011

Another review of Tyler Cowen’s The Great Stagnation, this one by Michael Mandel. More from Brink Lindsey.

And Nick Schulz’s video interview of Cowen:


Mobile traffic grew 159% in 2010 . . . Tablets giving big boost

February 3rd, 2011

Among other findings in the latest version of Cisco’s always useful Internet traffic updates:

  • Mobile data traffic was even higher in 2010 than Cisco had projected in last year’s report. Actual growth was 159% (2.6x) versus projected growth of 149% (2.5x).
  • By 2015, we should see one mobile device per capita . . . worldwide. That means around 7.1 billion mobile devices compared to 7.2 billion people.
  • Mobile tablets (e.g., iPads) are likely to generate as much data traffic in 2015 as all mobile devices worldwide did in 2010.
  • Mobile traffic should grow at an annual compound rate of 92% through 2015. That would mean 26-fold growth between 2010 and 2015 (a quick arithmetic check follows below).
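
Two of those bullets are easy to verify with a couple of lines of arithmetic (all inputs are Cisco’s figures as quoted above):

```python
# Quick arithmetic check of the figures in the bullets above.
multiple = 1.92 ** 5                      # 92% compound annual growth, 2010 -> 2015
print(f"{multiple:.1f}x growth over five years")            # ~26x, matching the projection

devices, people = 7.1e9, 7.2e9
print(f"{devices / people:.2f} mobile devices per capita")  # ~0.99, i.e. about one per person
```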

Are we doomed by The Great Stagnation?

January 27th, 2011

Here are my thoughts on Tyler Cowen’s terrific new e-book essay The Great Stagnation.

Brink Lindsey of the Kauffman Foundation comments here.

UPDATE: Tyler Cowen lists more reviews of his essay here:

2. Scott Sumner buys a Kindle and reviews The Great Stagnation.

3. Forbes review of The Great Stagnation.

4. Ryan Avent review of The Great Stagnation.

5. David Brooks coverage of The Great Stagnation.

7. Japan reviews The Great Stagnation.

Arnold Kling comments here.

And Nick Schulz here.

Akamai CEO Exposes FCC’s Confused “Paid Priority” Prohibition

January 4th, 2011

In the wake of the FCC’s net neutrality Order, published on December 23, several of us have focused on the Commission’s confused and contradictory treatment of “paid prioritization.” In the Order, the FCC explicitly permits some forms of paid priority on the Internet but strongly discourages other forms.

From the beginning — that is, since the advent of the net neutrality concept early last decade — I argued that a strict neutrality regime would have outlawed, among other important technologies, CDNs, which prioritized traffic and made (make!) the Web video revolution possible.

So I took particular notice of this new interview (sub. required) with Akamai CEO Paul Sagan in the February 2011 issue of MIT’s Technology Review:

TR: You’re making copies of videos and other Web content and distributing them from strategic points, on the fly.

Paul Sagan: Or routes that are picked on the fly, to route around problematic conditions in real time. You could use Boston [as an analogy]. How do you want to cross the Charles to, say, go to Fenway from Cambridge? There are a lot of bridges you can take. The Internet protocol, though, would probably always tell you to take the Mass. Ave. bridge, or the BU Bridge, which is under construction right now and is the wrong answer. But it would just keep trying. The Internet can’t ever figure that out — it doesn’t. And we do.

There it is. Akamai and other content delivery networks (CDNs), including Google, which has built its own CDN-like network, “route around” “the Internet,” which “can’t ever figure . . . out” the fastest path needed for robust packet delivery. And they do so for a price. In other words: paid priority. Content companies, edge innovators, basement bloggers, and poor non-profits who don’t pay don’t get the advantages of CDN fast lanes.
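
The “route around” idea Sagan describes boils down to a simple decision rule: measure several candidate paths and take the best healthy one rather than the single default route. The sketch below is only a toy illustration of that rule, not Akamai’s actual system; every path name and latency in it is invented.

```python
# Toy illustration of the "route around" idea from the interview: measure several
# candidate paths and pick the best one, instead of always taking the default route.
# This is NOT Akamai's actual system; path names and latencies are made up.
paths = {
    "default route": {"latency_ms": 85.0, "healthy": False},  # the congested path
    "alternate A":   {"latency_ms": 42.0, "healthy": True},
    "alternate B":   {"latency_ms": 51.0, "healthy": True},
}

def pick_path(paths):
    """Choose the lowest-latency path among those currently measured as healthy."""
    healthy = {name: p for name, p in paths.items() if p["healthy"]}
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(pick_path(paths))   # -> "alternate A"
```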

Did the FCC order get lots worse in last two weeks?

December 21st, 2010

So, here we are. Today the FCC voted 3-2 to issue new rules governing the Internet. I expect the order to be struck down by the courts and/or Congress. Meantime, a few observations:

  • The order appears to be more intrusive on the topic of “paid prioritization” than was Chairman Genachowski’s outline earlier this month. (Keep in mind, we haven’t seen the text. The FCC Commissioners themselves only got access to the text at 11:42 p.m. last night.)
  • If this is true, if the “nondiscrimination” ban goes further than a simple reasonableness test, which itself would be subject to tumultuous legal wrangling, then the Net Neutrality order could cause more problems than I wrote about in this December 7 column.
  • A prohibition or restriction on “paid prioritization” is a silly rule that betrays a deep misunderstanding of how our networks operate today and how they will need to operate tomorrow. Here’s how I described it in recent FCC comments:

In September 2010, a new network company, which had operated in stealth mode digging ditches and boring tunnels for the previous 24 months, emerged on the scene. As Forbes magazine described it, this tiny new company, Spread Networks

“spent the last two years secretly digging a gopher hole from Chicago to New York, usurping the erstwhile fastest paths. Spread’s one-inch cable is the latest weapon in the technology arms race among Wall Street houses that use algorithms to make lightning-fast trades. Every day these outfits control bigger stakes of the markets – up to 70% now. “Anybody pinging both markets  has to be on this line, or they’re dead,” says Jon A. Najarian, cofounder of OptionMonster, which tracks high-frequency trading.

“Spread’s advantage lies in its route, which makes nearly a straight line from a data center  in Chicago’s South Loop to a building across the street from Nasdaq’s servers in Carteret, N.J. Older routes largely follow railroad rights-of-way through Indiana, Ohio and Pennsylvania. At 825 miles and 13.3 milliseconds, Spread’s circuit shaves 100 miles and 3 milliseconds off of the previous route of lowest latency, engineer-talk for length of delay.”

Why spend an estimated $300 million on an apparently duplicative route when numerous seemingly similar networks already exist? Because, Spread says, three milliseconds matters.

Spread offers guaranteed latency on its dark fiber product of no more than 13.33 milliseconds. Its managed wave product is guaranteed at no more than 15.75 milliseconds. It says competitors’ routes between Chicago and New York range from 16 to 20 milliseconds. We don’t know if Spread will succeed financially. But Spread is yet another demonstration that latency is of enormous and increasing importance. From entertainment to finance to medicine, the old saw is truer than ever: time is money. It can even mean life or death.
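
As an aside, the 13.3-millisecond figure squares with simple propagation arithmetic: light in silica fiber travels at roughly two-thirds of its vacuum speed, so an 825-mile round trip takes about 13 milliseconds before any equipment delay is added. A rough sanity check (the ~1.47 refractive index is a typical assumed value, not a number from the article):

```python
# Rough sanity check: propagation delay over a fiber route of a given length.
C_KM_PER_S = 299_792          # speed of light in vacuum, km/s
FIBER_INDEX = 1.47            # typical refractive index of silica fiber (assumed)
KM_PER_MILE = 1.609344

def round_trip_ms(route_miles):
    """Round-trip propagation delay in milliseconds, ignoring equipment delays."""
    km = route_miles * KM_PER_MILE
    return 2 * km / (C_KM_PER_S / FIBER_INDEX) * 1000

print(f"{round_trip_ms(825):.1f} ms")   # ~13 ms, in line with the 13.3 ms figure above
```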

A policy implication arises. The Spread service is, of course, a form of “paid prioritization.” Companies are paying “eight to 10 times the going rate” to get their bits where they want them, when they want them. It is not only a demonstration of the heroic technical feats required to increase the power and diversity of our networks. It is also a prime example that numerous network users want to and will pay money to achieve better service.

One way to achieve better service is to deploy more capacity on certain links. But capacity is not always the problem. As Spread shows, another way to achieve better service is to build an entirely new 750-mile fiber route through mountains to minimize laser light delay. Or we might deploy a network of server caches that store non-realtime data closer to the end points of networks, as many Content Delivery Networks (CDNs) have done. But when we can’t build a new fiber route or store data – say, when we need to get real-time packets from point to point over the existing network – yet another option might be to route packets more efficiently with sophisticated QoS technologies. Each of these solutions fits a particular situation. They take advantage of, or submit to, the technological and economic trade-offs of the moment or the era. They are all legitimate options. Policy simply must allow for the diversity and flexibility of technical and economic options – including paid prioritization – needed to manage networks and deliver value to end-users.

Depending on how far the FCC is willing to take these misguided restrictions, it could actually lead to the very outcomes most reviled by “open Internet” fanatics — that is, more industry concentration, more “walled gardens,” more closed networks. Here’s how I described the possible effect of restrictions on the important voluntary network management tools and business partnerships needed to deliver robust multimedia services:

There has also been discussion of an exemption for “specialized services.” Like wireless, it is important that such specialized services avoid the possible innovation-sapping effects of a Net Neutrality regulatory regime. But the Commission should consider several unintended consequences of moving down the path of explicitly defining, and then exempting, particular “specialized” services while choosing to regulate the so-called “basic,” “best-effort,” or “entry level” “open Internet.”

Regulating the “basic” Internet but not “specialized” services will surely push most of the network and application innovation and investment into the unregulated sphere. A “specialized” exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS providers and ISPs to target the “specialized” category and thus shrink the scope of the “open Internet.”

In fact, although specialized services should and will exist, they often will interact with or be based on the “basic” Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not “the Internet” is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing. Moreover, a regime of rigid quarantine would not be good for consumers. If a CAS provider or ISP has to build a new physical or logical network, segregate services and software, or develop new products and marketing for a specifically defined “specialized” service, there would be a very large disincentive to develop and offer simple innovations and new services to customers over the regulated “basic” Internet. Perhaps a consumer does not want to spend the extra money to jump to the next tier of specialized service. Perhaps she only wants the service for a specific event or a brief period of time. Perhaps the CAS provider or ISP can far more economically offer a compelling service over the “basic” Internet with just a small technical tweak, where a leap to a full-blown specialized service would require more time and money, and push the service beyond the reach of the consumer. The transactions costs of imposing a “specialized” quarantine would reduce technical and economic flexibility on both CAS providers and ISPs and, most crucially, on consumers.

Or, as we wrote in our previous Reply Comments about a related circumstance, “A prohibition of the voluntary partnerships that are likely to add so much value to all sides of the market – service provider, content creator, and consumer – would incentivize the service provider to close greater portions of its networks to outside content, acquire more content for internal distribution, create more closely held ‘managed services’ that meet the standards of the government’s ‘exclusions,’ and build a new generation of larger, more exclusive ‘walled gardens’ than would otherwise be the case. The result would be to frustrate the objective of the proceeding. The result would be a less open Internet.”

It is thus possible that a policy seeking to maintain some pure notion of a basic “open Internet” could severely devalue the open Internet the Commission is seeking to preserve.

All this said, the FCC’s legal standing is so tenuous, and this order so rooted in reasoning already rejected by the courts, that I believe today’s Net Neutrality rule will be overturned. Thus despite the numerous substantive and procedural errors committed on this “darkest day of the year,” I still expect the Internet to “survive and thrive.”

The Internet Survives, and Thrives, For Now

December 7th, 2010

See my analysis of the FCC’s new “net neutrality” policy at RealClearMarkets:

Despite the Federal Communications Commission’s “net neutrality” announcement this week, the American Internet economy is likely to survive and thrive. That’s because the new proposal offered by FCC chairman Julius Genachowski is lacking almost all the worst ideas considered over the last few years. No one has warned more persistently than I against the dangers of over-regulating the Internet in the name of “net neutrality.”

In a better world, policy makers would heed my friend Andy Kessler’s advice to shutter the FCC. But back on earth this new compromise should, for the near-term at least, cap Washington’s mischief in the digital realm.

. . .

The Level 3-Comcast clash showed what many of us have said all along: “net neutrality” was a purposely ill-defined catch-all for any grievance in the digital realm. No more. With the FCC offering some definition, however imperfect, businesses will now mostly have to slug it out in a dynamic and tumultuous technology arena, instead of running to the press and politicians.

Caveats. Already!

December 1st, 2010

If it’s true, as Nick Schulz notes, that FCC Commissioner Copps and others really think Chairman Genachowski’s proposal today “is the beginning . . . not the end,” then all bets are off. The whole point is to relieve the overhanging regulatory threat so we can all move forward. More — much more, I suspect — to come . . . .

FCC Proposal Not Terrible. Internet Likely to Survive and Thrive.

December 1st, 2010

The FCC appears to have taken the worst proposals for regulating the Internet off the table. This is good news for an already healthy sector. And given info-tech’s huge share of U.S. investment, it’s good news for the American economy as a whole, which needs all the help it can get.

In a speech this morning, FCC chair Julius Genachowski outlined a proposal he hopes the other commissioners will approve at their December 21 meeting. The proposal, which comes more than a year after the FCC issued its Notice of Proposed Rule Making into “Preserving the Open Internet,” appears mostly to codify the “Four Principles” that were agreed to by all parties five years ago. Namely:

  • No blocking of lawful data, websites, applications, services, or attached devices.
  • Transparency. Consumers should know what the services and policies of their providers are, and what they mean.
  • A prohibition of “unreasonable discrimination,” which essentially means service providers must offer their products at similar rates and terms to similarly situated customers.
  • Importantly, broadband providers can manage their networks and use new technologies to provide fast, robust services. Also, there appears to be even more flexibility for wireless networks, though we don’t yet know the details.

(All the broad-brush concepts outlined today will need closer scrutiny when detailed language is unveiled, and as with every government regulation, implementation and enforcement can always yield unpredictable results. One also must worry about precedent and a new platform for future regulation. Even if today’s proposal isn’t too harmful, does the new framework open a regulatory can of worms?)

So, what appears to be off the table? Most of the worst proposals that have been flying around over the last year, like . . .

  • Reclassification of broadband as an old “telecom service” under Title II of the Communications Act of 1934, which could have pierced the no-government seal on the Internet in a very damaging way, unleashing all kinds of complex and antiquated rules on the modern Net.
  • Price controls.
  • Rigid nondiscrimination rules that would have barred important network technologies and business models.
  • Bans of quality-of-service technologies and techniques (QoS), tiered pricing, or voluntary relationships between ISPs and content/application/service (CAS) providers.
  • Open access mandates, requiring networks to share their assets.

Many of us have long questioned whether formal government action in this arena is necessary. The Internet ecosystem is healthy. It’s growing and generating an almost dizzying array of new products and services on diverse networks and devices. Communications networks are more open than ever. Facebook on your BlackBerry. Netflix on your iPad. Twitter on your TV. The oft-cited world broadband comparisons, which say the U.S. ranks 15th, or even 26th, are misleading. Those reports mostly measure household size, not broadband health. Using new data from Cisco, we estimate the U.S. generates and consumes more network traffic per user and per capita than any nation but South Korea. (Canada and the U.S. are about equal.) American Internet use is twice that of many nations we are told far outpace the U.S. in broadband. Heavy-handed regulation would have severely depressed investment and innovation in a vibrant industry. All for nothing.

Lots of smart lawyers doubt the FCC has the authority to issue even the relatively modest rules it outlined today. They’re probably right, and the question will no doubt be litigated (yet again), if Congress does not act first. But with Congress now divided politically, the case remains that Mr. Genachowski’s proposal is likely the near-term ceiling on regulation. Policy might get better than today’s proposal, but it’s not likely to get any worse. From what I see today, that’s a win for the Internet, and for the U.S. economy.

— Bret Swanson

One Step Forward, Two Steps Back

November 22nd, 2010

The FCC’s apparent about-face on Net Neutrality is really perplexing.

Over the past few weeks it looked like the Administration had acknowledged economic reality (and bipartisan Capitol Hill criticism) and turned its focus to investment and jobs. Outgoing NEC Director Larry Summers and Commerce Secretary Gary Locke announced a vast expansion of available wireless spectrum, and FCC chairman Julius Genachowski used his speech to the NARUC state regulators to encourage innovation and employment. Gone were mentions of the old priorities — intrusive new regulations such as Net Neutrality and Title II reclassification of modern broadband as an old telecom service. Finally, it appeared, an already healthy and vibrant Internet sector could stop worrying about these big new government impositions — and years of likely litigation — and get on with building the 21st century digital infrastructure.

But then came word at the end of last week that the FCC would indeed go ahead with its new Net Neutrality regs. Perhaps even issuing them on December 22, just as Congress and the nation take off for Christmas vacation [the FCC now says it will hold its meeting on December 15]. When even a rare  economic sunbeam is quickly clouded by yet more heavy-handedness from Washington, is it any wonder unemployment remains so high and growth so low?

Any number of people sympathetic to the economy’s and the Administration’s plight are trying to help. Last week David Leonhardt of the New York Times pointed the way, at least in a broad strategic sense: “One Way to Trim the Deficit: Cultivate Growth.” Yes, economic growth! Remember that old concept? Economist and innovation expert Michael Mandel has suggested a new concept of “countercyclical regulatory policy.” The idea is to lighten regulatory burdens to boost growth in slow times and then, later, when the economy is moving full-steam ahead, apply more oversight to curb excesses. Right now, we should be lightening burdens, Mandel says, not imposing new ones:

it’s really a dumb move to monkey with the vibrant and growing communications sector when the rest of the economy is so weak. It’s as if you have two cars — one running, one in the repair shop — and you decide it’s a good time to rebuild the transmission of the car that actually works because you hear a few squeaks.

Apparently, FCC honchos met with interested parties this morning to discuss what comes next. Unfortunately, at a time when we need real growth, strong growth, exuberant growth! (as Mandel would say), the Administration appears to be saddling an economy-lifting reform (wireless spectrum expansion) with leaden regulation. What’s the point of new wireless spectrum if you massively devalue it with Net Neutrality, open access, and/or Title II?

One step forward, two steps back (ten steps back?) is not an exuberant growth and jobs strategy.

Finally, A Real Debate Over Monetary Policy

November 18th, 2010

Scott Sumner is an original economic thinker and a particular expert in monetary affairs. So I sat upright when I saw his skeptical reply to the QE2 Skeptics.

Early this week a host of high-profile economists, investors, and thinkers, under the e21 banner, issued an understated but unusually critical “open letter to Ben Bernanke.” They urged him to abandon the $600 billion QE2 strategy, warning of uncertain but possibly very large downside risks compared to little reward even in the unlikely case it works.

Sumner, who favors a concept he calls NGDP (nominal GDP) targeting, says the Fed isn’t trying to spur inflation. It’s trying to boost national income. And who could be opposed to that?

Sumner says the Fed can move the AD (aggregate demand) curve to the right. “Whether that extra spending shows up as inflation or real growth,” he acknowledges, “is of course an important issue.” A very important issue. But critics of QE2 and the broader existing Fed framework aren’t necessarily worried about short-term inflation of the CPI type. No, we are worried about sinking Fed credibility, dollar debasement, possible asset bubbles, and international turmoil. And, yes, possible inflation down the road.

I think Sumner ignores a couple important factors that argue against the simple equation that more Fed easing yields a significant and quantifiable higher level of NGDP, and more importantly RGDP.

First, the transmission mechanism whereby increased bank reserves become credit isn’t working well. A trillion dollars of excess reserves sit on U.S. bank balance sheets. Small and medium-sized businesses have found access to loans difficult. Consumers, too, even with historically low mortgage and personal loan rates, have not necessarily been able to access credit because of tighter lending standards and retrenched credit cards and home equity lines. If QE2 merely increases excess reserves further, without a more effective way to boost the supply and demand of actual credit, I don’t think the Monetary Ease –> More NGDP equation is so clear. A further complication: Large companies and the federal government find credit at historically low rates abundant and accessible. But this points to the second problem with the simple Ease –> NGDP equation.

In a world of closed economies, Sumner’s view that U.S. QE would directly translate into more U.S. AD (or his preferred national income) might work, at least temporarily. But we don’t live in a closed economy. Or as Robert Mundell long ago said, “There is only one closed economy — the world economy.” Companies, hedge funds, and other global entities can borrow cheap dollars and then go find opportunities across the globe.

An example is this Nov. 17 Bloomberg story: “Bernanke’s ‘Cheap Money’ Stimulus Spurs Corporate Investment Outside U.S.”

Southern Copper Corp., a Phoenix- based mining company that boasts some of the industry’s largest copper reserves, plans to invest $800 million this year in projects such as a new smelter and a more efficient natural-gas furnace.

Such spending sounds like just what the Federal Reserve had in mind in 2008 when it cut interest rates to near zero and started buying $1.7 trillion in securities to spur job growth. Yet Southern Copper, which raised $1.5 billion in an April debt offering, will use that money at its mines in Mexico and Peru, not the U.S., said Juan Rebolledo, spokesman for parent Grupo Mexico SAB de CV of Mexico City.

Southern Copper’s plans illustrate why the Fed’s second round of bond buying may not reduce unemployment, which has stalled near a 26-year high.

Or as Richard Fisher, president of the Federal Reserve Bank of Dallas, said in an October 19 speech:

I have begun to wonder if the monetary accommodation we have already engineered might even be working in the wrong places.

I’m all for companies investing in the best opportunities around the globe. And some of that investment may benefit the companies’ American assets or workforce in direct or indirect ways over time. But that kind of long-term symbiotic growth is not what the Fed is aiming for, or says it’s doing, with QE2. When the Fed specifically targets the short-term U.S. economy and ends up pushing money overseas, that is a direct failure of the mission. I believe the Fed should concentrate more on the dollar’s value as the world’s key reserve currency. But here we have a case of arbitrage — getting weak dollars the heck out of the country. Much of the rest of the world (ROW), after all, is growing faster than the U.S.

Beyond these transmission and international factors, it’s clear that Fed policy — now that we are beyond the panic of 2008-09 when Bernanke and Co. rightly filled an emergency monetary hole — is fueling the growth of government and giving Washington an excuse to continue with counterproductive anti-growth fiscal and regulatory policies.

Sumner tries to address this criticism:

7.  “Won’t monetary stimulus just paper over the failures of the Obama administration, allowing him to get re-elected?”

That’s an argument unworthy of principled conservatives.  After 30 years of major neoliberal reforms all over the world (even in Sweden!) it’s time for conservatives to become less defeatist about the possibility of making positive improvements in governance.  We need to do the right thing, and let the political chips fall where they may.  If monetary stimulus is tried, and succeeds in boosting NGDP (which even conservatives implicitly acknowledge can happen when they worry about inflation) then it would drive a stake through the heart of the Krugmanite fiscal stimulus argument (for future recessions.)

I think Sumner misses the point. Fed critics should of course root for the success of Bernanke and our other economic policymakers. But it’s not the case that QE2 is objectively the “right thing” and all critics are opposing it for political reasons. If critics think it is the wrong monetary policy — with the additional ominous factor that it is aiding and abetting (“papering over”) a harmful fiscal and regulatory path — then they are not required to bite their lips and “let the political chips fall where they may” as the economy continues to limp along. If mere monetary policy could solve all the world’s problems, then Mao’s China could have succeeded so long as Beijing printed enough money. That’s a severe reference, an exaggeration to make a point. But Bernanke himself has stated that the Fed cannot do everything, and it’s crystal clear historically that central banks often cause more problems than they cure, often when they are trying to compensate for other poisonous policies.

Despite the sluggish economy and these disagreements, I’m encouraged we are finally having a real, national (international!) debate over monetary policy — one I’ve urged for a long time. And I look forward to further offerings from Sumner . . . and many others.

Killing the Master Switch

November 17th, 2010

Adam Thierer nicely dissects a bunch of really sloppy arguments by Tim Wu, author of a new book on information industries called The Master Switch. (Scroll down to the comments section.)

Libertarians do NOT believe everything will be all sunshine and roses in a truly free marketplace. There will indeed be short term spells of what many of us would regard as excessive market power. The difference between us comes down to the amount of faith we would place in government actors versus market forces / evolution to better solve that problem. Libertarians would obviously have a lot more patience with markets and technological change, and would be willing to wait and see how things work out. We believe, as I have noted in my previous responses, that it’s often during what critics regard as a market’s darkest hour that innovation is producing some of the most exciting technologies with the greatest potential to disrupt the incumbents whose “master switch” you fear. Again, we are simply more bullish on what I have called experimental, evolutionary dynamism. Innovators and entrepreneurs don’t sit still; they respond to incentives, and for them, short-term spells of “market power” are golden opportunities. Ultimately, that organic, bottom-up approach to addressing “market power” or “market failure” simply makes a lot more sense to us – especially because it lacks the coercive element that your approach would bring to bear preemptively to solve such problems.

For Adam’s comprehensive six-part review of the book, go here.

Facebook’s (New) Old Idea

November 15th, 2010

With its new converged messaging service announced today, Facebook took several more steps toward David Gelernter’s Lifestreams concept, first outlined in the mid- to late-1990s. See an old Yale web page on the topic, the newer Lifestream blog, and a 2009 interview with Gelernter. A lifestream is basically a digital representation of your life — all the communications, documents, photos, blips, bleeps, and bits that come and go . . . arranged in chronological order as a never-ending river of searchable information. Google’s Gmail was the first popular application/service that hinted at the Lifestream ideal. Facebook — with its “seamless messaging,” “conversation history,” and “social inbox” — now moves further with the integration of email, text, IM, and attachments in never-ending streams, accessible on any device.
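
The core idea is simple enough to sketch in a few lines of code. The following is a minimal illustration of my own (not Gelernter’s design or Facebook’s implementation): heterogeneous items, each carrying a timestamp, merged into one chronological, searchable stream.

from dataclasses import dataclass, field
from datetime import datetime
from heapq import merge

@dataclass(order=True)
class Item:
    timestamp: datetime                 # items sort by time only
    kind: str = field(compare=False)    # "email", "sms", "photo", ...
    content: str = field(compare=False)

def lifestream(*sources):
    """Merge already-time-ordered per-source histories into one chronological stream."""
    return merge(*sources)

def search(stream, term):
    """Naive full-text search over the stream."""
    return (item for item in stream if term.lower() in item.content.lower())

# Two hypothetical sources folded into one conversation history:
emails = [Item(datetime(2010, 11, 15, 9, 0), "email", "Lunch on Friday?")]
texts = [Item(datetime(2010, 11, 15, 9, 5), "sms", "Friday works, see you then")]
for item in search(lifestream(emails, texts), "friday"):
    print(item.timestamp, item.kind, item.content)

The point is the data model, not the code: a single time-ordered feed that any message type can join, which is essentially what a “social inbox” promises.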

Department of Monetary Mistakes: QE2 Is Nothing New

November 15th, 2010

The Federal Reserve plan to buy an additional $600 billion in longer term securities — known as QE2 — is taking flak domestically and from around the world. And rightly so, in my view. Check out e21’s understated but highly critical open letter to Ben Bernanke from a group of economists, investors, and thinkers.

But in some ways, QE2 is nothing new. Yes, it is a departure from the traditional Fed purchases of only very short-term securities. And yes, it could lead to all the problems of which its new critics warn. But this is just the latest round in a long series of mistakes. The new worries are possible currency debasement, inflation, asset bubbles, international turmoil, and avoidance of the real burdens on the U.S. economy — namely fiscal and regulatory policy. These worries are real. But they would be a replay of what already happened in the lead-up to the 2008 Panic. Or the 1998 Asian Flu. Or the 2000 U.S. crash.

Here was my warning to the Fed in The Wall Street Journal in 2006:

It is these periods of transition, where the value of the currency is changing fast, but before price changes filter through all commerce and contracts, when financial and political disruptions often take place.

That was two years before a Very Big Disruption. (I followed up with another monetary critique in the WSJ here.)

But over the last few decades, there was no common critique of monetary policy among conservatives, Republicans, libertarians, and supply-siders, or among Democrats, liberals, and Keynesians. (Take your pick of labels: the point is there was no effective coalition with any hope of altering the American monetary status quo. There were, for example, just as many Republican backers of Greenspan/Bernanke, and of America’s weak-dollar policy, as there were detractors.) A silver lining today is that QE2 appears to have united and galvanized a broad and thoughtful opposition to the existing monetary regime. Hopefully these events can spur deeper thinking about a new American — and international — monetary policy that can build a firmer foundation for global financial stability and economic growth.

Columbia’s Charles Calomiris discusses his opposition to the Fed’s QE2

Microsoft Outlines Economics of the Cloud

November 12th, 2010

In a new white paper:

We believe that large clouds could one day deliver computing power at up to 80% lower cost than small clouds. This is due to the combined effects of three factors: supply-side economies of scale which allow large clouds to purchase and operate infrastructure cheaper; demand-side economies of scale which allow large clouds to run that infrastructure more efficiently by pooling users; and multi-tenancy which allows users to share an application, splitting the cost of managing that application.
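
The demand-side piece, pooling, is easy to illustrate with a toy calculation (the numbers below are my own assumptions, not Microsoft’s): when per-user demand is uncorrelated, a provider can size capacity for the pooled peak rather than the sum of individual peaks, so the capacity each user effectively pays for shrinks as the pool grows.

from math import sqrt

def per_user_capacity(n_users, mean_demand=1.0, std=0.5, headroom=3.0):
    """Capacity each user effectively pays for when n_users share one pool.
    Provisions for pooled mean + headroom * pooled standard deviation
    (normal approximation, independent users)."""
    pooled_peak = n_users * mean_demand + headroom * std * sqrt(n_users)
    return pooled_peak / n_users

for n in (10, 1_000, 100_000):
    print(f"{n:>7} users -> {per_user_capacity(n):.3f} capacity units per user")
# 10 users -> ~1.474 units each; 100,000 users -> ~1.005 units each.

Supply-side economies (cheaper hardware and operations per unit) and multi-tenancy (one application instance shared across customers) stack on top of this pooling effect.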

Netflix Boom Leads to Switch

November 12th, 2010

Netflix is moving its content delivery platform from Akamai back to Level 3. Level 3 is adding 2.9 terabits per second of new capacity specifically to support Netflix’s booming movie-streaming business.
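
For a rough sense of what 2.9 Tbps buys, here is a quick capacity calculation (the per-stream bitrates are my assumptions for illustration, not Netflix or Level 3 figures):

# Back-of-the-envelope: how many simultaneous streams fit in 2.9 Tbps?
CAPACITY_BPS = 2.9e12                       # 2.9 terabits per second
bitrates = {"SD stream (~2 Mbps)": 2e6,     # assumed average bitrates
            "HD stream (~5 Mbps)": 5e6}

for label, bps in bitrates.items():
    print(f"{label}: ~{CAPACITY_BPS / bps:,.0f} simultaneous streams")
# SD stream (~2 Mbps): ~1,450,000 simultaneous streams
# HD stream (~5 Mbps): ~580,000 simultaneous streams

On those assumptions, the new capacity is enough for several hundred thousand to well over a million concurrent viewers.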

The End of Net Neutrality?

November 8th, 2010

In what may be the final round of comments in the Federal Communications Commission’s Net Neutrality inquiry, I offered some closing thoughts, including:

  • Does the U.S. really rank 15th — or even 26th — in the world in broadband? No.
  • The U.S. generates and consumes substantially more IP traffic per Internet user and per capita than any other region of the world.
  • Among individual nations, only South Korea generates significantly more IP traffic per Internet user than the U.S. (Canada and the U.S. are roughly equal.)
  • U.S. wired and wireless broadband networks are among the world’s most advanced, and the U.S. Internet ecosystem is healthy and vibrant.
  • Latency is increasingly important, as demonstrated by a young company called Spread Networks, which built a new optical fiber route from Chicago to New York to shave mere milliseconds off the existing fastest network offerings (a back-of-the-envelope latency calculation follows this list). This example shows the importance — and legitimacy — of “paid prioritization.”
  • As we wrote: “One way to achieve better service is to deploy more capacity on certain links. But capacity is not always the problem. As Spread shows, another way to achieve better service is to build an entirely new 750-mile fiber route through mountains to minimize laser light delay. Or we might deploy a network of server caches that store non-realtime data closer to the end points of networks, as many Content Delivery Networks (CDNs) have done. But when we can’t build a new fiber route or store data — say, when we need to get real-time packets from point to point over the existing network — yet another option might be to route packets more efficiently with sophisticated QoS technologies.”
  • Exempting “wireless” from any Net Neutrality rules is necessary but not sufficient to protect robust service and innovation in the wireless arena.
  • “The number of Wi-Fi and femtocell nodes will only continue to grow. It is important that they do, so that we might offload a substantial portion of traffic from our mobile cell sites and thus improve service for users in mobile environments. We will expect our wireless devices to achieve nearly the robustness and capacity of our wired devices. But for this to happen, our wireless and wired networks will often have to be integrated and optimized. Wireline backhaul — whether from the cell site or via a residential or office broadband connection — may require special prioritization to offset the inherent deficiencies of wireless. Already, wireline broadband companies are prioritizing femtocell traffic, and such practices will only grow. If such wireline prioritization is restricted, crucial new wireless connectivity and services could falter or slow.”
  • The same goes for “specialized services,” which some suggest be exempted from new Net Neutrality regulations. Again, necessary but not sufficient.
  • “Regulating the ‘basic’ Internet but not ‘specialized’ services will surely push most of the network and application innovation and investment into the unregulated sphere. A ‘specialized’ exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS providers and ISPs to target the ‘specialized’ category and thus shrink the scope of the ‘open Internet.’ In fact, although specialized services should and will exist, they often will interact with or be based on the ‘basic’ Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not ‘the Internet’ is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing.”
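
On the latency point above, a quick propagation-delay sketch shows why a shorter fiber path is worth paying for. The route lengths and the fiber slowdown factor below are illustrative assumptions, not Spread Networks’ published figures:

# One-way propagation delay over optical fiber. Light in glass travels at
# roughly two-thirds of its vacuum speed (refractive index ~1.5).
C_VACUUM_MILES_PER_MS = 186.3      # speed of light in vacuum, miles per millisecond
FIBER_FACTOR = 2.0 / 3.0           # assumed slowdown in fiber

def one_way_delay_ms(route_miles):
    return route_miles / (C_VACUUM_MILES_PER_MS * FIBER_FACTOR)

# Illustrative Chicago-to-New York paths (assumed lengths):
routes = {"straighter ~750-mile route": 750, "older ~900-mile route": 900}
for name, miles in routes.items():
    print(f"{name}: {one_way_delay_ms(miles):.1f} ms one way")
# ~6.0 ms versus ~7.2 ms one way, or a couple of milliseconds round trip,
# which is the margin latency-sensitive customers pay a premium to capture.

The same arithmetic explains the CDN strategy mentioned in the filing: moving cached data hundreds of miles closer to users removes milliseconds that no amount of raw capacity on the long route can buy back.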