Lots of people think innovation is over. Robert Gordon, the author of a grand new book called The Rise and Fall of American Growth, thinks the Information Age may be finished. We disagree.
On Thursday, my colleagues in the American Enterprise Institute’s technology program offered views on “Cyberspace policy at home and abroad,” covering the increasingly contentious realms of hacking, encryption, IP, and global Internet governance and the domestic effects of FCC regulation. I spoke for 10 minutes on technology’s broader impact on the economy and addressed the Great Stagnation question. Has a four-decade dearth of technology caused slow growth and inequality, with more disappointment to come? Or could better policy quickly encourage new bursts of innovation and resurgent economic growth? Watch here (with my segment beginning at 3:20:30, if it doesn’t jump there automatically).
Here’s a longer talk covering many of the same topics, from Purdue’s Dawn or Doom 2 tech conference in September.
Fifteen years ago, Art Laffer, the principal advocate of the Reagan tax reforms, outlined his ideal tax code for the 21st century. In December 2001, I visited Laffer in San Diego and asked him:
What does your perfect tax code look like?
Number one, it should start out on the first dollar you earn. Then take all federal taxes (except the sin taxes, which are there to discourage behavior, not collect revenue) – I’m talking payroll taxes, income taxes, corporate profits taxes, all federal excise taxes, tariffs, telecom taxes – get rid of them all. And have two taxes. One on business value added. And one on personal unadjusted gross income.
Why do you like a value added tax?
Because it’s got a huge tax base. And it’s all value added. You want to tax the value added to the GDP because that’s what you’re getting the resource base out of. You want to tax both unadjusted gross income and business value added because that way you get the whole GDP twice, so you can have half the rate.
What’s the rationale for that?
If you beat a dog, it’s gonna run, but you don’t know which direction. If you feed a dog, you know where it will be. Taxes are like that. People will do all they can to avoid paying taxes. Evasion, avoidance, underground economy, tax shelters, etc. Going out of work. So the theory behind the flat tax is you want the lowest possible rate on the broadest possible base. By having the lowest rate on the broadest base, you provide the fewest incentives to evade, avoid or otherwise not report taxable income.
Isn’t there double taxation involved?
Oh, there is. But it’s double taxation of everything the same. There are no distortions. You can tax GDP at, what is it today, 22 percent of GDP. Or you can tax it at 11 percent at the individual level and 11 percent at the production level. I think it makes a lot of sense to tax 11 percent of each because you make the base that much bigger and the rate that much smaller.
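Laffer’s half-the-rate, twice-the-base arithmetic is easy to check directly. A minimal sketch (the 22 percent and 11 percent rates are his; the $20 trillion GDP figure is a round-number assumption for illustration):

```python
# Laffer's arithmetic, with a hypothetical $20 trillion GDP (the 22% and 11%
# rates are his; the dollar figure is an assumption for illustration).
gdp = 20_000_000_000_000

single_base = 0.22 * gdp               # 22% on GDP measured once
double_base = 0.11 * gdp + 0.11 * gdp  # 11% on personal income + 11% on business value added

# Same revenue either way, but each taxpayer faces half the rate.
assert abs(single_base - double_base) < 1e-6
print(f"Revenue: ${single_base / 1e12:.1f} trillion")
```

The point of the sketch: doubling the base doesn’t double the revenue target, it halves the rate needed to hit it, which is where the reduced incentive to evade comes from.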
If this looks like the tax reform plan of presidential candidate Sen. Ted Cruz, that’s because it basically is. Cruz calls the value added portion of his plan a business flat tax and even referred to Laffer’s support for his plan in last week’s debate. Other candidates, however, have attacked this plan as a “VAT” – a value added tax. They assert that this dreaded three-letter tax is an obvious menace of high taxation and big government. But why do they say this? Do they really think that Laffer, the economist most widely associated with pro-growth tax policy, and Cruz, a fierce advocate for taxpayers, want to over-tax the U.S. economy?
For the first few days of the debate, the attackers seemed to be emphasizing the semantics rather than the substance – it is too a VAT! they insisted. If they are trying to equate the Cruz flat tax with a European-style VAT, however, I think they are wrong. Most European VATs are sales taxes, applied transaction by transaction. The Cruz/Laffer proposal taxes firms on revenues minus capital investments and payments to other businesses, is based on corporate accounts, and is payable by firms quarterly. It essentially taxes profits and payrolls, not sales. Or as the Tax Foundation put it,
Is It Like A Retail Sales Tax?
No, it’s not.
Most of the GOP tax proposals, regardless of flavor and legal incidence, tax “value added,” so VAT is a much less precise and informative term than the debate this last week would imply. I understand the political incentive for opponents to link the two semantically, but the label matters much less than the substance. Critics make some plausible-sounding arguments, but I’m not sure any of them hold up. Among the criticisms:
VATs are a key reason European taxation and government is so large.
It’s true that many European nations employ VATs, but these sales taxes are almost uniformly imposed on top of payroll taxes, corporate taxes, and progressive income taxes. They are not a replacement for these taxes but an addition to these taxes. Laffer’s outline and Cruz’s plan, however, use the business flat tax and the personal flat tax to replace the current tax code, not to augment it.
Conservatives have been warning against European-style VATs for decades. Why would we go down this road?
Again, the key argument against Euro-style VATs was that American liberals have long wanted to boost taxation and the size of government, and adding a VAT on top of the current tax code has been one Democratic idea to accomplish this. Conservatives were and are correct to argue against this additional layer of taxation. The Tax Foundation analysis says the Cruz plan would boost economic growth without increasing the tax burden (and in many important ways, reducing the tax burden) – just the opposite of the European experience.
Which gets us to the next argument – that VATs raise too much money.
It’s true that economists of all persuasions think VATs are efficient methods to raise revenue, which conservatives usually say is the purpose of the tax code. Not social engineering, not redistribution. Laffer’s explanation above makes the point: the lowest possible rates on the largest possible economic base, which will minimize distortions, disincentives, unfairness, and noncompliance. One of the foundational insights of supply-side economics and the Reagan economic revival was that some taxes are better than others – that tax complexity and high rates can impose large costs on the economy relative to the revenue they collect, and that we can encourage greater economic activity and collect necessary revenues with a more efficient tax code. An efficient tax just means we can enjoy lower rates and less interference in the economy.
Ah, critics say, yes, but VATs tend to hide the cost of government.
True, VATs don’t appear as deductions from your paycheck, nor would the business flat tax. But neither do corporate income taxes appear on your paycheck, nor, for most people, do the high-rate income taxes that pay for a huge proportion of the nation’s total tax take. Taxes in the U.S. today don’t reflect the true cost of government for many voters. Let’s give future voters a little more credit. They would quickly figure out that the business flat tax rates are built into the prices of goods and services and affect wages, and would vote accordingly. (The Cruz plan even says that firms would pay the tax quarterly and report the figures to their employees and shareholders, making it transparent.) In fact, one could argue that a low-rate-broad-base tax would better align voters with good economic policy. In the current highly progressive system in which the cost of government is invisible to many, half(+1) the population can essentially vote to tax the other half(-1). With a flat tax’s broader base and single rate, on the other hand, the costs would be more apparent, less unfairly and arbitrarily distributed, and a substantial majority of voters would be likely to oppose tax rate increases. The Cruz plan would still protect low-income Americans with a larger standard deduction and, they say, an improved EITC.
Yes, yes, yes, but the real threat is that future politicians could raise the VAT rate without people noticing.
Politicians have already proved willing and adept at raising (and complicating) today’s taxes! I understand the theory behind this line of argument, but I just don’t buy it. Again, I think most people would understand that voting to increase the flat tax rate is voting to tax themselves. A further irony: the very critics who warn that future politicians will raise the rates in Cruz’s plan themselves support a tax plan with a corporate tax rate nearly twice as high and a personal income tax rate more than three times as high. One could say their favored tax code thus enshrines from the outset what they warn against as a mere possibility for their opponent’s plan. All that said, yes, I’d love to see some additional protections so that any new and improved tax system would be difficult to undo.
Now, one argument I have not heard from critics but that I can imagine is this: Because a flat tax puts everyone basically in the same boat, and better aligns the incentives of all taxpayers/voters, taxes as a political issue may lose their saliency. Presumably, a large and crucial part of the Republican coalition is based upon the group of voters that pays an overwhelming portion of all taxes. Might some anti-tax advocates think inefficient taxes that gouge some taxpayers are good for generating anti-tax voting incentives and holding together the political coalition? With less of a tax split, would this clear cut issue go away, while the parties realign based on other non-tax issues? I have no idea, am no political expert, and am just speculating.
The fact is that the tax proposals of many of the GOP presidential candidates would all improve the tax code and the U.S. economy. I think the Laffer/Cruz proposal is perhaps the most attractive option among many good plans. For good summaries, detailed analyses, and comparisons of the candidates’ plans, see the Tax Foundation.
Marginal Revolution University has a good new short video on why economic growth is so important . . .
It’s difficult to overstate just how panicked the world was over oil prices a decade ago — stratospherically high oil prices. We were, most policy makers and economists believed, in an energy crisis — the result of a desperate shortage of petroleum that could only be solved with cellulosic ethanol and windmills. During this “energy crisis” of 2006, we wrote the following Wall Street Journal commentary, hoping to calm fears of peak oil and other such nonsense that often accompanies big price swings. We said oil prices likely would recede. We said vast stores of oil, especially in shale, were about to be found and extracted. We said alternative energy schemes in part justified by high oil prices were a bad idea. We also said a big financial disruption was likely. The macro environment is very different today — prices are low instead of high; the dollar strong instead of weak. In fact, we’ve been telling clients for the last year that today’s environment looks much like the late 1990s: a strong dollar, plummeting energy and commodity prices, soaring prices for abstract technology firms like Internets and bio-techs, and trouble in emerging markets. We reprint this column as a reminder of the economic fundamentals…and energy’s abundance.
The Elephant in the Barrel
The Wall Street Journal — August 12, 2006
by Bret Swanson
Nigerian pipeline explosions, Chinese demand, Arab angst, Venezuelan volatility, peak oil and a Putin premium: These are the usual explanations for high petroleum prices. But our discussion of the “energy crisis” has ignored the elephant in the barrel — monetary policy. Today, high oil prices are the backdrop for Middle Eastern chaos and calls for bad energy policy. It was much the same in the 1970s, when high prices yielded similar violence against our fellow man and against economics. This is no coincidence. A weak dollar is the culprit, now as then.
When the Yom Kippur war was launched in October 1973, the price of oil had been rising for two years. For decades, oil’s price had been remarkably stable, like the prices of most other goods. But in 1971 Richard Nixon broke the dollar’s links both to gold and to key foreign currencies. Bretton Woods — and the dollar — collapsed, and a decade-long inflation began.
By July 1973, gold had deviated from its long-time price of $35 per ounce and soared to $120. Oil also responded quickly to dollar weakness and doubled in price by the early autumn. The Mideast nations complained that the Western oil companies were accumulating massive “windfall profits.” Having negotiated agreements in the previous environment of price stability, the Arabs and Persians were stuck with much lower prices and royalty payments. You know the rest of the decade’s news: embargoes, gas lines, inflation, wage and price controls, hostages.
Today, commodity prices across the board, from coffee to carbon fiber, remain near 25-year highs. Oil is not a unique phenomenon but just another commodity whose price is determined primarily by the value of the dollar. Expensive oil isn’t exclusively a monetary event, of course: Risk and demand matter, too. But in comparing oil to other commodities, especially gold, we find that elevated risk and demand explain only $10-$15 of the higher oil price; $30 of the price is explained by a weak, inflationary dollar. The entity most responsible for expensive oil is thus the Fed.
Today, Samsung asked the Supreme Court to review an antiquated component of patent law. My brief take:
“The prevailing interpretation of design patents and penalties is rooted in the 1870s. It doesn’t work in a smartphone world. The Supreme Court should take this case and modernize the notion of damages for ‘total device profits’ for complex products. The Court should continue its good work in rebalancing our intellectual property paradigm away from clever lawyering and in favor of true innovation.”
Surprise: there’s a bit of good news from Washington. The House and Senate just agreed to include a permanent Internet Tax Freedom Act in the Customs and Border Protection reauthorization. Congress first barred states and the federal government from taxing Internet access in 1998. But the measure was temporary, and every few years since then it’s been in jeopardy of expiring. Applying discriminatory taxes to Internet access would have slowed the rollout of broadband, the uptake by consumers, and the emergence of some of America’s most successful industries. This new measure ensures we continue a successful policy . . . permanently.
In recent days, the New York Times and Wall Street Journal have reported on the Affordable Care Act’s growing problems: skyrocketing premiums, soaring deductibles, lost coverage, narrow networks, dysfunctional health insurance exchanges (more than half of which have now closed shop), and a warning from the nation’s largest insurer, UnitedHealth, that it may abandon Obamacare altogether. One consumer summed up the dismal situation:
“We can’t afford the Affordable Care Act, quite honestly,” said Cassaundra Anderson, whose family canvassed for Obama in their neighborhood, a Republican stronghold outside Cincinnati. “The intention is great, but there is so much wrong. . . . I’m mad.”
Is there a better way? Yes, there are lots of better ways, and lots of good ideas to “reform health care reform.” In fact, I believe health care is poised to explode with exciting innovations that will slash costs and radically improve care. Just yesterday the venture capital firm Andreessen Horowitz announced a new fund focused on software for biotech. But many of these important medical and economic advancements will only happen to the degree we allow them to happen. And right now, the ACA is exacerbating the worst features of the existing health market while adding new pathologies of its own. Choice is contracting, costs are mushrooming, and innovation is being stifled. The FDA, too, is a big obstacle. Instead of this rigid, top-down, costly path, I’ve laid out what I think is a more hopeful vision for the future of health care in a new report called “The App-ification of Medicine: A Four-Faceted Information Revolution in Health.” This revolution is based on:
- Smartphones and personal technology
- Big Data, Social Data
- The Code of Life
- The app-ification of the business of health care
The report is by no means a comprehensive look at what is a huge sector and a hugely complex topic. But it might spark some ideas and bolster our optimism that if we free the health sector, it can become an economic blessing rather than a burden.
Here’s a new version of my 50th anniversary assessment of Moore’s Law, just out from the American Enterprise Institute.
- Over the last 50 years, exponential scaling of silicon microelectronics “turned a hundred dollar chip with a few dozen transistors into a 10 dollar chip with a few billion transistors,” fulfilling Moore’s Law, Gordon Moore’s ambitious prediction, and propelling the information economy.
- Information technology, powered by Moore’s Law, provided nearly all the productivity growth of the last 40 years and promises to transform industries such as health care and education that desperately need creative disruption.
- Shrinking silicon transistors is getting more difficult as we approach fundamental atomic limits, but varied innovations—in materials, devices, state variables, and parallel architectures—will likely combine to deliver continued exponential growth in computation, data storage, sensing, and communications.
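The scaling described in the first bullet can be sanity-checked with back-of-the-envelope arithmetic. A sketch assuming Moore’s revised two-year doubling cadence and a starting count of about 50 transistors on a $100 chip (round-number assumptions, not figures from the report):

```python
# Back-of-the-envelope Moore's Law scaling (assumptions: one doubling every
# two years, a ~50-transistor $100 chip at the start; round numbers, not
# figures from the report).
start_transistors = 50
years = 50
doublings = years // 2          # Moore's revised cadence: doubling every 2 years

end_transistors = start_transistors * 2 ** doublings
print(f"{end_transistors:,} transistors")            # on the order of a few billion

cost_per_transistor_then = 100 / start_transistors   # $100 chip, ~50 transistors
cost_per_transistor_now = 10 / end_transistors       # $10 chip, billions of transistors
print(f"Per-transistor cost fell ~{cost_per_transistor_then / cost_per_transistor_now:,.0f}x")
```

Twenty-five doublings take a few dozen transistors to well over a billion, consistent with the “few dozen to a few billion” framing, and the per-transistor cost falls by a factor in the hundreds of millions.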
Unless we address the growth of the Administrative State, it will continue to stifle growth in the real economy. As you can see in the chart above, this recovery has suffered, among other maladies, from the weakest business investment of any recent expansion. The weakest by far. A number of factors may be at play — monetary policy, global turmoil, bad corporate tax policy, the nature of the last downturn, etc. But it’s not a stretch to conclude that a major factor in the economy’s underperformance is growing bureaucratic interference with economic activity. One study estimates that regulation costs the economy $1.88 trillion per year, and another study puts the cost to the economy into the tens of trillions of dollars. As bureaucratic excursions into firms and industries grow, and as the costs so manifestly outweigh the benefits, the agencies’ rationales for regulatory control become ever more creative.
A good example comes from Susan Dudley of George Washington University, who studies environmental regulation. She describes a clearly political decision cloaked as “science.”
The Environmental Protection Agency published its final national ambient air quality standard (NAAQS) for ozone in the Federal Register on Monday. EPA emphasizes that “Setting air quality standards is about protecting public health and the environment. By law, EPA cannot consider costs in doing that.” The agency did prepare a regulatory impact analysis (RIA) to comply with presidential executive orders 12866 and 13563, but it is explicit that “although an RIA has been prepared, the results of the RIA have not been considered in issuing this final rule.”
The results of the RIA, however, were featured prominently in EPA’s press release. According to the release, “The public health benefits of the updated standards, estimated at $2.9 to $5.9 billion annually in 2025, outweigh the estimated annual costs of $1.4 billion.” EPA’s fact sheet relies on the RIA to assert that meeting the new 70 parts per billion (ppb) standard will avoid 320 to 660 premature deaths each year.
Nonetheless, the 480-page RIA suggests that these health benefits pale in comparison to the benefits that achieving a more stringent 65 ppb standard would bring. According to EPA’s models, a standard of 65 ppb would avoid between 1,590 and 3,320 premature deaths. (This does not include California.)
There are ample reasons to question EPA’s ozone health benefit estimates but the fact is, the agency’s own analysis claims that the more stringent 65 ppb standard would have saved an additional 1,274 to 2,660 lives per year, and avoided an additional 2,670 emergency room visits and almost 1,300 hospital admissions.
If, as EPA says, “the Act requires [it] to base the decision for the primary standard on health considerations only; economic factors cannot be considered,” how can it reconcile setting a standard that leaves so many lives unprotected?
EPA cannot openly admit that its decision was influenced by the enormous costs of achieving the tighter standard. (Chapter 4 of the RIA acknowledges that no known measures are available to achieve either of the standards EPA considered, but estimates that a 65 ppb standard would impose costs of $16 billion per year – more than 10 times the estimated $1.4 billion per year cost of achieving a 70 ppb standard.)
It’s obvious that EPA did consider the gigantic cost, and Dudley concluded:
It’s time to stop the charade that it is wise or even possible to base NAAQS purely on health considerations. There are very real tradeoffs involved in these policy decisions that deserve open and transparent debate, rather than the pretense that they can be made by considering only science.
Another example from the environmental arena is the never-ending Keystone XL saga, in which various bureaucracies have for seven years pretended to “study the impact assessments” while blocking the project. Almost no one even argues anymore that this is anything but a political football designed to pacify narrow constituencies and raise campaign money. And yet billions of dollars in potential investment and thousands of jobs are put off.
It is impossible to insulate executive and even independent agencies from all politics. Let’s be realistic. And yet emboldened bureaucrats are increasingly dispensing with even the pretense of expertise, fair play, and the rule of law.
In recent years, the Federal Communications Commission, a nominally “independent expert agency,” has descended into the political swamp. In the most famous case, one year ago, just after the 2014 elections, the FCC collapsed in the face of a subversive White House campaign to write new regulations governing the Internet, one of the most important and innovative sectors of the economy. The FCC had been heading in one policy direction, but at the last second, after years of consideration, a small team of non-expert political operatives in the White House (in cahoots with a few FCC insiders who, it turns out, were also orchestrating outside political activists) twisted Chairman Tom Wheeler’s arm, and the White House got its favored policy. Never mind that all of this was illegal — Congress had told the FCC 20 years ago the Internet was to remain “unfettered by Federal and State regulation.”
Last week, one of Chairman Wheeler’s senior advisors spoke to an industry group and once again asked them to go on a political campaign in favor of even more regulation of the communications sector. As Light Reading reported,
Ideally, Sohn [Wheeler’s senior advisor] said, the same kind of consumer activism that helped drive the Open Internet rule changes earlier this year — including pickets at Wheeler’s home and the White House, and widespread TV coverage — could be brought to bear on some of the more arcane issues, such as special access and IP transition rules.
So senior staff at “independent expert” agencies, who make economic rules and enforce technology standards in highly technical sectors of the economy, are now urging political activists to go to the home of the agency chairman to bang pots and pans and urge specific policies — invariably tilted toward more regulation.
In a possible silver lining, the assertiveness of regulatory and expert agencies is exposing fundamental flaws in the Administrative State. So egregious is the behavior, so overt and obvious is the politicization, so damaging is the impact on the economy, that the agencies — long political tools but not recognized as such — are earning the scrutiny that could lead to a revolution of sorts.
Steven Davis of the University of Chicago describes the size of the problem — a Code of Federal Regulations now 175,000 pages long, for example — here. Charles Murray describes the nature of the regulatory charade and a possible political solution here. John Cochrane of UChicago and the Hoover Institution outlines the impact of regulatory insanity on economic growth here. I’ve looked at the impact on economic growth here and suggested that, in the cases where regulation is needed, it’s imperative to “Keep It Simple.”
We’ve been hammering for years on the importance of reinvigorating economic growth, and John Cochrane of the University of Chicago has put lots of the key ideas, big and small, in one new paper. Enjoy.
Here’s a short list of analyses of Jeb Bush’s new tax reform plan, which focuses on reviving economic growth.
- The Tax Foundation (@taxfoundation): Details and analysis of Gov. Jeb Bush’s tax plan;
- Four economists supporting and advising Bush — John Cogan, Glenn Hubbard, Martin Feldstein, and Kevin Warsh: Fundamental Tax Reform: An Essential Pillar of Economic Growth
- William Gale at Brookings: Bush’s tax plan: Something old, something new.
- Editorial page of The Wall Street Journal (@WSJopinion): The Bush Growth Agenda.
Today The Wall Street Journal highlighted a debate over unlicensed wireless spectrum that’s been brewing for the last few months. On one side, mobile carriers like Verizon and T-Mobile are planning to roll out a new technology known as LTE-U that will make use of the existing unlicensed spectrum most commonly used for Wi-Fi. LTE-U is designed to deliver a similar capability as Wi-Fi, namely short-range connectivity to mobile devices. As billions of mobile devices and Web video continue to strain wireless networks and existing spectrum allocations (see “The Immersive Internet Needs More Wireless Spectrum”), mobile service providers (and everyone else) are looking for good sources of spectrum. For now, they’ve found it in the 5 GHz ISM band. The 5 GHz band is a good place to deploy “small cells” (think miniature cell towers delivering transmissions over a much smaller area), which can greatly enhance the capacity, reach, and overall functionality of wireless services.
Google and the cable companies, such as Comcast, however, are opposed to the use of LTE-U because they say LTE-U could interfere with Wi-Fi. The engineering department at the Federal Communications Commission (FCC) has been looking into the matter for the last few months to see whether the objections are valid, but the agency has not yet reported any firm conclusions.
Is this a technical issue? Or a business dispute?
Until I see some compelling technical evidence that LTE-U interferes with Wi-Fi, this looks like a business dispute. Meaning the FCC probably should not get involved. The 2.4 GHz and 5 GHz spectrum in which Wi-Fi (and Bluetooth and other technologies) operate is governed by just a few basic rules. Most crucially, devices must not exceed certain power thresholds, and they can’t actively interfere with one another. Wi-Fi was designed to share nicely, but as everyone knows, large numbers of devices in one area, or large numbers of Wi-Fi hotspots can cause interference and thus degrade performance. The developers of LTE-U have spent the last couple years designing it specifically to play by the rules of the unlicensed spectrum and to play nicely with Wi-Fi.
The early results are encouraging. In real world tests so far,
- LTE-U delivers better performance than Wi-Fi,
- doesn’t degrade nearby Wi-Fi performance, and
- may in fact improve the performance of nearby Wi-Fi networks.
For more commentary and technical analysis, see Richard Bennett’s recent posts here and here. Bennett was an early co-inventor of Wi-Fi and thus knows what he’s talking about. Also, Qualcomm has a white paper here and some good technical reports here and here.
Another line of opposition to LTE-U says that the mobile service providers like Verizon and T-Mobile will use LTE-U to deliver services that compete with Wi-Fi and will thus disadvantage competitive service providers. But the mobile service providers already operate lots of Wi-Fi hotspots. They are some of the biggest operators of Wi-Fi hotspots anywhere. In other words, they already compete (if that’s the right word) with Google and cable firms in this unlicensed space. LTE-U is merely a different protocol that makes use of the same unlicensed spectrum, and must abide by the same rules, as Wi-Fi. The mobile providers just think LTE-U can deliver better performance and better integrate with their wide area LTE-based cellular networks. Consider an analogy: the rental fleets of Hertz and Avis are both made up of Ford vehicles. Hertz then decides to start renting Chevys alongside its Fords. The new Chevys don’t push Fords off the road. They are both cars that must obey the rules of the road and the laws of physics. The two types of vehicles can coexist and operate just as they did before. Hertz is not crowding out Avis because it is now using Chevys.
I’ll be looking for more real world tests that either confirm or contradict the initially encouraging evidence. Until then, we shouldn’t prejudge and block a potentially useful new technology.
I’m delighted to see the robust discussion breaking out over the urgent need to reignite the U.S. economy. The impetus seems to be Jeb Bush’s call last week to implement policies that would boost the U.S. growth rate to 4%, at least for several years. A number of economists and journalists said Bush’s 4% goal was impossible. But others say nonsense; of course we can do much better than we have over the last decade. See Glenn Hubbard and Kevin Warsh, for example, in The Wall Street Journal today. Jon Hartley follows up here. Michael Solon wrote an excellent piece back in February. And John Taylor has been urging the same here and here.
Here’s a selection of my own research and commentary on the topic over the last five years:
The political fights of the last decade have distracted us from what should be, in my view, the central issue of our policy debates—reviving economic growth. First quarter 2015 growth of just 0.2% comes on the heels of a lackluster 2014, when the economy grew just 2.4%. The financial crisis surely took its toll, but for how long can we blame a seven-year-old event, while millions of Americans are denied the opportunities that attend a faster growing economy?
There are many excuses for the first quarter reading. Yes, it was cold. Yes, there might be some statistical aberration—first quarter growth has been conspicuously low for the last few years. But this is not a single-quarter problem. Over the last nine full years, the economy has not achieved 3% growth.
The stock market has recovered nicely, but middle-class Americans and small businesses are struggling with the anxieties of slower growth. If, after the last recession, the U.S. had kept moving ahead at its historical 3% growth rate, the American economy would be $2.3 trillion larger today. (The Congressional Budget Office, using a slightly more conservative analysis, says the economy would be $1.7 trillion larger—still an astounding shortfall.) No, 3% growth is not a law of nature. It is no guarantee. But the failure to clear away self-defeating policies is simply unacceptable. continue reading . . .
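The compounding behind that shortfall is easy to illustrate. A sketch with hypothetical round numbers (a $15 trillion starting economy and a 2 percent actual growth path; these are assumptions for illustration, not the CBO’s figures):

```python
# Compounding a small growth-rate gap over nine years (hypothetical round
# numbers: a $15T starting economy, 3% trend vs. a 2% actual path; not the
# CBO's figures).
gdp_start = 15.0  # trillions of dollars (assumption)
years = 9

trend = gdp_start * 1.03 ** years   # historical 3% growth path
actual = gdp_start * 1.02 ** years  # roughly what the economy managed

print(f"3% path:   ${trend:.1f}T")
print(f"2% path:   ${actual:.1f}T")
print(f"Shortfall: ${trend - actual:.1f}T")
```

Even a single percentage point of annual growth, forgone for nine years, compounds into a shortfall measured in trillions — which is why the trend-versus-actual gap estimates run so large.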
Although lots of firms sat out the public debate over Net Neutrality, we’ve learned that many of them strenuously opposed the FCC’s new Internet regulations behind the scenes. The latest example is Sony, which, according to this Daily Caller story, warned that Title II Internet regulation “might put up roadblocks on how we distribute content.” Plumbing internal emails now available because of the notorious Sony hack, DC found a number of private complaints about the FCC. Sony Pictures Entertainment chief technology officer Spencer Stephens, for example, was adamant:
“The Internet has drawn investment precisely because it isn’t a utility,” Stephens wrote. “My expectation is that prioritized services will mean investment in infrastructure which would expand the size of the pipe.”
Responding to Netflix’s assertions that interconnection disagreements compelled the FCC to enact sweeping regulation, Stephens wrote that “their claims that they have been held to ransom are, IMHO, complete BS.”
Dept. of You Can’t Make This Stuff Up:
Three of the driving forces behind the 10-year effort to regulate the Internet — Netflix, Mozilla, and Google — have, in the last few days and in their own ways, all recanted their zealous support of Net Neutrality. It may have been helpful to have this information . . . before last week, when the FCC plunged the entire Internet industry into a years-long legal war.
First, on Monday, Netflix announced it had entered into a “sponsored data” deal with an Australian ISP, which violates the principles of “strong Net Neutrality,” Netflix’s preferred and especially robust flavor of regulation.
Then on Wednesday, Netflix CFO David Wells, speaking at an investor conference, said
“Were we pleased it pushed to Title II? Probably not,” Wells said at the conference. “We were hoping there might be a non-regulated solution.”
At this week’s huge Mobile World Congress in Barcelona, meanwhile, my AEI colleague Jeff Eisenach reported via Twitter that a Mozilla executive had backtracked:
Mozilla’s Dixon-Thayer is latest #netneutrality advocate to backpedal – “we don’t necessarily favor regulation” #repealtitleII #MWC15MP
3/4/15, 10:44 AM
Add these to the revelations about Google’s newfound reticence. Several weeks ago, in The Wall Street Journal‘s blockbuster exposé, we found out that Google Chairman Eric Schmidt called the White House to protest President Obama’s surprise endorsement of Title II regulation of the Internet. Then, just days before the February 26 vote at the FCC, Google urgently pleaded that the Commission remove the bizarre new regulatory provision known as broadband subscriber access service (BSAS), which would have created out of thin air a hitherto unknown “service” between websites and ISP consumers — in order to regulate that previously nonexistent service. (Ironic, yes, that this BSAS provision was dreamt up by . . . Mozilla.) Google was successful, just 48 hours before the vote, in excising this menacing regulation of a phantom service. But Google and the others are waking up to the fact that Title II and broad Section 706 authority might contain more than a few nasty surprises.
Fred Campbell examined Netflix’s statements over the last year and concluded: “Netflix bluffed. And everybody lost.”
And Yet . . .
The bottom line of these infuriating reversals may actually be a positive for the Internet. These epiphanies — “Holy bit, we just gave the FCC the power to do what!?!” — may wake serious people from the superficial slumber of substance-free advocacy. The epiphanies may give new life to efforts in Congress to find a legislative compromise that would prohibit clear bad behavior (blocking, throttling, etc.) but which would also circumscribe the FCC’s regulatory ambitions and thus allow the Internet to continue on its mostly free and unregulated — and hugely successful — path.
The fanfare surrounding Apple exploded to new levels two weeks ago as we learned that the iPhone maker may enter the automobile business. The Wall Street Journal reported that CEO Tim Cook has hired away top auto executives from Mercedes and Ford and is running a secret car team that may number up to 1,000 employees. Apple, apparently, doesn’t want to let Google, with its driverless car program, or Tesla, the auto darling of the moment, have all the fun. Or, another rumor goes, maybe Apple plans to buy Tesla — for $75 billion. Who knows. Odds are Apple will never build cars. Perhaps Apple is mostly targeting electronics, software, and content in the new and very large “connected car” world.
Whatever the case, it’s not difficult to imagine Apple’s iOS, its apps, its icons, and its designs seeping into more and more devices, from smart-watches to smart-homes to connected cars.
Which gets us to the point of this post . . .
There’s a big oral argument today. No, not the health care hearing at the Supreme Court. Today is the latest round of the four-year patent war between Apple and Samsung. The two smartphone titans have been suing each other all over the world, but the cases have been whittled down to a couple of remaining skirmishes in American courts.
While not the focus of today’s argument, the highest profile issue remains unresolved. Last year a jury found Samsung infringed three fairly minor Apple design patents and awarded Apple $930 million — a huge number considering the nature and consequence of the patents in question. Among other legal arguments at issue is a quirk of patent law, dating to 1887, which says an infringer is liable for its “total profit.” But as we’ve previously explained, in today’s market of hypercomplex products, this rule is perverting rationality.
The question is whether the remedy in these cases — the award to the plaintiff of the total profits earned by the defendant’s product — makes any sense in the modern world.
A smartphone is a complex integration of thousands of hardware and software technologies, manufacturing processes, aesthetic designs and concepts. Each of these components may be patented or licensed, or not, by any number of firms. A smartphone, by one estimate, may contain up to 250,000 patents. Does a minor design patent comprising a tiny fraction of a product’s overall makeup drive the purchase decision? If company A’s product contains one infringing component among many thousands, even if it has no knowledge or intent to infringe, and even if the patent should never have been issued, does it make sense that company B gets all company A’s profits?
There are good reasons to think a fair reading gives a much saner result:
To see why the phrase should be interpreted in a common sense way, consider an alternative plain reading. Why couldn’t “total profit,” for example, mean the entire profit of the firm, including profits derived from unrelated products?
Does anyone think this is the meaning of the law? No. Among other common sense readings, the phrase “to the extent” is a modifier that can easily be read to limit the award in proportion to the severity of the infringement. An additional consideration is that many design patents better resemble trademarks and copyrights, and in fact trademark and copyright law (although imperfect themselves) often provide for more common sense remedies.
Imagine, however, if the reading of the 1887 law that yielded the $930-million award is upheld. Several years from now, Apple’s iOS is installed in Chevrolets and BMWs. But Ford and Lexus are using distinct software that in some way resembles Apple’s. Apple sues Ford and Lexus for a tiny graphical icon containing a bevel that could only have originated in the mind of Sir Jony Ive. Could Apple be awarded all of Ford’s or Lexus’s profits?
Absurd? Yes. But that is the logical extension of the overly-expansive “total profits” reading.
In the last few years, the Supreme Court has reined in software patents in a hugely constructive way. A common sense ruling here would be one more step forward on the path to patent sanity.
In its effort to regulate the Internet, the Federal Communications Commission is swimming upstream against a flood of evidence. The latest data comes from Fred Campbell and the Internet Innovation Alliance, showing the startling disparities between the mostly unregulated and booming U.S. broadband market, and the more heavily regulated and far less innovative European market. In November, we showed this gap using the measure of Internet traffic. Here, Campbell compares levels of investment and competitive choice (see chart below). The bottom line is that the U.S. invests around four times as much in its wired broadband networks and about twice as much in wireless. It’s not even close. Why would the U.S. want to drop America’s hugely successful model in favor of “President Obama’s plan to regulate the Internet,” which is even more restrictive and intrusive than Europe’s?
Net neutrality activists have deployed a long series of rationales in their quest for government control of the Internet. As each rationale is found wanting, they simply move onto the next, more exotic theory. The debate has gone on so long that they’ve even begun recycling through old theories that were discredited long ago.
In the beginning, the activists argued that there should be no pay for performance anywhere on the Net. We pointed out the most obvious example of a harmful consequence of their proposal: their rules, as originally written, would have banned widely used content delivery networks (CDNs), which speed delivery of packets (for a price).
Then they argued that without strong government rules, broadband service providers would block innovation at the “edges” of the network. But for the last decade, under minimal regulation, we’ve enjoyed an explosion of new technologies, products, and services from content and app firms like YouTube, Facebook, Netflix, Amazon, Twitter, WhatsApp, Etsy, Snapchat, Pinterest, Twitch, and a thousand others. Many of these firms have built businesses worth billions of dollars.
They said we needed new rules because the light-touch regulatory environment had left broadband in the U.S. lagging its international rivals, whose farsighted industrial policies had catapulted them far ahead of America. Oops. Turns out, the U.S. leads the world in broadband. (See my colleague Richard Bennett’s detailed report on global broadband and my own.)
Then they argued that, regardless of how well the U.S. is doing, do you really trust a monopoly to serve consumer needs? We need to stop the broadband monopolist — the cable company. Turns out most Americans have several choices in broadband providers, and the list of options is growing — see FiOS, U-verse, Google Fiber, satellite, broadband wireless from multiple carriers, etc. No, broadband service is not like peanut butter. Because of the massive investments required to build networks, there will never be many dozens of wires running to each home. But neither is broadband a monopoly.
Artificially narrowing the market is the first refuge of nearly all bureaucrats concerned with competition. It’s an easy way to conjure a monopoly in almost any circumstance. My favorite example was the Federal Trade Commission’s initial opposition in 2003 to the merger of Häagen-Dazs (Nestlé) and Godiva (Dreyer’s). The government argued it would harmfully reduce competition in the market for “super premium ice cream.” The relevant market, in the agency’s telling, wasn’t food, or desserts, or sweets, or ice cream, or even premium ice cream, but super premium ice cream.
See below our post from TechPolicyDaily.com responding to President Obama’s January 14 speech in Iowa. We’ve added some additional notes at the bottom of the post.
Yesterday, President Obama visited Cedar Falls, Iowa, to promote government-run broadband networks. On Tuesday, he gave a preview of the speech from the Oval Office. We need to help cities and towns build their own networks, he said, because the US has fallen behind the rest of the world. He pointed to a chart on his iPad, which showed many big US cities trailing Paris, Tokyo, Hong Kong, and Seoul in broadband speeds. Amazingly, however, some small US towns with government-owned broadband networks matched these world leaders with their taxpayer-funded deployment of gigabit broadband.
I wish I could find a more polite way to say this, but the President’s chart is utter nonsense. Most Parisians do not enjoy Gigabit broadband. Neither do most residents of Tokyo, Hong Kong, or Seoul, which do in fact participate in healthy broadband markets. Perhaps most importantly, neither do most of the citizens of American towns, like Cedar Falls, Chattanooga, or Lafayette, which are the supposed nirvanas of government-run broadband.*
The chart, which is based on a fundamentally flawed report, and others like it, deliberately obscures the true state of broadband around the world. As my AEI colleagues and I have shown, by the most important and systematic measures, the US not only doesn’t lag, it leads. The US, for example, generates two to three times the Internet traffic (per capita and per Internet user) of the most advanced European and Asian nations.
Combatants in the Net Neutrality wars often seem to talk past each other. Sometimes it’s legitimate miscommunication. More often, though, it arises from fundamental defects in the concept itself.
On December 2, Commissioner Ajit Pai wrote to Netflix, Inc., saying he “was surprised to learn of allegations that Netflix has been working to effectively secure ‘fast lanes’ for its own content on ISPs’ networks at the expense of its competitors.” Commissioner Pai noted press accounts that suggested Netflix’s Open Connect content delivery platform and its use of specialized video streaming protocols put video from non-Netflix sources at a disadvantage. Commissioner Pai concluded that “these allegations raise an apparent conflict with Netflix’s advocacy of strong net neutrality regulations” and thus asked for an explanation.
In its reply of December 11, Netflix made four basic points. Netflix (1) said it “designed Open Connect content delivery network (CDN) to provide consumers with a high-quality video experience”; (2) insisted “Open Connect is not a fast lane . . . . Open Connect helps ISPs reduce costs and better manage congestion, which results in a better Internet experience for all end users”; (3) said it “uses open-source software and readily-available hardware components”; and (4) applauded other firms for developing open video caching standards but “has focused” on its own proprietary system because it is more efficient and customer friendly than the collaborative industry efforts.
Three of Netflix’s four points are reasonable, as far as they go. The company is developing technologies and architectures to improve customer service and beat the competition. The firm, however, seems not to grasp Commissioner Pai’s central point: Netflix relishes aggressive competition on its own behalf but wants to outlaw similarly innovative behavior from the rest of the Internet economy.
Is the U.S. broadband market healthy or not? This question is central to the efforts to change the way we regulate the Internet. In a short new paper from the American Enterprise Institute, we look at a simple way to gauge whether the U.S. has in fact fallen behind other nations in coverage, speed, and price . . . and whether consumers enjoy access to content. Here’s a summary:
- Internet traffic volume is an important indicator of broadband health, as it encapsulates and distills the most important broadband factors, such as access, coverage, speed, price, and content availability.
- US Internet traffic — a measure of the nation’s “digital output” — is two to three times higher than most advanced nations, and the United States generates more Internet traffic per capita and per Internet user than any major nation except for South Korea.
- The US model of broadband investment and innovation — which operates in an environment that is largely free from government interference — has been a dramatic success.
- Overturning this successful policy by imposing heavy regulation on the Internet puts one of America’s most vital industries at risk.
Last week, M-Lab, a group that monitors select Internet network links, issued a report claiming interconnection disputes caused significant declines in consumer broadband speeds in 2013 and 2014.
This was not news. Everyone knew the disputes between Netflix and Comcast/Verizon/AT&T and others affected consumer speeds. We wrote about the controversy here, here, and here, and our “How the Net Works” report offered broader context.
The M-Lab study, “ISP Interconnection and Its Impact on Consumer Internet Performance,” however, does have some good new data. Although M-Lab says it doesn’t know who was “at fault,” advocates seized on the report as evidence of broadband provider mischief at big interconnection points.
But the M-Lab data actually show just the opposite. As you can see in the three graphs below, Comcast, Time Warner, Verizon, and to a lesser extent AT&T all show sharp drops in performance in May of 2013. Then network performance of all four networks at the three monitoring points in New York, Dallas, and Los Angeles all show sudden improvements in March of 2014.
The simultaneous drops and spikes for all four suggest these firms could not have been the cause. It would have required some sort of amazingly precise coordination among the four firms. Rather, the simultaneous action suggests the cause was some outside entity or event. Dan Rayburn of StreamingMedia agrees and offers very useful commentary on the M-Lab study here.
Last week Level 3 posted some new data from interconnection points with three large broadband service providers. The first column of the chart, with data from last spring, shows lots of congestion between Level 3 and the three BSPs. You might recall the battles of last winter and early spring when Netflix streaming slowed down and it accused Comcast and other BSPs of purposely “throttling” its video traffic. (We wrote about the incident here, here, here, and here.)
The second column of the Level 3 chart, with data from September, shows that traffic with two of the three BSPs is much less congested today. Level 3 says, reasonably, the cause for the change is Netflix’s on-net transit (or paid peering) agreements with Comcast and (presumably) Verizon, in which Netflix and the broadband firms established direct connections with one another. As Level 3 writes, “You might say that it’s good news overall.” And it is: these on-net transit agreements, which have been around for at least 15 years, and which are used by Google, Amazon, Microsoft, all the content delivery networks (CDNs), and many others, make the Net work better and more efficiently, cutting costs for content providers and delivering better, faster, more robust services to consumers.
But Level 3 says that, despite this apparent improvement, the data really show the broadband providers demanding “tolls” and that this is bad for the Internet overall. It thinks Netflix and the broadband providers should be forced to employ an indirect A → B → C architecture even when a direct A → C architecture is more efficient.
The Level 3 charts make another, probably unintended, point. Recall that Netflix, starting around two years ago, began building its own CDN called Open Connect. Its intention was always to connect directly to the broadband providers (A → C) and to bypass Level 3 and other backbone providers (B). This is exactly what happened. Netflix connected to Comcast, Verizon, and others (although for a small fee, rather than for free, as it had hoped). And it looks like the broadband providers were smart not to build out massive new interconnection capacity with Level 3 to satisfy a peering agreement that was out of balance, and which, as soon as Netflix left, regained balance. It would have been a huge waste (what they used to call stranded investment).
Below find our Reply Comments in the Federal Communications Commission’s Open Internet proceeding:
September 15, 2014
Twitch Proves the Net Is Working
On August 25, 2014, Amazon announced its acquisition of Twitch for around $1 billion. Twitch (twitch.tv) is a young but very large website that streams video games and the gamers who play them. The rise of Twitch demonstrates the Net is working and, we believe, also deals a severe blow to a central theory of the Order and NPRM.
The NPRM repeats the theory of the 2010 Open Internet Order that “providers of broadband Internet access service had multiple incentives to limit Internet openness.” The theory advances a concern that small start-up content providers might be discouraged or blocked from opportunities to grow. Neither the Order nor the current NPRM considers or even acknowledges evidence or arguments to the contrary — that broadband service providers (BSPs) may have substantial incentives to promote Internet openness. Nevertheless, the Commission now helpfully seeks comment “to update the record to reflect marketplace, technical, and other changes since the 2010 Open Internet Order was adopted that may have either exacerbated or mitigated broadband providers’ incentives and ability to limit Internet openness. We seek general comment on the Commission’s approach to analyzing broadband providers’ incentives and ability to engage in practices that would limit the open Internet.”
The continued growth of the Internet, and the general health of the U.S. Web, content, app, device, and Internet services markets — all occurring in the absence of Net Neutrality regulation — more than mitigate the Commission’s theory of BSP incentives. While there is scant evidence for the theory of bad BSP behavior, there is abundant evidence that openness generally benefits all players throughout the Internet value chain. The Commission cannot ignore this evidence.
The rise of Twitch is a perfect example. In three short years, Twitch went from brand new start-up to the fourth largest single source of traffic on the Internet. Google had previously signed a term sheet with Twitch, but so great was the momentum of this young, tiny company, that it could command a more attractive deal from Amazon. At the time of its acquisition by Amazon, Twitch said it had 55 million unique monthly viewers (consumers) and more than one million broadcasters (producers), generating 15 billion minutes of content viewed a month. According to measurements by the network scientist and Deepfield CEO Craig Labovitz, only Netflix, Google’s YouTube, and Apple’s iTunes generate more traffic.
Ok, maybe that’s a little harsh. But watch this video of T-Mobile CEO John Legere boasting that he’s “spent a lot of time in Washington lately, with the new chairman of the FCC,” and that “they love T-Mobile.”
Ah, spring. Love is in the air. Great for twenty-somethings, not so great for federal agencies. The FCC, however, is thinking about handing over valuable wireless spectrum to T-Mobile and denying it to T-Mobile’s rivals. This type of industrial policy is partially responsible for the sluggish economy.
From taxpayer subsidies for connected Wall Street banks to favored green energy firms with the right political allies, cronyism prevents the best firms from serving consumers with the best products in the most efficient way. Cronyism is good (at least temporarily) for a few at the top. But it hurts everyone else. Government favors ensure that bad ideas and business models are supported even if they would have proved wanting in a more neutral market. They transfer scarce taxpayer dollars to friends and family. They also hurt firms who aren’t fortunate enough to have the right friends in the right places at the right time. It’s hard to compete against a rival who has the backing of Washington. The specter of arbitrary government then hangs over the economy as firms and investors make decisions not on the merits but on a form of kremlinology — what will Washington do? In the case at hand, cronyism could blow up the whole spectrum auction, an act of wild irresponsibility in the service of a narrow special interest (we’ve written about it here, here, and here).
The U.S. has never been perfectly free of such cronyism, but our system was better than most and over the centuries attracted the world’s financial and human capital because investors and entrepreneurs knew that in the U.S. the best ideas and the hardest work tend to win out. Effort, smarts, and risk capital won’t be snuffed out by some arbitrary bureaucratic decision or favor. That was the stuff of Banana Republics — the reason financial and human capital fled those spots for America, preferring the Rule of Law to the Whim of Man.
The FCC’s prospective auction rules are perplexing in part because the U.S. mobile industry is healthy — world-leading healthy. More usage, faster speeds, plummeting prices, etc. Why risk interrupting that string of success? Economist Hal Singer shows that in the FCC’s voluminous reports on the wireless industry, it has failed to present any evidence of monopoly power that would justify its rigging of the spectrum auctions. On the other hand, an overly complex auction could derail spectrum policy for a decade.
On February 28, the Bureau of Economic Analysis revised fourth quarter U.S. GDP growth downward to just 2.4% from an initial estimate of 3.2%. For 2013, the economy expanded just 1.9%, nearly a point lower than the lackluster 2.8% growth of 2012. Five years after the sharp downturn of 2008-09, we are still just limping along.
Granted, the stock market keeps making all-time highs. That is not insignificant, and in the past rising stocks often signaled growth ahead. Another important consideration weighing against depressingly slow growth is a critique of our economic measures themselves. Does gross domestic product (GDP), for example, accurately capture output, let alone value, technical progress, and overall wellbeing? A new book GDP: A Brief But Affectionate History, by Diane Coyle, examines some of the shortcomings of GDP-the-measure. And lots of smart commentary has been written on the ways that technologies that improve standards of living often don’t show up on official ledgers — from anesthesia to the massive consumer surpluses afforded by information technology. In addition, although income inequality is said by many to have grown, consumption inequality has, by many measures, substantially fallen. All true and interesting and important, and worthy of much further discussion at a later date.
For now, however, we still must pay the butcher, the baker, and the aircraft carrier maker — with real dollars. And the dollar economy is not growing nearly fast enough. We’ve sliced and diced the poor employment data a thousand ways these last few years, but one of the most striking recent figures is the fall in the portion of American men 25-54 who are working. Looking at this cohort tends to minimize the possible retirement and schooling factors that could skew the analysis. We simply presume that most able-bodied men in this range should be working. And the numbers are bad. As Binyamin Appelbaum of the New York Times Economix blog writes:
In February 2008, 87.4 percent of men in that demographic had jobs. Six years later, only 83.2 percent of men in that bracket are working.
Are these working-age men not working because they are staying home with children? Because they don’t have the right skills for today’s economy? Because the economy is not growing fast enough and creating enough opportunities? Because they are discouraged? Because policies have actively discouraged work in favor of leisure, or at least non-work?
The polymathic thinker Herman Kahn, back in the 1970s book The Next 200 Years, suggested another possibility. Kahn first recounted the standard phases of economic history: a primary economy that focused on extraction — agriculture, mining, forestry; a secondary economy focused on construction and manufacturing; and a tertiary economy, primarily composed of services, management, and knowledge work. But Kahn went further, pointing toward a “quaternary society,” where work would be beside the point and various types of personal fulfillment would rise in importance. Where the primary society conducted games against nature, the secondary society conducted games against materials, and the tertiary society pitted organizations against other organizations, people in the quaternary society would play “games with and against themselves, . . . each other, and . . . communities.” Much of this activity, he said, would range from obsessions with gourmet cooking and interior design, to hunting, hiking, and fishing, to exercise, adventures, and public campaigns and causes. He said quaternary activities would look a lot like leisure, or hobbies. He predicted many of us in the future would see this as “stagnation.”
If any of you have checked out twitch.tv, you might think Kahn was on to something. Twitch.tv is a website that broadcasts other people playing and commentating on video games in real-time. It appears to be an entirely “meta” activity. But twitch.tv is no tiny fringe curiosity. It is the fourth largest consumer of bandwidth on the Internet.
Is twitch.tv responsible for millions of American men dropping out of the labor force? No. But the Kahn hypothesis is, nevertheless, provocative and worth thinking about.
The possibility of a quaternary economy, however, depends in some measure on substantial wealth. And here one could make a case either way. Is it possible the large consumer surpluses of the modern knowledge economy allow us to provide for our basic needs quite easily and, if we are not driven by other ambitions or internal drives, live somewhat comfortably without sustained effort in a conventional job? Perhaps some of this is going on. Is America really so wealthy, however, that large portions of society — not merely the super wealthy — can drop out of work and pursue hobbies full time? Unlikely. There is evidence that many Baby Boomers near retirement, or even those who had retired, are working more than they’d planned to make up for lost savings. Kahn’s quaternary economy will have to wait.
I say we won’t know the answers to many of these questions until we remove the shackles around the economy’s neck and see what happens. If we start fresh with a simple tax code, substantially deregulate health, education, energy, and communications, and remove other barriers to work, investment, and entrepreneurship, will just 83% of working-age men continue choosing to work? And will GDP, as imperfect a measure as it is, limp along around 2%? (Charles Murray, presenting a new paper at a recent Hudson Institute roundtable on the future of American innovation, hit us with some seriously pessimistic cultural indicators. More on that next time.)
I doubt it. I don’t think humankind has permanently sloughed off its internal ambition toward improvement, growth, and (indirectly) GDP generation. I think new policy and new optimism could unleash an enormous boom.
Phone Company Screws Everyone: Forces Rural Simpletons and Elderly Into Broadband, Locks Young Suburbanites in Copper Cage
Big companies must often think, damned if we do, damned if we don’t.
See our coverage of Comcast-Netflix, which really began before any deal was announced. Two weeks ago we wrote about the stories that Netflix traffic had slowed, and we suggested a more plausible explanation (negotiations over interconnection) than the initial suspicion (so-called “throttling”). Soon after, we released a short paper, long in the works, describing “How the Net Works” — a brief history of interconnection and peering. And this week we wrote about it all at TechPolicyDaily, Forbes, and USNews.
Netflix, Verizon, and the Interconnection Question – TechPolicyDaily.com – February 13, 2014
How the Net Works: A Brief History of Internet Interconnection – Entropy Economics – February 21, 2014
Comcast, Netflix Prove Internet Is Working – TechPolicyDaily.com – February 24, 2014
Netflix, Comcast Hook Up Sparks Web Drama – Forbes.com – February 26, 2014
Comcast, Netflix and the Future of the Internet – U.S. News & World Report – February 27, 2014
— Bret Swanson
Amazing! An iPhone is more capable than 13 distinct electronics gadgets, worth more than $3,000, from a 1991 Radio Shack ad. Buffalo writer Steve Cichon first dug up the old ad and made the point about the seemingly miraculous pace of digital advance, noting that an iPhone incorporates the features of the computer, CD player, phone, “phone answerer,” and video camera, among other items in the ad, all at a lower price. The Washington Post‘s tech blog The Switch picked up the analysis, and lots of people then ran with it on Twitter. Yet the comparison was, unintentionally, a huge dis to the digital economy. It massively underestimates the true pace of technological advance and, despite its humor and good intentions, actually exposes a shortcoming that plagues much economic and policy analysis.
To see why, let’s do a very rough, back-of-the-envelope estimate of what an iPhone would have cost in 1991.
In 1991, a gigabyte of hard disk storage cost around $10,000, perhaps a touch less. (Today, it costs around four cents ($0.04).) Back in 1991, a gigabyte of flash memory, which is what the iPhone uses, would have cost something like $45,000, or more. (Today, it’s around 55 cents ($0.55).)
The mid-level iPhone 5S has 32 GB of flash memory. Thirty-two GB, multiplied by $45,000, equals $1.44 million.
The iPhone 5S uses Apple’s latest A7 processor, a powerful CPU, with an integrated GPU (graphics processing unit), that totals around 1 billion transistors, and runs at a clock speed of 1.3 GHz, producing something like 20,500 MIPS (millions of instructions per second). In 1991, one of Intel’s top microprocessors, the 80486SX, oft used in Dell desktop computers, had 1.185 million transistors and ran at 20 MHz, yielding around 16.5 MIPS. (The Tandy computer in the Radio Shack ad used a processor not nearly as powerful.) A PC using the 80486SX processor at the time might have cost $3,000. The Apple A7, by the very rough measure of MIPS, which probably underestimates the true improvement, outpaces that leading edge desktop PC processor by a factor of 1,242. In 1991, the price per MIPS was something like $30.
So 20,500 MIPS in 1991 would have cost around $620,000.
But there’s more. The 5S also contains the high-resolution display, the touchscreen, Apple’s own M7 motion processing chip, Qualcomm’s LTE broadband modem and its multimode, multiband broadband transceiver, a Broadcom Wi-Fi processor, the Sony 8 megapixel iSight (video) camera, the fingerprint sensor, power amplifiers, and a host of other chips and motion-sensing MEMS devices, like the gyroscope and accelerometer.
In 1991, a mobile phone used the AMPS analog wireless network to deliver kilobit voice connections. A 1.44 megabit T1 line from the telephone company cost around $1,000 per month. Today’s LTE mobile network is delivering speeds in the 15 Mbps range. Wi-Fi delivers speeds up to 100 Mbps (limited, of course, by its wired connection). Safe to say, the iPhone’s communication capacity is at least 10,000 times that of a 1991 mobile phone. Almost the entire cost of a phone back then was dedicated to merely communicating. Say the 1991 cost of mobile communication (only at the device/component level, not considering the network infrastructure or monthly service) was something like $100 per kilobit per second.
Fifteen thousand Kbps (15 Mbps), multiplied by $100, is $1.5 million.
Considering only memory, processing, and broadband communications power, duplicating the iPhone back in 1991 would have (very roughly) cost: $1.44 million + $620,000 + $1.5 million = $3.56 million.
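The arithmetic above is easy to reproduce. Here is a minimal sketch of the estimate; the unit prices are the rough 1991 figures cited in the text, not precise data.

```python
# Back-of-the-envelope 1991 cost of an iPhone 5S's memory, processing,
# and broadband capabilities, using the rough unit prices from the text.

flash_gb = 32                    # iPhone 5S flash memory
flash_cost_per_gb_1991 = 45_000  # ~$45,000 per GB of flash in 1991
memory_cost = flash_gb * flash_cost_per_gb_1991          # $1.44 million

a7_mips = 20_500                 # rough MIPS estimate for Apple's A7
cost_per_mips_1991 = 30          # ~$30 per MIPS in 1991
processing_cost = a7_mips * cost_per_mips_1991           # $615,000 (~$620,000 rounded)

lte_kbps = 15_000                # ~15 Mbps LTE throughput
cost_per_kbps_1991 = 100         # ~$100 per kbit/s of mobile communication
comms_cost = lte_kbps * cost_per_kbps_1991               # $1.5 million

total = memory_cost + processing_cost + comms_cost
print(f"Memory:        ${memory_cost:,}")
print(f"Processing:    ${processing_cost:,}")
print(f"Communication: ${comms_cost:,}")
print(f"Total:         ${total:,}")                      # ~$3.56 million
```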
This doesn’t even account for the MEMS motion detectors, the camera, the iOS operating system, the brilliant display, or the endless worlds of the Internet and apps to which the iPhone connects us.
This account also ignores the crucial fact that no matter how much money one spent, it would have been impossible in 1991 to pack that much technological power into a form factor the size of the iPhone, or even a refrigerator.*
Tim Lee at The Switch noted the imprecision of the original analysis and correctly asked how typical analyses of inflation can hope to account for such radical price drops. (Harvard economist Larry Summers recently picked up on this point as well.)
But the fact that so many were so impressed by an assertion that an iPhone possesses the capabilities of $3,000 worth of 1991 electronics products — when the actual figure exceeds $3 million — reveals how fundamentally difficult it is to think in exponential terms.
Innovation blindness, I’ve long argued, is a key obstacle to sound economic and policy thinking. And this is a perfect example. When we make policy based on today’s technology, we don’t just operate mildly sub-optimally. No, we often close off entire pathways to amazing innovation.
Consider the way education policy has mostly enshrined a 150-year-old model, and in recent decades has thrown more money at the same broken system while blocking experimentation. The other day, the venture capitalist Marc Andreessen (@pmarca) noted in a Twitter missive the huge, but largely unforeseen, impact digital technologies are having on this industry that so desperately needs improvement:
“Four biggest K-12 education breakthroughs in last 20 years: (1) Google, (2) Wikipedia, (3) Khan Academy, (4) Wolfram Alpha.”
Maybe the biggest breakthroughs of the last 50 years. Point made, nonetheless. California is now closing down “coding bootcamps” — courses that teach people how to build apps and other software — because many of them are not state certified. This is crazy.
The importance of understanding the power of innovation applies to health care, energy, education, and fiscal policy, but nowhere is it more applicable than in Internet and technology policy, which is, at the moment, the subject of a much-needed rethink by the House Energy and Commerce Committee.
— Bret Swanson
* To be fair, we do not account for the fact that back in 1991, had engineers tried to design and build chips and components with faster speeds and greater capacities than the consumer items mentioned, they could in some cases have scaled the technology more efficiently than, for example, simply adding up consumer microprocessors totaling 20,500 MIPS. On the other hand, the extreme volumes of the consumer products in these memory, processing, and broadband communications categories are what make the price drops possible. So this acknowledgment doesn’t change the analysis too much, if at all.
My AEI tech policy colleagues and I discussed today’s net neutrality ruling, which upheld the FCC’s basic ability to oversee broadband but vacated the two major, specific regulations.
Today, the D.C. Federal Appeals Court struck down the FCC’s “net neutrality” regulations, arguing the agency cannot regulate the Internet as a “common carrier” (that is, the way we used to regulate telephones). Here, from a pre-briefing I and several AEI colleagues did for reporters yesterday, is a summary of my statement:
Today, at the Consumer Electronics Show in Las Vegas, AT&T said it would begin letting content firms — Google, ESPN, Netflix, Amazon, a new app, etc. — pay for a portion of the mobile data used by consumers of this content. If a mobile user has a 2 GB plan but likes to watch lots of Yahoo! news video clips, which consume a lot of data, Yahoo! can now subsidize that user by paying for that data usage, which won’t count against the user’s data limit.
Lots of people were surprised — or “surprised” — at the announcement and reacted violently. They charged AT&T with “double dipping,” imposing “taxes,” and of course the all-purpose net neutrality violation.
But this new sponsored data program is typical of multisided markets where a platform provider offers value to two or more parties — think magazines who charge both subscribers and advertisers. We addressed this topic before the idea was a reality. Back in June 2013, we argued that sponsored data would make lots of mobile consumers better off and no one worse off.
Two weeks ago, for example, we got word ESPN had been talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.
Sounds like a reasonable deal all around. But not to everyone. “This is what a net neutrality violation looks like,” wrote Public Knowledge, a key backer of Internet regulation.
The idea that ESPN would pay to exempt its bits from data caps offends net neutrality’s abstract notion that all bits must be treated equally. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data — plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as a non-watcher of ESPN, am worse off.
The critics’ real worry, then, is that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But is this government’s role — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms? This was our warning. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?
What if magazines were barred from carrying advertisements? They’d have to make all their money from subscribers and thus (attempt to) charge much higher prices or change their business model. Consumers would lose, either through higher prices or less diversity of product offerings. And advertisers, deprived of an outlet to reach an audience, would lose. That’s what we call a lose-lose-lose proposition.
Maybe sponsored data will take off. Maybe not. It’s clear, however, in the highly dynamic mobile Internet business, we should allow such voluntary experiments.
[W]e have these big agencies, some of which are outdated, some of which are not designed properly . . . . The White House is just a tiny part of what is a huge, widespread organization with increasingly complex tasks in a complex world.
That was President Obama, last week, explaining Obamacare’s failed launch. We couldn’t have said it better ourselves.
Where Washington thinks this is a reason to give itself more to do, with more resources, however, we see it as a blaring signal of overreach.
The Administration now says Healthcare.gov is operating with “private sector velocity and effectiveness.” But why seek to further governmentalize one-seventh of the economy if the private sector is faster and more effective than government?
Meanwhile, the New York Times notes that
The technology troubles that plagued the HealthCare.gov website rollout may not have come as a shock to people who work for certain agencies of the government — especially those who still use floppy disks, the cutting-edge technology of the 1980s.
Every day, The Federal Register, the daily journal of the United States government, publishes on its website and in a thick booklet around 100 executive orders, proclamations, proposed rule changes and other government notices that federal agencies are mandated to submit for public inspection.
So far, so good.
It turns out, however, that the Federal Register employees who take in the information for publication from across the government still receive some of it on the 3.5-inch plastic storage squares that have become all but obsolete in the United States.
Floppy disks make us chuckle. But the costs of complexity are all too real.
A Bloomberg study found the six largest U.S. banks, between 2008 and August of this year, spent $103 billion on lawyers and related legal expenses. These costs pale compared to the far larger economic distortions imposed by metastasizing financial regulation. Even Barney Frank is questioning whether his signature law, Dodd-Frank, is a good idea. The bureaucracy’s decision to push regulations intended for big banks onto money managers and mutual funds seems to have tipped his thinking.
This is not an aberration. This is what happens with vast, complex, ambiguous laws, which ask “huge, widespread” bureaucracies to implement them.
It is the norm of today’s sprawling Administrative State and of Congress’s penchant for 2,000-page wish lists, which ineluctably empower that Administrative State.
We resist, however, the idea that the problem is merely “outdated” or “inefficient” bureaucracy.
We do not need better people to administer these “laws.” With laws and regulations this extensive and ambiguous, they are inherently political. The best managers would seek efficient and effective outcomes based on common-sense readings and would resist political tampering. Effective implementation of conflicting and economically irrational rules would still yield big problems. Regardless, the goal is not effective management — it is political control.
Agency “reform” is not the answer, although in most cases reform is preferable to no reform. Even reformed agencies do not possess the information to manage a “complex world.” Anyway, “competent” management is not what the political branches want. Agencies routinely evade existing controls — such as procurement rules — when convenient. The largest Healthcare.gov contractor, for example, reportedly got the work without any competing bids. That is not an oversight; it is a decision.
The laws and rules are uninterpretable by the courts. Depending on which judges hear the cases, we get dramatically and unpredictably divergent analyses, or the type of baby splitting Chief Justice Roberts gave us on Obamacare. Judges thus end up either making their own law or throwing the question back into the political arena.
Infinite complexity of law means there is no law.
“With great power,” Peter Parker’s (aka Spiderman’s) uncle told us, “comes great responsibility.” For Washington, however, ambiguity and complexity are features, not bugs. Ambiguity and complexity promote control without accountability, power without responsibility.
The only solution to this crisis of complexity is to reform the very laws, rules, scope, and aims of government itself.
In a paper last spring called “Keep It Simple,” we highlighted two instances — one from the labor markets and one from the capital markets — where even the most well-intended rules yielded catastrophic results. We showed how the interactions among these rules and the supporting bureaucracies produced unintended consequences. And we outlined a basic framework for assessing “good rules and bad rules.”
As our motto and objective, we adopted Richard Epstein’s aspiration of “simple rules for a complex world.” Which, you will notice, is just the opposite of the problem so incisively outlined by the President — Washington’s failed attempts to perform “complex tasks in a complex world.”
As we wrote elsewhere,
The private sector is good at mastering complexity and turning it into apparent simplicity — it’s the essence of wealth creation. At its best, the government is a neutral arbiter of basic rules. The Administration says it is ‘discovering’ how these ‘complicated’ things can blow up. We’ll see if government is capable of learning.
See our new 20-page report — Digital Dynamism: Competition in the Internet Ecosystem:
The Internet is altering the communications landscape even faster than most imagined.
Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.
The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.
The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.
- Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread wireless Wi-Fi nodes then extend these broadband connections.
- Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, ecommerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
- Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.
The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.
Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.
Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example, the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).
We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)
If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and around two and a half times that of Western Europe.
Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.
Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.
Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.
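The metric itself is simple division. Here is a minimal sketch of the traffic-per-user calculation, with made-up regional figures standing in for the actual Cisco VNI and ITU data.

```python
# Traffic-per-user metric: a region's monthly IP traffic divided by its
# Internet-user count. All figures below are illustrative placeholders,
# not actual Cisco VNI or ITU numbers.

def traffic_per_user(monthly_traffic_pb: float, internet_users_m: float) -> float:
    """Return monthly traffic in gigabytes per Internet user."""
    gigabytes = monthly_traffic_pb * 1_000_000     # petabytes -> gigabytes
    users = internet_users_m * 1_000_000           # millions -> individuals
    return gigabytes / users

# (monthly traffic in PB, Internet users in millions) — hypothetical regions
regions = {
    "Region A": (20_000, 250),
    "Region B": (12_000, 400),
}
for name, (pb, users_m) in regions.items():
    print(f"{name}: {traffic_per_user(pb, users_m):.1f} GB/user/month")
```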
My TechPolicyDaily colleague Roslyn Layton has begun a series comparing the European and U.S. broadband markets.
As a complement to her work, I thought I’d address a common misperception — the notion that American broadband networks are “pathetically slow.” Backers of heavier regulation of the communications market have used this line over the past several years, and for a time it achieved a sort of conventional wisdom. But is it true? I don’t think so.
Real-time speed data collected by the Internet infrastructure firm Akamai shows U.S. broadband is the fastest of any large nation, and trails only a few tiny, densely populated countries. Akamai lists the top 10 nations in categories such as average connection speed; average peak speed; percent of connections with “fast” broadband; and percent of connections with broadband. The U.S., for example, ranks eighth among nations in average connection speed. And this is the number that is oft quoted. (This is a bit better than the no-longer-oft-used broadband penetration figures, which perennially showed the U.S. further down the list, at 15th or 26th place, for example.) Nearly all the nations on these speed lists, however, with the exception of the U.S., are small, densely populated countries where it is far easier and more economical to build high-speed networks.
How to fix this? Well, Akamai also lists the top 10 American states in these categories. Because states are smaller, like the small nations that top the global list, they are a more appropriate basis for comparison. Last winter I combined the national and state figures and compiled a more appropriate comparative list. Using the newest data, I’ve updated the tables, which show that U.S. states (highlighted in green) dominate.
- Ten of the top 13 entities for “average connection speed” are U.S. states.
- Ten of the top 15 in “average peak connection speed” are U.S. states.
- Ten of the top 12 in “percent of connections above 10 megabits per second” are U.S. states.
- Ten of the top 20 in “percent of connections above 4 megabits per second” are U.S. states.
U.S. states thus account for 40 of the top 60 slots — or two-thirds — in these measures of actual global broadband speeds.
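The combine-and-rank exercise is easy to reproduce. The sketch below merges a nation list and a state list for one metric into a single ranking; the speed figures are illustrative placeholders, not Akamai’s actual data.

```python
# Merge Akamai-style top-nation and top-state lists for one metric
# (average connection speed) into a single ranked table.
# Speeds in Mbps are hypothetical placeholders, not Akamai data.

nations = {"South Korea": 22.1, "Japan": 13.3, "Switzerland": 12.0}
us_states = {"Virginia": 13.7, "Delaware": 13.1, "Massachusetts": 12.8}

combined = {**nations, **us_states}
ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

for rank, (entity, mbps) in enumerate(ranked, start=1):
    marker = " (US state)" if entity in us_states else ""
    print(f"{rank:2}. {entity}{marker}: {mbps} Mbps")
```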
This is not a comprehensive analysis of the entire U.S. Less populated geographic areas, where it is more expensive to build networks, don’t enjoy speeds this high. But the same is true throughout the world.
On Tuesday this week, the American Enterprise Institute launched an exciting new project — the Center for Internet, Communications, and Technology. I was happy to participate in the inaugural event, which included talks by CEA chairman Jason Furman and Rep. Greg Walden (R-OR). We discussed broadband’s potential to boost economic productivity and focused on the importance and key questions of wireless spectrum policy. See the video below:
Wealth, however, can be a double-edged sword. With wealth comes resilience and thus an increased capacity to take risk. More risk can lead to further riches. Yet greater wealth also increases potential losses. In other words, we have a lot more to gain and a lot more to lose.
Perhaps it is not surprising then that many modern elites and policymakers see danger around every corner—from terrorism to climate change to financial calamity. In one sense, an obsession with risk is a luxury of wealth. It is prudent to identify present shortcomings and contemplate future problems and attempt to avoid them. Preventing hunger, unemployment, bomb plots, wars, and financial panics is a good thing.
What happens, though, when we develop a hyper-focus on shortcomings and potential losses? What happens when we seek a public policy remedy for every perceived problem? This kind of obsession with risk, danger, and downside may be counterproductive. It may exacerbate known problems and unleash dangers never dreamed of. . . . read the entire article.
Today the D.C. Federal Appeals Court hears Verizon’s challenge to the Federal Communications Commission’s “Open Internet Order” — better known as “net neutrality.”
Hard to believe, but we’ve been arguing over net neutrality for a decade. I just pulled up some testimony George Gilder and I prepared for a Senate Commerce Committee hearing in April 2004. In it, we asserted that a newish “horizontal layers” regulatory proposal, then circulating among comm-law policy wonks, would become the next big tech policy battlefield. Horizontal layers became net neutrality, the Bush FCC adopted the non-binding Four Principles of an open Internet in 2005, the Obama FCC pushed through actual regulations in 2010, and now today’s court challenge, which argues that the FCC has no authority to regulate the Internet and that, in fact, Congress told the FCC not to regulate the Internet.
Over the years we’ve followed the debate, and often weighed in. Here’s a sampling of our articles, reports, reply comments, and even some doggerel:
- “CBS-Time Warner Cable Spat Shows (Once Again) Why ‘Net Neutrality’ Won’t Work” – by Bret Swanson – August 9, 2013
- “Verizon, ESPN, and the Future of Broadband” – by Bret Swanson – Forbes.com – June 4, 2013
- “The Internet Survives, and Thrives, For Now” – by Bret Swanson – RealClearMarkets – December 6, 2010
- “Reply Comments to the FCC’s Open Internet Further Inquiry” – by Bret Swanson – November 4, 2010
- “Net Neutrality, Investment, and Jobs: Assessing the Potential Impact on the Broadband Ecosystem” – by Charles M. Davidson and Bret T. Swanson, Advanced Communications Law and Policy Institute, New York Law School, June 16, 2010
- “The Regulatory Threat to Web Video” – by Bret Swanson – Forbes.com – May 17, 2010
- “Reply Comments in the FCC Matter of ‘Preserving the Open Internet’” – by Bret Swanson – April 26, 2010
- “What Would Net Neutrality Mean for U.S. Jobs?” – by Bret Swanson – February 5, 2010
- “Net Neutrality’s Impact on Internet Innovation” – prepared for the New York City Council – by Bret Swanson – November 20, 2009
- “Google and the Problem With ‘Net Neutrality’” – by Bret Swanson, The Wall Street Journal, October 5, 2009
- “Leviathan Spam” – by Bret Swanson – A whimsical take on “Net Neutrality” – September 23, 2009
- “Unleashing the ‘Exaflood’” – by Bret Swanson and George Gilder, The Wall Street Journal, February 22, 2008
- “The Coming Exaflood” – by Bret Swanson, The Wall Street Journal, January 20, 2007
- “Let There Be Bandwidth” – by Bret Swanson, The Wall Street Journal, March 7, 2006
- “Testimony For Telecommunications Policy: A Look Ahead” – testimony before the Senate Commerce Committee – by George Gilder – April 28, 2004
— Bret Swanson
Washington is getting closer to unleashing more spectrum to fuel the digital economy and stay ahead of capacity constraints that will stymie innovation and raise prices for consumers. Ahead of the July 23 Congressional hearing on spectrum auctions, we should keep a couple things in mind. First and foremost, we need “Simple Rules for a Complex World.” It’s a basic idea that should apply to all policymaking. But especially in the exceedingly complex and fast-moving digital ecosystem.
A number of firms are seeking special rules that would complicate — and possibly undermine — the auctions. They want to exclude some rival firms from bidding in the auctions. They are suggesting exclusions, triggers, “one-third caps,” and other Rube Goldberg mechanisms they hope will tip the auction scales in their favor.
Using examples from the labor markets and capital markets, we showed in a recent paper that complex policies — even though well intended and designed by smart people — often yield perverse results. Laws and regulations should be few, simple, and neutral. Those advocating the special auction rules favor a process that is complex and biased.
They are also using complicated arguments to back their preferred complicated process. Some are asserting a “less is more” theory of auctions — the idea that fewer bidders can yield higher auction revenues. If it seems counterintuitive, it is. Their theory is based on a very specific, hypothetical auction where a dominant monopolist might scare off a potential new market entrant from bidding at all and walk away with the underpriced auction items. This hypothetical does not apply to America’s actual wireless spectrum market.
The U.S. has four national mobile service providers and a number of regional providers. We have lots of existing players, most of whom plan to bid in the auctions. As all the theory and evidence shows, in this situation, an open process with more bidders means a better auction — spectrum flowing to its highest value uses and more overall revenue.
Some studies show a policy excluding the top two bidders in the auction could reduce revenue by up to 40% — or $12 billion. This would not only prevent spectrum from flowing to its best use but could also jeopardize the whole purpose of the incentive auction, because lower prices could discourage TV broadcasters from “selling” their valuable airwaves. If the auction falls short, that means less spectrum, less mobile capacity, slower mobile broadband, and higher consumer prices. (See our recent Forbes article on the topic.)
Fortunately, several Members of Congress are adhering to the Simple Rules idea. They want to keep the spectrum auction open and competitive. They think this will yield the most auction revenues and ensure the maximum amount of underutilized broadcast spectrum is repurposed for wireless broadband.
The argument for simple auction rules is simple. The argument for complex auction rules is very complicated.
The complexities of the Affordable Care Act (aka Obamacare) are multiple, metastasizing, and increasingly well-known. Less known is an additional layer of health care regulation slated for implementation next year: the system by which doctors and hospitals code for conditions, injuries, and treatments. By way of illustration, in the old system, a broken arm might get the code 156; pneumonia might be 397. The new system is much more advanced. As Ben Domenech notes:
- The new codes account for injury sites, ranging from opera houses to chicken coops to squash courts.
- One code is for “burn due to water-skis on fire” and another for “walked into lamppost.”
- The codes are so nuanced that “bitten by turtle” and “struck by turtle” are separate codes.
- There are a total of nine codes strictly pertaining to injuries that occur in and around a mobile home.
- There are 72 codes pertaining to birds and 312 codes related to animals.
- There’s even a code for civilian drone strikes.
In all, the new system, known as ICD-10, will boast 140,000 codes, a near-eight-fold rise over the mere 18,000 codes in ICD-9. It is a good example of the way bureaucracies grow in size and complexity in an attempt to match the complexity of society and the economy. This temptation, however, is usually perverse.
Complexity in the economy means new technologies, more specialized goods and services, and more consumer and vocational choice. Economic complexity, however, is built upon a foundation of simplicity – clear, basic rules and institutions. Simple rules encourage experimentation, promote long-term investments and entrepreneurial ventures, and allow the flexibility to drive and accommodate diversity.
Complex rules, on the other hand, often lead to just the opposite: less experimentation, investment, entrepreneurship, diversity, and choice.
It is difficult to quantify the effects of the metastasizing Administrative State. It is impossible to calculate, say, the cost of regulations that prohibit, discourage, or delay innovation. Likewise, what is the cost of the regulations that, arguably, helped cause the Financial Panic of 2008 and its policy fallout? No one can say with precision. For twenty years, though, the Competitive Enterprise Institute has catalogued regulatory complexity as well as anyone, and its latest report is astonishing.
Federal regulation, CEI’s latest “10,000 Commandments” survey finds, costs the U.S. economy some $1.8 trillion annually. That’s more than 10% of GDP, or nearly $15,000 per household. These estimates largely predate the implementation of the ACA, Dodd-Frank, and new rounds of EPA intervention. In other words, it’s only getting worse.
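CEI’s headline figures are easy to sanity-check with back-of-envelope arithmetic. The GDP and household counts below are assumed round numbers for the period, not figures from the report:

```python
# Back-of-envelope check of CEI's headline regulatory-cost figures.
# GDP and household counts are assumed round numbers, not from the report.
reg_cost = 1.8e12      # CEI estimate of annual regulatory cost, dollars
gdp = 16.0e12          # approximate U.S. GDP at the time
households = 118e6     # approximate number of U.S. households

share_of_gdp = reg_cost / gdp              # fraction of output absorbed
cost_per_household = reg_cost / households # burden spread across households

print(f"{share_of_gdp:.1%} of GDP")                 # a bit over 11%
print(f"${cost_per_household:,.0f} per household")  # roughly $15,000
```

Dividing by households rather than individuals is what yields the roughly $15,000 figure; spread over the whole population, the same $1.8 trillion works out to closer to $5,700 a person.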
The legal scholar Richard Epstein argues that
The dismal performance of the IRS is but a symptom of a much larger disease which has taken root in the charters of many of the major administrative agencies in the United States today: the permit power. Private individuals are not allowed to engage in certain activities or to claim certain benefits without the approval of some major government agency. The standards for approval are nebulous at best, which makes it hard for any outside reviewer to overturn the agency’s decision on a particular application.
That power also gives the agency discretion to drag out its review, since few individuals or groups are foolhardy enough to jump the gun and set up shop without obtaining the necessary approvals first. It takes literally a few minutes for a skilled government administrator to demand information that costs millions of dollars to collect and that can tie up a project for years. That delay becomes even longer for projects that need approval from multiple agencies at the federal or state level, or both.
The beauty of all of this (for the government) is that there is no effective legal remedy. Any lawsuit that protests the improper government delay only delays the matter more. Worse still, it also invites that agency (and other agencies with which it has good relations) to slow down the clock on any other applications that the same party brings to the table. Faced with this unappetizing scenario, most sophisticated applicants prefer quiet diplomacy to frontal assault, especially if their solid connections or campaign contributions might expedite the application process. Every eager applicant may also be stymied by astute competitors intent on slowing the approval process down, in order to protect their own financial profits. So more quiet diplomacy leads to further social waste.
Epstein argues the FDA, EPA, FCC, and other agencies all use this permit power to control firms, industries, and people beyond any reasonable belief they are providing a net benefit to society.
The agencies are guilty of overreach and promoting their own metastasis. Yet without Congress and the President, they would have little or no power. Congresses and Presidents increasingly pass thousand-page laws that ask agencies to produce tens of thousands of pages of attendant rules and regulations. Complexity grows. Accountability is lost. The economy suffers. And the corrective paths, which fix mistakes and promote renewal in the rest of the economy and society, are blocked. We then pile on tomorrow’s complexity to “fix” the flaws created by yesterday’s complexity.
The hyper-regulation of the economy is not merely an annoyance, not merely about inefficient paperwork. It is damaging our innovative and productive capacity — and thus employment, the budget, and our standard of living.
The McKinsey Global Institute helps us understand why this matters from a macro perspective. McKinsey chose a dozen existing and emerging technologies and estimated their potential economic impact in the year 2025. It found industries like the mobile Internet, cloud computing, self-driving cars, and genomics could produce economic benefits of up to $33 trillion worldwide. The operative word, however, is “could.” Innovation is all about what’s new. And regulation is often about disallowing or discouraging what’s new. The growing complexity of the regulatory state, therefore, can only block some number of these innovations and is thus likely to leave us with a simpler, and thus poorer, world.
— Bret Swanson
ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.
As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, longtime net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”
The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that a ban on pay-for-prioritization, had it existed at the time, would have prevented the rise of content delivery networks (CDNs), which have fundamentally improved the user experience for viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that they would continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.
So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.
So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?
These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has granted the FCC no such authority; the Commission invented this job for itself and is now being challenged in court.
Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.
— Bret Swanson
A critique of Carmen Reinhart and Ken Rogoff’s paper examining debt’s effect on growth dominated the economic news over the last week. Reinhart and Rogoff’s 2010 offering, Growth in a Time of Debt, compiled lots of data on debt-to-GDP ratios from nations around the globe and found that higher debt ratios, especially those at 90% or above, tended to be associated with slower growth. Three UMass-Amherst economists, however, noticed an error in R&R’s spreadsheet and argued that it (along with two other statistical choices) significantly altered the results. R&R acknowledged the spreadsheet error in a reply but defended the thrust of their work and its conclusions.
Champions of government spending jumped on the critique, charging that the R&R paper had given aid and comfort to widespread “austerity” policies and that their now-discredited ideas had sunk the world economy. They dubbed it “The Excel Depression.”
The coding error in Reinhart and Rogoff has gotten a lot more media attention than it deserves.
Then there is the entertaining contrarian Nassim Nicholas Taleb, who, in a tweet, goes further:
The coding error, I agree, is not remotely dispositive in this very big debate. So where does that leave us? We’ve still got these enormous debts, slow growth, and a still-yawning intellectual chasm on all the big public finance and monetary policy issues. As some have pointed out, a problem with this type of research is causation. Even if R&R are correct about the correlation, in other words, does high debt cause slow growth, or does slow growth cause high debt? These questions really get to the heart of economics and, like Taleb, I’m skeptical conventional macro is very enlightening.
We’ve been debating these very topics for centuries, or millennia. In The History of England, for example, Thomas Babington Macaulay reminded us of his nation’s apparently insurmountable debts following the interminable wars of the seventeenth and eighteenth centuries.*
When the great contest with Lewis the Fourteenth was finally terminated by the Peace of Utrecht the nation owed about fifty million; and that debt was considered not merely by the rude multitude, not merely by fox hunting squires and coffee-house orators, but by acute and profound thinkers, as an encumbrance which would permanently cripple the body politic . . . .
Soon war again broke forth; and under the energetic and prodigal administration of the first William Pitt, the debt rapidly swelled to a hundred and forty million. As soon as the first intoxication of victory was over, men of theory and men of business almost unanimously pronounced that the fatal day had now really arrived.
David Hume said the nation’s madness exceeded that of the Crusades. Among the intellectuals, only Edmund Burke demurred. “Adam Smith,” Macaulay continued,
saw a little, and but a little further. He admitted that, immense as the pressure was, the nation did actually sustain it and thrive. . . . But he warned his countrymen even a small increase [in debt] might be fatal.
Thus Britain’s attempt to tax its American colonies to pay down its debts. And thus another war — the Revolutionary — and thus another 100 million in new debts. More wars stemming from the French Revolution pushed Britain’s debts to 800 million, surely beyond any possibility of repayment.
Yet like Addison’s valetudinarian, who continued to whimper that he was dying of consumption till he became so fat that he was shamed into silence, [England] went on complaining that she was sunk in poverty till her wealth showed itself by tokens which made her complaints ridiculous . . . .
The beggared, the bankrupt society not only proved able to meet all its obligations, but while meeting these obligations, grew richer and richer so fast that the growth could almost be discerned by the eye . . . . While shallow politicians were repeating that the energies of the people were borne down by the weight of public burdens, the first journey was performed by steam on a railway. Soon the island was intersected by railways. A sum exceeding the whole amount of the national debt at the end of the American war was, in a few years, voluntarily expended by this ruined people on viaducts, tunnels, embankments, bridges, stations, engines. Meanwhile, taxation was almost constantly becoming lighter and lighter, yet still the Exchequer was full . . . .
Macaulay pinpointed the chief defect in the thinking of the alarmists.
They made no allowance for the effect produced by the incessant progress of every experimental science, and by the incessant effort of every man to get on in life. They saw that the debt grew and they forgot that other things grew as well as the debt.
Does this mean the spendthrifts are right? That we can — indeed, should — spend our way out of our predicaments, without much regard for the growing debt?
A defect of the debt alarmists may be their curmudgeonly suspicion that budget imbalances always drive the economy downward. An even more egregious defect of the debt apologists, however, is their assumption that budget imbalances lift the economy upward and that spending is equal to growth, rather than a result of growth. The debt alarmists too often forget the possibilities of human achievement that are the basis for wealth. The debt apologists, however, assume wealth is inevitable, that it can be redistributed, and that their policies will have no harmful impact on wealth creation. The crucial point in Macaulay is not that any nation can sustain growing debts but that vibrantly growing economies (like that of the scientifically-advanced, exploratory, industrial British Empire) can sustain debts in larger amounts than is commonly assumed.
The debt alarmists, moreover, play into the hands of the spendthrifts. By making budget balance their sine qua non of policy, they equate spending restraint with tax increases. The spendthrifts say “fine, if budget balance is so important, let’s raise taxes.” Never mind the possible negative growth effects of higher tax rates (and regulations and the like). This is what has happened in much of Europe and now to some extent in the U.S. An obsession with debt too often impels policies that slow economic growth — real economic growth, based on productivity and innovation, not spending — thus greatly exacerbating the burden of debt. And make no mistake, the burdens of debt are real. Defaults, inflations, and bankruptcies happen. If interest rates rise several percentage points, the U.S. might be paying hundreds of billions more in interest. And this is why the shortened term structure of our debt is an even bigger concern. We should have been locking in very long terms at these historically low rates.
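The order of magnitude of that interest-rate risk is simple arithmetic. A minimal sketch, using an assumed round number for the debt stock (the actual figure and the pace at which the debt reprices depend on the maturity structure):

```python
# Sensitivity of annual interest costs to rising rates, assuming the
# whole debt stock eventually rolls over at the higher rate.
# The debt level is an assumed round number, not an official figure.
debt = 16.0e12  # approximate federal debt, dollars

for rate_rise in (0.01, 0.02, 0.03):  # 1, 2, 3 percentage points
    extra = debt * rate_rise          # added annual interest cost
    print(f"+{rate_rise:.0%} -> ${extra / 1e9:,.0f}B more per year")
```

At an assumed $16 trillion of debt, each percentage point adds roughly $160 billion a year once the stock reprices — which is why a short average maturity, forcing frequent rollovers at prevailing rates, magnifies the risk.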
Like the British Empire, with its pound sterling, the U.S. has a great advantage in the dollar’s status as world reserve currency. We can probably sustain higher debts than would otherwise be the case because our debts are denominated in our own currency and because Treasurys enjoy safe-haven status. Yet how did the pound sterling or the dollar achieve reserve status? Through the powerful economic growth of the currencies’ issuers.
In the current growth and policy environment, America’s debts are a substantial worry. Yet no policy should focus first on debt. We should ask whether each policy encourages or discourages entrepreneurship and real productivity enhancements. And whether each spending program is legitimate, effective, and efficient. If policy were driven, more often than not, by thoughtful answers to these questions, then the debt question would answer itself. Our debt ratio would likely decline, yet the amount of debt our economy could sustain would rise.
Here is David Malpass concisely making the point on CNBC:
— Bret Swanson
* Macaulay quotes from George Gilder’s book Wealth & Poverty.
Each year the Federal Communications Commission is required to report on competition in the mobile phone market. Following Congress’s mandate to determine the level of industry competition, the FCC, for many years, labeled the industry “effectively competitive.” Then, starting a few years ago, the FCC declined to make such a determination. Yes, there had been some consolidation, it was acknowledged, yet the industry was healthier than ever — more subscribers, more devices, more services, lots of innovation. The failure to achieve the “effectively competitive” label was thus a point of contention.
This year’s “CMRS” — commercial mobile radio services — report again fails to make a designation, one way or the other. Yet whatever the report lacks in official labels, it more than makes up for in impressive data.
For example, it shows that as of October 2012, 97.2% of Americans have access to three or more mobile providers, and 92.8% have access to four or more. As for mobile broadband data services, 97.8% have access to two or more providers, and 91.6% have access to three or more.
Rural America is also doing well. The FCC finds 87% of rural consumers have access to three or more mobile voice providers, and 69.1% have access to four or more. For mobile broadband, 89.9% have access to two or more providers, while 65.4% enjoy access to three or more.
Call this what you will — to most laypeople, these choices count as robust competition. Yet the FCC has a point when it
refrain[s] from providing any single conclusion because such an assessment would be incomplete and possibly misleading in light of the variations and complexities we observe.
The industry has grown so large, with so many interconnected and dynamic players, it may have outgrown Congress’s request for a specific label.
14. Given the Report’s expansive view of mobile wireless services and its examination of competition across the entire mobile wireless ecosystem, we find that the mobile wireless ecosystem is sufficiently complex and multi-faceted that it would not be meaningful to try to make a single, all-inclusive finding regarding effective competition that adequately encompasses the level of competition in the various interrelated segments, types of services, and vast geographic areas of the mobile wireless industry.
Or as economist George Ford of the Phoenix Center put it,
The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”
I’m in good company. Outgoing FCC Chairman Julius Genachowski lists among his proudest achievements that “the U.S. is now the envy of the world in advanced wireless networks, devices, applications, among other areas.”
The report shows that in the last decade, U.S. mobile connections have nearly tripled. The U.S. now has more mobile connections than people.
More important, the proliferation of smartphones, which are powerful mobile computers, is the foundation for a new American software industry widely known as the App Economy. We detailed the short but amazing history of the app and its impact on the economy in our report “Soft Power: Zero to 60 Billion in Four Years.” Likewise, these devices and software applications are changing industries that need changing. Last week, experts testified before Congress about mobile health, or mHealth, and we wrote about the coming health care productivity revolution in “The App-ification of Medicine.”
One factor that still threatens to limit mobile growth is the availability of spectrum. The report details past spectrum allocations that have borne fruit, but the pipeline of future spectrum allocations is uncertain. A more robust commitment to spectrum availability and a free-flowing spectrum market would ensure continued investment in networks, content, and services.
What Congress once called the mobile “phone” industry is now a sprawling global ecosystem and a central driver of economic advance. By most measures, the industry is effectively competitive. By any measure, it’s positively healthy.
— Bret Swanson
The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”
— George Ford, Phoenix Center chief economist, commenting on the FCC’s 16th Wireless Competition report.
FCC chairman Julius Genachowski opens a new op-ed with a bang:
As Washington continues to wrangle over raising revenue and cutting spending, let’s not forget a crucial third element for reining in the deficit: economic growth. To sustain long-term economic health, America needs growth engines, areas of the economy that hold real promise of major expansion. Few sectors have more job-creating innovation potential than broadband, particularly mobile broadband.
Private-sector innovation in mobile broadband has been extraordinary. But maintaining the creative momentum in wireless networks, devices and apps will need an equally innovative wireless policy, or jobs and growth will be left on the table.
Economic growth is indeed the crucial missing link to employment, opportunity, and healthier government budgets. Technology is the key driver of long term growth, and even during the downturn the broadband economy has delivered. Michael Mandel estimates the “app economy,” for example, has created more than 500,000 jobs in less than five years of existence.
We emphatically do need policies that will facilitate the next wave of digital innovation and growth. Chairman Genachowski’s top line assessment — that U.S. broadband is a success — is important. It rebuts the many false but persistent claims that U.S. broadband lags the world. Chairman Genachowski’s diagnosis of how we got here and his prescriptions for the future, however, are off the mark.
For example, he suggests U.S. mobile innovation is newer than it really is.
Over the past few years, after trailing Europe and Asia in mobile infrastructure and innovation, the U.S. has regained global leadership in mobile technology.
This American mobile resurgence did not take place in just the last “few years.” It began a little more than a decade ago with smart decisions to:
(1) allow reasonable industry consolidation and relatively free spectrum allocation, after years of forced “competition,” which mandated network duplication and thus underinvestment in coverage and speed (we did in fact trail Europe in some important mobile metrics in the late 1990s and briefly into the 2000s);
(2) refrain from any but the most basic regulation of broadband in general and the mobile market in particular, encouraging experimental innovation; and
(3) finally implement the digital TV / 700 MHz transition in 2007, which put more of the best spectrum into the market.
These policies, among others, encouraged some $165 billion in mobile capital investment between 2001 and 2008 and launched a wave of mobile innovation. Development on the iPhone began in 2004, the iPhone itself arrived in 2007, and the app store in 2008. Google’s Android mobile OS followed in late 2008, just before Mr. Genachowski arrived at the FCC in 2009. By that time, the American mobile juggernaut had already been in full flight for years, and the foundation was set — the U.S. topped the world in 3G mobile networks and device and software innovation. Wi-Fi, meanwhile, surged from 2003 onward, creating an organic network of tens of millions of wireless nodes in homes, offices, and public spaces. Mr. Genachowski gets some points for not impeding the market as aggressively as some other more zealous regulators might have. But taking credit for America’s mobile miracle smacks of the rooster proudly puffing his chest at sunrise.
More important than who gets the credit, however, is determining what policies led to the current success . . . and which are likely to spur future growth. Chairman Genachowski is right to herald the incentive auctions that could unleash hundreds of megahertz of un- and under-used spectrum from the old TV broadcasters. Yet wrangling over the rules of the auctions could stretch on, delaying the process. Worse, the rules themselves could restrict who can bid on or buy new spectrum, effectively allowing the FCC to favor certain firms, technologies, or friends at the expense of the best spectrum allocation. We’ve seen before that centrally planned spectrum allocations don’t work. The fact that the FCC is contemplating such an approach is worrisome. It runs counter to the policies that led to today’s mobile success.
The FCC also has a bad habit of changing the metrics and the rules in the middle of the game. For example, the FCC has been caught changing its “spectrum screen” to fit its needs. The screen attempts to show how much spectrum mobile operators hold in particular markets. During M&A reviews, however, the FCC has changed its screen procedures to make the data fit its opinion.
In a more recent example, Fred Campbell shows that the FCC alters its count of total available commercial spectrum to fit the argument it wants to make from day to day. We’ve shown that the U.S. trails other nations in the sum of currently available spectrum plus spectrum in the pipeline. Below, see a chart from last year showing how the U.S. compares favorably in existing commercially available spectrum but trails severely in pipeline spectrum. Translation: the U.S. did a pretty good job unleashing spectrum in the 1990s through the mid-2000s. But, contrary to Chairman Genachowski’s implication, it has stalled in the last few years.
When the FCC wants to argue that particular companies shouldn’t be allowed to acquire more spectrum (whether through merger or secondary markets), it adopts this view that the U.S. trails in spectrum allocation. Yet when challenged on the more general point that the U.S. lags other nations, the FCC turns around and includes an extra 139 MHz of spectrum in the 2.5 GHz range to avoid the charge it’s fallen behind the curve.
Next, Chairman Genachowski heralds a new spectrum “sharing” policy where private companies would be allowed to access tiny portions of government-owned airwaves. This really is weak tea. The government, depending on how you measure, controls between 60% and 85% of the best spectrum for wireless broadband. It uses very little of it. Yet it refuses to part with meaningful portions, even though it would still be left with more than enough for its important uses — military and otherwise. If they can make it work (I’m skeptical), sharing may offer a marginal benefit. But it does not remotely fit the scale of the challenge.
Along the way, the FCC has been whittling away at mobile’s incentives for investment and its environment of experimentation. Chairman Genachowski, for example, imposed price controls on “data roaming,” even though it’s highly questionable he had the legal authority to do so. The Commission has also, with varied degrees of “success,” been attempting to extend its extralegal net neutrality framework to wireless. And of course the FCC has blocked, altered, and/or discouraged a number of important wireless mergers and secondary spectrum transactions.
Chairman Genachowski’s big picture is a pretty one: broadband innovation is key to economic growth. Look at the brush strokes, however, and there are reasons to believe sloppy and overanxious regulators are threatening to diminish America’s mobile masterpiece.
— Bret Swanson
That’s the question Jim Tankersley asked in a page one Washington Post story this week.
Here is how he summarized the situation:
“In the past three recoveries from recession, U.S. growth has not produced anywhere close to the job and income gains that previous generations of workers enjoyed. The wealthy have continued to do well. But a percentage point of increased growth today simply delivers fewer jobs across the economy and less money in the pockets of middle-class families than an identical point of growth produced in the 40 years after World War II.
That has been painfully apparent in the current recovery. Even as the Obama administration touts the return of economic growth, millions of Americans are not seeing an accompanying revival of better, higher-paying jobs.
The consequences of this breakdown are only now dawning on many economists and have not gained widespread attention among policymakers in Washington. Many lawmakers have yet to even acknowledge the problem. But repairing this link is arguably the most critical policy challenge for anyone who wants to lift the middle class.”
Tankersley cites the historical heuristic that a percentage point of GDP growth usually delivers about a half-point (0.5-0.6%) of employment growth.
“Three and a half years into the recovery that began in 2001 under President George W. Bush, job intensity was stuck at less than 0.2 percent. The recovery under President Obama is now up to an intensity of 0.3 percent, or about half the historical average.”
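The “job intensity” heuristic above reduces to a one-line calculation. The 3% GDP growth input below is hypothetical; the intensity values are the article’s:

```python
# "Job intensity": points of employment growth per point of GDP growth.
# The 3% GDP growth input is hypothetical; intensities are the article's.
def employment_growth(gdp_growth_pts: float, intensity: float) -> float:
    """Employment growth implied by GDP growth at a given job intensity."""
    return gdp_growth_pts * intensity

historical = employment_growth(3.0, 0.55)  # postwar norm: 0.5-0.6
recent = employment_growth(3.0, 0.30)      # current recovery: about 0.3

print(f"historical: {historical:.2f} pts of employment growth")
print(f"recent:     {recent:.2f} pts, roughly half the payoff")
```

The same 3% expansion that once implied roughly 1.6 points of employment growth now implies about 0.9 — which is the “breakdown” Tankersley describes.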
If we measure incomes, rather than employment, the situation appears even more dire:
“Middle-class income growth looks even worse for those recoveries. From 1992 to 1994, and again from 2002 to 2004, real median household incomes fell — even though the economy grew more than 6 percent, after adjustments for inflation, in both cases. From 2009 to 2011 the economy grew more than 4 percent, but real median incomes grew by 0.5 percent.”
What’s going on? Is the American middle class really in such bad shape? If so, why? And can we do anything about it? If not, why do these data appear to show a fundamental shift in the link between GDP growth and overall prosperity? These are big, complicated questions, for which I don’t have many concrete answers. I would, however, suggest a number of factors that may help us think them through.
First, our economy does look different from the 1950s or 1960s. It is more complex. Back then, during a recession, factories laid off shifts of workers, leading to sharp employment downturns. Coming out of recessions, factories often hired back those same workers to build the same products. It was a simple process.
Today, although American manufacturing output is larger than ever, it employs a much smaller portion of the economy. The service and knowledge economies now dominate employment. And when jobs are not so closely tied to making widgets and the output is more ambiguous, the simple lay-off/hire-back formula disappears. In other words, we have lots more organizational and human capital today, and less “labor.”
This could be one reason the 1990 and 2001 recessions were shallower, but the job bounce-backs were slower.
Another factor, which everyone points out, is education. The United States may dominate many of the high-end professions in technology and finance because we have large cohorts of highly educated people (and immigrants). During the Great Recession and its aftermath, for example, the new App Economy, based on smartphones, broadband, and software, has created an estimated 500,000-600,000 jobs. An equally large cohort, however, perhaps not nearly as well educated or lacking the necessary knowledge skills, has been caught in a two-decade wave of globalization that quickly eliminated the jobs it was used to doing, without offering a quick path into higher-value industries.
The Great Recession, however, was deeper than the 1990 and 2001 recessions, and its employment rebound has been even slower.
So we look to other factors that appear to be suppressing employment. In his new book The Redistribution Recession, University of Chicago economist Casey Mulligan argues that a host of well-intended safety-net programs are the chief culprit. Unemployment insurance, disability payments, the minimum wage, Medicaid, the earned income tax credit, food stamps and other programs can create deep disincentives to work and/or hire. Mulligan estimated that the average marginal tax rate on the relevant population increased eight percentage points, from 40% to 48%, during the Great Recession. For many individuals and families, the complex effects of these programs conspire to yield 100% marginal tax rates — that is, an extra dollar earned loses a dollar or more in benefits and taxes.
I would throw out another possible factor: monetary policy. The Fed’s unorthodox zero-interest-rate-plus-bond-buying policy has created free money for large firms and for government. We see government growing and corporate profits at record highs. But for small and medium-sized firms, credit is being rationed by regulators. Low rates are meaningless if credit is unavailable. The slow recovery for small firms, which are often acknowledged to create most jobs, could be part of the equation.
Switching from employment to income, a few factors are commonly mentioned:
- Education and globalization may, as with employment, be boosting income for the top but limiting income prospects for the broad middle.
- Health care and other benefits are rising as a portion of overall compensation, thus limiting the measured portion that we call wages or salaries.
- Immigration has added millions of low-wage workers who may depress average measured incomes. These particular workers may be much better off than they were in their home countries and, by lowering wages for jobs few Americans want to do, may “harm” only a very small number of Americans.
- Many income measures account neither for taxes nor for the growth in recent decades of transfer payments through the EITC, Medicaid, disability, unemployment insurance, food stamps, and other programs. When these are factored in, the numbers look much different.
Alan Reynolds made the case for these underestimates in his 2006 book, Income and Wealth. And now Bruce D. Meyer of the University of Chicago and James X. Sullivan of Notre Dame find that median income growth has not suffered nearly as much as the conventional wisdom says.
“After appropriately accounting for inflation, taxes, and noncash benefits, we show that median income rose by more than 50 percent over the past three decades. This increase is considerably greater than the gains implied by official statistics—official median income rose by only 14 percent between 1980 and 2009. Our improved measure of income increased in each of the past three decades, although the growth has been much slower since 2000. Median consumption also rose at a similar rate over the whole period but at a faster rate than income over the past decade.”
The real income slowdown in the 2000s is not surprising. The decade included two recessions—including the big one. The decade also saw, for the first time since the 1970s, a good whiff of inflation, especially in food, fuel, and housing. Add in spiraling health care and education costs. So, despite spectacular gains in computers, communications, and consumer goods, the middle class squeeze often seems real.
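Annualizing the two cumulative figures in the Meyer-Sullivan quote makes the gap concrete. A minimal sketch, assuming the roughly three-decade 1980-2009 window from the quote:

```python
def annualized(total_growth, years):
    """Convert a cumulative growth fraction into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

YEARS = 29  # 1980 to 2009

official = annualized(0.14, YEARS)  # official median income: +14% cumulative
adjusted = annualized(0.50, YEARS)  # Meyer-Sullivan adjusted measure: +50% cumulative

print(f"official: {official:.2%} per year")  # ~0.45% per year
print(f"adjusted: {adjusted:.2%} per year")  # ~1.41% per year
```

In other words, the dispute between the official statistics and the adjusted measure is the difference between median incomes growing at roughly half a percent per year and roughly one and a half percent per year, compounded over a generation.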
Mark Perry and Don Boudreaux, however, are even more emphatic than Meyer and Sullivan. They say the “trope” of the stagnant middle class is “spectacularly wrong”:
“It is true enough that, when adjusted for inflation using the Consumer Price Index, the average hourly wage of nonsupervisory workers in America has remained about the same. But not just for three decades. The average hourly wage in real dollars has remained largely unchanged from at least 1964—when the Bureau of Labor Statistics (BLS) started reporting it.
“Moreover, there are several problems with this measurement of wages. First, the CPI overestimates inflation by underestimating the value of improvements in product quality and variety. Would you prefer 1980 medical care at 1980 prices, or 2013 care at 2013 prices? Most of us wouldn’t hesitate to choose the latter.
“Second, this wage figure ignores the rise over the past few decades in the portion of worker pay taken as (nontaxable) fringe benefits. This is no small matter—health benefits, pensions, paid leave and the rest now amount to an average of almost 31% of total compensation for all civilian workers according to the BLS.
“Third and most important, the average hourly wage is held down by the great increase of women and immigrants into the workforce over the past three decades. Precisely because the U.S. economy was flexible and strong, it created millions of jobs for the influx of many often lesser-skilled workers who sought employment during these years.”
Perry and Boudreaux go on to say that no income figures—whether the officially stagnant ones or the higher adjusted figures—can account for the dramatic rise in the quantity and quality of consumption that income yields.
“Bill Gates in his private jet flies with more personal space than does Joe Six-Pack when making a similar trip on a commercial jetliner. But unlike his 1970s counterpart, Joe routinely travels the same great distances in roughly the same time as do the world’s wealthiest tycoons.
“What’s true for long-distance travel is also true for food, cars, entertainment, electronics, communications and many other aspects of ‘consumability.’ Today, the quantities and qualities of what ordinary Americans consume are closer to that of rich Americans than they were in decades past. Consider the electronic products that every middle-class teenager can now afford—iPhones, iPads, iPods and laptop computers. They aren’t much inferior to the electronic gadgets now used by the top 1% of American income earners, and often they are exactly the same.”
Despite all the factors in this multifaceted debate, one thing is certain. Economic growth is better for the middle class than is economic stagnation.
It is currently in fashion to say, with great contrarian flair, that federal spending growth is the slowest since the Eisenhower Administration. Or, as someone famous recently put it, “We don’t have a spending problem.”
This assertion is, to put it mildly, debatable. Spending jumped 18% in just one year during the Panic of 2008-09. If the government keeps spending at that level, but starts counting after the jump, then the growth rate will appear modest. Spending as a share of GDP is higher than at any time since World War II, and so is the debt-to-GDP ratio. As the OMB chart below shows, it gets much worse.
Nevertheless, does anyone disagree that we have a growth problem, and a serious one? Yesterday’s negative GDP estimate for the fourth quarter of 2012 (-0.1%) should jolt the nation.
Let’s stipulate the GDP reading’s anomalies — lower-than-expected inventories and defense spending, which could reverse and add a bit to future growth. Yet economists had expected fourth quarter growth of 1.1% — itself an abysmal projection — and actual growth for the entire year was a barely mediocre 2.2%. Consider, too, that lots of economic activity was moved forward into 2012 to beat the Fiscal Cliff taxman. And don’t forget the Federal Reserve’s extraordinary QE programs, which are supposed to boost growth.
Whatever we’re doing, it’s not working. Not nearly well enough to create jobs. And not nearly well enough to help the budget. Because whatever you think about spending or taxes, the key factor in the health of the budget is economic growth.
OMB projects spending will grow (from today’s historically high level) around 2.96% per year through 2050. It projects annual economic growth over the period of 2.5%. That gets us a debt crisis somewhere down the line, and lots of other economic and social problems along the way.
Last year, however, keep in mind, growth was just 2.2%, following 2011’s even worse reading of 1.8%. If we can’t even match the modest 2.5% long-term projection coming out of a severe downturn, our problems may be worse than we think. Economist Robert Gordon of Northwestern asks “Is U.S. Growth Over?” Outlining seven economic headwinds, he projects growth of around 1.5% over the next few decades. In the chart below, you can see what a budget disaster such a slowdown would produce. Deficits quickly grow from a trillion dollars a year today into the many trillions per year.
Perhaps, many are now suggesting, we can tax our way out of the problem. Almost all academic research, however, suggests higher taxes (in terms of rates and as a portion of the economy) hurt economic growth. The Tax Foundation, for example, surveyed the 26 major studies on the topic going back to the early 1980s. Twenty-three of the studies found that taxes hurt economic growth. No study found higher taxes helped growth. Recent experience in Europe tends to confirm these findings.
Today, most of the policy discussion revolves around debt ceilings, sequesters, and the (fading) possibility of a grand-bargain budget deal. Mostly lost in the equation is economic growth. One question should dominate the thinking of policymakers: What policies would encourage more productive economic activity?
The new possibility of a breakthrough on immigration reform is an encouraging example. A more rational immigration policy for both low-skilled and high-skilled workers could boost economic growth significantly. Can we find more such policies? As you can see in the chart below, higher taxes can’t make up the budget shortfall. Faster growth and modest spending restraint can. This chart once again shows the OMB projected spending path (solid black line). The solid blue line shows what would happen to tax receipts if (1) growth remains mediocre and (2) we somehow find a way to dramatically raise the portion of the economy Washington taxes from the historical 18% to 23%.
That’s a major jump in taxation. Yet it doesn’t get us close to a healthy budget.
Faster growth and modest spending restraint, on the other hand, close the budget gap. And they do so without increasing the share Washington historically takes from the economy. The orange dashed line shows tax receipts under an economy growing at 3.5% with the historic 18% tax-to-GDP ratio. (Growth of 3.5% may sound like an ambitious goal. Keep in mind, however, that we are still far below trend — we’ve never really recovered from the Great Recession. Long term growth of 3.5%, therefore, merely includes a more rapid recovery to trend over the next several years and then a resumption of the long-term average of 3%.) In the medium to long term, a faster growth-lower tax regime generates more tax revenue than a slow growth-high tax regime.
Faster growth alone would be enough to stabilize budget deficits at today’s levels. But that is not enough. Trillion dollar deficits and Washington spending an ever rising share of the economy are not acceptable. Look, however, at the very modest spending restraint that would be required to essentially balance the budget by 2050. If we slowed spending growth from the projected 2.96% annual rate to just 2.7%, we could close the gap.
Does anyone think spending growth of 2.7% per year versus 2.96% is going to tear apart Social Security, Medicare, the military, or other essential government functions? Many of us could imagine responsible ways to reduce projected spending far, far more than that. All this shows is that a little restraint and robust economic growth go a long way.
The slow growth-high tax scenario produces a budget deficit of almost $3.5 trillion in 2050. Under the faster growth-lower tax scenario, with a touch of spending restraint, the 2050 budget deficit would be just $58 billion.
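The mechanics behind these scenarios are simple compounding. The sketch below is illustrative only: the base-year GDP and spending figures are my assumptions, not the OMB inputs behind the charts, so its dollar outputs will not match the $3.5 trillion and $58 billion figures cited above. It does show the direction and rough magnitude of the gap between the two paths.

```python
def project_deficit(years, gdp0, spend0, gdp_growth, spend_growth, tax_share):
    """Compound GDP and spending forward at steady rates, tax a fixed
    share of GDP, and return the final-year deficit (spending - receipts)."""
    gdp = gdp0 * (1 + gdp_growth) ** years
    spending = spend0 * (1 + spend_growth) ** years
    receipts = tax_share * gdp
    return spending - receipts

YEARS = 38                 # roughly 2012 -> 2050
GDP0, SPEND0 = 15.7, 3.6   # trillions of dollars, assumed base-year values

# Slow growth, high tax: 2.5% growth, 23% of GDP taxed, 2.96% spending growth
slow = project_deficit(YEARS, GDP0, SPEND0, 0.025, 0.0296, 0.23)

# Faster growth, historical tax share, modest restraint: 3.5%, 18%, 2.7%
fast = project_deficit(YEARS, GDP0, SPEND0, 0.035, 0.027, 0.18)

print(f"slow-growth / high-tax 2050 deficit:       ${slow:.2f}T")
print(f"fast-growth / modest-restraint 2050 gap:   ${fast:.2f}T")
```

Even with these rough inputs, the slow-growth/high-tax path ends 2050 deep in deficit, while the faster-growth path with slightly slower spending growth roughly closes the gap — the same qualitative result as the scenarios above.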
Now, I’m not pretending I know that a higher tax-to-GDP ratio will produce a particular rate of economic growth. The above are just rough scenarios. Lots of factors are in play. And that is precisely the point. Given a complex, uncertain world, we should attempt to align all our policies for economic growth. We know which policies tend to encourage growth and which tend to stunt it.
That means getting immigration policy right — and it appears we may finally be getting somewhere. It means smart, reasonable regulatory policies in energy, health care, education, communications, and intellectual property. It means a healthy division of powers between the federal and state governments. And, yes, it means sweeping tax reform — both individual and corporate.
What we are doing today isn’t working. We are on a dangerous path. Two percent growth won’t get us anywhere. No matter how much we tax ourselves. Only robust growth fueled by entrepreneurship and investment, with a healthy faith in the unknown possibilities of America’s future, will get us there.
Grab a cup of coffee and check out our new article at The American, the online magazine of the American Enterprise Institute.
Several years ago, some American lawyers and policymakers were looking for ways to boost government control of the Internet. So they launched a campaign to portray U.S. broadband as a pathetic patchwork of tin-cans-and-strings from the 1950s. The implication was that broadband could use a good bit of government “help.”
They initially had some success with a gullible press. The favorite tools were several reports that measured, nation by nation, the number of broadband connections per 100 inhabitants. The U.S. emerged from these reports looking very mediocre. How many times did we read, “The U.S. is 16th in the world in broadband”? Upon inspection, however, the reports weren’t very useful. Among other problems, they were better at measuring household size than broadband health. America, with its larger households, would naturally have fewer residential broadband subscriptions (not broadband users) than nations with smaller households (and thus more households per capita). And as the Phoenix Center demonstrated, rather hilariously, if the U.S. and other nations achieved 100% residential broadband penetration, America would actually fall to 20th from 15th.
In the fall of 2009, a voluminous report from Harvard’s Berkman Center tried to stitch the supposedly ominous global evidence into a case-closed indictment of U.S. broadband. The Berkman report, however, was a complete bust (see, for example, these thorough critiques: 1, 2, and 3 as well as my brief summary analysis).
Berkman’s statistical analyses had failed on their own terms. Yet it was still important to think about the broadband economy in a larger context. We asked the question, how could U.S. broadband be so backward if so much of the world’s innovation in broadband content, services, and devices was happening here?
To name just a few: cloud computing, YouTube, Twitter, Facebook, Netflix, iPhone, Android, ebooks, app stores, iPad. We also showed that the U.S. generates around 60% more network traffic per capita and per Internet user than Western Europe, the supposed world broadband leader. The examples multiply by the day. As FCC chairman Julius Genachowski likes to remind us, the U.S. now has more 4G LTE wireless subscribers than the rest of the world combined.
Yet here comes a new book with the same general thrust — that the structure of the U.S. communications market is delivering poor information services to American consumers. In several new commentaries summarizing the forthcoming book’s arguments, author Susan Crawford’s key assertion is that U.S. broadband is slow. It’s so bad, she thinks broadband should be a government utility. But is U.S. broadband slow?
According to actual network throughput measured by Akamai, the world’s largest content delivery network, the U.S. ranks in the top ten or 15 across a range of bandwidth metrics. It is ninth in average connection speed, for instance, and 13th in average peak speed. Looking at proportions of populations who enjoy speeds above a certain threshold, Akamai finds the U.S. is seventh in the percentage of connections exceeding 10 megabits per second (Mbps) and 13th in the percentage exceeding 4 Mbps. (See the State of the Internet report, 2Q 2012.)
You may not be impressed with rankings of seventh or 13th. But did you look at the top nations on the list? Hong Kong, South Korea, Latvia, Switzerland, the Netherlands, Japan, etc.
Each one of them is a relatively small, densely populated country. The national rankings are largely artifacts of geography and the size of the jurisdictions observed. Small nations with high population densities fare well. It is far more economical to build high-speed communications links in cities and other relatively dense populations. Accounting for this size factor, the U.S. actually looks amazingly good. Only Canada comes close to the U.S. among geographically larger nations.
But let’s look even further into the data. Akamai also supplies speeds for individual U.S. states. If we merge the tables of nations and states, the U.S. begins to look not like a broadband backwater or even a middling performer but an overwhelming success. Here are the two sets of Akamai data combined into tables that directly compare the successful small nations with their more natural counterparts, the U.S. states (shaded in blue).
Average Broadband Connection Speed — Nine of the top 15 entities are U.S. states.
Average Peak Connection Speed — Ten of the top 15 entities are U.S. states.
Percent of Connections Over 10 Megabits per Second — Ten of the top 15 entities are U.S. states.
Percent of Connections Over 4 Megabits per Second — Ten of the top 16 entities are U.S. states.
Among the 61 ranked entities on these four measures of broadband speed, 39, or almost two-thirds, are U.S. states. American broadband is not “pitifully slow.” In fact, if we were to summarize U.S. broadband, we’d have to say, compared to the rest of the world, it is very fast.
It is true that not every state or region in the U.S. enjoys top speeds. Yes, we need more, better, faster, wider coverage of wired and wireless broadband. In underserved neighborhoods as well as our already advanced areas. We need constant improvement both to accommodate today’s content and services and to drive tomorrow’s innovations. We should not, however, be making broad policy under the illusion that U.S. broadband, taken as a whole, is deficient. The quickest way to make U.S. broadband deficient is probably to enact policies that discourage investment and innovation — such as trying to turn a pretty successful and healthy industry that invests $60 billion a year into a government utility.
— Bret Swanson
See our new report summarizing the short but amazing life of the mobile app: Soft Power: Zero to 60 Billion in Four Years.
What would “the New Normal” of a mere 1% per capita GDP growth mean for the American economy over the next few decades? What if it’s even worse, as many are now predicting? Is there anything we can do about it? If so, what? We address these items in our new article for the Business Horizon Quarterly — “Beyond the New Normal, a New Era of Growth.”
Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.
The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.
As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network and the transition to all Internet, all the time, everywhere.
Today, Princeton’s Alan Blinder says things are looking up, that we’re finally traveling the road to prosperity, albeit slowly. It’s a rather timid claim:
there are definitely positive signs. The stock market is near a five-year high. Recent data on consumer spending and confidence show improvement, though we need more data before declaring victory. At long last, the housing market is growing rapidly, albeit from a very low base . . . .
On balance, the U.S. economy is healing its wounds—that’s another fact. But none of this puts us on the verge of an exuberant boom. Still, if the fiscal cliff is avoided and the European debt crisis doesn’t explode in our face, both GDP growth and job growth should be higher in 2013 than in 2012—even under current policies. But that’s a forecast, not a fact.
Stanford’s John Taylor counters some of Blinder’s claims:
First, he admits that real GDP growth—the most comprehensive measure we have of the state of the economy—is declining; that’s not an improvement.
Second, he admits that, according to the payroll survey, job growth isn’t faster in 2012 than 2011; that’s not an improvement either.
Third, he mentions that the household survey shows employment growth is faster, but that growth must be measured relative to a growing population. If you look at the employment to population ratio, it is the same (58.5%) in the 12 month period starting in October 2009 (the month he chooses as the low point) as in the past 12 months. That’s not an improvement.
Fourth, he shows that the unemployment rate is coming down. But much of that improvement is due to the decline in the labor force participation rate as people drop out of the labor force. According to the CBO, unemployment would be 9 percent if that unusual and distressing decline–certainly not an improvement–had not occurred.
He then goes on to consider forecasts, saying that there are promising signs, such as the housing market. The problem here, however, is that growth is weakening even as housing is less of a drag, because other components of GDP are flagging.
Meanwhile, there is Northwestern’s Bob Gordon, who is making a much stronger, longer term forecast — that the next several decades will be pretty awful. Specifically, that real U.S. economic growth is likely to halve — or worse — from its recent and historical trend of about 2% per-capita per-year.
We’ve been emphasizing just how important it is to get the economy moving again, and how important long term growth is for jobs, incomes, overall opportunity, and for governmental budgets. The Gordon scenario is even worse than the so-called New Normal of around 1% per-capita growth (or 2% overall growth). Gordon projects per-capita growth over the next few decades of around 0.7%. (In non-per-capita terms, the way GDP figures are most often reported, that’s about 1.7%). He thinks growth for the “99%” will be far worse — just 0.2% per-capita.
In the chart below, you can see just how devastating a New Normal scenario would be, let alone Gordon’s even more pessimistic projection. It’s urgent that we implement a sweeping new set of pro-growth reforms on taxes, regulation, immigration, trade, education, and monetary policy.
It was nice of Ball State University’s Digital Policy Institute (@DigitalPolicy) to include me last Friday in a webinar discussion on broadband policy. Joining the virtual discussion were Leslie Marx, of Duke and formerly FCC chief economist; Anna-Maria Kovacs, well-known regulatory analyst and fellow at Georgetown; and Michael Santorelli of New York Law School.
You can find a replay of the webinar here. Our broadband discussion, which begins at 1:55:48, was preceded by a good discussion of consumer online privacy, which you might also enjoy.
The energy boom is an apparent surprise to many. I don’t know why. Here’s the photo, caption, and story right now (Wednesday night) on the front page of The Wall Street Journal:
Here was our take in 2006:
there is no inherent shortage of oil. One tiny shale formation right in America’s backyard — the 1,200 square mile Piceance Basin of western Colorado — contains a trillion barrels, more than all the proven reserves in the world. Vast open spaces across the globe remain unexplored or untapped.
Today, it’s North Dakota, Texas, and Pennsylvania shale that is leading the new boom. As a few smart guys wrote, we have a “bottomless well” of energy, if only we allow ourselves to find, refine, and innovate.
Life’s only certainty is change. Nowhere more true than with modern technologies, particularly broadband. Problem is, lots of government rules are not coming along for the ride.
Yesterday the Communications Liberty and Innovation Project (CLIP) hosted regulatory experts to discuss ways the FCC might incent more investment in digital infrastructure.
A fresh voice at the FCC is focusing the agency and the country on such a policy path of abundant wired and wireless broadband. New FCC Commissioner Ajit Pai (@AjitPaiFCC) yesterday called for the creation of an IP Transition Task Force as a way to accelerate the transition from analog networks to faster and more ubiquitous digital networks. Network providers, he said, want to know how IP services will be regulated before making major infrastructure investments. Commissioner Pai also discussed economic growth and job creation, asserting every $1 billion spent on fiber deployment creates between 15,000 and 20,000 jobs. Therefore to pave the way for robust private sector investment in the IP infrastructure, the FCC must signal a clear intention not to apply outdated 20th century regulations to these 21st century technologies.
The follow-up discussion focused on the need for a regulatory framework that will promote competition and economic growth while also maximizing consumer benefits. Jonathan Banks of US Telecom pointed out that the telecommunications industry is investing $65 billion per year, every year, in broadband infrastructure — a huge boost to current and future economic growth. Whoever occupies the White House after November should make it clear that expanding the nation’s “infostructure” with private investment dollars is a key national priority that will generate huge dividends — digital and otherwise.
The central economic problem — one that exacerbates all our other serious challenges, from debt to entitlements to persistently low employment — is a sluggish rate of economic growth. Worse than sluggish, really. At less than 2% per annum real growth, the economy is barely limping along. We are growing at perhaps just a third or a fourth the speed (or worse!) of previous recoveries from recessions of similar severity.
One school of thought, however, says that there’s not much we can do about it. The nature of the panic — with housing and financial institutions at its core — makes stagnation all but certain. Nonsense, says John Taylor of Stanford, in this new video (part 2 of 3) hosted by the Hoover Institution’s Russ Roberts:
In the next video, Yale’s Robert Shiller reinforces the point about housing. The author of the Case-Shiller Home Price Index questions whether the Fed can reflate home prices with “one button” and whether its zero-rates-forever policy might not do more harm than good. It’s more about “animal spirits,” Shiller says, which means housing is more a function of economic growth than growth is a function of housing.
For years we’ve been highlighting the need for policies that encourage communications infrastructure investment. Fiber, cell towers, data centers — these are the foundation of our growing digital economy, the tools of which are increasingly integral components of every business in every industry. One of the most crucial inputs that makes the digital economy go, however, is invisible. It’s wireless spectrum, and today we don’t have the right spectrum allocation to ensure continued wireless growth and innovation.
So it was good news to hear that former FCC commissioner Jonathan Adelstein is the new CEO of the Personal Communications Industry Association, also known as the “Wireless Infrastructure Association.” The companies he will represent are the mobile service providers, cell tower operators, and associated service companies that build these often unseen networks.
“The ultimate goal for consumers and the economy is to accommodate the need for more wireless data,” Adelstein told Communications Daily. “More spectrum is sort of the effective means for getting there . . . As more spectrum comes online it will ultimately require new infrastructure to accomplish the goal of meeting the data crunch.”
This gives a boost to the prospects for better spectrum policy.
Is the persistently high unemployment rate a secular, rather than cyclical, occurrence? Is it, in other words, a basic shift in the labor market that will leave us with semi-permanently higher joblessness for years or decades to come — no matter what we try to do about it?
Ed Lazear of Stanford and James Spletzer of the U.S. Census Bureau dug into the matter and presented their findings over the weekend at the Fed’s Jackson Hole economic gathering. Lazear also summarized the research in The Wall Street Journal. “The unemployment rate has exceeded 8% for more than three years,” wrote Lazear.
This has led some commentators and policy makers to speculate that there has been a fundamental change in the labor market. The view is that today’s economy cannot support unemployment rates below 5%—like the levels that prevailed before the recession and in the late 1990s. Those in government may take some comfort in this view. It lowers expectations and provides a rationale for the dismal labor market.
Lazear and Spletzer looked at what happened in particular industries and specific jobs, asking whether the real problem is that some industries are too old and aren’t coming back and whether there is substantial “mismatch” between job requirements and worker skills that prevent jobs from being filled. No doubt the economy is always changing, and few industries or jobs stay the same forever, but they found, for example, that
mismatch increased dramatically from 2007 to 2009. But just as rapidly, it decreased from 2009 to 2012. Like unemployment itself, industrial mismatch rises during recessions and falls as the economy recovers. The measure of mismatch that we use, which is an index of how far out of balance are supply and demand, is already back to 2005 levels.
Whatever mismatch exists today was also present when the labor market was booming. Turning construction workers into nurses might help a little, because some of the shortages in health and other industries are a long-run problem. But high unemployment today is not a result of the job openings being where the appropriately skilled workers are unavailable.
Lazear and Spletzer concluded that no, the jobless problem is not mostly secular, and we shouldn’t accept high unemployment.
The reason for the high level of unemployment is the obvious one: Overall economic growth has been very slow. Since the recession formally ended in June 2009, the economy has grown at 2.2% per year, or 6.6% in total. An empirical rule of thumb is that each percentage point of growth contributes about one-half a percentage point to employment.
The economy has regained about four million jobs since bottoming out in early 2010, which is right around 3% of employment—just the gain that would be predicted from past experience. Things aren’t great, but the failure is a result of weak economic growth, not of a labor market that is not in sync with the rest of the economy.
The evidence suggests that to reduce unemployment, all we need to do is grow the economy. Unfortunately, current policies aren’t doing that. The problems in the economy are not structural and this is not a jobless recovery. A more accurate view is that it is not a recovery at all.
The upside of this dismal situation is that we can do something about it. Think about what a different set of pro-growth policies could mean for American workers. Using Lazear’s very rough rule of “one point growth, half a point employment,” we can get an idea of what faster growth might yield in the labor market.
At today’s feeble 2% growth rate, we might expect to add several tens of thousands, or maybe a hundred thousand or two, of jobs each month. Over the next five years, at 2%, we might add something like seven million jobs. But that’s barely enough to keep up with population growth. Three percent growth, the historic average, meanwhile, would likely yield around 10 million net new jobs, 3 million more than at today’s 2% growth rate.
But three percent growth coming out of a deep recession and slow recovery would itself be slower than the usual recovery pace. It is certainly not an ambitious objective. Coming out of a slump like today’s, we should be able to grow at 4, 5, or 6% for several years, as we did in the mid-1980s. Four percent growth for the next five years could add 14 million net new jobs, and 5% growth could add 17.5 million — meaning that in 2017 something approaching 11 million more Americans would be working than under a continuation of today’s sclerotic 2% growth.
Keep in mind, these are rough rules of thumb, not forecasts or projections, and we’re leaving out lots of technical dynamics. There’s a lot going on in an economy, and we do not pretend these are precise estimates. The point is to show the magnitudes involved — that faster growth can provide jobs for millions more Americans in a relatively short period of time.
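For readers who want to check the arithmetic, the rule of thumb above is simple enough to sketch in a few lines. This is a minimal back-of-envelope calculation, assuming a rough employment base of 135 million — my assumption for illustration, not a figure from the Lazear-Spletzer paper:

```python
# Back-of-envelope jobs arithmetic using Lazear's "one point of growth,
# half a point of employment" rule of thumb. The ~135M employment base
# is an assumption for illustration, not a figure from the paper.
EMPLOYMENT_BASE_M = 135.0  # rough 2012 U.S. employment, millions
YEARS = 5

def net_new_jobs_millions(gdp_growth_pct: float) -> float:
    """Jobs added over the horizon if GDP grows at the given annual rate."""
    employment_growth = 0.5 * (gdp_growth_pct / 100.0)
    # compound the employment gain over the five-year horizon
    return EMPLOYMENT_BASE_M * ((1 + employment_growth) ** YEARS - 1)

for g in (2, 3, 4, 5):
    print(f"{g}% growth -> ~{net_new_jobs_millions(g):.0f}M net new jobs in {YEARS} years")
```

The output tracks the 7, 10, 14, and 17.5 million figures cited above reasonably closely; as noted, the point is the magnitudes involved, not precision.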
The problem is that U.S. policy before and after the financial panic and recession has not supported growth — I’d argue it has impeded growth. Faster growth is so important, we should be doing everything possible to enact policies that encourage it — or, if we can’t enact them today, then at least pointing the nation in the right direction.
A more efficient tax code that rewards rather than punishes investment and entrepreneurship would make a huge difference. Unfortunately, some in Washington and the states are proposing higher tax rates and new carve-outs and favors that will make real tax reform impossible. We need to ensure that Washington doesn’t keep consuming an ever greater share of the economy. But again, we’ve just seen a huge jump in the government-economy ratio, from 20% to 25%, and the current budget path just makes this ratio worse and worse over time.
Does anyone believe we have a regulatory system that promotes economic growth? In each of the last two years, the Federal Register of government regulations has run to more than 81,000 pages. We’ve recently seen Washington drape vast new blankets of regulation over finance and health care and interfere at every turn with our energy economy — a sector that is poised to deliver explosive growth in coming years. Other regulatory actions, like FCC interference in broadband and mobile networks, can slow growth at the margins or, depending on how zealous regulators choose to be, severely disrupt an innovation ecosystem.
The economy is too complex to dial up exactly what we want. I am not suggesting a simple flip of a switch can achieve this dramatic improvement. But we should be giving ourselves — and American citizens — as many chances as possible. Given what’s at stake, there’s no excuse for not lining up policy to maximize the opportunities for faster growth.
Yesterday the Federal Communications Commission issued 181 pages of metrics demonstrating, to any fair reader, the continuing rapid rise of the U.S. broadband economy — and then concluded, naturally, that “broadband is not yet being deployed to all Americans in a reasonable and timely fashion.” A computer, being fed the data and the conclusion, would, unable to process the logical contradictions, crash.
The report is a response to section 706(b) of the 1996 Telecom Act that asks the FCC to report annually whether broadband “is being deployed . . . in a reasonable and timely fashion.” From 1999 to 2008, the FCC concluded that yes, it was. But now, as more Americans than ever have broadband and use it to an often maniacal extent, the FCC has concluded for the third year in a row that no, broadband deployment is not “reasonable and timely.”
The FCC finds that 19 million Americans, mostly in very rural areas, don’t have access to fixed line terrestrial broadband. But Congress specifically asked the FCC to analyze broadband deployment using “any technology.”
“Any technology” includes DSL, cable modems, fiber-to-the-x, satellite, and of course fixed wireless and mobile. If we include wireless broadband, the unserved number falls to 5.5 million from the FCC’s headline 19 million. Five and a half million is 1.74% of the U.S. population. Not exactly a headline-grabbing figure.
Even if we stipulate the FCC’s framework, data, and analysis, we’re still left with the FCC’s own admission that between June 2010 and June 2011, an additional 7.4 million Americans gained access to fixed broadband service. That dropped the portion of Americans without access to 6% in 2011 from around 8.55% in 2010 — a 30% drop in the unserved population in one year. Most Americans have had broadband for many years, and the rate of deployment will necessarily slow toward the tail-end of any build-out. When most American households are served, there just aren’t many left to reach, and those that have yet to gain access are likely to be in the most difficult-to-serve areas (e.g., “on tops of mountains in the middle of nowhere”). The fact that we still extended broadband to 7.4 million Americans in the last year, lowering the unserved population by 30% even using the FCC’s faulty framework, demonstrates in any rational world that broadband “is being deployed” in a “reasonable and timely fashion.”
But this is not the rational world — it’s D.C. in the perpetual political silly season.
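For what it’s worth, the report’s percentages are easy to verify with a quick sketch. The ~316 million population base here is an assumption implied by the FCC’s own 1.74% figure; the report’s exact base may differ:

```python
# Back-of-envelope check of the FCC figures cited above. The population
# base is an assumption implied by the report's numbers (5.5M = ~1.74%).
US_POP_M = 316.0  # millions, assumed

# Counting wireless, the unserved figure falls from 19M to 5.5M:
wireless_unserved_share = 5.5 / US_POP_M
print(f"Unserved incl. wireless: {wireless_unserved_share:.2%}")  # ~1.74%

# The unserved share fell from ~8.55% (2010) to 6% (2011):
drop = (0.0855 - 0.06) / 0.0855
print(f"One-year drop in unserved population: ~{drop:.0%}")  # ~30%
```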
One might conclude that because the vast majority of these unserved Americans live in very rural areas — Alaska, Montana, West Virginia — the FCC would, if anything, suggest policies tailored to boost infrastructure investment in these hard-to-reach geographies. We could debate whether these are sound investments and whether the government would do a good job expanding access, but if rural deployment is a problem, then presumably policy should attempt to target and remediate the rural underserved. Commissioner McDowell, however, knows the real impetus for the FCC’s tortured no-confidence vote — its regulatory agenda.
McDowell notes that the report repeatedly mentions the FCC’s net neutrality rules (now being contested in court), which are as far from a pro-broadband policy, let alone a targeted one, as you could imagine. If anything, net neutrality is an impediment to broader, faster, better broadband. But the FCC is using its thumbs-down on broadband deployment to prop up its intrusions into a healthy industry. As McDowell concluded, “the majority has used this process as an opportunity to create a pretext to justify more regulation.”
There’s more to life than economics, but almost nothing matters more to more people than the rate of long-term economic growth. It completely changes the life possibilities for individuals and families and determines the prospects of nations. It also happens to be the central factor in governmental budgets.
We’ve been saying for the last few years that growth is our biggest problem — but also our biggest opportunity. Faster growth would not only put Americans back to work but also help resolve budget impasses and assist in the long-overdue transformations of our entitlement programs. The current recovery, however, is worse than mediocre. It is dangerously feeble. With every passing day, we fall further behind. Investments aren’t made. Risks aren’t taken. Business ideas are shelved. Joblessness persists, and millions of Americans drop out of the labor force altogether. Continued stagnation would of course exacerbate an already dire long-term unemployment problem. It would also, however, turn America’s unattractive habitual overspending into a possible catastrophe of debt.
John Cochrane of the University of Chicago shows, in the chart below, just how far we’ve slipped from our historical growth path. The red line is the 1965-2007 trend line growth of 3.07%, and the thin black line shows the recession and weak recovery.
Recessions are of course downward deviations from a trend line of growth. Trendlines, however, include recessions, and recoveries thus usually exhibit faster-than-trend growth that catches up to trend. To be sure, trends may not continue forever. Historical performance, as they say, is not a guarantee of future results. Perhaps structural factors in the U.S. and world economies have lowered our “potential” growth rate. This possibility is shown in the blue “CBO Potential” line, which depicts the “new normal” of diminished expectations. Yet the current recovery cannot even catch up to this anemic trend line, which supposedly reflects the downgraded potential of the U.S. economy.
Here is another way to visualize today’s stagnation, from Scott Grannis:
Economies are built on expectations. If the “new normal” of 2.35% growth is correct, then we’ve got problems. All our individual, family, business, and government plans will have to downshift. If growth is even lower than that, tomorrow’s problems will tower over today’s. If, on the other hand, we can reignite the American growth engine, then we’ve got a shot to not only reverse today’s decline but also to open the door to a new era of renewed optimism and, yes, rising expectations.
Faster compounding growth over time makes all the difference. One new paper shows how, with a fundamentally new policy direction on taxes and regulation, real GDP in the U.S. could be “between $2.1 and $3.1 trillion higher in 2022 than it would be under a continuation of current slow growth.” Think of that — an American economy perhaps trillions of dollars larger in a single year a decade from now, with better pro-growth policies. That’s a lot of jobs, a lot of higher incomes, a lot of new businesses, and — whether your preference is more or less government spending — much healthier government budgets . . . summed up in one last chart.
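The power of compounding here is easy to illustrate. A rough sketch, assuming a ~$15.5 trillion 2012 GDP base and comparing a 2% path with a 3.5% path — both assumptions for illustration, since the cited paper uses its own model:

```python
# Rough illustration of how compounding growth differences add up.
# The GDP base and the two growth paths are assumptions for illustration.
GDP_2012_T = 15.5  # trillions of dollars, approximate 2012 U.S. GDP

def gdp_in(years: int, rate: float) -> float:
    """Project GDP forward at a constant annual growth rate."""
    return GDP_2012_T * (1 + rate) ** years

slow = gdp_in(10, 0.020)   # "new normal" path
fast = gdp_in(10, 0.035)   # faster pro-growth path
print(f"2022 gap: ~${fast - slow:.1f} trillion")  # ~$3.0 trillion
```

A difference of a point and a half of annual growth, compounded for a decade, lands squarely in the paper’s $2.1–$3.1 trillion range.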
Mobile communications and computing are among the most innovative and competitive markets in the world. They have created a new world of software and offer dramatic opportunities to improve productivity and creativity across the industrial spectrum.
Last week we published a tech note documenting the rapid growth of mobile and the importance of expanding wireless spectrum availability. More clean spectrum is necessary both to accommodate fast-rising demand and to drive future innovation. Expanding spectrum availability might seem uncontroversial. In the report, however, we noted that one obstacle has been a cramped notion of what constitutes competition in the Internet era. As we wrote:
Opponents of open spectrum auctions and flexible secondary markets often ignore falling prices, expanding choices, and new features available to consumers. Instead they sometimes seek to limit new spectrum availability, or micromanage its allocation or deployment characteristics, charging that a few companies are set to dominate the market. Although the FCC found that 77% of the U.S. population has access to three or more 3G wireless providers, charges of a coming “duopoly” are now common.
This view, however, relies on the old analysis of static utility or commodity markets and ignores the new realities of broadband communications. The new landscape is one of overlapping competitors with overlapping products and services, multi-sided markets, network effects, rapid innovation, falling prices, and unpredictability.
Sure enough, yesterday Sprint CEO Dan Hesse made the duopoly charge and helped show why getting spectrum policy right has been so difficult.
Q: You were a vocal opponent of the AT&T/T-Mobile merger. Are you satisfied you can compete now that the merger did not go through?
A: We’re certainly working very hard. There’s no question that the industry does have an issue with the size of the duopoly of AT&T and Verizon. I believe that over time we’ll see more consolidation in the industry outside of the big two, because the gap in size between two and three is so enormous. Consolidation is healthy for the industry as long as it’s not AT&T and Verizon getting larger.
Hesse goes even further.
Hesse also seemed to liken Sprint’s struggle to compete with AT&T-Rex and Big Red to a fight between good and evil. Sprint wants to wear the white hat, according to Hesse. “At Sprint, we describe it internally as being the good guys, of doing the right thing,” he said.
This type of thinking is always a danger if you’re trying to make sound policy. Picking winners and losers is inevitably — at best — an arbitrary exercise. Doing so based on some notion of corporate morality is plain silly, but even more reasonable sounding metrics and arguments — like those based on market share — are often just as misleading and harmful.
The mobile Internet ecosystem is growing so fast and changing with such rapidity and unpredictability that making policy based on static and narrow market definitions will likely yield poor policy. As we noted in our report:
It is, for example, worth emphasizing: Google and Apple were not in this business just a few short years ago.
Yet by the fourth quarter of 2011 Apple could boast an amazing 75% of the handset market’s profits. Apple’s iPhone business, it was widely noted after Apple’s historic 2011, is larger than all of Microsoft. In fact, Apple’s non-iPhone products are also larger than Microsoft.
Android, the mobile operating system of Google, has been growing even faster than Apple’s iOS. In December 2011, Google was activating 700,000 Android devices a day, and now, in the summer of 2012, it estimates 900,000 activations per day. From a nearly zero share at the beginning of 2009, Android today boasts roughly a 55% share of the global smartphone OS market.
. . .
Apple’s iPhone changed the structure of the industry in several ways, not least the relationships between mobile service providers and handset makers. Mobile operators used to tell handset makers what to make, how to make it, and what software and firmware could be loaded on it. They would then slap their own brand label on someone else’s phone.
Apple’s quick rise to mobile dominance has been matched by Blackberry maker Research In Motion’s fall. RIM dominated the 2000s with its email software, its qwerty keyboard, and its popularity with enterprise IT departments. But it couldn’t match Apple’s or Android’s general purpose computing platforms, with user-friendly operating systems, large, bright touch-screens, and creative and diverse app communities.
Sprinkled among these developments were the rise, fall, and resurgence of Motorola, and then its sale to Google; the rise and fall of Palm; the rise of HTC; and the decline of once dominant Nokia.
Apple, Google, Amazon, Microsoft, and others are building cloud ecosystems, sometimes complemented with consumer devices, often tied to Web apps and services, multimedia content, and retail stores. Many of these products and services compete with each other, but they also compete with broadband service providers. Some of these business models rely primarily on hardware, some software, some subscriptions, some advertising. Each of the companies listed above — a computer company, a search company, an ecommerce company, and a software company — are now major Internet infrastructure companies.
As Jeffrey Eisenach concluded in a pathbreaking analysis of the digital ecosystem (“Theories of Broadband Competition”), there may be market concentration in one (or more) layer(s) of the industry (broadly considered), yet prices are falling, access is expanding, products are proliferating, and innovation is as rapid as in any market we know.
In all the recent debates over deficits, debt, unemployment, entitlements, bond markets, the euro, housing, etc., the absolutely central factor has too often been ignored. A new book, however, deals with nothing but this central factor — economic growth. If we’re going to improve the economic discussion, and the economy itself, The 4% Solution: Unleashing the Economic Growth America Needs is likely to serve as a good foundation.
The book contains chapters by five Nobel economists, including the modern dean of economic growth Robert Lucas, Ed Prescott on marginal tax rates, and Myron Scholes on true innovation; also Bob Litan on “home run” start-up firms, Nick Schulz on intangible assets, David Malpass on monetary policy, and others on entrepreneurs, immigration, debt, and budgets.
I’ve only skimmed many of the chapters, but one thing that jumped out is an important point about the links, and distinctions, between supply and demand. When economic growth has been discussed these last few years, the cause/cure usually cited is a drop in aggregate demand and the “stimulus” measures needed to boost it. It’s of course true that the housing bust and banking troubles caused lots of deleveraging and that government spending and interest rate cuts may help tide over certain consumers and businesses during temporary tough times. Despite substantial Keynesian fiscal and monetary “stimulus,” however — wild deficit spending, four years of zero-interest-rates, and a tripling of the Fed’s balance sheet — businesses, consumers, and the economy-at-large have not responded as hoped. Even if you believe in the efficacy of short term Keynesian growth policies, you ignore at great forecasting peril the array of countervailing anti-growth policies.
Here is how I put it in a Forbes online column last December:
the real problem with demand is supply. Consumption is partly based on current income and needs, sure, but more importantly it is a function of the expected future. Milton Friedman’s version of this idea was the permanent income hypothesis. More generally, we might ask, what are the prospects for prosperity?
We live in a complex, uncertain world. But it’s not unreasonable to believe, even after the Great Recession, that America and the globe still have prodigious potential to create new wealth. It’s also not unreasonable to believe that Washington has severely impaired America’s innovative capacity and our ability to grow.
If you think ObamaCare reinforces and expands many of the worst features of our overpriced, government-heavy health system, then you worry we might not get the productivity revolution we need in one of the largest sectors of our economy. If you think Dodd-Frank and other post-crisis ideas will discourage true financial innovation while preserving “too big to fail,” then you worry more financial disruptions are in store. If you think tax rates on capital and entrepreneurship are going up, then you might downgrade your estimates of the amount of investment and dynamism — and thus good jobs — America will enjoy.
A downgrade of expected long-term growth impairs growth today.
In the new book, Lucas makes a similar argument:
imagine that households and businesses were somehow convinced that the United States would soon move toward a European-level welfare state, financed by a European tax structure. These beliefs would naturally be translated into beliefs that labor costs would soon increase and returns on investment decrease. Beliefs of a future GDP reduction of 30% would be brought forward into the present even before these beliefs could be realized (or refuted).
This is just hypothetical, of course, but it is a hypothesis that is entirely consistent with the way that we know economies work, everyone basing current decisions on expectations about future returns. What I have called recovery growth has happened after previous U.S. recessions and depressions and is certainly a worthy and attainable objective for economic policy today, but it would be foolish to take it as a foregone conclusion.
In the next chapter, Ed Prescott reinforces the point:
what people expect policies to be in the future determines what happens now. Bad policies can and often do depress the economy even before they are implemented. People’s actions now depend on what they think policy will be — not what it was.
. . .
The disturbing fact is that, as of the beginning of 2012, the economy has not even partially recovered from this recession. When it will recover is a political question and not an economic question. Only if the Americans making personal economic decisions knew what future policy would be could economists predict when recovery would occur.
This is one reason long-term growth policies are often more important, even in the short term, than most short-term “growth” policies.
Is American broadband broken?
Tim Lee thinks so. Where he once leaned against intervention in the broadband marketplace, Lee says four things are leading him to rethink and tilt toward more government control.
First, Lee cites the “voluminous” 2009 Berkman Report. Which is surprising. The report published by Harvard’s Berkman Center may have been voluminous, but it lacked accuracy in its details and persuasiveness in its big-picture take-aways. Berkman used every trick in the book to claim that “open access” regulation around the world boosted other nations’ broadband economies and that the lack of such regulation in the U.S. harmed ours. But the report’s data and methodology were so thoroughly discredited (especially in two detailed reports issued by economists Robert Crandall, Everett Ehrlich, and Jeff Eisenach and Robert Hahn) that the FCC, which commissioned the report, essentially abandoned it. Here was my summary of the economists’ critiques:
The [Berkman] report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland.
. . .
Berkman’s qualitative analysis was, if possible, just as misleading. It passed along faulty data on broadband speeds and prices. It asserted South Korea’s broadband boom was due to open access regulation, but in fact most of South Korea’s surge happened before it instituted any regulation. The study said Japanese broadband, likewise, is a winner because of regulation. But regulated DSL is declining fast even as facilities-based (unshared, proprietary) fiber-to-the-home is surging.
Berkman also enjoyed comparing broadband speeds of tiny European and Asian countries to the whole U.S. But if we examine individual American states — New York or Arizona, for example — we find many of them outrank most European nations and Europe as a whole. In fact, applying the same Speedtest.com data Berkman used, the U.S. as a whole outpaces Europe as a whole! Comparing small islands of excellence to much larger, more diverse populations or geographies is bound to skew your analysis.
The Berkman report twisted itself in pretzels trying to paint a miserable picture of the U.S. Internet economy and a glowing picture of heavy regulation in foreign nations. Berkman, however, ignored the prima facie evidence of a vibrant U.S. broadband marketplace, manifest in the boom in Web video, mobile devices, the App Economy, cloud computing, and on and on.
How could the bulk of the world’s best broadband apps, services, and sites be developed and achieve their highest successes in the U.S. if American broadband were so slow and thinly deployed? We came up with a metric that seemed to refute the notion that U.S. broadband was lagging, namely, how much network traffic Americans generate vis-à-vis the rest of the world. It turned out the U.S. generates more network traffic per capita and per Internet user than any nation but South Korea and generates about two-thirds more per-user traffic than the closest advanced economy of comparable size, Western Europe.
Berkman based its conclusions almost solely on (incorrect) measures of “broadband penetration” — the number of broadband subscriptions per capita — but that metric turned out to be a better indicator of household size than broadband health. Lee acknowledges the faulty analysis but still assumes “broadband penetration” is the sine qua non measure of Internet health. Maybe we’re not awful, as Berkman claimed, Lee seems to be saying, but even if we correct for their methodological mistakes, U.S. broadband penetration is still just OK. “That matters,” Lee writes,
because a key argument for America’s relatively hands-off approach to broadband regulation has been that giving incumbents free rein would give them incentive to invest more in their networks. The United States is practically the only country to pursue this policy, so if the incentive argument was right, its advocates should have been able to point to statistics showing we’re doing much better than the rest of the world. Instead, the argument has been over just how close to the middle of the pack we are.
No, I don’t agree that the argument has consisted of bickering over whether the U.S. is more or less mediocre. Not at all. I do agree that advocates of government regulation have had to adjust their argument — from “U.S. broadband is awful” to “U.S. broadband is mediocre.” Yet they still hang their hat on “broadband penetration” because most other evidence on the health of the U.S. digital economy is even less supportive of their case.
In each of the last seven years, U.S. broadband providers have invested between $60 and $70 billion in their networks. Overall, the U.S. leads the world in info-tech investment — totaling nearly $500 billion last year. The U.S. now boasts more than 80 million residential broadband links and 200+ million mobile broadband subscribers. U.S. mobile operators have deployed more 4G mobile network capacity than anyone, and Verizon just announced its FiOS fiber service will offer 300 megabit-per-second residential connections — perhaps the fastest large-scale deployment in the world.
Eisenach and Crandall followed up their critique of the Berkman study with a fresh March 2012 analysis of “open access” regulation around the world (this time with Allan Ingraham). They found:
- “it is clear that copper loop unbundling did not accelerate the deployment or increase the penetration of first-generation broadband networks, and that it had a depressing effect on network investment”
- “By contrast, it seems clear that platform competition was very important in promoting broadband deployment and uptake in the earlier era of DSL and cable modem competition.”
- “to the extent new fiber networks are being deployed in Europe, they are largely being deployed by unregulated, non-ILEC carriers, not by the regulated incumbent telecom companies, and not by entrants that have relied on copper-loop unbundling.”
Lee doesn’t mention the incisive criticisms of the Berkman study or the voluminous literature, including this latest example, showing that open access policies are ineffective at best and more likely harmful.
In coming posts, I’ll address Lee’s three other worries.
— Bret Swanson
Last week Apple unveiled its third-generation iPad. Yesterday the company said the LTE versions of the device, which can connect via Verizon and AT&T mobile broadband networks, are sold out.
It took 15 years for laptops to reach 50 million units sold in a year. It took smartphones seven years. For tablets (not including Microsoft’s clunky attempt a decade ago), just two years. Mobile device volumes are astounding. In each of the last five years, global mobile phone sales topped a billion units. Last year smartphones outsold PCs for the first time – 488 million versus 432 million. This year well over 500 million smartphones and perhaps 100 million tablets could be sold.
Smartphones and tablets represent the first fundamentally new consumer computing platforms since the PC, which arrived in the late ’70s and early ’80s. Unlike mere mobile phones, they’ve got serious processing power inside. But their game-changing potency is really based on their capacity to communicate via the Internet. And this power is, of course, dependent on the cloud infrastructure and wireless networks.
But are wireless networks today prepared for this new surge of bandwidth-hungry mobile devices? Probably not. When we started to build 3G mobile networks in the middle of last decade, many thought it was a huge waste. Mobile phones were used for talking, and some texting. They had small low-res screens and were terrible at browsing the Web. What in the world would we do with all this new wireless capacity? Then the iPhone came, and, boom — in big cities we went from laughable overcapacity to severe shortage seemingly overnight. The iPhone’s brilliant screen, its real Web browsing experience, and the world of apps it helped us discover totally changed the game. Wi-Fi helped supply the burgeoning iPhone with bandwidth, and Wi-Fi will continue to grow and play an important role. Yet Credit Suisse, in a 2011 survey of the industry, found that mobile networks overall were running at 80% of capacity and that many network nodes were tapped out.
Today, we are still expanding 3G networks and launching 4G in most cities. Verizon says it offers 4G LTE in 196 cities, while AT&T says it offers 4G LTE in 28 markets (and combined with its HSPA+ networks offers 4G-like speeds to 200 million people in the U.S.). Lots of things affect how fast we can build new networks — from cell site permitting to the fact that these things are expensive ($20 billion worth of wireless infrastructure in the U.S. last year). But another limiting factor is spectrum availability.
Do we have enough radio waves to efficiently and cost-effectively serve these hundreds of millions of increasingly powerful mobile devices, which generate and consume increasingly rich content, with ever more stringent latency requirements, and which depend upon robust access to cloud storage and computing resources?
Capacity is a function of money, network nodes, technology, and radio waves. But spectrum is grossly misallocated. The U.S. government owns 61% of the best airwaves, while mobile broadband providers — where all the action is — own just 10%. Another portion is controlled by the old TV broadcasters, where much of this beachfront spectrum lies fallow or underused.
The key is allowing spectrum to flow to its most valuable uses. Last month Congress finally authorized the FCC to conduct incentive auctions to free up some unused and underused TV spectrum. Good news. But other recent developments discourage us from too much optimism on this front.
In December the FCC and Justice Department vetoed AT&T’s attempt to augment its spectrum and cell-site position via merger with T-Mobile. Now the FCC and DoJ are questioning Verizon’s announced purchase of Spectrum Co. — valuable but unused spectrum owned by a consortium of cable TV companies. The FCC has also threatened to tilt any spectrum auctions so that it decides who can bid, how much bidders can buy, and what buyers may or may not do with their spectrum — pretending Washington knows exactly how this fast-changing industry should be structured, thus reducing the value of spectrum and probably delaying availability of new spectrum and possibly reducing the sector’s pace of innovation.
It’s very difficult to see how it’s at all productive for the government to block companies who desperately need more spectrum from buying it from those who don’t want it, don’t need it, or can’t make good use of it. The big argument against AT&T and Verizon’s attempted spectrum purchases is “competition.” But T-Mobile wanted to sell to AT&T because it admitted it didn’t have the financial (or spectrum) wherewithal to build a super expensive 4G network. Apparently the same for the cable companies, who chose to sell to Verizon. Last week Dish Network took another step toward entering the 4G market with the FCC’s approval of spectrum transfers from two defunct companies, TerreStar and DBSD.
Some people say the proliferation of Wi-Fi or the increased use of new wireless technologies that economize on spectrum will make more spectrum availability unnecessary. I agree Wi-Fi is terrific and will keep growing and that software radios, cognitive radios, mesh networks and all the other great technologies that increase the flexibility and power of wireless will make big inroads. So fine, let’s stipulate that perhaps these very real complements will reduce the need for more spectrum at the margin. Then the joke is on the big companies that want to overpay for unnecessary spectrum. We still allow big, rich companies to make mistakes, right? Why, then, do proponents of these complementary technologies still oppose allowing spectrum to flow to its highest use?
Free spectrum auctions would allow lots of companies to access spectrum — upstarts, middle tier, and yes, the big boys, who desperately need more capacity to serve the new iPad.
— Bret Swanson
We’ve published a lot of linear and log-scale line charts of Internet traffic growth. Here’s just another way to visualize what’s been happening since 1990. The first image shows 1990-2004.
The second image scales down the first to make room for the next period.
The third image, using the same scale as image 2, shows 2005-2011.
These images use data compiled by MINTS, with our own further analysis and estimations. Other estimates from Cisco and Arbor/Labovitz — and our own analysis based on those studies — show even higher traffic levels, though roughly similar growth rates.
Steve Jobs designed great products. It’s very, very hard to make the case that he created large numbers of jobs in this country.
— Prof. Paul Krugman, New York Times, January 25, 2012
Turns out, not very hard at all.
The App Economy now is responsible for roughly 466,000 jobs in the United States, up from zero in 2007 when the iPhone was introduced.
— Dr. Michael Mandel, TechNet study, February 7, 2012
See our earlier rough estimate of Apple’s employment effects: “Jobs: Steve vs. the Stimulus.”
— Bret Swanson
“It is the single worst telecom bill that I have ever seen.”
— Reed Hundt, Jan. 31, 2012
Isn’t this rich?
One of the most zealous regulators America has known says Congress is overstepping its bounds because it wants to unleash lots of new wireless spectrum but also wants to erect a few guardrails so that FCC regulators don’t run roughshod over the booming mobile broadband market.
At a New America Foundation event yesterday, former FCC chairman Reed Hundt said Congress shouldn’t micromanage the FCC’s ability to micromanage the wireless industry. Mr. Congressman, you don’t know anything about how the FCC should regulate the Internet. But the FCC does know how to build networks, run mobile Internet businesses, and perfectly structure a wildly tumultuous economic sector. It’s just the latest remarkable example of the growing hubris of the regulatory state.
In his book, You Say You Want a Revolution, Hundt famously recounted his staff’s interpretation and implementation of the 1996 Telecom Act.
The passage of the new law placed me on a far more public stage. But I felt Congress — in the constitutional sense — had asked me to exercise the full power of all ideas I could summon. And I believed that I and my team had learned, through many failures, how to succeed. Later, I realized that we knew almost nothing of the complexity and importance of the tasks in front of the FCC.
Meeting in several overlapping groups of about a dozen people each . . . we dedicated almost three weeks to studying the possible readings of each word in the 150-page statute. The conference committee compromises had produced a mountain of ambiguity that was generally tilted toward the local phone companies’ advantage. But under the principles of statutory interpretation, we had broad authority to exercise our discretion in writing the implementing regulations. Indeed, like the modern engineers trying to straighten the Leaning Tower of Pisa, we could aspire to provide the new entrants to the local telephone markets a fairer chance to compete than they might find in any explicit provision of the law. In addition, the law gave almost no guidance about how to treat the Internet, data networks, . . . and many other critical issues. (Three years later, Justice Antonin Scalia agreed, on behalf of the Supreme Court, that the law was profoundly ambiguous.)
The more my team studied the law, the more we realized our decisions could determine the winners and losers of the new economy. We did not want to confer advantage on particular companies; that seemed inequitable. But inevitably a decision that promoted entry into the local market would benefit a company that followed such a strategy.
There are so many angles here.
(1) Hundt says he and his team basically stretched the statute to mean whatever they wanted. The law may have been ambiguous — and it was, I’m not going to defend the ’96 Act — yet the Supreme Court still found in a series of early-2000s cases that Hundt’s FCC had wildly overstepped even these flimsy bounds. That’s how aggressive and unconstrained Hundt was.
(2) Hundt’s rules helped crash the tech and telecom sectors in 2000-2002. His rules were so complex and intrusive that, whatever your views about the CLEC wars, the PCS C block spectrum debacle, and other battles, it’s hard to deny that the paralysis caused by the rules hurt broadband and the nascent Net.
(3) Is it surprising that, given the FCC’s poor record of reaching way past its granted powers, some in Congress want to circumscribe FCC regulators by giving them less-than-omnipotent authority? Is the new view of elite regulators that Congress should pass laws, the full text of which might read: “§1. Congress grants to the Internet Agency the authority to regulate the Internet. Go forth and regulate.”?
(4) On the other hand, it’s not clear why Hundt would care particularly what Congress says in any new spectrum statute. He didn’t care much for the words or intent of the ’96 Act, and he thinks regulators should “aspire” to grand self-appointed projects. Who knows, maybe all those Supreme Court smack downs in the early 2000s made an impression.
(5) Hundt says he and his team later realized, in effect, how naive they were about “the complexity and importance of the tasks in front of the FCC.” So he’s acknowledging after things didn’t go so well that his FCC underestimated the complexity and thus overestimated their own expertise . . . yet he says today’s FCC deserves comprehensive power to structure the mobile Internet as it sees fit?
(6) Hundt admitted his FCC relished its capacity to pick winners and losers. Not particular companies, mind you — that would be improper — merely the types of companies who win and lose. A distinction without very much of a difference.
(7) We don’t argue that Congress, instead of the FCC, should impose intrusive regulation through statute. We don’t advocate long and complex laws. That’s not the point. Laws should be clear and simple, but stating the boundaries of a regulator’s authority is not a controversial act. No one should be imposing intrusive regulation or overdetermining the structure of an industry. And that’s what Congress — perhaps in a rare case! — is protecting against here.
On Tuesday afternoon, Apple said it earned $13 billion in the fourth quarter on $46 billion in revenue. Thirty-seven million iPhones and 15 million iPads sold in the quarter helped boost its market cap to $415 billion. A few hours later, Indiana Gov. Mitch Daniels, in his State of the Union response message, contrasted the technology juggernaut with Washington’s impotent jobs efforts: “The late Steve Jobs – what a fitting name he had – created more of them than all those stimulus dollars the President borrowed and blew.”
First thing Wednesday morning, however, Paul Krugman countered with a devastating argument – “Mitch Daniels Doesn’t Read the New York Times.” Prof. Krugman referred to the first of the Times’ multipart series on Apple’s Chinese manufacturing operations.
From Sunday’s Times:
Not long ago, Apple boasted that its products were made in America. Today, few are. Almost all of the 70 million iPhones, 30 million iPads and 59 million other products Apple sold last year were manufactured overseas.
Apple employs 43,000 people in the United States and 20,000 overseas, a small fraction of the over 400,000 American workers at General Motors in the 1950s, or the hundreds of thousands at General Electric in the 1980s. Many more people work for Apple’s contractors: an additional 700,000 people engineer, build and assemble iPads, iPhones and Apple’s other products. But almost none of them work in the United States. Instead, they work for foreign companies in Asia, Europe and elsewhere, at factories that almost all electronics designers rely upon to build their wares.
Steve Jobs designed great products. It’s very, very hard to make the case that he created large numbers of jobs in this country. Obama’s auto bailout, just by itself, saved a lot more jobs than Apple’s US employment.
So the New York Times thinks all those Chinese Foxconn assembly workers are the primary employment effect of Apple. And Prof. Krugman sidesteps the argument by noting the “auto bailout” – not the stimulus – “saved” – not created, mind you – more jobs than Apple’s under-roof American workforce.
CNNMoney jumped in:
Daniels’ math just doesn’t add up, no matter how successful and valuable Apple has become.
Not even close.
This little episode exposes quite a lot about the fundamentally different ways people think about the economy.
The economy is dynamic and complex. It’s cooperative, competitive, and evolutionary. In recent pre-Great Recession history, the U.S. lost around 15 million jobs every year — holy depression! But we created some 17 million a year, netting two million. There’s no way to quantify Jobs’ jobs impact exactly, which is one of the great virtues of capitalism.
An attempt to estimate in a very rough way, however, might be useful:
Apple has 60,000 total employees, around 43,000 in U.S.
Multiply these numbers by the years these jobs have existed, decades in the case of many. That’s many hundreds of thousands of “job-years.”
Then consider the broad software industry, especially the world of “apps” being developed for iPhone and iPad, and now for Macs. More than 500,000 iOS apps now exist, and 1.2 billion were downloaded in the last week of December 2011. Lots of people are trying to quantify how many jobs this app ecosystem has created. Likely it will mean many tens of thousands of jobs for decades to come, meaning hundreds of thousands of job-years, though even the “app” won’t look this way forever or even for long. We’ll see.
Apple computers, iPhones, iPads, and multimedia software, like OSX, iOS, Quicktime, and WebKit, drive the Internet and wireless industries. (WebKit is an open software platform developed by Apple that most people have never heard of. But it’s crucial to Internet browsers and webpage development.) These devices allow people and companies to create content. They improve productivity and create new kinds of jobs. How many graphic designers would we have had over the years without the Mac?
Apple devices devour bandwidth and storage and drive new generations of broadband and mobile network build-outs, totaling about $65 billion per year in the U.S. So add some significant portion of networking equipment salesmen and telecom pole-climbers and Verizon and Comcast workers and data center technicians. The iPhone alone completely reinvigorated the U.S. mobile industry and ushered in a new paradigm of computing, moving from PC to mobile device. Apple jolted AT&T back to life when the two companies partnered on the first iPhone. How many jobs across the economy did the iPhone “save” by boosting our digital industries when the PC era had about run its course? A lot.
Jobs created a new digital music industry. It’s impossible to gauge how many jobs were created versus eliminated. But clearly the new jobs are higher value jobs.
Apple is now the largest buyer of microchips in the world. It buys 23% of all the world’s flash memory, for example. Much of that is South Korean. But Apple probably buys something like 20 million Intel microprocessors each year. That’s a huge part of Intel’s business. Intel employs 100,000 people (not all in the U.S.).
The notion that “almost none” of the “additional 700,000” people who contribute to designing and building Apple products work in the U.S. is false. And silly.
Apple’s list of suppliers includes many of America’s leading-edge technology companies: Qualcomm, Intel, Corning, LSI, Broadcom, Seagate, Micron, Analog Devices, Linear, Maxim, Marvell, International Rectifier, Western Digital, ON Semi, Nvidia, AMD, Cypress, Texas Instruments, TriQuint, SanDisk, etc.
Lots of Apple’s foreign suppliers have substantial workforces in the U.S. Oft cited are the two Austin, Texas, Samsung fabs, which employ 3,500 workers who make NAND flash memory and Apple’s A5 chip. But many Asian and European Apple suppliers have sales, marketing, and support staff in America.
And of course no government or stimulus jobs are possible without private wealth creation. During the “stimulus” period — 2009-11 — Apple paid $16.5 billion in corporate income taxes, thus financing about 2% of the entire $821 billion stimulus package and thus 2% of the stimulus “jobs.” One might counter that stimulus was funded with debt, but money is fungible, and issuing debt depends on future claims on wealth. Moreover, because stimulus jobs were so extraordinarily expensive, a different accounting says that Apple’s $16.5 billion in taxes could have paid for 330,000 $50,000-a-year salaries.
In 1986, Steve Jobs bought a tiny division of George Lucas’s LucasFilm and created what we know as Pixar, the leading movie animation studio. In 2006, Pixar merged with Walt Disney. Disney has 156,000 employees and $41 billion in sales, a growing portion of which directly or indirectly relate to Pixar properties, film development, characters, licensing, and distribution. Pixar really saved Hollywood during a dark time for film and spawned a whole new animation boom. Pixar developed and inspired many new technologies for film making, video games, and other interactive visual media.
An additional consideration: Over the 2009-11 period, Disney paid $7 billion in income taxes, thus financing just under 1% of the stimulus and 1% of the “jobs.” That $7 billion could have funded 140,000 $50,000-a-year salaries.
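The back-of-envelope arithmetic in the two preceding paragraphs is easy to check. A quick sketch, using only the figures cited above (the $50,000 salary is the illustrative figure from the text, not an official stimulus cost metric):

```python
STIMULUS = 821e9        # ARRA package, as cited above
SALARY = 50_000         # illustrative annual salary used in the text

apple_taxes = 16.5e9    # Apple corporate income taxes, 2009-11
disney_taxes = 7e9      # Disney income taxes, 2009-11

print(f"Apple share of stimulus:  {apple_taxes / STIMULUS:.1%}")   # ~2.0%
print(f"Apple-funded salaries:    {apple_taxes / SALARY:,.0f}")    # 330,000
print(f"Disney share of stimulus: {disney_taxes / STIMULUS:.1%}")  # ~0.9%
print(f"Disney-funded salaries:   {disney_taxes / SALARY:,.0f}")   # 140,000
```

The numbers match the text: Apple’s taxes equal about 2% of the $821 billion package, Disney’s just under 1%.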
The economy-wide effects of Steve Jobs are of course impossible to measure with precision. But a new study from Robert Shapiro and Kevin Hassett estimates that advances in mobile Internet technologies boosted U.S. employment by around 400,000 per year from 2007 to 2011, or by a total of around 1.2 million over the 2009-11 stimulus period. The Phoenix Center found similar employment effects. What proportion of these can be attributed to Steve Jobs is, again, impossible to say. But it’s clear Apple was the primary innovator in mobile Internet technologies in this period, towering over a multitude of other important technologies. More than any other device, the iPhone exploited the new, larger-capacity 3G mobile networks of the period, and once it proved wildly popular it was the chief impetus for additional 3G mobile capacity.
CBO estimates ARRA (the Stimulus bill) yielded between 1.3 and 3.5 million job-years net, meaning created or saved. But as the stimulus wanes, many of these jobs go away, or at least are not attributable to the stimulus.
Robert Barro of Harvard questions whether ARRA created any jobs at all. He says the question isn’t whether the Keynesian multiplier is greater than 1 (meaning break even; spend $1, get $1 in GDP), let alone whether it’s 1.5 (spend a dollar, get $1.50), but whether the multiplier is greater than zero.
Stanford’s John Taylor also thinks ARRA had no positive effect.
And do stimulus-boosters really want to equate these two activities?
(1) the federal government pays a state worker’s salary for a year instead of the state paying the salary;
(2) a new job derived from an entrepreneur who’s created whole new industries with new kinds of higher value jobs that last for decades, spurring yet more growth and jobs.
In Keynesian macro world those two jobs are equivalent, I guess.
The CNNMoney report acknowledged the 43,000 U.S. employees of Apple and also the 850 employees of Pixar at the time it merged with Disney in 2006. It even allowed that perhaps Pixar could employ twice as many people now. It also grudgingly admitted that maybe some Americans are building apps for the App Store. That’s about it.
This imprecise exercise misses the deeper truths of entrepreneurial capitalism and short-changes the dynamic versus static view of the economy. In a new article today, which I saw just as I was finishing this post, Prof. Krugman quite rightly notes the importance of industrial clusters to growth. He cites the Chinese supply chains highlighted in the NYT series. But he entirely ignores the most famous and successful cluster on earth — Silicon Valley. How many jobs in Silicon Valley do we think are dependent on or symbiotic with Apple? It’s incalculable, but it’s a lot.
I asked Gov. Daniels what he thought.
“I won’t be reading Herr Krugman,” Gov. Daniels replied, “but I did read the New York Times, and it changes nothing. Just means Dr. K doesn’t understand the dynamism of innovation, either.”
— Bret Swanson
The U.S. wireless sector has been only mildly regulated over the last decade. We’d argue this is a key reason for its success. But this presumption of mostly unfettered experimentation and dynamism may be changing.
Consider Sprint’s apparent decision to use “roaming” in Oklahoma and Kansas instead of building its own network. Now, roaming is a standard feature of mobile networks worldwide. Company A might not have as much capacity as it would like in some geography, so it pays company B, who does have capacity there, for access. Company A’s customers therefore get wider coverage, and Company B is paid for use of its network.
The problem comes with the FCC’s 2011 “digital roaming” order. Last spring three FCC commissioners decided that private mobile services — which the Communications Act says “shall not . . . be treated as a common carrier” — are a common carrier. Only D.C. lawyers smarter than you and me can figure out how to transfigure “shall not” into “may.” Anyway, the possible effect is to subject mobile data — one of the fastest growing sectors anywhere on earth — to all sorts of forced access mandates and price controls.
We warned here and here that turning competitive broadband infrastructure into a “common carrier” could discourage all players in the market from building more capacity and covering wider geographies. If company A can piggyback on company B’s network at below market rates, why would it build its own expensive network? And if company B’s network capacity is going to company A’s customers, instead of its own customers, do we think company B is likely to build yet more cell sites and purchase more spectrum?
With 37 million iPhones and 15 million iPads sold last quarter, we need more spectrum, more cell towers, more capacity. This isn’t the way to get it. And what we are seeing with Sprint’s decision to roam instead of build in Oklahoma and Kansas may be the tip of this anti-investment iceberg.
Last spring when the data roaming order came down we began wondering about a possible “slow walk to a reregulated communications market.” Among other items, we cited net neutrality, possible new price controls for Special Access links to cell sites, and a host of proposed regulations affecting things like behavioral advertising and intellectual property (see, PIPA/SOPA). Since then we’ve seen the government block the AT&T-T-Mobile merger. And the FCC is now holding up its own important push for more wireless spectrum because it wants the right to micromanage who gets what spectrum and how mobile carriers can use it.
Many of these items can be thoughtfully debated. But the number of new encroachments onto the communications sector threatens to slow its growth. Many of these encroachments, moreover, are taking place outside any basic legislative authority. In the digital roaming and net neutrality cases, for example, the FCC appeared clearly to grant itself extra- if not il-legal authority. These new regulations are now being challenged in court.
We need some restraint across the board on these matters. The Internet is too important. We can’t allow a quiet, gradual reregulation of the sector to slow down our chief engine of economic growth.
— Bret Swanson
“One solution is giving back to bank creditors the job of policing bank risk-taking. Roll back deposit insurance, for instance. We may not be able to see the future, but we can incentivize caution as a general matter. And we can improve the odds that, when banks make mistakes, they won’t all make the same mistake at the same time.”
— Holman Jenkins, The Wall Street Journal, January 18, 2011
For the third year in a row, FCC chairman Julius Genachowski used his speech at the Consumer Electronics Show in Las Vegas to push for more wireless spectrum. He wants Congress to pass the incentive auction law that would unleash hundreds of megahertz of spectrum to new and higher uses. Most of Congress agrees: we need lots more wireless capacity and spectrum auctions are a good way to get there.
Genachowski, however, wants overarching control of the new spectrum and, by extension, the mobile broadband ecosystem. The FCC wants the authority to micromanage the newly available spectrum — who can buy it, how much they can buy, how they can use it, what content flows over it, what business models can be employed with it. But this is an arena that is growing wildly fast, where new technologies appear every day, and where experimentation is paramount to see which business models work. Auctions are supposed to be a way to get more spectrum into the marketplace, where lots of companies and entrepreneurs can find the best ways to use it to deliver new communications services. “Any restrictions” by Congress on the FCC “would be a real mistake,” said Genachowski. In other words, he doesn’t want Congress to restrict his ability to restrict the mobile business. It seems the liberty of regulators to act without restraint is a higher virtue than the liberty of private actors.
At the end of 2011, the FCC and Justice Department vetoed AT&T’s proposed merger with T-Mobile, a deal that would have immediately expanded 3G mobile capacity across the nation and accelerated AT&T’s next generation 4G rollout by several years. That deal was all about a more effective use of spectrum, more cell towers, more capacity to better serve insatiable smart-phone and tablet equipped consumers. Now the FCC is holding hostage the spectrum auction bill with its my-way-or-the-highway approach. And one has to ask: Is the FCC really serious about spectrum, mobile capacity, and a healthy broadband Internet?
— Bret Swanson
“If the Greeks had skimped on the olive oil in a liter bottle, that wouldn’t threaten the metric system.”
— John Cochrane, Bloomberg View, December 21, 2011
For readers interested in either Indiana or investment strategy, see my letter (subscription) to the Indianapolis Business Journal commenting on the new asset allocation and risk management strategies at INPRS, the state’s $25-billion pension fund.
Ken Skarbeck’s column (Nov. 19) addressed a new strategy the Indiana Public Retirement System is using to diversify its portfolio. The new strategy, known as risk parity, has been around for over 20 years and will eventually compose 10% of INPRS assets.
Since the financial crisis of 2008, INPRS has dedicated significant time and resources to improve its risk management infrastructure. The decision to move a portion of the assets into risk parity – which seeks to diversify risk, rather than merely diversify asset classes – is one direct outcome of the new risk management program.
Risk parity attempts to balance risk across equities, bonds, commodities, and inflation-linked bonds. It recognizes the distinct performance characteristics of these assets during periods of robust or slow growth, for instance, or high or low inflation. For any given rate of return target, risk can be mitigated. Likewise, for a given risk appetite, returns can be improved. Nothing is a sure bet, but risk parity strategies have achieved robust returns while minimizing risk over most time periods.
Mr. Skarbeck makes a good point that historical volatility does not measure all types of risk. We heartily agree.
Mr. Skarbeck thinks stocks are a good bet right now. He may be correct. INPRS owns billions of dollars of equities and works with investment managers who have strong views, perhaps similar to Mr. Skarbeck’s, about the direction of stocks, bonds, and other assets. But as an entity charged with funding the retirements of 500,000 Hoosier workers and retirees, INPRS as a whole should not make overly concentrated bets.
Truly balanced portfolios recognize that neither INPRS, nor anyone else, knows with certainty what the global economy has in store. Committing to a concentrated asset mix because of a particular view on equities would represent the very type of risk Mr. Skarbeck warns against.
Fortunately, risk parity has performed well in all environments – from low inflation, high growth periods where stocks might outperform to high-inflation periods where commodities and TIPS might do better. That’s the point of the strategy: seek healthy returns sufficient to fund the retirements of INPRS members while minimizing downside risk.
Bret T. Swanson
Trustee and Investment Committee Member
Indiana Public Retirement System (INPRS)
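The risk-balancing idea described in the letter can be sketched with a toy inverse-volatility weighting, a simplified stand-in for full risk parity (which would also account for correlations and leverage). The volatility figures here are invented for illustration, not INPRS data:

```python
# Toy "risk parity" sketch: weight each asset class inversely to its
# volatility, so each contributes the same naive risk (weight * vol).
# Volatilities below are assumed, illustrative numbers only.
vols = {"equities": 0.16, "bonds": 0.05, "commodities": 0.18, "tips": 0.06}

inv = {k: 1.0 / v for k, v in vols.items()}
total = sum(inv.values())
weights = {k: x / total for k, x in inv.items()}

for k, w in weights.items():
    # weight * vol is identical across assets by construction
    print(f"{k:12s} weight={w:.1%}  naive risk contribution={w * vols[k]:.4f}")
```

Note how the low-volatility assets (bonds, TIPS) get much larger dollar weights than equities — the opposite of a conventional 60/40 mix, which concentrates nearly all of its risk in stocks.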
More bad news for U.S. economic growth. In the face of multiplying obstacles deployed by Washington regulators, AT&T today abandoned its pursuit of T-Mobile. The most important outcome of the merger would have been a quicker and broader roll-out of 4G mobile broadband services. Now AT&T will have to find other paths to the wireless radio spectrum (and cell towers) it needs to meet growing demand and build tomorrow’s networks. T-Mobile is left in purgatory, short of the spectrum and long-term financial wherewithal to effectively compete.
Some say, don’t worry, assuming that another U.S. mobile provider will pick up T-Mobile. Not so fast. If Washington disallowed AT&T, it would do the same for Verizon. Sprint was pursuing T-Mobile before AT&T swooped in, but a Sprint-TMo combo makes much less sense. The spectrum-technology-tower infrastructure positions of AT&T and TMo were almost perfectly complementary. Not so for Sprint, who uses mostly higher frequencies, has always been a CDMA company (as opposed to WCDMA), and is already finding it challenging to raise the funds to build its own LTE network, given rocky times with partner Clearwire.
The U.S. mobile industry has been a shining star in an otherwise dark U.S. economy. But with Washington nixing the AT&T- T-Mobile merger, and given recent struggles at Clearwire and engineering disputes with upstart LightSquared, it’s not clear mobile will continue on its steep ascent. The FCC “staff report” opposing the AT&T-TMo deal didn’t even address the elephant in the room – spectrum. It’s odd. The FCC declared a spectrum crisis two years ago and repeatedly emphasized the urgent need for broadband expansion. Then, poof, hardly a mention of either in its report. Not a good sign when the expert agency has taken its eye off the ball.
The industry is still full of potential, but there will be near-term disruptions as companies sort out new spectrum, business, and technology strategies. And as millions of un- and underemployed Americans know, time is money. Regulatory impediments and foot-dragging are especially harmful – and even infuriating – for an industry that desperately wants to grow. For an industry that is in many ways the bedrock of the 21st century American knowledge economy.
Beyond the disquieting roller-coaster in the mobile industry, one wonders more broadly about the American economy. Just what kind of business are we allowed to conduct? What investments are preferred – by whom? How far will the tilt of decision-making from private entities to public bureaucracies go?
— Bret Swanson
The New York Times reports today that scientists reading human genomes are generating so much data that they must use snail mail instead of the Internet to send the DNA readouts around the globe.
BGI, based in China, is the world’s largest genomics research institute, with 167 DNA sequencers producing the equivalent of 2,000 human genomes a day.
BGI churns out so much data that it often cannot transmit its results to clients or collaborators over the Internet or other communications lines because that would take weeks. Instead, it sends computer disks containing the data, via FedEx.
“It sounds like an analog solution in a digital age,” conceded Sifei He, the head of cloud computing for BGI, formerly known as the Beijing Genomics Institute. But for now, he said, there is no better way.
The field of genomics is caught in a data deluge. DNA sequencing is becoming faster and cheaper at a pace far outstripping Moore’s law, which describes the rate at which computing gets faster and cheaper.
The result is that the ability to determine DNA sequences is starting to outrun the ability of researchers to store, transmit and especially to analyze the data.
We’ve been talking about the oncoming rush of biomedical data for a while. A human genome consists of some 2.9 billion base pairs, easily stored in around 725 megabytes with standard compression techniques. Two thousand genomes a day, times 725 MB, equals 1,450,000 MB, or 1.45 terabytes. That’s a lot of data for one entity to transmit in a day’s time. Some researchers believe a genome can be losslessly compressed to approximately 4 megabytes. In compressed form, 2,000 genomes would total around 8,000 MB, or just 8 gigabytes. Easily doable for a major institution.
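The data-volume arithmetic above can be checked directly. A quick sketch using the figures cited in the paragraph; the 100 Mbps sustained link speed is an assumption for illustration, not a figure from the Times story:

```python
# BGI's daily sequencing output, using the figures cited above.
GENOMES_PER_DAY = 2_000
MB_PER_GENOME_STD = 725   # ~725 MB per genome, standard compression
MB_PER_GENOME_MIN = 4     # ~4 MB, reported best-case lossless compression

std_tb = GENOMES_PER_DAY * MB_PER_GENOME_STD / 1e6   # MB -> TB (decimal)
min_gb = GENOMES_PER_DAY * MB_PER_GENOME_MIN / 1e3   # MB -> GB (decimal)

# Time to ship one day's output over an assumed sustained 100 Mbps link.
bits = GENOMES_PER_DAY * MB_PER_GENOME_STD * 1e6 * 8
hours = bits / 100e6 / 3600

print(f"{std_tb:.2f} TB/day at 725 MB per genome")   # 1.45 TB
print(f"{min_gb:.0f} GB/day at 4 MB per genome")     # 8 GB
print(f"~{hours:.0f} hours to transmit one day's output at 100 Mbps")
```

At the assumed 100 Mbps, a single day’s output takes more than a day to transmit, so the backlog only grows — which is why FedEx wins at 725 MB per genome, while the 8 GB best-case figure would move in minutes.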
Interested to know more.
Here’s a critique of the FCC’s new “staff report” from AT&T itself. Obviously, AT&T is an interested party and has a robust point of view. But it’s striking the FCC was so sloppy in a staff report — for instance not addressing the key issue at hand: spectrum — let alone releasing this not-ready-for-prime-time report to the public.
Surely, it is neither fair nor logical for the FCC to trumpet a national spectrum crisis for much of the past year, and then draft a report claiming that two major wireless companies face no such constraints despite sworn declarations demonstrating the opposite.
The report is so off-base and one-sided that the FCC may actually have hurt its own case.
America is in desperate need of economic growth. But as the U.S. economy limps along, with unemployment stuck at 9%, the Federal Communications Commission is playing procedural tiddlywinks with the nation’s largest infrastructure investor, in the sector of the economy that offers the most promise for innovation and 21st century jobs. In normal times, we might chalk this up to clever Beltway maneuvering. But do we really have the time or money to indulge bureaucratic gamesmanship?
On Thanksgiving Eve, the FCC surprised everyone. It hadn’t yet completed its investigation into the proposed AT&T-T-Mobile wireless merger, and the parties had not had a chance to discuss or rebut the agency’s initial findings. Yet the FCC preempted the normal process by announcing it would send the case to an administrative law judge — essentially a vote of no-confidence in the deal. I say “vote,” but the FCC commissioners hadn’t actually voted on the order.
FCC Chairman Julius Genachowski called AT&T CEO Randall Stephenson, who, on Thanksgiving Day, had to tell investors he was setting aside $4 billion in case Washington blocked the deal.
The deal is already being scrutinized by the Department of Justice, which sued to block the merger last summer. The fact that telecom mergers and acquisitions must negotiate two levels of federal scrutiny, at DoJ and FCC, is already an extra burden on the Internet industry. But when one agency on this dual-track games the system by trying to influence the other track — maybe because the FCC felt AT&T had a good chance of winning its antitrust case — the obstacles to promising economic activity multiply.
After the FCC’s surprise move, AT&T and T-Mobile withdrew their merger application at the FCC. No sense in preparing for an additional hearing before an administrative law judge when they are already deep in preparation for the antitrust trial early next year. Moreover, the terms of the merger agreement are likely to have changed after the companies (perhaps) negotiate conditions with the DoJ. They’d have to refile an updated application anyway. Not so fast, said the FCC. We’re not going to allow AT&T and T-Mobile to withdraw their application. Or if we do allow it, we will do so “with prejudice,” meaning the parties can’t refile a revised application at a later date. On Tuesday the FCC relented — the law is clear: an applicant has the right to withdraw an application without consent from the FCC. But the very fact the FCC initially sought to deny the withdrawal is itself highly unusual. Again, more procedural gamesmanship.
If that weren’t enough, the FCC then said it would release its “findings” in the case — another highly unusual (maybe unprecedented) action. The agency hadn’t completed its process, and there had been no vote on the matter. So the FCC instead released what it calls a “staff report” — a highly critical internal opinion that hadn’t been reviewed by the parties nor approved by the commissioners. We’re eager to analyze the substance of this “staff report,” but the fact the FCC felt the need to shove it out the door was itself remarkable.
It appears the FCC is twisting legal procedure any which way to fit its desired outcome, rather than letting the normal merger process play out. Indeed, “twisting legal procedure” may be too kind. It has now thrown law and procedure out the window and is in full public relations mode. These extralegal PR games tilt the playing field against the companies, against investment and innovation, and against the health of the U.S. economy.
— Bret Swanson
See our new report “Into the Exacloud” . . . including analysis of:
> Why cloud computing requires a major expansion of wireless spectrum and investment
> An exaflood update: what Mobile, Video, Big Data, and Cloud mean for network traffic
> Plus, a new paradigm for online games, Web video, and cloud software
After the decision to separate its online streaming and DVD-by-mail services, Wall St. Cheat Sheet asked, “Is Netflix the new Research In Motion?”
Translation: Will Netflix be just the latest technology titan to suffer a parabolic plunge? We don’t know ourselves. Netflix’s streaming-DVD split is a reaction to the overwhelming popularity of its streaming service. CEO Reed Hastings is trying to avoid complacency and stay ahead of the curve. Maybe he is panicking. Maybe he’s a genius. But that is just the point: the digital curve these days is shifting and steepening faster than ever.
Which makes the government’s attempted damming of this digital river all the more harmful. Wireless spectrum is a central resource in the digital economy, and a chief enabler of services like Netflix. Yet Washington hogs the best airwaves – at last count the government owned 61%, the mobile service providers just 10%. So AT&T, its pipes bursting with iPhone and iPad traffic, tries to add capacity by merging with T-Mobile. Nope. The Department of Justice won’t allow that either.
Something, however, has got to give. New data from wireless infrastructure maker Ericsson shows that mobile data traffic jumped 130% in the first quarter of 2011 from 2010. Just four years ago, mobile data traffic was perhaps 1/15th of mobile voice traffic. Today, mobile data is likely three times voice. Credit Suisse, meanwhile, reports that U.S. mobile networks are running at 80% of capacity, meaning many network nodes are tapped out.
More mobile traffic drivers are on the way, like mass adoption of video chat apps and Apple’s imminent iCloud service. iCloud will create an environment of pervasive computing, where all your computers and devices are in continuous communication, integrating your digital life through a virtual presence in the cloud. No doubt too, software app downloads and the rich content they unleash will only grow. As of July, 425,000 distinct Apple apps had been downloaded 15 billion times on 200 million devices. The Android ecosystem of devices and apps has been growing even faster.
Perhaps the iCloud service in particular won’t succeed, but no doubt others like it will, not to mention all the apps and services we haven’t thought of. We do know that more bandwidth and connectivity will encourage more new ideas, and thus more traffic. In all, IDC estimates that by 2015 we will create or replicate around 8 zettabytes (8,000,000,000,000,000,000,000 bytes) of new data each year.
Big Data, in turn, will yield large economic benefits, from medical research to retail. The McKinsey Global Institute estimates that Big Data – the sophisticated exploitation of large sets of fine-grained information – could boost annual economic value in the U.S. health care sector by $300 billion. McKinsey thinks personal geolocation services could expand annual consumer surplus by $600 billion globally.
The wide array of Big Data techniques and services is crucially dependent on robust and capacious networks. U.S. service providers invested $26 billion in 2010 – and $232 billion over the last decade – on wireless infrastructure alone. Total info-tech investment in the U.S. last year was $488 billion. We’ll need more of the same to spur and accommodate Big Data, Cloud, Mobile, Netflix, and the rest. But without more spectrum, the whole enterprise of building the digital infrastructure could slow.
Picocells and femtocells – smaller network nodes that cover less area – can effectively expand capacity for some users by reusing existing wireless spectrum. These mini cells work together as HetNets (heterogeneous networks) and will be a central feature in the next decade of wireless expansion. But the new 4G mobile standard, called LTE, gets the biggest bang for the buck in wider spectrum bands. LTE also is by far the most powerful and flexible standard to manage the complexities and unlock the big potential of HetNets. So we see a virtuous complementarity: more, better spectrum will boost spectrum reuse efficiencies. In other words, spectrum reuse and more spectrum are not either-or alternatives but are mutually helpful and reinforcing.
We don’t know whether the new Netflix strategy will fly, whether iCloud will succeed, how HetNets will evolve, or exactly what the mobile ecosystem will look like. But in such an arena, we do know that maximum flexibility – and LOTS more spectrum – will give a beneficial tilt toward innovation and growth.
— Bret Swanson
A paper out today challenges the assertion that the AT&T-T-Mobile merger will create jobs. AT&T has said it would invest an additional $8 billion in wireless network infrastructure, above and beyond its usual $8-10 billion per year, and the Economic Policy Institute estimated this would result in between 55,000 and 96,000 job-years. The Communication Workers of America has cited the EPI study as one reason it supports the merger.
In a study prepared for Sprint, however, professor David Neumark says the EPI estimate fails to account for the fact that T-Mobile will no longer be investing its normal couple billion dollars per year after it is subsumed by AT&T. He says EPI is only looking at AT&T’s gross increase, not the net industry effect. He thinks the net effect will be negative and will thus cost jobs.
This is a fair point. We should analyze these things in as dynamic and realistic a way as possible. But the Sprint study appears to be relying on its own static, simplistic view of the world. Namely, it assumes an independent T-Mobile would keep investing billions a year on network infrastructure, even though T-Mobile says it has neither the spectrum nor the financial resources from its parent Deutsche Telekom to continue as an effective competitor in the highly dynamic mobile market, where companies must constantly upgrade their networks to exploit all the good stuff offered by Moore’s law. In other words, it’s unlikely T-Mobile would continue investing several billion per year as a stand-alone company.
Another point that needs clarification: Some smart people think the AT&T estimate of $8 billion in additional capex is specific to the merger — connecting the two networks, expanding LTE beyond its previous plans, etc. But if these people are right, it’s still the case that AT&T will have to adopt at least some portion of network upgrades and maintenance that T-Mobile does every day on its own network. So AT&T’s capex spend is likely to go up beyond this additional $8 billion. In a merger scenario, therefore, not all, perhaps not even most, of the existing T-Mobile network investment “goes away.”
A scenario in which a non-AT&T carrier acquired T-Mobile would result in much the same loss of T-Mobile-specific investment that Sprint claims under the AT&T-T-Mobile scenario. But the study doesn’t account for this possibility either.
So it seems the new Neumark-Sprint analysis also is not really a net estimate, just another form of gross estimate.
Ultimately, no one knows exactly what will happen in an ever-changing economy in our ever-changing world. But it is pretty safe to say that a healthy, growing, vibrant mobile industry will support more sustainable jobs than an unhealthy industry. The Sprint paper correctly acknowledges that efficiencies from mergers can result in all sorts of economic welfare gains, both for consumers and for workers who move into higher-value jobs.
A stand-alone T-Mobile is not a healthy company, and without T-Mobile, AT&T, although healthy, doesn’t have the spectrum or cell towers it needs to match current growth and fuel new growth. The proposed merger would result in a major supplier of next gen 4G broadband mobile services across most of the U.S. The benefits of this go far beyond the capex it takes to build the network (though that is very important) and extend to every citizen and industry that will enjoy ubiquitous go-anywhere broadband. The jobs created across the economy are incalculable but likely to be substantial.
Where to begin. The economy is still in the doldrums some three years after an historic crash, the Administration is having a tough time boosting output and job growth, and so its Justice Department thinks it would be a good idea to discourage one of the nation’s biggest investors and employers from building yet more high-tech infrastructure in a sector of the economy that is manifestly healthy and which serves as a productivity platform for the rest of the economy.
It’s hard to believe, but that’s exactly what’s happening with the DoJ’s attempt to block AT&T’s merger with T-Mobile.
AT&T wants T-Mobile’s wireless spectrum and compatible cell-tower infrastructure so it can more quickly roll out next generation 4G mobile broadband services. It can’t wait for much needed spectrum auctions that will hopefully occur over the next several years. Meanwhile, T-Mobile doesn’t have the spectrum or financial wherewithal (through its parent Deutsche Telekom) to build its own 4G network. Perfect fit, right? Join forces to rapidly deploy new network capacity and coverage for the next iteration of iPads, Androids, Thunderbolts, Galaxy Tabs, and broadband everywhere.
The Communication Workers of America union thinks the merger is a good idea, estimating it will create 96,000 jobs. AT&T even this morning sweetened the pot by announcing – before DoJ’s surprise announcement – that on completion of the merger it would bring back 5,000 call center jobs from overseas and guarantee no job cuts for T-Mobile call center employees.
DoJ says a combination will hurt competition, but T-Mobile itself says it can’t really compete in the next generation of 4G. And DoJ ignores the fact, reported by the FCC, that 90% of the U.S. population has five or more mobile service provider choices, with brand new entrants like Clearwire, LightSquared, and Dish Network coming online and expanding every day. DoJ relies on indirect evidence of current market share to infer that bad things might happen in the future even as it ignores direct evidence of low prices, wild innovation, and widespread consumer choice in networks and devices.
This July 11 paper from economists Gerald Faulhaber, Robert Hahn, and Hal Singer really says it all.
With the economy in crisis, you’d think someone with a bit of business sense would be seeking every way to expand investment and employment, not find creative ways to quash it. Antitrust lawyers imagine themselves guardians of the public good, but there’s a big problem: they usually see the world through a rear-view mirror, wearing blinders, while experiencing tunnel vision.
Was it antitrust that saved the world from big, bad Microsoft? No, the Internet, Google, and Apple, among hundreds of other innovators, diluted Microsoft’s very temporary dominance. Did the AOL-TimeWarner merger kill competition in the online content or broadband markets? No. To remember the alarmism over that merger is to laugh. DoJ did block WorldCom’s bid for Sprint, and of course WorldCom went bankrupt. Did Verizon’s acquisition of Alltel kill innovation in the mobile market? What? Who’s Alltel?
There’s just no way a few attorneys in Washington can decree the proper organization of an industry that is so exceedingly dynamic. Meanwhile, the economy shuffles along slowly, very slowly.
— Bret Swanson
See our new column in Forbes:
As we entered August, a time of family vacations and corporate retreats, a CEO friend, who is a director of several companies, made a darkly humorous observation. “I’m impressed,” he said. “At our upcoming retreat, the CEO is dedicating an entire day to talk about . . . the business.”
This was a break from the new normal, where management is consumed with compliance, legality, accounting, risk mitigation, and political prognostication and manipulation. Carving time out of a business retreat to talk strategy, execution, product, and sales was a welcome novelty. It also revealed a chief challenge of our times – the obsession with and aversion to risk.
Update: Steve Lohr, the excellent New York Times technology reporter, offers his own take on risk-taking through the lens of Steve Jobs. Lohr and I picked the same great quote from Jobs’ Stanford commencement address.
A host of telecom and cable companies today announced a new plan to reform the Universal Service Fund and extend broadband further into rural America. I’ve spent years only partially understanding how USF works. Or how it doesn’t work, as seems the case. I think even in the old days, when it may have made some kind of sense, USF probably retarded investment and new technology in the areas it aimed to support. Unsubsidized potential entrants sporting new technologies couldn’t hope to compete with heavily subsidized incumbents. Even incumbents effectively couldn’t deploy newer, more efficient unsubsidized technologies. The result was probably some extension of phone service in the early days but lots of stagnation for decades after that. In today’s communications market, however, where many companies and many technologies supply many wholesale, commercial, and consumer services — and where broadband, Internet cloud, and wireless complement, compete, and overlap — USF has really broken down. Reform is long overdue, and this consensus industry plan should finally help move USF into the Internet age.
The new proposal — called America’s Broadband Connectivity Plan — also reforms the antiquated and broken Inter Carrier Compensation system, which sets the terms for traffic exchange among communications companies. In a broadband-mobile-Internet world, ICC, like USF, no longer works and is often exploited with arbitrage schemes that add no value but shuffle money via clever manipulation of the rules.
For too long, wrangling and indecision between industry and government — and among industry players themselves — have delayed action. We now have a good consensus leap on the road to modernization.
The OECD published its annual Communications Outlook last week, and the 390 pages offer a wealth of information on all-things-Internet — fixed line, mobile, data traffic, price comparisons, etc. Among other remarkable findings, OECD notes that:
In 1960, only three countries — Canada, Sweden and the United States — had more than one phone for every four inhabitants. For most of what would become OECD countries a year later, the figure was less than 1 for every 10 inhabitants, and less than 1 in 100 in a couple of cases. At that time, the 84 million telephones in OECD countries represented 93% of the global total. Half a century later there are 1.7 billion telephones in OECD countries and a further 4.1 billion around the world. More than two in every three people on Earth now have a mobile phone.
Very useful stuff. But in recent times the report has also served as a chance for some to misrepresent the relative health of international broadband markets. The common refrain the past several years was that the U.S. had fallen way behind many European and Asian nations in broadband. The mantra that the U.S. is “15th in the world in broadband” — or 16th, 21st, 24th, take your pick — became a sort of common lament. Except it wasn’t true.
As we showed here, the second half of the two-thousand-aughts saw an American broadband boom. The Phoenix Center and others showed that the most cited stat in those previous OECD reports — broadband connections per 100 inhabitants — actually told you more about household size than broadband. And we developed metrics to better capture the overall health of a nation’s Internet market — IP traffic per Internet user and per capita.
Below you’ll see an update of the IP traffic per Internet user chart, built upon Cisco’s most recent (June 1, 2011) Visual Networking Index report. The numbers, as they did last year, show the U.S. leads every region of the world in the amount of IP traffic we generate and consume both in per user and per capita terms. Among nations, only South Korea tops the U.S., and only Canada matches the U.S.
Although Asia contains broadband stalwarts like Korea, Japan, and Singapore, it also has many laggards. If we compare the U.S. to the most uniformly advanced region, Western Europe, we find the U.S. generates 62% more traffic per user. (These figures are based on Cisco’s 2010 traffic estimates and the ITU’s 2010 Internet user numbers.)
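The metric itself is simple division: total IP traffic over total Internet users. A minimal sketch of the calculation — all figures below are hypothetical placeholders for illustration, not Cisco’s or the ITU’s actual numbers:

```python
# Sketch of the "IP traffic per Internet user" metric described above.
# All inputs here are hypothetical placeholders, NOT Cisco's or the ITU's data.

def traffic_per_user(monthly_traffic_pb: float, internet_users_m: float) -> float:
    """Monthly IP traffic in petabytes divided by Internet users in millions,
    returned as gigabytes per user per month (1 PB = 1,000,000 GB)."""
    return monthly_traffic_pb * 1_000_000 / (internet_users_m * 1_000_000)

us = traffic_per_user(7_000, 240)        # hypothetical U.S. figures
w_europe = traffic_per_user(5_500, 310)  # hypothetical Western Europe figures
print(f"US generates {us / w_europe - 1:.0%} more traffic per user")
```

The per-capita variant is the same ratio with total population in the denominator instead of Internet users; both are reported in the chart below.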
As we noted last year, it’s not possible for the U.S. to both lead the world by a large margin in Internet usage and lag so far behind in broadband. We think these traffic per user and per capita figures show that our residential, mobile, and business broadband networks are among the world’s most advanced and ubiquitous.
Lots of other quantitative and qualitative evidence — from our smart-phone adoption rates to the breakthrough products and services of world-leading device (Apple), software (Google, Apple), and content companies (Netflix) — reaffirms the fairly obvious fact that the U.S. Internet ecosystem is in fact healthy, vibrant, and growing. Far from lagging, it leads the world in most of the important digital innovation indicators.
— Bret Swanson
I’m no in-the-weeds budget expert — not even close — but it seemed to me that among all the important debates over deficits, entitlements, and debt ceilings, the biggest factor of all is being mostly ignored. That factor is the compound rate of economic growth, and I made the case for “The Growth Imperative” at a Tuesday meeting of the National Chamber Foundation Fellows. Here’s my column at Forbes. See the slides below:
The generally light-touch regulatory approach to America’s Internet industry has been a big success story. Broadband, wireless, digital devices, Internet content and apps — these technology sectors have exploded over the last half-dozen years, even through the Great Recession.
So why are Washington regulators gradually encroaching on the Net’s every nook and cranny? Perhaps the explanation is a paraphrased line about Washington’s upside-down ways: If it fails, subsidize it. If it succeeds, tax it. And if it succeeds wildly, regulate it.
Whatever the reason, we should watch out and speak up, lest D.C. do-gooders slow the growth of our most dynamic economic engine.
Last December, the FCC imposed a watered down version of Net Neutrality. A few weeks ago the FCC asserted authority to regulate prices and terms in the data roaming market for mobile phones. There are endless Washington proposals to regulate digital advertising markets and impose strict new rules to (supposedly) protect consumer privacy. The latest new idea (but surely not the last) is to regulate prices and terms of “special access,” or Internet connectivity in the middle of the network.
Special access refers to high-speed links that connect, say, cell phone towers to the larger network, or an office building to a metro fiber ring. Another common name for these network links is “backhaul.” Washington lobbyists have for years been trying to get the FCC to dictate terms in this market, without success. But now, as part of the proposed AT&T-T-Mobile merger, they are pushing harder than ever to incorporate regulation of these high-speed Internet lines into the government’s prospective approval of the acquisition.
As the chief opponent of the merger, Sprint especially is lobbying for the new regulations. Sprint claims that just a few companies control most of the available backhaul links to its cell phone towers and wants the FCC to set rates and terms for its backhaul leases. But from the available information, it’s clear that many companies — not just Verizon and AT&T — provide these Special Access backhaul services. It’s not clear why an AT&T-T-Mobile combination should have a big effect on the market, nor why the FCC should use the event to regulate a well-functioning market.
Sprint is a majority owner and major partner of 4G mobile network Clearwire, which uses its own microwave wireless links for 90% of its backhaul capacity. Sprint used Clearwire backhaul for its Xohm Wi-Max network beginning in 2008 and will pay Clearwire around a billion dollars over the next two years to lease backhaul capacity.
T-Mobile, meanwhile, uses mostly non-AT&T, non-Verizon backhaul for its towers. Recent estimates say something like 80% of T-Mobile sites are linked by smaller Special Access providers like Bright House, FiberNet, Zayo Bandwidth, and IP Networks. Lots of other providers exist, from the large cable companies like Comcast, Cox, and TimeWarner to smaller specialty firms like FiberTower and TowerCloud to large backbone providers like Level 3. The cable companies all report fast growing cell site backhaul sales, accounting for large shares of their wholesale revenue.
One of the rationales for AT&T’s purchase of T-Mobile was that the two companies’ cell sites are complementary, not duplicative, meaning AT&T may not have links to many or most of T-Mobile’s sites. So at least in the short term it’s likely the T-Mobile cells will continue to use their existing backhaul providers, who are, again, mostly not Verizon or AT&T. It’s possible over time AT&T would expand its network and use its own links to serve the sites, but the backhaul business by then will only be more competitive than today.
This is a mostly unseen part of the Internet. Few of us ever think about Special Access or Backhaul when we fire up our Blackberry, Android, or iPhone. But these lines are key components in the mobile ecosystem, essential to delivering the voices and bits to and from our phones, tablets, and laptops. The wireless industry, moreover, is in the midst of a massive upgrade of its backhaul lines to accommodate first 3G and now 4G networks that will carry ever richer multimedia content. This means replacing the old T-1 and T-3 copper phone lines with new fiber optic lines and high-speed radio links. These are big investments in a very competitive market.
Given the Internet industry’s overwhelming contribution to the U.S. economy — not just as an innovative platform but as a leading investor in the capital base of the nation — one might think we wouldn’t lightly trifle with success. The chart below, compiled by economist Michael Mandel, shows that the top two — and three out of the top seven — domestic investors are communications companies. These are huge sums of money supporting hundreds of thousands of jobs directly and many millions indirectly.
We’ve seen the damage micromanagement can cause — in the communications sector no less. The type of regulation of prices and terms on infrastructure leases now proposed for Special Access was, in my view, a key to the 2000 tech/telecom crash. FCC intrusions (remember line sharing, TELRIC, and UNE-P, etc.) discouraged investments in the first generation of broadband. We fell behind nations like Korea. Over the last half-dozen years, however, we righted our communications ship and leapt to the top of the world in broadband and especially mobile services.
I’m not arguing these regulations would crash the sector. But the accumulated costs of these creeping Washington intrusions could disrupt the crucial price mechanisms and investment incentives that are nowhere more important than in the fastest growing, most dynamic markets, like mobile networks. Time for FCC lawyers to hit the beach — for Memorial Day weekend . . . and beyond. They should sit back and enjoy the stupendous success of the sector they oversee. The market is working.
— Bret Swanson
I wrote last week about the hugely successful legislative agenda of Indiana Gov. Mitch Daniels — and the possibility he might offer his leadership to all of America. In the video below, the Governor himself outlines the nation’s most far-reaching education reforms.
Brink Lindsey of the Kauffman Foundation summarizes a new paper on the imperative of constantly exploring the economic frontier:
Section 332(c)(2) of the Communications Act says that “a private mobile service shall not . . . be treated as a common carrier for any purpose under this Act.”
So of course the Federal Communications Commission on Thursday declared mobile data roaming (which is a private mobile service) a common carrier. Got it? The law says “shall not.” Three FCC commissioners say, We know better.
This up-is-down determination could allow the FCC to impose price controls on the dynamic broadband mobile Internet industry. Up-is-down legal determinations for the FCC are nothing new. After a decade trying, I’ve still not been able to penetrate the legal realm where “shall not” means “may.” Clearly the FCC operates in some alternate jurisprudential universe.
I do know the decision’s practical effect could be to slow mobile investment and innovation. It takes lots of money and know-how to build the Internet and beam real-time videos from anywhere in the world to an iPad as you sit on your comfy couch or a speeding train. Last year the U.S. invested $489 billion in info-tech, which made up 47% of all non-structure capital expenditures. Two decades ago, info-tech comprised just 33% of U.S. non-structure capital investment. This is a healthy, growing sector.
As I noted a couple weeks ago,
You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.
But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data, like video, Facebook, Twitter, and app downloads, etc., want the FCC to mandate access to B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars in spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand. . . .
It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the also not surprising result of such a policy could be to reduce the amount that everyone invests in new networks. And this is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”
But if Washington actually wants more infrastructure investment, it has a funny way of showing it. On Sunday at a Boston conference organized by Free Press, former Obama White House technology advisor Susan Crawford talked about America’s major communications companies. “[R]egulating these guys to within an inch of their life is exactly what needs to happen,” she said. You’d think the topic was tobacco or human trafficking rather than the companies that have pretty successfully brought us the wonders of the Internet.
It’s the view of an academic lawyer who has never visited that exotic place called the real world. Does she think that the management, boards, and investors of these companies will continue to fund massive infrastructure projects in the tens of billions of dollars if Washington dangles them within “an inch of their life”? Investment would dry up long before we ever saw the precipice. This is exactly what’s happened economy-wide over the last few years as every company, every investor, in every industry worried about Washington marching them off the cost cliff. The White House supposedly has a newfound appreciation for the harms of over-regulation and has vowed to rein in the regulators. But in case after case, it continues to toss more regulatory pebbles into the economic river.
Perhaps Nick Schulz of the American Enterprise Institute has it right. Take a look. He calls it the Tommy Boy theory of regulation, and just maybe it explains Washington’s obsession — yes, obsession; when you watch the video, you will note that is the correct word — with managing every nook and cranny of the economy.