Great Stagnation? Or Technology Renaissance?

AEI event, January 28, 2016

On Thursday, my colleagues in the American Enterprise Institute’s technology program offered views on “Cyberspace policy at home and abroad,” covering the increasingly contentious realms of hacking, encryption, IP, and global Internet governance, as well as the domestic effects of FCC regulation. I spoke for 10 minutes on technology’s broader impact on the economy and addressed the Great Stagnation question. Has a four-decade dearth of technology caused slow growth and inequality, with more disappointment to come? Or could better policy quickly encourage new bursts of innovation and resurgent economic growth? Watch here (with my segment beginning at 3:20:30, if it doesn’t jump there automatically).

Here’s a longer talk covering many of the same topics, from Purdue’s Dawn or Doom 2 tech conference in September.

The GOP tax debate

Fifteen years ago, Art Laffer, the principal advocate of the Reagan tax reforms, outlined his ideal tax code for the 21st century. In December 2001, I visited Laffer in San Diego and asked him:

What does your perfect tax code look like?

Number one, it should start out on the first dollar you earn. Then take all federal taxes (except the sin taxes, which are there to discourage behavior, not collect revenue) – I’m talking payroll taxes, income taxes, corporate profits taxes, all federal excise taxes, tariffs, telecom taxes – get rid of them all. And have two taxes. One on business value added. And one on personal unadjusted gross income.

Why do you like a value added tax?

Because it’s got a huge tax base. And it’s all value added. You want to tax the value added to the GDP because that’s what you’re getting the resource base out of. You want to tax both unadjusted gross income and business value added because that way you get the whole GDP twice, so you can have half the rate.

What’s the rationale for that?

If you beat a dog, it’s gonna run, but you don’t know which direction. If you feed a dog, you know where it will be. Taxes are like that. People will do all they can to avoid paying taxes. Evasion, avoidance, underground economy, tax shelters, etc. Going out of work. So the theory behind the flat tax is you want the lowest possible rate on the broadest possible base. By having the lowest rate, you provide the fewest incentives to evade, avoid, or otherwise not report taxable income.

Isn’t there double taxation involved?

Oh, there is. But it’s double taxation of everything the same. There are no distortions. You can tax GDP at, what is it today, 22 percent of GDP. Or you can tax it at 11 percent at the individual level and 11 percent at the production level. I think it makes a lot of sense to tax 11 percent of each because you make the base that much bigger and the rate that much smaller.
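Laffer’s arithmetic is easy to verify: taxing all of GDP once at 22 percent raises the same revenue as taxing it twice, at the individual level and the production level, at 11 percent each. A quick sketch (the GDP figure is an assumption for illustration, not a number from the interview):

```python
# Check Laffer's half-rate arithmetic. The GDP figure is hypothetical;
# the point is the algebra, not the number.
gdp = 20_000_000_000_000  # assume a $20 trillion economy

single_tax = 0.22 * gdp              # one 22% tax on the whole of GDP
dual_tax = 0.11 * gdp + 0.11 * gdp   # 11% on personal income plus 11% on business value added

assert abs(single_tax - dual_tax) < 1  # same revenue, half the rate on each base
print(f"Revenue either way: ${single_tax / 1e12:.1f} trillion")  # → $4.4 trillion
```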

If this looks like the tax reform plan of presidential candidate Sen. Ted Cruz, that’s because it basically is. Cruz calls the value added portion of his plan a business flat tax and even referred to Laffer’s support for his plan in last week’s debate. Other candidates, however, have attacked this plan as a “VAT” – a value added tax. They assert that this dreaded three-letter tax is an obvious menace of high taxation and big government. But why do they say this? Do they really think that Laffer, the economist most widely associated with pro-growth tax policy, and Cruz, a fierce advocate for taxpayers, want to over-tax the U.S. economy?

For the first few days of the debate, the attackers seemed to be emphasizing the semantics rather than the substance – it is too a VAT! they insisted. If they are trying to equate the Cruz flat tax with a European-style VAT, however, I think they are wrong. Most European VATs are sales taxes, applied transaction by transaction. The Cruz/Laffer proposal taxes firms on revenues minus capital investments and payments to other businesses, is based on corporate accounts, and is payable by firms quarterly. It essentially taxes profits and payrolls, not sales. Or as the Tax Foundation put it,

Is It Like A Retail Sales Tax?

No, it’s not.

Most of the GOP tax proposals, regardless of flavor and legal incidence, tax “value added,” so VAT is a much less precise and informative term than the debate this last week would imply. I understand the political incentive for opponents to link the two semantically, but the label matters much less than the substance. Critics make some plausible-sounding arguments, but I’m not sure any of them hold up. Among the criticisms:

VATs are a key reason European taxation and government is so large.

It’s true that many European nations employ VATs, but these sales taxes are almost uniformly imposed on top of payroll taxes, corporate taxes, and progressive income taxes. They are not a replacement for these taxes but an addition to these taxes. Laffer’s outline and Cruz’s plan, however, use the business flat tax and the personal flat tax to replace the current tax code, not to augment it.

Conservatives have been warning against European style VATs for decades. Why would we go down this road?

Again, the key argument against Euro-style VATs was that American liberals have wanted for a long time to boost taxation and the size of government, and adding a VAT on top of the current tax code has been one Democratic idea to accomplish this. Conservatives were and are correct to argue against this additional layer of taxation. The Tax Foundation analysis says the Cruz plan would boost economic growth without increasing the tax burden (and would, in many important ways, reduce the tax burden) – just the opposite of the European experience.

Which gets us to the next argument – that VATs raise too much money.

It’s true that economists of all persuasions think VATs are efficient methods to raise revenue, which conservatives usually say is the purpose of the tax code. Not social engineering, not redistribution. Laffer’s explanation above makes the point: the lowest possible rates on the largest possible economic base, which will minimize distortions, disincentives, unfairness, and noncompliance. One of the foundational insights of supply-side economics and the Reagan economic revival was that some taxes are better than others – that tax complexity and high rates can impose large costs on the economy relative to the revenue they collect, and that we can encourage greater economic activity and collect necessary revenues with a more efficient tax code. An efficient tax just means we can enjoy lower rates and less interference in the economy.
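There is a standard piece of textbook economics behind this point, left implicit above: in the usual Harberger approximation, the deadweight loss of a tax rises with the square of the rate, so halving the rate while doubling the base cuts the total distortion in half for the same revenue. A sketch under assumed, illustrative elasticity and base values (not figures from any of the plans):

```python
# Harberger-style approximation: deadweight loss ~ 0.5 * elasticity * rate^2 * base.
# The elasticity value and the normalized base are illustrative assumptions.
def deadweight_loss(rate, base, elasticity=1.0):
    return 0.5 * elasticity * rate ** 2 * base

base = 1.0  # normalize each tax base to one unit of GDP

one_tax = deadweight_loss(0.22, base)        # a single 22% tax on one base
two_taxes = 2 * deadweight_loss(0.11, base)  # 11% each on two bases of the same size

# Same revenue either way (0.22 vs. 2 x 0.11), but half the distortion:
print(f"DWL, one 22% tax:   {one_tax:.4f}")   # 0.0242
print(f"DWL, two 11% taxes: {two_taxes:.4f}") # 0.0121
```

Because the loss scales with the rate squared, the two low-rate taxes generate half the deadweight loss of the single high-rate tax while raising the same revenue.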

Ah, critics say, yes, but VATs tend to hide the cost of government.

True, VATs don’t appear as deductions from your paycheck, nor would the business flat tax. But neither do corporate income taxes appear on your paycheck, nor, for most people, do the high-rate income taxes that pay for a huge proportion of the nation’s total tax take. Taxes in the U.S. today don’t reflect the true cost of government for many voters. Let’s give future voters a little more credit. They would quickly figure out that the business flat tax rates are built into the prices of goods and services and affect wages, and would vote accordingly. (The Cruz plan even says that firms would pay the tax quarterly and report the figures to their employees and shareholders, making it transparent.) In fact, one could argue that a low-rate-broad-base tax would better align voters with good economic policy. In the current highly progressive system in which the cost of government is invisible to many, half(+1) the population can essentially vote to tax the other half(-1). With a flat tax’s broader base and single rate, on the other hand, the costs would be more apparent, less unfairly and arbitrarily distributed, and a substantial majority of voters would be likely to oppose tax rate increases. The Cruz plan would still protect low-income Americans with a larger standard deduction and, they say, an improved EITC.

Yes, yes, yes, but the real threat is that future politicians could raise the VAT rate without people noticing.

Politicians have already proved willing and adept at raising (and complicating) today’s taxes! I understand the theory behind this line of argument, but I just don’t buy it. Again, I think most people would understand that voting to increase the flat tax rate is voting to tax themselves. A further irony: the very critics who warn that future politicians will raise the rates in Cruz’s plan themselves support a tax plan with a corporate tax rate nearly twice as high and a personal income tax rate more than three times as high. One could say their favored tax code thus enshrines from the outset what they warn against as a mere possibility for their opponent’s plan. All that said, yes, I’d love to see some additional protections so that any new and improved tax system would be difficult to undo.

Now, one argument I have not heard from critics but that I can imagine is this: Because a flat tax puts everyone basically in the same boat, and better aligns the incentives of all taxpayers/voters, taxes as a political issue may lose their saliency. Presumably, a large and crucial part of the Republican coalition is based upon the group of voters that pays an overwhelming portion of all taxes. Might some anti-tax advocates think inefficient taxes that gouge some taxpayers are good for generating anti-tax voting incentives and holding together the political coalition? With less of a tax split, would this clear cut issue go away, while the parties realign based on other non-tax issues? I have no idea, am no political expert, and am just speculating.

The fact is that the tax proposals of many of the GOP presidential candidates would all improve the tax code and the U.S. economy. I think the Laffer/Cruz proposal is perhaps the most attractive option among many good plans. For good summaries, detailed analyses, and comparisons of the candidates’ plans, see the Tax Foundation.

Why economic growth matters (cont.)

Marginal Revolution University has a good new short video on why economic growth is so important . . .

Remember Peak Oil?

It’s difficult to overstate just how panicked the world was over oil prices a decade ago — stratospherically high oil prices. We were, most policy makers and economists believed, in an energy crisis — the result of a desperate shortage of petroleum that could only be solved with cellulosic ethanol and windmills. During this “energy crisis” of 2006, we wrote the following Wall Street Journal commentary, hoping to calm fears of peak oil and other such nonsense that often accompanies big price swings. We said oil prices likely would recede. We said vast stores of oil, especially in shale, were about to be found and extracted. We said alternative energy schemes in part justified by high oil prices were a bad idea. We also said a big financial disruption was likely. The macro environment is very different today — prices are low instead of high; the dollar strong instead of weak. In fact, we’ve been telling clients for the last year that today’s environment looks much like the late 1990s: a strong dollar, plummeting energy and commodity prices, soaring prices for speculative technology firms (Internets and bio-techs), and trouble in emerging markets. We reprint this column as a reminder of the economic fundamentals…and energy’s abundance.

The Elephant in the Barrel

The Wall Street Journal — August 12, 2006

by Bret Swanson

Nigerian pipeline explosions, Chinese demand, Arab angst, Venezuelan volatility, peak oil and a Putin premium: These are the usual explanations for high petroleum prices. But our discussion of the “energy crisis” has ignored the elephant in the barrel — monetary policy. Today, high oil prices are the backdrop for Middle Eastern chaos and calls for bad energy policy. It was much the same in the 1970s, when high prices yielded similar violence against our fellow man and against economics. This is no coincidence. A weak dollar is the culprit, now as then.

When the Yom Kippur war was launched in October 1973, the price of oil had been rising for two years. For decades, oil’s price had been remarkably stable, like the prices of most other goods. But in 1971 Richard Nixon broke the dollar’s links both to gold and to key foreign currencies. Bretton Woods — and the dollar — collapsed, and a decade-long inflation began.

By July 1973, gold had deviated from its long-time price of $35 per ounce and soared to $120. Oil also responded quickly to dollar weakness and doubled in price by the early autumn. The Mideast nations complained that the Western oil companies were accumulating massive “windfall profits.” Having negotiated agreements in the previous environment of price stability, the Arabs and Persians were stuck with much lower prices and royalty payments. You know the rest of the decade’s news: embargoes, gas lines, inflation, wage and price controls, hostages.

Today, commodity prices across the board, from coffee to carbon fiber, remain near 25-year highs. High oil prices are not a unique phenomenon; oil is just another commodity whose price is determined primarily by the value of the dollar. Expensive oil isn’t exclusively a monetary event, of course: Risk and demand matter, too. But in comparing oil to other commodities, especially gold, we find that elevated risk and demand explain only $10-$15 of the higher oil price; $30 of the price is explained by a weak, inflationary dollar. The entity most responsible for expensive oil is thus the Fed.


Samsung, Apple, and a possible date at the Supreme Court

Today, Samsung asked the Supreme Court to review an antiquated component of patent law. My brief take:

“The prevailing interpretation of design patents and penalties is rooted in the 1870s. It doesn’t work in a smartphone world. The Supreme Court should take this case and modernize the notion of damages for ‘total device profits’ for complex products. The Court should continue its good work in rebalancing our intellectual property paradigm away from clever lawyering and in favor of true innovation.”

–Bret Swanson

A little good news for the Net

Surprise: there’s a bit of good news from Washington. The House and Senate just agreed to include a permanent Internet Tax Freedom Act in the Customs and Border Protection reauthorization. Congress first barred states and the federal government from taxing Internet access in 1998. But the measure was temporary, and every few years since then it’s been in jeopardy of expiring. Applying discriminatory taxes to Internet access would have slowed the rollout of broadband, the uptake by consumers, and the emergence of some of America’s most successful industries. This new measure ensures we continue a successful policy . . . permanently.

Is there a better vision for health care?


In recent days, the New York Times and Wall Street Journal have reported on the Affordable Care Act’s growing problems. Skyrocketing premiums and deductibles, more lost coverage, narrow networks, dysfunctional health insurance exchanges (more than half of which have now closed shop), and a warning from the nation’s largest insurer, UnitedHealth, that it may abandon Obamacare altogether. One consumer summed up the dismal situation:

“We can’t afford the Affordable Care Act, quite honestly,” said Cassaundra Anderson, whose family canvassed for Obama in their neighborhood, a Republican stronghold outside Cincinnati. “The intention is great, but there is so much wrong. . . . I’m mad.”

Is there a better way? Yes, there are lots of better ways, and lots of good ideas to “reform health care reform.” In fact, I believe health care is poised to explode with exciting innovations that will slash costs and radically improve care. Just yesterday the venture capital firm Andreessen Horowitz announced a new fund focused on software for biotech. But many of these important medical and economic advancements will only happen to the degree we allow them to happen. And right now, the ACA is exacerbating the worst features of the existing health market while adding new pathologies of its own. Choice is contracting, costs are mushrooming, and innovation is being stifled. The FDA, too, is a big obstacle. Instead of this rigid, top-down, costly path, I’ve laid out what I think is a more hopeful vision for the future of health care in a new report called “The App-ification of Medicine: A Four-Faceted Information Revolution in Health.” This revolution is based on:

  • Smartphones and personal technology
  • Big Data, Social Data
  • The Code of Life
  • The app-ification of the business of health care

The report is by no means a comprehensive look at what is a huge sector and a hugely complex topic. But it might spark some ideas and bolster our optimism that if we free the health sector, it can become an economic blessing rather than a burden.


Moore’s Law at 50

Here’s a new version of my 50th anniversary assessment of Moore’s Law, just out from the American Enterprise Institute.



Key Points

  • Over the last 50 years, exponential scaling of silicon microelectronics “turned a hundred dollar chip with a few dozen transistors into a 10 dollar chip with a few billion transistors,” fulfilling Moore’s Law, Gordon Moore’s ambitious prediction, and propelling the information economy.
  • Information technology, powered by Moore’s Law, provided nearly all the productivity growth of the last 40 years and promises to transform industries such as health care and education that desperately need creative disruption.
  • Shrinking silicon transistors is getting more difficult as we approach fundamental atomic limits, but varied innovations—in materials, devices, state variables, and parallel architectures—will likely combine to deliver continued exponential growth in computation, data storage, sensing, and communications.
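The cumulative scale of that 50-year run is easy to under-appreciate. Using the conventional formulation of a doubling roughly every two years (an assumption, since the actual cadence varied over the decades), the “few dozen to a few billion” arithmetic checks out:

```python
# Back-of-envelope Moore's Law: transistor counts doubling roughly every
# two years. The starting count and doubling period are conventional
# assumptions, not measured figures.
start_transistors = 36   # "a few dozen"
years = 50
doubling_period = 2      # years per doubling

doublings = years // doubling_period       # 25 doublings in 50 years
final = start_transistors * 2 ** doublings

print(f"{doublings} doublings -> {final / 1e9:.1f} billion transistors")  # → 1.2 billion
```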

The Regulatory Charade

Michael Spence and Kevin Warsh, writing in The Wall Street Journal, highlight the dearth of business investment over the last eight years. (WSJ)


Unless we address the growth of the Administrative State, it will continue to stifle growth in the real economy. As you can see in the chart above, this recovery has suffered, among other maladies, from the weakest business investment of any recent expansion. The weakest by far. A number of factors may be at play — monetary policy, global turmoil, bad corporate tax policy, the nature of the last downturn, etc. But it’s not a stretch to conclude that a major factor in the economy’s underperformance is growing bureaucratic interference with economic activity. One study estimates that regulation costs the economy $1.88 trillion per year, and another study puts the cost to the economy into the tens of trillions of dollars. As bureaucratic excursions into firms and industries grow, and as the costs so manifestly outweigh the benefits, the agencies’ rationales for regulatory control become ever more creative.

A good example comes from Susan Dudley of George Washington University, who studies environmental regulation. She describes a clearly political decision cloaked as “science.”

The Environmental Protection Agency published its final national ambient air quality standard (NAAQS) for ozone in the Federal Register on Monday.  EPA emphasizes that “Setting air quality standards is about protecting public health and the environment. By law, EPA cannot consider costs in doing that.”  The agency did prepare a regulatory impact analysis (RIA) to comply with presidential executive orders 12866 and 13563, but it is explicit that “although an RIA has been prepared, the results of the RIA have not been considered in issuing this final rule.”

The results of the RIA, however, were featured prominently in EPA’s press release.  According to the release, “The public health benefits of the updated standards, estimated at $2.9 to $5.9 billion annually in 2025, outweigh the estimated annual costs of $1.4 billion.”  EPA’s fact sheet relies on the RIA to assert that meeting the new 70 parts per billion (ppb) standard will avoid 320 to 660 premature deaths each year.

Nonetheless, the 480-page RIA suggests that these health benefits pale in comparison to the benefits that achieving a more stringent 65 ppb standard would bring.  According to EPA’s models, a standard of 65 ppb would avoid between 1,590 and 3,320 premature deaths. (This does not include California.)

There are ample reasons to question EPA’s ozone health benefit estimates but the fact is, the agency’s own analysis claims that the more stringent 65 ppb standard would have saved an additional 1,274 to 2,660 lives per year, and avoided an additional 2,670 emergency room visits and almost 1,300 hospital admissions.

If, as EPA says, “the Act requires [it] to base the decision for the primary standard on health considerations only; economic factors cannot be considered,” how can it reconcile setting a standard that leaves so many lives unprotected?

EPA cannot openly admit that its decision was influenced by the enormous costs of achieving the tighter standard.  (Chapter 4 of the RIA acknowledges that no known measures are available to achieve either of the standards EPA considered, but estimates that a 65 ppb standard would impose costs of $16 billion per year – more than 10 times the estimated $1.4 billion per year cost of achieving a 70 ppb standard.)

It’s obvious that EPA did consider the gigantic cost, and Dudley concluded:

It’s time to stop the charade that it is wise or even possible to base NAAQS purely on health considerations.  There are very real tradeoffs involved in these policy decisions that deserve open and transparent debate, rather than the pretense that they can be made by considering only science.

Another example from the environmental arena is the never-ending Keystone XL saga, in which various bureaucracies have for seven years pretended to “study the impact assessments” while blocking the project. Almost no one even argues anymore that this is anything but a political football designed to pacify narrow constituencies and raise campaign money. And yet billions of dollars in potential investment and thousands of jobs are put off.

It is impossible to insulate executive and even independent agencies from all politics. Let’s be realistic. And yet emboldened bureaucrats are increasingly dispensing with even the pretense of expertise, fair play, and the rule of law.

In recent years, the Federal Communications Commission, a nominally “independent expert agency,” has descended into the political swamp. In the most famous case, one year ago, just after the 2014 elections, the FCC collapsed in the face of a subversive White House campaign to write new regulations governing the Internet, one of the most important and innovative sectors of the economy. The FCC had been heading in one policy direction, but at the last second, after years of consideration, a small team of non-expert political operatives in the White House (in cahoots with a few FCC insiders who, it turns out, were also orchestrating outside political activists) twisted Chairman Tom Wheeler’s arm, and the White House got its favored policy. Never mind that all of this was illegal — Congress had told the FCC 20 years ago that the Internet was to remain “unfettered by Federal and State regulation.”

Last week, one of Chairman Wheeler’s senior advisors spoke to an industry group and once again asked them to go on a political campaign in favor of even more regulation of the communications sector. As Light Reading reported,

Ideally, Sohn [Wheeler’s senior advisor] said, the same kind of consumer activism that helped drive the Open Internet rule changes earlier this year — including pickets at Wheeler’s home and the White House, and widespread TV coverage — could be brought to bear on some of the more arcane issues, such as special access and IP transition rules.

So senior staff at “independent expert” agencies, who make economic rules and enforce technology standards in highly technical sectors of the economy, are now urging political activists to go to the home of the agency chairman to bang pots and pans and urge specific policies — invariably tilted toward more regulation.

In a possible silver lining, the assertiveness of regulatory and expert agencies is exposing fundamental flaws in the Administrative State. So egregious is the behavior, so overt and obvious is the politicization, so damaging is the impact on the economy, that the agencies — long political tools but not recognized as such — are earning the scrutiny that could lead to a revolution of sorts.

Steven Davis of the University of Chicago describes the size of the problem — a Code of Federal Regulations now 175,000 pages long, for example — here. Charles Murray describes the nature of the regulatory charade and a possible political solution here. John Cochrane of UChicago and the Hoover Institution outlines the impact of regulatory insanity on economic growth here. I’ve looked at the impact on economic growth here and suggested that, in the cases where regulation is needed, it’s imperative to “Keep It Simple.”

John Cochrane on Economic Growth

We’ve been hammering for years on the importance of reinvigorating economic growth, and John Cochrane of the University of Chicago has put lots of the key ideas, big and small, in one new paper. Enjoy.


Analysis of Jeb Bush’s tax plan and growth agenda

Here’s a short list of analyses of Jeb Bush’s new tax reform plan, which focuses on reviving economic growth.

Wi-Fi and LTE-U: What’s the real story on unlicensed spectrum?

Today The Wall Street Journal highlighted a debate over unlicensed wireless spectrum that’s been brewing for the last few months. On one side, mobile carriers like Verizon and T-Mobile are planning to roll out a new technology known as LTE-U that will make use of the existing unlicensed spectrum most commonly used for Wi-Fi. LTE-U is designed to deliver a similar capability as Wi-Fi, namely short-range connectivity to mobile devices. As billions of mobile devices and Web video continue to strain wireless networks and existing spectrum allocations (see “The Immersive Internet Needs More Wireless Spectrum”), mobile service providers (and everyone else) are looking for good sources of spectrum. For now, they’ve found it in the 5 GHz ISM band. The 5 GHz band is a good place in which to deploy “small cells” (think miniature cell towers delivering transmissions over a much smaller area), which can greatly enhance the capacity, reach, and overall functionality of wireless services.
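The physics behind the small-cell point is worth a moment: free-space path loss rises with frequency, so a 5 GHz signal covers a much smaller area than the sub-1 GHz bands used for wide-area cellular, which is exactly why the band suits dense, short-range cells. A sketch using the standard free-space path loss formula; the particular frequencies and distance are illustrative choices, not figures from the article:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Standard free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Compare the loss at 100 meters for a wide-area cellular band vs. 5 GHz.
loss_700mhz = fspl_db(0.1, 700)    # 700 MHz cellular band
loss_5_8ghz = fspl_db(0.1, 5800)   # upper 5 GHz unlicensed band

print(f"700 MHz at 100 m: {loss_700mhz:.1f} dB")  # ~69.3 dB
print(f"5.8 GHz at 100 m: {loss_5_8ghz:.1f} dB")  # ~87.7 dB
```

Roughly 18 dB of extra loss at 5.8 GHz means a far shorter useful range, which is a feature rather than a bug for small cells: the same channel can be reused densely without neighboring cells interfering with one another.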

Google and the cable companies, such as Comcast, however, are opposed to the use of LTE-U because they say LTE-U could interfere with Wi-Fi. The engineering department at the Federal Communications Commission (FCC) has been looking into the matter for the last few months to see whether the objections are valid, but the agency has not yet reported any firm conclusions.

Is this a technical issue? Or a business dispute?

Until I see some compelling technical evidence that LTE-U interferes with Wi-Fi, this looks like a business dispute. Meaning the FCC probably should not get involved. The 2.4 GHz and 5 GHz spectrum in which Wi-Fi (and Bluetooth and other technologies) operate is governed by just a few basic rules. Most crucially, devices must not exceed certain power thresholds, and they can’t actively interfere with one another. Wi-Fi was designed to share nicely, but as everyone knows, large numbers of devices in one area, or large numbers of Wi-Fi hotspots, can cause interference and thus degrade performance. The developers of LTE-U have spent the last couple of years designing it specifically to play by the rules of the unlicensed spectrum and to play nicely with Wi-Fi.

The early results are encouraging. In real world tests so far,

  • LTE-U delivers better performance than Wi-Fi,
  • doesn’t degrade nearby Wi-Fi performance, and
  • may in fact improve the performance of nearby Wi-Fi networks.

For more commentary and technical analysis, see Richard Bennett’s recent posts here and here. Bennett was an early co-inventor of Wi-Fi and thus knows what he’s talking about. Also, Qualcomm has a white paper here and some good technical reports here and here.

Another line of opposition to LTE-U says that the mobile service providers like Verizon and T-Mobile will use LTE-U to deliver services that compete with Wi-Fi and will thus disadvantage competitive service providers. But the mobile service providers already operate lots of Wi-Fi hotspots. They are some of the biggest operators of Wi-Fi hotspots anywhere. In other words, they already compete (if that’s the right word) with Google and cable firms in this unlicensed space. LTE-U is merely a different protocol that makes use of the same unlicensed spectrum, and must abide by the same rules, as Wi-Fi.  The mobile providers just think LTE-U can deliver better performance and better integrate with their wide area LTE-based cellular networks. Consider an analogy: the rental fleets of Hertz and Avis are both made up of Ford vehicles. Hertz then decides to start renting Fords and Chevys. The new Chevys don’t push Fords off the road. They are both cars that must obey the rules of the road and the laws of physics. The two types of vehicles can coexist and operate just as they did before. Hertz is not crowding out Avis because it is now using Chevys.

I’ll be looking for more real world tests that either confirm or contradict the initially encouraging evidence. Until then, we shouldn’t prejudge and block a potentially useful new technology.

Permission Slips for Internet Innovation


See my commentary in The Wall Street Journal this weekend — “Permission Slips for Internet Innovation.”



Finally, a robust discussion of economic growth

I’m delighted to see the robust discussion breaking out over the urgent need to reignite the U.S. economy. The impetus seems to be Jeb Bush’s call last week to implement policies that would boost the U.S. growth rate to 4%, at least for several years. A number of economists and journalists said Bush’s 4% goal was impossible. But others say nonsense; of course we can do much better than we have over the last decade. See Glenn Hubbard and Kevin Warsh, for example, in The Wall Street Journal today. Jon Hartley follows up here. Michael Solon wrote an excellent piece back in February. And John Taylor has been urging the same here and here.

Here’s a selection of my own research and commentary on the topic over the last five years:

THE GROWTH IMPERATIVE — Forbes — May 27, 2011
The Growth Imperative — Slides — Presentation to U.S. Chamber — May 24, 2011

Despite 0.2% GDP reading, New Normalizers and Secular Stagnationists are wrong


The political fights of the last decade have distracted us from what should be, in my view, the central issue of our policy debates—reviving economic growth. First quarter 2015 growth of just 0.2% comes on the heels of a lackluster 2014, when the economy grew just 2.4%. The financial crisis surely took its toll, but for how long can we blame a seven-year-old event, while millions of Americans are denied the opportunities that attend a faster growing economy?

There are many excuses for the first quarter reading. Yes, it was cold. Yes, there might be some statistical aberration—first quarter growth has been conspicuously low for the last few years. But this is not a single-quarter problem. Over the last nine full years, the economy has not achieved 3% growth.

The stock market has recovered nicely, but middle-class Americans and small businesses are struggling with the anxieties of slower growth. If, after the last recession, the U.S. had kept moving ahead at its historical 3% growth rate, the American economy would be $2.3 trillion larger today. (The Congressional Budget Office, using a slightly more conservative analysis, says the economy would be $1.7 trillion larger—still an astounding shortfall.) No, 3% growth is not a law of nature. It is no guarantee. But the failure to clear away self-defeating policies is simply unacceptable.
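The shortfall figures come from simple compounding, and the mechanism is worth making explicit: a one-point gap in annual growth looks small, but it widens every year. A rough reconstruction with assumed inputs (the starting level, growth rates, and horizon are illustrative; the exact $2.3 trillion figure depends on the base year and deflators used):

```python
# Compounding a one-point growth gap. Starting GDP, growth rates, and the
# horizon are illustrative assumptions, not the post's exact inputs.
start_gdp = 15.0  # $ trillions
years = 9

trend_path = start_gdp * 1.030 ** years   # the historical ~3% path
actual_path = start_gdp * 1.021 ** years  # a sluggish ~2.1% path

gap = trend_path - actual_path
print(f"Trend path:  ${trend_path:.1f}T")
print(f"Actual path: ${actual_path:.1f}T")
print(f"Shortfall:   ${gap:.1f}T after {years} years, and growing every year")
```

Run it for a few more years and the gap keeps widening, which is why a persistent growth slowdown is so much costlier than any single bad quarter.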

From the Hollywood archives: Sony questions FCC Internet regulation

Although lots of firms sat out the public debate over Net Neutrality, we’ve learned that many of them strenuously opposed the FCC’s new Internet regulations behind the scenes. The latest example is Sony, which, according to this Daily Caller story, warned that Title II Internet regulation “might put up roadblocks on how we distribute content.” Plumbing internal emails now available because of the notorious Sony hack, DC found a number of private complaints about the FCC. Sony Pictures Entertainment chief technology officer Spencer Stephens, for example, was adamant:

“The Internet has drawn investment precisely because it isn’t a utility,” Stephens wrote. “My expectation is that prioritized services will mean investment in infrastructure which would expand the size of the pipe.”

Responding to Netflix’s assertions that interconnection disagreements compelled the FCC to enact sweeping regulation, Stephens wrote that “their claims that they have been held to ransom are, IMHO, complete BS.”

More here.

Moore’s Law: A 50th Anniversary Assessment

See our celebration of the 50th anniversary of Moore’s Law — “Moore’s Law Exceeded Moore’s Expectations.”

And for a longer treatment, see our 19-page paper — “Moore’s Law: A 50th Anniversary Assessment.”

Netflix, Mozilla, Google recant on Net Neutrality

Dept. of You Can’t Make This Stuff Up:

Three of the driving forces behind the 10-year effort to regulate the Internet — Netflix, Mozilla, and Google — have, in the last few days and in their own ways, all recanted their zealous support of Net Neutrality. It may have been helpful to have this information . . . before last week, when the FCC plunged the entire Internet industry into a years-long legal war.

First, on Monday, Netflix announced it had entered into a “sponsored data” deal with an Australian ISP, which violates the principles of “strong Net Neutrality,” Netflix’s preferred and especially robust flavor of regulation.

Then on Wednesday, speaking at an investor conference, Netflix CFO David Wells backtracked:

“Were we pleased it pushed to Title II? Probably not,” Wells said at the conference. “We were hoping there might be a non-regulated solution.”

At this week’s huge Mobile World Congress in Barcelona, meanwhile, my AEI colleague Jeff Eisenach reported via Twitter that a Mozilla executive had backtracked:

Mozilla’s Dixon-Thayer is latest #netneutrality advocate to backpedal – “we don’t necessarily favor regulation” #repealtitleII #MWC15MP
3/4/15, 10:44 AM

Add these to the revelations about Google’s newfound reticence. Several weeks ago, in The Wall Street Journal‘s blockbuster exposé, we found out that Google Chairman Eric Schmidt called the White House to protest President Obama’s surprise endorsement of Title II regulation of the Internet. Then, just days before the February 26 vote at the FCC, Google urgently pleaded that the Commission remove the bizarre new regulatory provision known as broadband subscriber access service (BSAS), which would have created out of thin air a hitherto-unknown “service” between websites and ISP consumers — in order to regulate that previously nonexistent service. (Ironic, yes, that this BSAS provision was dreamt up by . . . Mozilla.) Google was successful, just 48 hours before the vote, in excising this menacing regulation of a phantom service. But Google and the others are waking up to the fact that Title II and broad Section 706 authority might contain more than a few nasty surprises.

Fred Campbell examined Netflix’s statements over the last year and concluded: “Netflix bluffed. And everybody lost.”

And Yet . . .

The bottom line of these infuriating reversals may actually be a positive for the Internet. These epiphanies — “Holy bit, we just gave the FCC the power to do what!?!” — may wake serious people from the superficial slumber of substance-free advocacy. The epiphanies may give new life to efforts in Congress to find a legislative compromise that would prohibit clear bad behavior (blocking, throttling, etc.) but which would also circumscribe the FCC’s regulatory ambitions and thus allow the Internet to continue on its mostly free and unregulated — and hugely successful — path.

Could Apple be awarded all of Ford’s or Lexus’s profits?

The fanfare surrounding Apple exploded to new levels two weeks ago as we learned that the iPhone maker may enter the automobile business. The Wall Street Journal reported that CEO Tim Cook has hired away top auto executives from Mercedes and Ford and is running a secret car team that may number up to 1,000 employees. Apple, apparently, doesn’t want to let Google, with its driverless car program, or Tesla, the auto darling of the moment, have all the fun. Or, another rumor goes, maybe Apple plans to buy Tesla — for $75 billion. Who knows. Odds are Apple will never build cars. Perhaps Apple is mostly targeting electronics, software, and content in the new and very large “connected car” world.

Whatever the case, it’s not difficult to imagine Apple’s iOS, its apps, its icons, and its designs seeping into more and more devices, from smart-watches to smart-homes to connected cars.

Which gets us to the point of this post . . .

There’s a big oral argument today. No, not the health care hearing at the Supreme Court. Today is the latest round of the four-year patent war between Apple and Samsung. The two smartphone titans have been suing each other all over the world, but the cases have been reduced to a couple of remaining skirmishes in American courts.

While not the focus of today’s argument, the highest profile issue remains unresolved. Last year a jury found Samsung infringed three fairly minor Apple design patents and awarded Apple $930 million — a huge number considering the nature and consequence of the patents in question. Among other legal arguments at issue is a quirk of patent law, dating to 1887, which says an infringer is liable for its “total profit.” But as we’ve previously explained, in today’s market of hypercomplex products, this rule produces perverse results.

The question is whether the remedy in these cases — the award to the plaintiff of the total profits earned by the defendant’s product — makes any sense in the modern world.

A smartphone is a complex integration of thousands of hardware and software technologies, manufacturing processes, aesthetic designs and concepts. Each of these components may be patented or licensed — or neither — by any number of firms. A smartphone, by one estimate, may contain up to 250,000 patents. Does a minor design patent comprising a tiny fraction of a product’s overall makeup drive the purchase decision? If company A’s product contains one infringing component among many thousands, even if it has no knowledge or intent to infringe, and even if the patent should never have been issued, does it make sense that company B gets all company A’s profits?

There are good reasons to think a fair reading gives a much saner result:

To see why the phrase should be interpreted in a common sense way, consider an alternative plain reading. Why couldn’t “total profit,” for example, mean the entire profit of the firm, including profits derived from unrelated products?

Does anyone think this is the meaning of the law? No. Among other common sense readings, the phrase “to the extent” is a modifier that can easily be read to limit the award in proportion to the severity of the infringement. An additional consideration is that many design patents better resemble trademarks and copyrights, and in fact trademark and copyright law (although imperfect themselves) often provide for more common sense remedies.
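To make the proportionality point concrete, here is a toy calculation with purely hypothetical numbers: under the expansive “total profit” reading, one infringing icon among thousands of components captures everything, while a proportional reading scales the award to the component’s share of the product’s value.

```python
def total_profit_award(product_profit):
    # The expansive reading: the infringer's entire product profit,
    # regardless of how small the infringing component is.
    return product_profit

def proportional_award(product_profit, component_value_share):
    # A proportional reading: the award scaled to the infringing
    # component's share of the product's overall value.
    return product_profit * component_value_share

# Hypothetical numbers: $10B in product profit, and an icon design
# contributing 0.01% of the product's value.
profit = 10_000_000_000
share = 0.0001
print(total_profit_award(profit))        # the entire $10B
print(proportional_award(profit, share)) # $1M
```

The two readings differ by four orders of magnitude here, which is the whole dispute in miniature.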

Imagine, however, if the reading of the 1887 law that yielded the $930-million award is upheld. Several years from now, Apple’s iOS is installed in Chevrolets and BMWs. But Ford and Lexus are using distinct software that in some way resembles Apple’s. Apple sues Ford and Lexus for a tiny graphical icon containing a bevel that could only have originated in the mind of Sir Jony Ive. Could Apple be awarded all of Ford’s or Lexus’s profits?

Absurd? Yes. But that is the logical extension of the overly expansive “total profits” reading.

In the last few years, the Supreme Court has reined in software patents in a hugely constructive way. A common sense ruling here would be one more step forward on the path to patent sanity.

More evidence against Internet regulation: the huge U.S.-European broadband gap

In its effort to regulate the Internet, the Federal Communications Commission is swimming upstream against a flood of evidence. The latest data comes from Fred Campbell and the Internet Innovation Alliance, showing the startling disparities between the mostly unregulated and booming U.S. broadband market, and the more heavily regulated and far less innovative European market. In November, we showed this gap using the measure of Internet traffic. Here, Campbell compares levels of investment and competitive choice (see chart below). The bottom line is that the U.S. invests around four times as much in its wired broadband networks and about twice as much in wireless. It’s not even close. Why would the U.S. want to drop America’s hugely successful model in favor of “President Obama’s plan to regulate the Internet,” which is even more restrictive and intrusive than Europe’s?

[Chart: U.S. vs. European broadband investment and competitive choice, via Fred Campbell and the Internet Innovation Alliance]
