
Netflix, Mozilla, Google recant on Net Neutrality

Dept. of You Can’t Make This Stuff Up:

Three of the driving forces behind the 10-year effort to regulate the Internet — Netflix, Mozilla, and Google — have, in the last few days and in their own ways, all recanted their zealous support of Net Neutrality. It may have been helpful to have this information . . . before last week, when the FCC plunged the entire Internet industry into a years-long legal war.

First, on Monday, Netflix announced it had entered into a “sponsored data” deal with an Australian ISP, which violates the principles of “strong Net Neutrality,” Netflix’s preferred and especially robust flavor of regulation.

Then on Wednesday, Netflix CFO David Wells, speaking at an investor conference, said

“Were we pleased it pushed to Title II? Probably not,” Wells said at the conference. “We were hoping there might be a non-regulated solution.”

At this week’s huge Mobile World Congress in Barcelona, meanwhile, my AEI colleague Jeff Eisenach reported via Twitter that a Mozilla executive had backtracked:

JeffEisenach
Mozilla’s Dixon-Thayer is latest #netneutrality advocate to backpedal – “we don’t necessarily favor regulation” #repealtitleII #MWC15MP
3/4/15, 10:44 AM

Add these to the revelations about Google’s newfound reticence. Several weeks ago, in The Wall Street Journal‘s blockbuster exposé, we found out that Google Chairman Eric Schmidt called the White House to protest President Obama’s surprise endorsement of Title II regulation of the Internet. Then, just days before the February 26 vote at the FCC, Google urgently pleaded that the Commission remove the bizarre new regulatory provision known as broadband subscriber access service (BSAS), which would have created out of thin air a hitherto unknown “service” between websites and ISP consumers — in order to regulate that previously nonexistent service. (Ironic, yes, that this BSAS provision was dreamt up by . . . Mozilla.) Google was successful, just 48 hours before the vote, in excising this menacing regulation of a phantom service. But Google and the others are waking up to the fact that Title II and broad Section 706 authority might contain more than a few nasty surprises.

Fred Campbell examined Netflix’s statements over the last year and concluded: “Netflix bluffed. And everybody lost.”

And Yet . . .

The bottom line of these infuriating reversals may actually be a positive for the Internet. These epiphanies — “Holy bit, we just gave the FCC the power to do what!?!” — may wake serious people from the superficial slumber of substance-free advocacy. The epiphanies may give new life to efforts in Congress to find a legislative compromise that would prohibit clear bad behavior (blocking, throttling, etc.) but would also circumscribe the FCC’s regulatory ambitions and thus allow the Internet to continue on its mostly free and unregulated — and hugely successful — path.

Could Apple be awarded all of Ford’s or Lexus’s profits?

The fanfare surrounding Apple exploded to new levels two weeks ago as we learned that the iPhone maker may enter the automobile business. The Wall Street Journal reported that CEO Tim Cook has hired away top auto executives from Mercedes and Ford and is running a secret car team that may number up to 1,000 employees. Apple, apparently, doesn’t want to let Google, with its driverless car program, or Tesla, the auto darling of the moment, have all the fun. Or, another rumor goes, maybe Apple plans to buy Tesla — for $75 billion. Who knows. Odds are Apple will never build cars. Perhaps Apple is mostly targeting electronics, software, and content in the new and very large “connected car” world.

Whatever the case, it’s not difficult to imagine Apple’s iOS, its apps, its icons, and its designs seeping into more and more devices, from smartwatches to smart homes to connected cars.

Which gets us to the point of this post . . .

There’s a big oral argument today. No, not the health care hearing at the Supreme Court. Today is the latest round of the four-year patent war between Apple and Samsung. The two smartphone titans have been suing each other all over the world, but the cases have been reduced to a couple of remaining skirmishes in American courts.

While not the focus of today’s argument, the highest profile issue remains unresolved. Last year a jury found Samsung infringed three fairly minor Apple design patents and awarded Apple $930 million — a huge number considering the nature and consequence of the patents in question. Among other legal arguments at issue is a quirk of patent law, dating to 1887, which says an infringer is liable for its “total profit.” But as we’ve previously explained, in today’s market of hypercomplex products, this rule is perverting rationality.

The question is whether the remedy in these cases — the award to the plaintiff of the total profits earned by the defendant’s product — makes any sense in the modern world.

A smartphone is a complex integration of thousands of hardware and software technologies, manufacturing processes, aesthetic designs and concepts. Each of these components may be patented or licensed, or neither, by any number of firms. A smartphone, by one estimate, may contain up to 250,000 patents. Does a minor design patent comprising a tiny fraction of a product’s overall makeup drive the purchase decision? If company A’s product contains one infringing component among many thousands, even if it has no knowledge or intent to infringe, and even if the patent should never have been issued, does it make sense that company B gets all of company A’s profits?

There are good reasons to think a fair reading gives a much saner result:

To see why the phrase should be interpreted in a common sense way, consider an alternative plain reading. Why couldn’t “total profit,” for example, mean the entire profit of the firm, including profits derived from unrelated products?

Does anyone think this is the meaning of the law? No. Among other common sense readings, the phrase “to the extent” is a modifier that can easily be read to limit the award in proportion to the severity of the infringement. An additional consideration is that many design patents better resemble trademarks and copyrights, and in fact trademark and copyright law (although imperfect themselves) often provide for more common sense remedies.

Imagine, however, if the reading of the 1887 law that yielded the $930-million award is upheld. Several years from now, Apple’s iOS is installed in Chevrolets and BMWs. But Ford and Lexus are using distinct software that in some way resembles Apple’s. Apple sues Ford and Lexus for a tiny graphical icon containing a bevel that could only have originated in the mind of Sir Jony Ive. Could Apple be awarded all of Ford’s or Lexus’s profits?

Absurd? Yes. But that is the logical extension of the overly expansive “total profits” reading.
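A toy calculation shows the scale involved. Both inputs below are hypothetical, invented only to illustrate the gap between the two readings; neither is an actual company figure:

# Toy comparison of the "total profit" reading vs. a proportional ("to the extent") reading
# for a single infringing design element. Both inputs are hypothetical.
carmaker_annual_profit = 7_000_000_000   # hypothetical automaker profit, in dollars
icon_share_of_value = 0.0001             # hypothetical: one beveled icon drives 0.01% of the product's value
total_profit_award = carmaker_annual_profit                        # expansive reading: the entire profit
apportioned_award = carmaker_annual_profit * icon_share_of_value   # proportional reading
print(f"Total-profit reading: ${total_profit_award:,.0f}")   # $7,000,000,000
print(f"Apportioned reading:  ${apportioned_award:,.0f}")    # $700,000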

In the last few years, the Supreme Court has reined in software patents in a hugely constructive way. A common sense ruling here would be one more step forward on the path to patent sanity.

More evidence against Internet regulation: the huge U.S.-European broadband gap

In its effort to regulate the Internet, the Federal Communications Commission is swimming upstream against a flood of evidence. The latest data comes from Fred Campbell and the Internet Innovation Alliance, showing the startling disparities between the mostly unregulated and booming U.S. broadband market, and the more heavily regulated and far less innovative European market. In November, we showed this gap using the measure of Internet traffic. Here, Campbell compares levels of investment and competitive choice (see chart below). The bottom line is that the U.S. invests around four times as much in its wired broadband networks and about twice as much in wireless. It’s not even close. Why would the U.S. want to drop America’s hugely successful model in favor of “President Obama’s plan to regulate the Internet,” which is even more restrictive and intrusive than Europe’s?

[Chart: broadband investment and competitive choice, United States vs. Europe]

The last refuge of Internet regulators: the theory of the “terminating access monopoly”

Net neutrality activists have deployed a long series of rationales in their quest for government control of the Internet. As each rationale is found wanting, they simply move onto the next, more exotic theory. The debate has gone on so long that they’ve even begun recycling through old theories that were discredited long ago.

In the beginning, the activists argued that there should be no pay for performance anywhere on the Net. We pointed out the most obvious example of a harmful consequence of their proposal: their rules, as originally written, would have banned widely used content delivery networks (CDNs), which speed delivery of packets (for a price).

Then they argued that without strong government rules, broadband service providers would block innovation at the “edges” of the network. But for the last decade, under minimal regulation, we’ve enjoyed an explosion of new technologies, products, and services from content and app firms like YouTube, Facebook, Netflix, Amazon, Twitter, WhatsApp, Etsy, Snapchat, Pinterest, Twitch, and a thousand others. Many of these firms have built businesses worth billions of dollars.

They said we needed new rules because the light-touch regulatory environment had left broadband in the U.S. lagging its international rivals, whose farsighted industrial policies had catapulted them far ahead of America. Oops. Turns out, the U.S. leads the world in broadband. (See my colleague Richard Bennett’s detailed report on global broadband and my own.)

Then they argued that, regardless of how well the U.S. is doing, do you really trust a monopoly to serve consumer needs? We need to stop the broadband monopolist — the cable company. Turns out most Americans have several choices in broadband providers, and the list of options is growing — see FiOS, U-verse, Google Fiber, satellite, broadband wireless from multiple carriers, etc. No, broadband service is not like peanut butter. Because of the massive investments required to build networks, there will never be many dozens of wires running to each home. But neither is broadband a monopoly.

Artificially narrowing the market is the first refuge of nearly all bureaucrats concerned with competition. It’s an easy way to conjure a monopoly in almost any circumstance. My favorite example was the Federal Trade Commission’s initial opposition in 2003 to the merger of Häagen-Dazs (Nestlé) and Godiva (Dreyer’s). The government argued it would harmfully reduce competition in the market for “super premium ice cream.” The relevant market, in the agency’s telling, wasn’t food, or desserts, or sweets, or ice cream, or even premium ice cream, but super premium ice cream.

(more…)

Broadband facts: GON with the wind

See below our post from TechPolicyDaily.com responding to President Obama’s January 14 speech in Iowa. We’ve added some additional notes at the bottom of the post.

Yesterday, President Obama visited Cedar Falls, Iowa, to promote government-run broadband networks. On Tuesday, he gave a preview of the speech from the Oval Office. We need to help cities and towns build their own networks, he said, because the US has fallen behind the rest of the world. He pointed to a chart on his iPad, which showed many big US cities trailing Paris, Tokyo, Hong Kong, and Seoul in broadband speeds. Amazingly, however, some small US towns with government-owned broadband networks matched these world leaders with their taxpayer-funded deployment of gigabit broadband.

I wish I could find a more polite way to say this, but the President’s chart is utter nonsense. Most Parisians do not enjoy Gigabit broadband. Neither do most residents of Tokyo, Hong Kong, or Seoul, which do in fact participate in healthy broadband markets. Perhaps most importantly, neither do most of the citizens of American towns, like Cedar Falls, Chattanooga, or Lafayette, which are the supposed nirvanas of government-run broadband.*

The chart, which is based on a fundamentally flawed report, and others like it deliberately obscure the true state of broadband around the world. As my AEI colleagues and I have shown, by the most important and systematic measures, the US not only doesn’t lag, it leads. The US, for example, generates two to three times the Internet traffic (per capita and per Internet user) of the most advanced European and Asian nations. (more…)

The U.S. Leads the World in Broadband

See our Wall Street Journal op-ed from December 8, which summarizes our new research on Internet traffic and argues for a continued policy of regulatory humility for the digital economy.

Continue reading here. Or read the text below the fold . . . 

(more…)

Commissioner Pai’s Netflix letter exposes fundamental flaws of Internet regulation

Combatants in the Net Neutrality wars often seem to talk past each other. Sometimes it’s legitimate miscommunication. More often, though, it arises from fundamental defects in the concept itself.

On December 2, Commissioner Ajit Pai wrote to Netflix, Inc., saying he “was surprised to learn of allegations that Netflix has been working to effectively secure ‘fast lanes’ for its own content on ISPs’ networks at the expense of its competitors.” Commissioner Pai noted press accounts that suggested Netflix’s Open Connect content delivery platform and its use of specialized video streaming protocols put video from non-Netflix sources at a disadvantage. Commissioner Pai concluded that “these allegations raise an apparent conflict with Netflix’s advocacy of strong net neutrality regulations” and thus asked for an explanation.

In its reply of December 11, Netflix made four basic points. Netflix (1) said it “designed Open Connect content delivery network (CDN) to provide consumers with a high-quality video experience”; (2) insisted “Open Connect is not a fast lane . . . . Open Connect helps ISPs reduce costs and better manage congestion, which results in a better Internet experience for all end users”; (3) said it “uses open-source software and readily-available hardware components”; and (4) applauded other firms for developing open video caching standards but “has focused” on its own proprietary system because it is more efficient and customer friendly than the collaborative industry efforts.

Three of Netflix’s four points are reasonable, as far as they go. The company is developing technologies and architectures to improve customer service and beat the competition. The firm, however, seems not to grasp Commissioner Pai’s central point: Netflix relishes aggressive competition on its own behalf but wants to outlaw similarly innovative behavior from the rest of the Internet economy.

(more…)

How can U.S. broadband lag if it generates 2-3 times the traffic of other nations?

Is the U.S. broadband market healthy or not? This question is central to the efforts to change the way we regulate the Internet. In a short new paper from the American Enterprise Institute, we look at a simple way to gauge whether the U.S. has in fact fallen behind other nations in coverage, speed, and price . . . and whether consumers enjoy access to content. Here’s a summary:

  • Internet traffic volume is an important indicator of broadband health, as it encapsulates and distills the most important broadband factors, such as access, coverage, speed, price, and content availability.
  • US Internet traffic — a measure of the nation’s “digital output” — is two to three times higher than most advanced nations, and the United States generates more Internet traffic per capita and per Internet user than any major nation except for South Korea.
  • The US model of broadband investment and innovation — which operates in an environment that is largely free from government interference — has been a dramatic success.
  • Overturning this successful policy by imposing heavy regulation on the Internet puts one of America’s most vital industries at risk.

M-Lab: The Real Source of the Web Slow-Down

Last week, M-Lab, a group that monitors select Internet network links, issued a report claiming interconnection disputes caused significant declines in consumer broadband speeds in 2013 and 2014.

This was not news. Everyone knew the disputes between Netflix and Comcast/Verizon/AT&T and others affected consumer speeds. We wrote about the controversy here, here, and here, and our “How the Net Works” report offered broader context.

The M-Lab study, “ISP Interconnection and Its Impact on Consumer Internet Performance,” however, does have some good new data. Although M-Lab says it doesn’t know who was “at fault,” advocates seized on the report as evidence of broadband provider mischief at big interconnection points.

But the M-Lab data actually show just the opposite. As you can see in the three graphs below, Comcast, Time Warner, Verizon, and, to a lesser extent, AT&T all show sharp drops in performance in May of 2013. Then the performance of all four networks at the three monitoring points in New York, Dallas, and Los Angeles shows sudden improvement in March of 2014.

The simultaneous drops and spikes for all four suggest these firms could not have been the cause. It would have required some sort of amazingly precise coordination among the four firms. Rather, the simultaneous action suggests the cause was some outside entity or event. Dan Rayburn of StreamingMedia agrees and offers very useful commentary on the M-Lab study here.
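The inference here is essentially a coincidence test: if independent networks all degrade, or all recover, in the same month, the shared timing points to a common external cause rather than four separate decisions. Here is a minimal sketch of that test in Python, using invented monthly median speeds rather than the actual M-Lab measurements:

# Flag months in which every ISP's median speed moves sharply in the same direction.
# The speed series below are invented for illustration; they are NOT the M-Lab data.
monthly_median_mbps = {
    "ISP_A": [18, 19, 18, 6, 6, 7, 6, 20, 21],
    "ISP_B": [22, 21, 22, 8, 7, 8, 7, 23, 24],
    "ISP_C": [15, 16, 15, 5, 5, 6, 5, 17, 18],
    "ISP_D": [20, 20, 19, 9, 8, 9, 8, 21, 22],
}
THRESHOLD = 0.4  # flag any month-over-month change of 40% or more
def big_moves(series):
    # Return {month_index: "drop" or "jump"} for large month-over-month changes.
    return {
        i: ("drop" if cur < prev else "jump")
        for i, (prev, cur) in enumerate(zip(series, series[1:]), start=1)
        if abs(cur - prev) / prev >= THRESHOLD
    }
moves_by_isp = {isp: big_moves(s) for isp, s in monthly_median_mbps.items()}
n_months = len(next(iter(monthly_median_mbps.values())))
for m in range(1, n_months):
    directions = {moves.get(m) for moves in moves_by_isp.values()}
    if None not in directions and len(directions) == 1:
        print(f"Month {m}: every ISP shows a simultaneous {directions.pop()}")

Applied to the actual measurements, the same logic would flag the May 2013 drop and the March 2014 recovery described above.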

(more…)

Interconnection: Arguing for Inefficiency

Last week Level 3 posted some new data from interconnection points with three large broadband service providers. The first column of the chart, with data from last spring, shows lots of congestion between Level 3 and the three BSPs. You might recall the battles of last winter and early spring when Netflix streaming slowed down and it accused Comcast and other BSPs of purposely “throttling” its video traffic. (We wrote about the incident here, here, here, and here.)

The second column of the Level 3 chart, with data from September, shows that traffic with two of the three BSPs is much less congested today. Level 3 says, reasonably, the cause for the change is Netflix’s on-net transit (or paid peering) agreements with Comcast and (presumably) Verizon, in which Netflix and the broadband firms established direct connections with one another. As Level 3 writes, “You might say that it’s good news overall.” And it is: these on-net transit agreements, which have been around for at least 15 years, and which are used by Google, Amazon, Microsoft, all the content delivery networks (CDNs), and many others, make the Net work better and more efficiently, cutting costs for content providers and delivering better, faster, more robust services to consumers.

But Level 3 says that, despite this apparent improvement, the data really show the broadband providers demanding “tolls” and that this is bad for the Internet overall. It thinks Netflix and the broadband providers should be forced to employ an indirect A–>B–>C architecture even when a direct A–>C architecture is more efficient.
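A stylized way to see the efficiency point is to compare per-gigabyte delivery costs along the two paths. The sketch below uses purely hypothetical placeholder rates, not actual transit or peering prices:

# Stylized comparison of indirect transit (A -> B -> C) vs. direct peering (A -> C).
# A = content provider, B = transit/backbone provider, C = broadband service provider.
# All rates and volumes are hypothetical placeholders, not real market prices.
TRANSIT_FEE_PER_GB = 0.005         # dollars A pays B to carry each GB
B_TO_C_CAPACITY_PER_GB = 0.004     # incremental cost per GB of expanding the B<->C interconnect
DIRECT_PEERING_FEE_PER_GB = 0.003  # dollars A pays C per GB for a direct (on-net transit) connection
monthly_traffic_gb = 50_000_000    # hypothetical monthly volume for a large video streamer
indirect_cost = monthly_traffic_gb * (TRANSIT_FEE_PER_GB + B_TO_C_CAPACITY_PER_GB)
direct_cost = monthly_traffic_gb * DIRECT_PEERING_FEE_PER_GB
print(f"A -> B -> C (transit):   ${indirect_cost:,.0f} per month")
print(f"A -> C (direct peering): ${direct_cost:,.0f} per month")
# Under these assumptions the direct path skips the middle hop entirely, which is
# why CDNs and large content firms have used on-net transit arrangements for years.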

The Level 3 charts make another probably unintended point. Recall that Netflix, starting around two years ago, began building its own CDN called Open Connect. Its intention was always to connect directly to the broadband providers (A–>C) and to bypass Level 3 and other backbone providers (B). This is exactly what happened. Netflix connected to Comcast, Verizon, and others (although for a small fee, rather than for free, as it had hoped). And it looks like the broadband providers were smart not to build out massive new interconnection capacity with Level 3 to satisfy a peering agreement that was out of balance, and which, as soon as Netflix left, regained balance. It would have been a huge waste (what they used to call stranded investment).

Twitch Proves the Net Is Working

Below find our Reply Comments in the Federal Communications Commission’s Open Internet proceeding:

September 15, 2014

Twitch Proves the Net Is Working

On August 25, 2014, Amazon announced its acquisition of Twitch for around $1 billion. Twitch (twitch.tv) is a young but very large website that streams video games and the gamers who play them. The rise of Twitch demonstrates the Net is working and, we believe, also deals a severe blow to a central theory of the Order and NPRM.

The NPRM repeats the theory of the 2010 Open Internet Order that “providers of broadband Internet access service had multiple incentives to limit Internet openness.” The theory advances a concern that small start-up content providers might be discouraged or blocked from opportunities to grow. Neither the Order nor the current NPRM considers or even acknowledges evidence or arguments to the contrary — that broadband service providers (BSPs) may have substantial incentives to promote Internet openness. Nevertheless, the Commission now helpfully seeks comment “to update the record to reflect marketplace, technical, and other changes since the 2010 Open Internet Order was adopted that may have either exacerbated or mitigated broadband providers’ incentives and ability to limit Internet openness. We seek general comment on the Commission’s approach to analyzing broadband providers’ incentives and ability to engage in practices that would limit the open Internet.”

The continued growth of the Internet, and the general health of the U.S. Web, content, app, device, and Internet services markets — all occurring in the absence of Net Neutrality regulation — more than mitigate the Commission’s theory of BSP incentives. While there is scant evidence for the theory of bad BSP behavior, there is abundant evidence that openness generally benefits all players throughout the Internet value chain. The Commission cannot ignore this evidence.

The rise of Twitch is a perfect example. In three short years, Twitch went from brand-new start-up to the fourth largest single source of traffic on the Internet. Google had previously signed a term sheet with Twitch, but so great was the momentum of this young, tiny company that it could command a more attractive deal from Amazon. At the time of its acquisition by Amazon, Twitch said it had 55 million unique monthly viewers (consumers) and more than one million broadcasters (producers), generating 15 billion minutes of content viewed a month. According to measurements by the network scientist and Deepfield CEO Craig Labovitz, only Netflix, Google’s YouTube, and Apple’s iTunes generate more traffic.

(more…)

FCC — the Federal Crony Commission?

Ok, maybe that’s a little harsh. But watch this video of T-Mobile CEO John Legere boasting that he’s “spent a lot of time in Washington lately, with the new chairman of the FCC,” and that “they love T-Mobile.”

Ah, spring. Love is in the air. Great for twenty-somethings, not so great for federal agencies. The FCC, however, is thinking about handing over valuable wireless spectrum to T-Mobile and denying it to T-Mobile’s rivals. This type of industrial policy is partially responsible for the sluggish economy.

From taxpayer subsidies for connected Wall Street banks to favors for green energy firms with the right political allies, cronyism prevents the best firms from serving consumers with the best products in the most efficient way. Cronyism is good (at least temporarily) for a few at the top. But it hurts everyone else. Government favors ensure that bad ideas and business models are supported even if they would have proved wanting in a more neutral market. They transfer scarce taxpayer dollars to friends and family. They also hurt firms that aren’t fortunate enough to have the right friends in the right places at the right time. It’s hard to compete against a rival who has the backing of Washington. The specter of arbitrary government then hangs over the economy as firms and investors make decisions not on the merits but on a form of kremlinology — what will Washington do? In the case at hand, cronyism could blow up the whole spectrum auction, an act of wild irresponsibility in the service of a narrow special interest (we’ve written about it here, here, and here).

The U.S. has never been perfectly free of such cronyism, but our system was better than most and over the centuries attracted the world’s financial and human capital because investors and entrepreneurs knew that in the U.S. the best ideas and the hardest work tend to win out. Effort, smarts, and risk capital won’t be snuffed out by some arbitrary bureaucratic decision or favor. That was the stuff of Banana Republics — the reason financial and human capital fled those spots for America, preferring the Rule of Law to the Whim of Man.

The FCC’s prospective auction rules are perplexing in part because the U.S. mobile industry is healthy — world-leading healthy. More usage, faster speeds, plummeting prices, etc. Why risk interrupting that string of success? Economist Hal Singer shows that in the FCC’s voluminous reports on the wireless industry, it has failed to present any evidence of monopoly power that would justify its rigging of the spectrum auctions. On the other hand, an overly complex auction could derail spectrum policy for a decade.

GDP, Unemployment, and the ‘Quaternary Society’

On February 28, the Bureau of Economic Analysis revised fourth quarter U.S. GDP growth downward to just 2.4% from an initial estimate of 3.2%. For 2013, the economy expanded just 1.9%, nearly a point lower than the lackluster 2.8% growth of 2012. Five years after the sharp downturn of 2008-09, we are still just limping along.

Granted, the stock market keeps making all-time highs. That is not insignificant, and in the past rising stocks often signaled growth ahead. Another important consideration weighing against depressingly slow growth is a critique of our economic measures themselves. Does gross domestic product (GDP), for example, accurately capture output, let alone value, technical progress, and overall wellbeing? A new book, GDP: A Brief But Affectionate History, by Diane Coyle, examines some of the shortcomings of GDP-the-measure. And lots of smart commentary has been written on the ways that technologies that improve standards of living often don’t show up on official ledgers — from anesthesia to the massive consumer surpluses afforded by information technology. In addition, although income inequality is said by many to have grown, consumption inequality has, by many measures, substantially fallen. All true and interesting and important, and worthy of much further discussion at a later date.

For now, however, we still must pay the butcher, the baker, and the aircraft carrier maker — with real dollars. And the dollar economy is not growing nearly fast enough. We’ve sliced and diced the poor employment data a thousand ways these last few years, but one of the most striking recent figures is the fall in the portion of American men 25-54 who are working. Looking at this cohort tends to minimize the possible retirement and schooling factors that could skew the analysis. We simply presume that most able-bodied men in this range should be working. And the numbers are bad. As Binyamin Appelbaum of the New York Times Economix blog writes:

In February 2008, 87.4 percent of men in that demographic had jobs. Six years later, only 83.2 percent of men in that bracket are working.

Are these working-age men not working because they are staying home with children? Because they don’t have the right skills for today’s economy? Because the economy is not growing fast enough and creating enough opportunities? Because they are discouraged? Because policies have actively discouraged work in favor of leisure, or at least non-work?

The polymathic thinker Herman Kahn, writing in the 1970s in The Next 200 Years, suggested another possibility. Kahn first recounted the standard phases of economic history: a primary economy focused on extraction — agriculture, mining, forestry; a secondary economy focused on construction and manufacturing; and a tertiary economy, primarily composed of services, management, and knowledge work. But Kahn went further, pointing toward a “quaternary society,” where work would be beside the point and various types of personal fulfillment would rise in importance. Where the primary society conducted games against nature, the secondary society conducted games against materials, and the tertiary society pitted organizations against other organizations, people in the quaternary society would play “games with and against themselves, . . . each other, and . . . communities.” He said much of this activity would range from obsessions with gourmet cooking and interior design to hunting, hiking, and fishing, to exercise, adventures, and public campaigns and causes. He said quaternary activities would look a lot like leisure, or hobbies. He predicted many of us in the future would see this as “stagnation.”

If any of you have checked out twitch.tv, you might think Kahn was on to something. Twitch.tv is a website that broadcasts other people playing and commentating on video games in real time. It appears to be an entirely “meta” activity. But twitch.tv is no tiny fringe curiosity. It is the fourth largest consumer of bandwidth on the Internet.

Is twitch.tv responsible for millions of American men dropping out of the labor force? No. But the Kahn hypothesis is, nevertheless, provocative and worth thinking about.

The possibility of a quaternary economy, however, depends in some measure on substantial wealth. And here one could make a case either way. Is it possible the large consumer surpluses of the modern knowledge economy allow us to provide for our basic needs quite easily and, if we are not driven by other ambitions or internal drives, live somewhat comfortably without sustained effort in a conventional job? Perhaps some of this is going on. Is America really so wealthy, however, that large portions of society — not merely the super wealthy — can drop out of work and pursue hobbies full time? Unlikely. There is evidence that many Baby Boomers near retirement, or even those who had retired, are working more than they’d planned to make up for lost savings. Kahn’s quaternary economy will have to wait.

I say we won’t know the answers to many of these questions until we remove the shackles around the economy’s neck and see what happens. If we start fresh with a simple tax code, substantially deregulate health, education, energy, and communications, and remove other barriers to work, investment, and entrepreneurship, will just 83% of working-age men continue choosing to work? And will GDP, as imperfect a measure as it is, limp along around 2%? (Charles Murray, presenting a new paper at a recent Hudson Institute roundtable on the future of American innovation, hit us with some seriously pessimistic cultural indicators. More on that next time.)

I doubt it. I don’t think humankind has permanently sloughed off its internal ambition toward improvement, growth, and (indirectly) GDP generation. I think new policy and new optimism could unleash an enormous boom.

Phone Company Screws Everyone: Forces Rural Simpletons and Elderly Into Broadband, Locks Young Suburbanites in Copper Cage

Big companies must often think, damned if we do, damned if we don’t.

Netflix-Comcast Coverage

See our coverage of Comcast-Netflix, which really began before any deal was announced. Two weeks ago we wrote about the stories that Netflix traffic had slowed, and we suggested a more plausible explanation (interconnection disputes and negotiations) than the initial suspicion (so-called “throttling”). Soon after, we released a short paper, long in the works, describing “How the Net Works” — a brief history of interconnection and peering. And this week we wrote about it all at TechPolicyDaily, Forbes, and USNews.

Netflix, Verizon, and the Interconnection Question – TechPolicyDaily.com – February 13, 2014

How the Net Works: A Brief History of Internet Interconnection – Entropy Economics – February 21, 2014

Comcast, Netflix Prove Internet Is Working – TechPolicyDaily.com – February 24, 2014

Netflix, Comcast Hook Up Sparks Web Drama – Forbes.com – February 26, 2014

Comcast, Netflix and the Future of the Internet – U.S. News & World Report – February 27, 2014

— Bret Swanson

How much would an iPhone have cost in 1991?

Amazing! An iPhone is more capable than 13 distinct electronics gadgets, worth more than $3,000, from a 1991 Radio Shack ad. Buffalo writer Steve Cichon first dug up the old ad and made the point about the seemingly miraculous pace of digital advance, noting that an iPhone incorporates the features of the computer, CD player, phone, “phone answerer,” and video camera, among other items in the ad, all at a lower price. The Washington Post‘s tech blog The Switch picked up the analysis, and lots of people then ran with it on Twitter. Yet the comparison was, unintentionally, a huge dis to the digital economy. It massively underestimates the true pace of technological advance and, despite its humor and good intentions, actually exposes a shortcoming that plagues much economic and policy analysis.

To see why, let’s do a very rough, back-of-the-envelope estimate of what an iPhone would have cost in 1991.

In 1991, a gigabyte of hard disk storage cost around $10,000, perhaps a touch less. (Today, it costs around four cents ($0.04).) Back in 1991, a gigabyte of flash memory, which is what the iPhone uses, would have cost something like $45,000, or more. (Today, it’s around 55 cents ($0.55).)

The mid-level iPhone 5S has 32 GB of flash memory. Thirty-two GB, multiplied by $45,000, equals $1.44 million.

The iPhone 5S uses Apple’s latest A7 processor, a powerful CPU, with an integrated GPU (graphics processing unit), that totals around 1 billion transistors, and runs at a clock speed of 1.3 GHz, producing something like 20,500 MIPS (millions of instructions per second). In 1991, one of Intel’s top microprocessors, the 80486SX, oft used in Dell desktop computers, had 1.185 million transistors and ran at 20 MHz, yielding around 16.5 MIPS. (The Tandy computer in the Radio Shack ad used a processor not nearly as powerful.) A PC using the 80486SX processor at the time might have cost $3,000. The Apple A7, by the very rough measure of MIPS, which probably underestimates the true improvement, outpaces that leading edge desktop PC processor by a factor of 1,242. In 1991, the price per MIPS was something like $30.

So 20,500 MIPS in 1991 would have cost around $620,000.

But there’s more. The 5S also contains the high-resolution display, the touchscreen, Apple’s own M7 motion processing chip, Qualcomm’s LTE broadband modem and its multimode, multiband broadband transceiver, a Broadcom Wi-Fi processor, the Sony 8 megapixel iSight (video) camera, the fingerprint sensor, power amplifiers, and a host of other chips and motion-sensing MEMS devices, like the gyroscope and accelerometer.

In 1991, a mobile phone used the AMPS analog wireless network to deliver kilobit voice connections. A 1.544 megabit T1 line from the telephone company cost around $1,000 per month. Today’s LTE mobile network is delivering speeds in the 15 Mbps range. Wi-Fi delivers speeds up to 100 Mbps (limited, of course, by its wired connection). Safe to say, the iPhone’s communication capacity is at least 10,000 times that of a 1991 mobile phone. Almost the entire cost of a phone back then was dedicated to merely communicating. Say the 1991 cost of mobile communication (only at the device/component level, not considering the network infrastructure or monthly service) was something like $100 per kilobit per second.

Fifteen thousand Kbps (15 Mbps), multiplied by $100, is $1.5 million.

Considering only memory, processing, and broadband communications power, duplicating the iPhone back in 1991 would have (very roughly) cost: $1.44 million + $620,000 + $1.5 million = $3.56 million.
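For readers who want to check the arithmetic, here is a rough Python sketch of the back-of-the-envelope estimate, using only the 1991 unit prices cited above (all of them rough, order-of-magnitude figures):

# Back-of-the-envelope 1991 cost of an iPhone 5S, using the rough unit prices cited above.
FLASH_COST_PER_GB_1991 = 45_000   # dollars per GB of flash memory, circa 1991
COST_PER_MIPS_1991 = 30           # dollars per MIPS of processing power, circa 1991
COMM_COST_PER_KBPS_1991 = 100     # dollars per kbps of mobile communication capacity, circa 1991
IPHONE_FLASH_GB = 32              # mid-level iPhone 5S storage
IPHONE_MIPS = 20_500              # very rough estimate of A7 performance
IPHONE_KBPS = 15_000              # roughly 15 Mbps of LTE, expressed in kbps
storage = IPHONE_FLASH_GB * FLASH_COST_PER_GB_1991   # $1,440,000
compute = IPHONE_MIPS * COST_PER_MIPS_1991           # $615,000 (the text rounds this to ~$620,000)
comms = IPHONE_KBPS * COMM_COST_PER_KBPS_1991        # $1,500,000
total = storage + compute + comms
print(f"Storage: ${storage:,}   Compute: ${compute:,}   Communications: ${comms:,}")
print(f"Total (memory + processing + broadband only): ${total:,}")  # about $3.56 million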

This doesn’t even account for the MEMS motion detectors, the camera, the iOS operating system, the brilliant display, or the endless worlds of the Internet and apps to which the iPhone connects us.

This account also ignores the crucial fact that no matter how much money one spent, it would have been impossible in 1991 to pack that much technological power into a form factor the size of the iPhone, or even a refrigerator.*

Tim Lee at The Switch noted the imprecision of the original analysis and correctly asked how typical analyses of inflation can hope to account for such radical price drops. (Harvard economist Larry Summers recently picked up on this point as well.)

But the fact that so many were so impressed by an assertion that an iPhone possesses the capabilities of $3,000 worth of 1991 electronics products — when the actual figure exceeds $3 million — reveals how fundamentally difficult it is to think in exponential terms.

Innovation blindness, I’ve long argued, is a key obstacle to sound economic and policy thinking. And this is a perfect example. When we make policy based on today’s technology, we don’t just operate mildly sub-optimally. No, we often close off entire pathways to amazing innovation.

Consider the way education policy has mostly enshrined a 150-year-old model, and in recent decades has thrown more money at the same broken system while blocking experimentation. The other day, the venture capitalist Marc Andreessen (@pmarca) noted in a Twitter missive the huge, but largely unforeseen, impact digital technologies are having on this industry that so desperately needs improvement:

“Four biggest K-12 education breakthroughs in last 20 years: (1) Google, (2) Wikipedia, (3) Khan Academy, (4) Wolfram Alpha.”

Maybe the biggest breakthroughs of the last 50 years. Point made, nonetheless. California is now closing down “coding bootcamps” — courses that teach people how to build apps and other software — because many of them are not state certified. This is crazy.

The importance of understanding the power of innovation applies to health care, energy, education, and fiscal policy, but nowhere is it more applicable than in Internet and technology policy, which is, at the moment, the subject of a much-needed rethink by the House Energy and Commerce Committee.

— Bret Swanson

* To be fair, we do not account for the fact that back in 1991, had engineers tried to design and build chips and components with faster speeds and greater capacities than the consumer items mentioned, they could have in some cases scaled the technology in a more efficient manner than, for example, simply adding up consumer microprocessors totaling 20,500 MIPS. On the other hand, the extreme volumes of the consumer products in these memory, processing, and broadband communications categories are what make the price drops possible. So this acknowledgment doesn’t change the analysis too much, if at all.

Reaction to “net neutrality” ruling

My AEI tech policy colleagues and I discussed today’s net neutrality ruling, which upheld the FCC’s basic ability to oversee broadband but vacated the two major, specific regulations.

Federal Court strikes down FCC “net neutrality” order

Today, the U.S. Court of Appeals for the D.C. Circuit struck down the FCC’s “net neutrality” regulations, holding that the agency cannot regulate the Internet as a “common carrier” (that is, the way we used to regulate telephones). Here, from a pre-briefing I and several AEI colleagues did for reporters yesterday, is a summary of my statement:

Chairman Wheeler has emphasized the importance of an Open Internet. We agree. The Internet is more open than ever — we’ve got more people, connected via more channels and more devices, to more content and more services than ever. And we will continue to enjoy an Open Internet because it benefits all involved — consumers, BSPs, content companies, software and device makers.

Chairman Wheeler has also emphasized recently that he believes innovation in multi-sided markets is important. At his Ohio State speech, he said we should allow experimentation, and when pressed on this apparent endorsement of multi-sided market innovation, he did not back down.

AT&T’s “sponsored data” program is a good example of such multi-sided market innovation, but one that many Net Neutrality supporters say violates net neutrality. Sponsored data, in which a content firm might pay for a portion of the data used by a consumer, increases total capacity, expands consumer choice, and would help keep prices lower than they would otherwise be. It also offers content firms a way to reach consumers. And it helps pay for the cost of expensive broadband infrastructure. It is win-win-win.

Firms have already used this method — Amazon, for example, pays for the data downloads of Kindle ebooks.

Across the landscape, allowing technical and business model innovation is important to keep delivering diverse products to consumers at the best prices. Prohibiting “sponsored data” or tiered data plans or content partnerships or quality-of-service-based networking will reduce the flexibility of networks, reduce product differentiation, and reduce consumer choice. A rule that requires only one product or only one price level for a range of products could artificially inflate the price that many consumers pay. Low-level users may end up paying for high-end users. Entire classes of products might not come into being because a rule bans a crucial partnership that would have helped the product at its inception. Network architectures that can deliver better performance at lower prices might not arise.

Common carriage style regulation is not appropriate for the Internet. The Internet is a fast changing, multipurpose network, built and operated by numerous firms, with many types of data, content, products, and services flowing over it, all competing and cooperating in a healthy and dynamic environment. Old telephone style regulation, meant to regulate a monopoly utility that used a single purpose network to deliver one type of service, would be a huge (and possibly catastrophic) step backward for what is today a vibrant Internet economy.

The Court, though not ruling on the wisdom of Net Neutrality, essentially agreed and vacated the old-style common carriage rules. It’s a near-term win for the Internet. The court’s grant to the FCC of regulatory authority over the Internet, save common carriage, is, however, potentially problematic. We don’t know how broad this grant is or what the FCC might do with it. A fundamental rethink of our communications laws and regulations may thus be in order.

Why the fuss over “sponsored data”?

Today, at the Consumer Electronics Show in Las Vegas, AT&T said it would begin letting content firms — Google, ESPN, Netflix, Amazon, a new app, etc. — pay for a portion of the mobile data used by consumers of this content. If a mobile user has a 2 GB plan but likes to watch lots of Yahoo! news video clips, which consume a lot of data, Yahoo! can now subsidize that user by paying for that data usage, which won’t count against the user’s data limit.
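To make the mechanics concrete, here is a minimal sketch of how this kind of zero-rated billing works; the usage figures are hypothetical, and only the 2 GB plan size comes from the example above:

# Illustrative "sponsored data" accounting: sponsored traffic is billed to the sponsor
# and does not count against the subscriber's monthly cap. Usage figures are hypothetical.
PLAN_CAP_GB = 2.0  # the subscriber's monthly plan, as in the example above
usage_gb = {
    "sponsored_video": 1.5,   # hypothetical: video whose data the content firm pays for
    "web_browsing": 0.8,
    "email_and_apps": 0.3,
}
sponsored_sources = {"sponsored_video"}
counts_against_cap = sum(gb for src, gb in usage_gb.items() if src not in sponsored_sources)
billed_to_sponsor = sum(gb for src, gb in usage_gb.items() if src in sponsored_sources)
print(f"Counts against the user's {PLAN_CAP_GB:.0f} GB cap: {counts_against_cap:.1f} GB")
print(f"Paid for by the sponsor instead: {billed_to_sponsor:.1f} GB")
print("User exceeds the cap" if counts_against_cap > PLAN_CAP_GB else "User stays under the cap")
# Without sponsorship, the same 2.6 GB of total usage would have blown past the 2 GB cap.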

Lots of people were surprised — or “surprised” — at the announcement and reacted violently. They charged AT&T with “double dipping,” imposing “taxes,” and of course the all-purpose net neutrality violation.

But this new sponsored data program is typical of multisided markets where a platform provider offers value to two or more parties — think magazines that charge both subscribers and advertisers. We addressed this topic before the idea was a reality. Back in June 2013, we argued that sponsored data would make lots of mobile consumers better off and no one worse off.

Two weeks ago, for example, we got word ESPN had been talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

Sounds like a reasonable deal all around. But not to everyone. “This is what a net neutrality violation looks like,” wrote Public Knowledge, a key backer of Internet regulation.

The idea that ESPN would pay to exempt its bits from data caps offends net neutrality’s abstract notion that all bits must be treated equally. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data — plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it might be forced to raise prices for everyone. Then, because ESPN content is popular and bandwidth-hungry, I am worse off, especially if I never watch ESPN.

The critics’ real worry, then, is that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But is this government’s role — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms? This was our warning. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

What if magazines were barred from carrying advertisements? They’d have to make all their money from subscribers and thus (attempt to) charge much higher prices or change their business model. Consumers would lose, either through higher prices or less diversity of product offerings. And advertisers, deprived of an outlet to reach an audience, would lose. That’s what we call a lose-lose-lose proposition.

Maybe sponsored data will take off. Maybe not. It’s clear, however, in the highly dynamic mobile Internet business, we should allow such voluntary experiments.

Crisis of Complexity

[W]e have these big agencies, some of which are outdated, some of which are not designed properly . . . . The White House is just a tiny part of what is a huge, widespread organization with increasingly complex tasks in a complex world.

That was President Obama, last week, explaining Obamacare’s failed launch. We couldn’t have said it better ourselves.

Where Washington thinks this is a reason to give itself more to do, with more resources, however, we see it as a blaring signal of overreach.

The Administration now says Healthcare.gov is operating with “private sector velocity and effectiveness.” But why seek to further governmentalize one-seventh of the economy if the private sector is faster and more effective than government?

Meanwhile, the New York Times notes that

The technology troubles that plagued the HealthCare.gov website rollout may not have come as a shock to people who work for certain agencies of the government — especially those who still use floppy disks, the cutting-edge technology of the 1980s.

Every day, The Federal Register, the daily journal of the United States government, publishes on its website and in a thick booklet around 100 executive orders, proclamations, proposed rule changes and other government notices that federal agencies are mandated to submit for public inspection.

So far, so good.

It turns out, however, that the Federal Register employees who take in the information for publication from across the government still receive some of it on the 3.5-inch plastic storage squares that have become all but obsolete in the United States.

Floppy disks make us chuckle. But the costs of complexity are all too real.

A Bloomberg study found the six largest U.S. banks, between 2008 and August of this year, spent $103 billion on lawyers and related legal expenses. These costs pale compared to the far larger economic distortions imposed by metastasizing financial regulation. Even Barney Frank is questioning whether his signature law, Dodd-Frank, is a good idea. The bureaucracy’s decision to push regulations intended for big banks onto money managers and mutual funds seems to have tipped his thinking.

This is not an aberration. This is what happens with vast, complex, ambiguous laws, which ask “huge, widespread” bureaucracies to implement them.

It is the norm of today’s sprawling Administrative State and of Congress’s penchant for 2,000-page wish lists, which ineluctably empower that Administrative State.

We resist, however, the idea that the problem is merely “outdated” or “inefficient” bureaucracy.

We do not need better people to administer these “laws.” With laws and regulations this extensive and ambiguous, they are inherently political. The best managers would seek efficient and effective outcomes based on common-sense readings and would resist political tampering. Effective implementation of conflicting and economically irrational rules would still yield big problems. Regardless, the goal is not effective management — it is political control.

Agency “reform” is not the answer, although in most cases reform is preferable to no reform. Even reformed agencies do not possess the information to manage a “complex world.” Anyway, “competent” management is not what the political branches want. Agencies routinely evade existing controls — such as procurement rules — when convenient. The largest Healthcare.gov contractor, for example, reportedly got the work without any competing bids. That is not an oversight; it is a decision.

The laws and rules are uninterpretable by the courts. Depending on which judges hear the cases, we get dramatically and unpredictably divergent analyses, or the type of baby splitting Chief Justice Roberts gave us on Obamacare. Judges thus end up either making their own law or throwing the question back into the political arena.

Infinite complexity of law means there is no law.

“With great power,” Peter Parker’s (aka Spiderman’s) uncle told us, “comes great responsibility.” For Washington, however, ambiguity and complexity are features, not bugs. Ambiguity and complexity promote control without accountability, power without responsibility.

The only solution to this crisis of complexity is to reform the very laws, rules, scope, and aims of government itself.

In a paper last spring called “Keep It Simple,” we highlighted two instances — one from the labor markets and one from the capital markets — where even the most well-intended rules yielded catastrophic results. We showed how the interactions among these rules and the supporting bureaucracies produced unintended consequences. And we outlined a basic framework for assessing “good rules and bad rules.”

As our motto and objective, we adopted Richard Epstein’s aspiration of “simple rules for a complex world.” Which, you will notice, is just the opposite of the problem so incisively outlined by the President — Washington’s failed attempts to perform “complex tasks in a complex world.”

As we wrote elsewhere,

The private sector is good at mastering complexity and turning it into apparent simplicity — it’s the essence of wealth creation. At its best, the government is a neutral arbiter of basic rules. The Administration says it is ‘discovering’ how these ‘complicated’ things can blow up. We’ll see if government is capable of learning.
