Archive for the ‘Internet’ Category

Interconnection: Arguing for Inefficiency

Monday, October 6th, 2014

Last week Level 3 posted some new data from interconnection points with three large broadband service providers. The first column of the chart, with data from last spring, shows lots of congestion between Level 3 and the three BSPs. You might recall the battles of last winter and early spring when Netflix streaming slowed down and it accused Comcast and other BSPs of purposely “throttling” its video traffic. (We wrote about the incident here, here, here, and here.)

The second column of the Level 3 chart, with data from September, shows that traffic with two of the three BSPs is much less congested today. Level 3 says, reasonably, that the cause of the change is Netflix’s on-net transit (or paid peering) agreements with Comcast and (presumably) Verizon, in which Netflix and the broadband firms established direct connections with one another. As Level 3 writes, “You might say that it’s good news overall.” And it is: these on-net transit agreements, which have been around for at least 15 years, and which are used by Google, Amazon, Microsoft, all the content delivery networks (CDNs), and many others, make the Net work better and more efficiently, cutting costs for content providers and delivering better, faster, more robust services to consumers.

But Level 3 says that, despite this apparent improvement, the data really shows the broadband providers demanding “tolls,” and that this is bad for the Internet overall. It thinks Netflix and the broadband providers should be forced to employ an indirect A→B→C architecture even when a direct A→C architecture is more efficient.
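
To make the efficiency point concrete, here is a toy sketch in Python comparing a direct A→C path with an indirect A→B→C path that must cross an extra, possibly congested, interconnection point. The latencies and penalties are purely illustrative values of our own invention, not measurements from Level 3, Netflix, or any BSP.

```python
# Toy model only: all numbers are made-up illustrative values.

def path_latency_ms(base_ms, interconnects, congested_handoffs=0, congestion_penalty_ms=30):
    """Base transit time, plus a small fixed cost per network handoff,
    plus a large penalty for each handoff whose ports are congested."""
    return base_ms + 2 * interconnects + congestion_penalty_ms * congested_handoffs

# A -> C: the content provider peers directly with the broadband provider,
# and the single interconnection is provisioned with adequate capacity.
direct = path_latency_ms(base_ms=10, interconnects=1)

# A -> B -> C: the content provider hands traffic to a transit backbone,
# which hands it to the broadband provider across a congested port.
indirect = path_latency_ms(base_ms=10, interconnects=2, congested_handoffs=1)

print(f"direct A->C      : {direct} ms")    # 12 ms in this toy example
print(f"indirect A->B->C : {indirect} ms")  # 44 ms in this toy example
```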

The Level 3 charts make another probably unintended point. Recall that Netflix, starting around two years ago, began building its own CDN called OpenConnect. Its intention was always to connect directly to the broadband providers (A–>C) and to bypass Level 3 and other backbone providers (B). This is exactly what happened. Netflix connected to Comcast, Verizon, and others (although for a small fee, rather than for free, as it had hoped). And it looks like the broadband providers were smart not to build out massive new interconnection capacity with Level 3 to satisfy a peering agreement that was out of balance, and which, as soon as Netflix left, regained balance. It would have been a huge waste (what they used to call stranded investment).

Twitch Proves the Net Is Working

Wednesday, October 1st, 2014

Below find our Reply Comments in the Federal Communications Commission’s Open Internet proceeding:

September 15, 2014

Twitch Proves the Net Is Working

On August 25, 2014, Amazon announced its acquisition of Twitch for around $1 billion. Twitch (twitch.tv) is a young but very large website that streams video games and the gamers who play them. The rise of Twitch demonstrates the Net is working and, we believe, also deals a severe blow to a central theory of the Order and NPRM.

The NPRM repeats the theory of the 2010 Open Internet Order that “providers of broadband Internet access service had multiple incentives to limit Internet openness.” The theory advances a concern that small start-up content providers might be discouraged or blocked from opportunities to grow. Neither the Order nor the current NPRM considers or even acknowledges evidence or arguments to the contrary — that broadband service providers (BSPs) may have substantial incentives to promote Internet openness. Nevertheless, the Commission now helpfully seeks comment “to update the record to reflect marketplace, technical, and other changes since the 2010 Open Internet Order was adopted that may have either exacerbated or mitigated broadband providers’ incentives and ability to limit Internet openness. We seek general comment on the Commission’s approach to analyzing broadband providers’ incentives and ability to engage in practices that would limit the open Internet.”

The continued growth of the Internet, and the general health of the U.S. Web, content, app, device, and Internet services markets — all occurring in the absence of Net Neutrality regulation — more than mitigate the Commission’s theory of BSP incentives. While there is scant evidence for the theory of bad BSP behavior, there is abundant evidence that openness generally benefits all players throughout the Internet value chain. The Commission cannot ignore this evidence.

The rise of Twitch is a perfect example. In three short years, Twitch went from brand-new start-up to the fourth largest single source of traffic on the Internet. Google had previously signed a term sheet with Twitch, but so great was the momentum of this young, tiny company that it could command a more attractive deal from Amazon. At the time of its acquisition by Amazon, Twitch said it had 55 million unique monthly viewers (consumers) and more than one million broadcasters (producers), generating 15 billion minutes of content viewed a month. According to measurements by the network scientist and Deepfield CEO Craig Labovitz, only Netflix, Google’s YouTube, and Apple’s iTunes generate more traffic.

The Commission’s theory said providers of video content, because of their large bandwidth requirements compared to other content types, were especially vulnerable to bad BSP behavior. Twitch is just such an online video player, yet it achieved hyper-growth and spectacular financial success in the absence of Net Neutrality rules. A firm that didn’t exist at the time of the 2010 Order is born and blossoms to become an Internet giant, courted by at least two of the world’s very largest Internet companies — all in the short time that courts, commissions, and companies have been haggling over the rules. This is just one of many pieces of evidence demonstrating that start-up firms — specifically start-ups that consume massive amounts of bandwidth — are thriving on the Internet.

Another piece of recent evidence bolsters the case that BSPs have incentives to promote, and in fact maintain, openness. In the second quarter of 2014, cable broadband subscribers for the first time ever outnumbered cable TV subscribers. Broadband is now not just the cable industry’s best product, it is its biggest product. It is popular because consumers can access the diverse bounty of the Web and the Net, and subscribers are voting with their feet.

The health of the Internet economy is a major blow to the theory. In an attempted rebuttal, the Commission might argue that although enforceable rules were not in place, BSPs were operating in an environment in which new rules were a possibility. This possibility, the Commission might assert, encouraged good behavior. Perhaps. Yet new rules to combat or discourage anticompetitive or anti-consumer behavior are always on the table. And many general laws and rules already exist to protect competition and consumers no matter the industry. Perhaps the theory is far less powerful than the NPRM assumes.

The theory of future bad behavior continues to be just that. The Commission is grasping at “might be’s.” But the reality of a healthy Internet economy demonstrates the success of the open Internet every day. The Commission should more heavily weight the mountains of accumulating evidence that BSPs have major incentives to promote openness. Similarly, as evidence piles up against it, the Commission should discount its previous theory of BSP behavior. We may argue over the relative incentives for BSPs to constrain or promote Internet openness. But no legitimate rule making can ignore the substantial incentives in favor of openness.

Given the manifest success of the entire value chain, the Chairman’s proposed case-by-case review process, under Section 706, is far preferable to the intrusive omni-regulatory regime of Title II.

Wireless Is Different

The Commission has so far wisely chosen not to apply its heaviest Net Neutrality rules to wireless networks. But it has asked for comment on the proposal that it do so.

A new paper by Jeffrey H. Reed and Nishith D. Tripathi shows just how complex today’s mobile networks are — and how they require even more intensive network management than wired networks. It adds to the overwhelming testimony of the technical community that “wireless is different” and that wireless networks, businesses, and devices would be especially harmed by intrusive Net Neutrality rules.

The number of wireless connections is moving quickly past 10 billion. In several years, the Internet of Everything could grow to 30, 50, or even 100 billion devices, nearly all connected wirelessly. The sheer numbers will only exacerbate the existing complexity of wireless networking. “From millisecond to millisecond,” write Reed and Tripathi,

handsets with differing capabilities, consumers with different usage patterns, applications that utilize different aspects and capabilities of both the handset and the network, and content consumption, including video, must be integrated with the network and managed adroitly to deliver a world-class broadband experience for the customer. Now imagine that millisecond to millisecond process happening while the consumer is in motion, while the handsets vary in capability (think flip-phone to smartphone), while the available network changes from 3G to 4G and from one available spectrum band to another, while traffic moves into and out of a cell sector, and while spectrum capacity is limited. This entire process — the integration of all these different variables — is unique to mobile broadband.

Now imagine adding dozens of new types of devices to the network, generating and consuming many types of data, with varied capacity, latency, and jitter requirements. All interacting on and moving between networks using licensed and unlicensed spectrum. All posing increasingly intense challenges of radio interference and data congestion.

Like the example of Twitch, the mobile Internet is a demonstrable success story. It is, however, even more vulnerable to misguided regulation. The burden of proof is on those who would impose regulation to show that new rules would somehow improve wireless from its existing position of strength, and that new rules, contrary to the overwhelming witness of the technical community, would not harm the mobile arena.

Netflix, Mozilla, and Title II

Two of the most prominent and forceful advocates of new Internet regulation are Netflix, the movie and TV-show streaming firm, and Mozilla, maker of the Firefox web browser. Though differing on a few details, each organization has proposed regulating the Internet as a Title II monopoly telephone service.

We admire both organizations for their innovative contributions to the digital universe. Because they are leading the charge for the government to oversee the Internet as never before, however, it is important to understand — and to refute, where warranted — their positions. Here we select and scrutinize just a few of the technical and economic arguments and assertions from their first-round comments.

Mozilla says: the FCC should “recognize a new type of service” — a so-called “remote delivery service,” defined as the connection between an “edge provider” and a broadband ISP’s subscriber. This downstream link would be regulated as a common carrier under Title II.

Mozilla thinks defining a new remote delivery service can both avoid the fraught re-classification of traditional broadband links and also wall off the rest of the Internet from the very real burdens of Title II. It seems to us not just a bad idea substantively, but too clever for its own good. For starters, in the many-to-many world that Mozilla describes, everyone is an edge provider in some sense. This makes it hard to avoid the conclusion that, despite Mozilla’s best intentions, every network link would get swallowed up by Title II. Even Netflix says, correctly, that the “universe of potential edge providers is extremely heterogeneous.”

Mozilla uses an analogy in which a “doorman in a high-end condominium” holds package deliveries for the condo residents. The broadband ISP is the doorman, in Mozilla’s story, and his only job is to forward the packages to the residents. He may not charge the sender of the package to speed the delivery to Mrs. Smith on the 18th floor, nor can he threaten to slow down the package absent payment. But ISPs are not passive doormen or toll booth operators, and their broadband policy statements all commit not to degrade anyone’s service. They invest $60 billion in the U.S. each year to build networks, data centers, software, and services. The analogy isn’t perfect, but an ISP is in reality more like FedEx. It takes a lot of money to build the infrastructure to transport packages, or bits, and customers pay for the service.

One of the motivations behind Mozilla’s “remote delivery service” definition, it says, is to protect everyone else in the ecosystem from the ravages of Title II. Such an admission is a deep self-indictment. It is difficult to see how the proposal is anything more than a tool to regulate one’s business rivals and/or suppliers — a decidedly non-neutral policy.

Mozilla says: a determination that bans prioritization “would not prevent network operators from seeking new revenue models, or enabling services that require higher standards for delivery. It would instead require these services to be separated from the access service and structured as specialized services. So long as such services do not generate congestion or degrade traffic for the access service, they would fall outside the scope of Title II classification proposed in the Mozilla petition.”

The 2010 Open Internet rules addressed this point and made room for specialized or managed services outside the scope of net neutrality. We suppose this is better than not allowing room for special services that might require higher levels of capacity, lower latency, or other premium options. We addressed this carve-out idea in Reply Comments in November of 2010:

“The Commission should consider several unintended consequences of moving down the path of explicitly defining, and then exempting, particular ‘specialized’ services while choosing to regulate the so-called ‘basic,’ ‘best-effort,’ or ‘entry level’ ‘open Internet.’

“Regulating the ‘basic’ Internet but not ‘specialized’ services will surely push most of the network and application innovation and investment into the unregulated sphere. A ‘specialized’ exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS [content, application, and service] providers and ISP service providers to target the ‘specialized’ category and thus shrink the scope of the ‘open Internet.’

“In fact, although specialized services should and will exist, they often will interact with or be based on the ‘basic’ Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not ‘the Internet’ is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing.

“Moreover, a regime of rigid quarantine would not be good for consumers. If a CAS provider or ISP has to build a new physical or logical network, segregate services and software, or develop new products and marketing for a specifically defined ‘specialized’ service, there would be a very large disincentive to develop and offer simple innovations and new services to customers over the regulated ‘basic’ Internet. Perhaps a consumer does not want to spend the extra money to jump to the next tier of specialized service. Perhaps she only wants the service for a specific event or a brief period of time. Perhaps the CAS provider or ISP can far more economically offer a compelling service over the ‘basic’ Internet with just a small technical tweak, where a leap to a full-blown specialized service would require more time and money, and push the service beyond the reach of the consumer. The transactions costs of imposing a ‘specialized’ quarantine would reduce technical and economic flexibility on both CAS providers and ISPs and, most crucially, on consumers.

“Or, as we wrote in our previous Reply Comments about a related circumstance, ‘A prohibition of the voluntary partnerships that are likely to add so much value to all sides of the market – service provider, content creator, and consumer – would incentivize the service provider to close greater portions of its networks to outside content, acquire more content for internal distribution, create more closely held “managed services” that meet the standards of the government’s “exclusions,” and build a new generation of larger, more exclusive “walled gardens” than would otherwise be the case. The result would be to frustrate the objective of the proceeding. The result would be a less open Internet.’

“It is thus possible that a policy seeking to maintain some pure notion of a basic ‘open Internet’ could severely devalue the open Internet the Commission is seeking to preserve.”

Mozilla says: it urges “the Commission to ban paid prioritization and to apply the same open Internet rules to mobile wireless access services as to fixed services.”

Even technicians who have supported robust net neutrality regulation say applying the rules to wireless would be a mistake. The 2010 Open Internet rules exempted wireless. And for good reason. Wireless is a tricky and constrained environment. Wireless technologies use all sorts of prioritization schemes to ration capacity on what are shared networks. Mozilla says it would allow for reasonable network management techniques. But a host of other technical and commercial arrangements could be put in jeopardy. For example, what about “sponsored data” plans where content firms like ESPN could subsidize a user? In January, AT&T announced a sponsored data template, and in the past month T-Mobile has partnered with several digital music providers. The Mozilla and Netflix proposals could ban such partnerships that provide value to all three parties — consumer, network, and content provider.

Mozilla says: “To contend that edge providers offer nothing of value to access service providers would go against the Commission’s core broadband tenets as well as common sense.”

No one contends this.

Mozilla says: failure to enact its favored policy could produce “an outcry from public interest organizations and technology companies citing promises that were broken.”

This is an odd justification for a push to regulate a healthy industry.

Netflix says: “There can be no doubt that Verizon owns and controls the interconnections that mediate how fast Netflix servers respond to a Verizon Internet access customer’s request.”

This is false. As Netflix correctly notes just paragraphs before, “It is called the Inter-net for a reason. That is, the Inter-net comprises interconnections between many autonomous networks.” An inter-connection between two networks means precisely that the two “autonomous” networks have agreed to terms to connect. By its nature, no single entity “owns and controls the interconnections.” It is a partnership. The journey of an Internet data packet, or stream of many packets, moreover, usually takes place over multiple networks, thus traversing several interconnections. In fact, factors outside the ownership and control of last-mile ISPs are often most crucial to the quality and speed of Netflix streams (see “Netflix and the Net Neutrality Promotional Vehicle”).

Netflix says: “ISPs, not online content providers, set the universe of available pathways into their networks.”

This is only partially true. Yes, ISPs determine with whom they interconnect. But the existence of other successful networks sets the universe of possible pathways, and the economics and culture of the Net mean broadband ISPs want their customers to reach as much content as possible, so ISPs in general want to connect to lots of other networks. Regardless, Netflix has often chosen to use congested pathways into the broadband ISPs, even though a large number of other well known, capacious pathways (CDNs, transit providers) were also available. In most of the cases when Netflix’s service seemed slow, it was these poor network architecture choices that caused deterioration in “how fast Netflix servers respond[ed]” to an “Internet access customer’s request.”

Netflix says: “There is still one and only one way to reach Comcast’s subscribers: through Comcast.”

Netflix similarly has a monopoly in the market of Netflix customers.

Netflix says: “Prioritization has value only in a congested network.” The ability to prioritize “creates a perverse incentive for ISPs to forego network upgrades in order to give prioritization value.” And in a similar vein, “Prioritization is inherently a zero-sum practice.”

First, it must be said that paid priority is getting far too much attention. It’s not really the key question. We may use prioritization techniques for some applications in the future — HD video conferencing, gaming, remote medical procedures — but most broadband ISPs do not today prioritize much, if any, traffic on their last-mile access links. It’s just not the central point of contention so many have made it out to be.

Second, priority is a commonplace concept. It’s true, in a world of unlimited supply, priority doesn’t matter. In the real world, it does. We prioritize in every business setting, and in everyday life. We certainly prioritize on the Internet. Voice over IP packets get tagged. Websites and online video providers use content delivery networks (CDNs) for faster delivery. Financial firms build direct fiber links to speed stock market trades. The examples are endless: FedEx’s next morning delivery versus three-day ground. First class versus coach. Airplane versus automobile. Now versus later. It’s crucial that we’re allowed to pay more — and that we’re allowed to pay less when we don’t want or need immediacy.
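
As one concrete example of the tagging just mentioned, a VoIP application can ask the operating system to mark its packets with the DSCP “Expedited Forwarding” code point. The sketch below uses the standard Linux socket API from Python; the destination address and port are placeholders. Whether any network along the path honors the mark is, of course, a separate policy question.

```python
import socket

# DSCP "Expedited Forwarding" (46) is the code point conventionally used for
# voice traffic; the kernel expects it shifted into the upper six bits of the
# legacy TOS byte.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark outgoing packets on this socket as expedited
# (works on Linux; other platforms expose different knobs).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Placeholder destination: a documentation address and the default RTP port.
# The tag is a request for priority, not a guarantee; each network decides
# for itself whether to honor it.
sock.sendto(b"\x00" * 160, ("203.0.113.10", 5004))  # roughly one 20 ms G.711 frame
```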

Third, the argument is a bit circular. And it’s not supported by good economics. The theory is that ISPs will offer an increasingly dilapidated product to consumers so that they can charge content providers for fast lanes. But consumers do have other choices, and dilapidated products aren’t popular. We have multiple wireline choices, and multiple wireless choices that are increasingly robust substitutes. Are broadband service providers really eager to anger their huge customer base in order to make a few extra bucks from a relatively small number of content providers? The math doesn’t look good.

The FCC NPRM, however, asserted, without empirical or theoretical foundation, that ISPs have an incentive to underinvest, congest the network, and degrade service. The FCC did not contemplate, let alone give ample weight to, counter arguments and facts showing incentives working in just the opposite, and much happier, direction.

If we make broadband a highly regulated industry, however, we can expect less market entry, less competition, less investment, less new capacity. (See the experience of Europe today.) A world of artificial scarcity will prompt more stingy prioritization schemes (rationing) than a world of investment and innovation, though some forms of priority will exist in any world this side of heaven.

Priority, price discrimination, product differentiation — these things actually allow us to match consumers with their needs and to create an economically rational system that can support growth.

Contrary to blanket assertions, there are many small start-ups who might value various forms of paid priority, sponsored data, or premium services. Perhaps these tools will help them launch into markets faster than they otherwise would. They may not have the large in-house data centers and CDN networks of a Google or Netflix, so perhaps they utilize third party CDN services or establish partnerships or buy super-fast connections.

Lastly, priority is not zero-sum. To the extent consumers and businesses are allowed to pay for priority (and save money when we don’t need it), the value of the entire system increases and allows further investment. Don’t force grandma who checks her email once a day to subsidize the affluent round-the-clock video gamer.

Digital Dynamism

Wednesday, November 13th, 2013

See our new 20-page report – Digital Dynamism: Competition in the Internet Ecosystem:

The Internet is altering the communications landscape even faster than most imagined.

Data, apps, and content are delivered by a growing and diverse set of firms and platforms, interconnected in ever more complex ways. The new network, content, and service providers increasingly build their varied businesses on a common foundation — the universal Internet Protocol (IP). We thus witness an interesting phenomenon — the divergence of providers, platforms, services, content, and apps, and the convergence on IP.

The dynamism of the Internet ecosystem is its chief virtue. Infrastructure, services, and content are produced by an ever wider array of firms and platforms in overlapping and constantly shifting markets.

The simple, integrated telephone network, segregated entertainment networks, and early tiered Internet still exist, but have now been eclipsed by a far larger, more powerful phenomenon. A new, horizontal, hyperconnected ecosystem has emerged. It is characterized by large investments, rapid innovation, and extreme product differentiation.

  • Consumers now enjoy at least five distinct, competing modes of broadband connectivity — cable modem, DSL, fiber optic, wireless broadband, and satellite — from at least five types of firms. Widespread wireless Wi-Fi nodes then extend these broadband connections.
  • Firms like Google, Microsoft, Amazon, Apple, Facebook, and Netflix are now major Internet infrastructure providers in the form of massive data centers, fiber networks, content delivery systems, cloud computing clusters, ecommerce and entertainment hubs, network protocols and software, and, in Google’s case, fiber optic access networks. Some also build network devices and operating systems. Each competes to be the hub — or at least a hub — of the consumer’s digital life. So large are these new players that up to 80 percent of network traffic now bypasses the traditional public Internet backbone.
  • Billions of diverse consumer and enterprise devices plug into these networks, from PCs and laptops to smartphones and tablets, from game consoles and flat panel displays to automobiles, web cams, medical devices, and untold sensors and industrial machines.

The communications playing field is continually shifting. Cable disrupted telecom through broadband cable modem services. Mobile is a massively successful business, yet it is cannibalizing wireline services, with further disruptions from Skype and other IP communications apps. Mobile service providers used to control the handset market, but today handsets are mobile computers that wield their own substantial power with consumers. While the old networks typically delivered a single service — voice, video, or data — today’s broadband networks deliver multiple services, with the “Cloud” offering endless possibilities.

Also view the accompanying graphic, showing the progression of network innovation over time: Hyperconnected: The New Network Map.

U.S. Share of Internet Traffic Grows

Thursday, October 10th, 2013

Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example, the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).
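
For readers who want to reproduce the metric, the calculation is simple division. Here is a minimal sketch in Python with placeholder figures, not the actual Cisco VNI traffic estimates or ITU user counts:

```python
# Placeholder inputs, not the actual Cisco VNI or ITU figures.
regions = {
    # region: (monthly IP traffic in petabytes, population in millions, Internet users in millions)
    "Region A": (20_000, 350, 300),
    "Region B": (15_000, 450, 380),
}

for name, (traffic_pb, pop_m, users_m) in regions.items():
    gb_per_month = traffic_pb * 1_000_000            # petabytes -> gigabytes
    per_capita = gb_per_month / (pop_m * 1_000_000)   # GB per person per month
    per_user = gb_per_month / (users_m * 1_000_000)   # GB per Internet user per month
    print(f"{name}: {per_capita:.1f} GB/person/month, {per_user:.1f} GB/user/month")
```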

We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)

If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and roughly two and a half times that of Western Europe.

Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.

Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.

Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.

Discussing Broadband and Economic Growth at AEI

Sunday, September 22nd, 2013

On Tuesday this week, the American Enterprise Institute launched an exciting new project — the Center for Internet, Communications, and Technology. I was happy to participate in the inaugural event, which included talks by CEA chairman Jason Furman and Rep. Greg Walden (R-OR). We discussed broadband’s potential to boost economic productivity and focused on the importance and key questions of wireless spectrum policy. See the video below:

A Decade Later, Net Neutrality Goes To Court

Monday, September 9th, 2013

Today the D.C. Federal Appeals Court hears Verizon’s challenge to the Federal Communications Commission’s “Open Internet Order” — better known as “net neutrality.”

Hard to believe, but we’ve been arguing over net neutrality for a decade. I just pulled up some testimony George Gilder and I prepared for a Senate Commerce Committee hearing in April 2004. In it, we asserted that a newish “horizontal layers” regulatory proposal, then circulating among comm-law policy wonks, would become the next big tech policy battlefield. Horizontal layers became net neutrality, the Bush FCC adopted the non-binding Four Principles of an open Internet in 2005, the Obama FCC pushed through actual regulations in 2010, and now today’s court challenge, which argues that the FCC has no authority to regulate the Internet and that, in fact, Congress told the FCC not to regulate the Internet.

Over the years we’ve followed the debate, and often weighed in. Here’s a sampling of our articles, reports, reply comments, and even some doggerel:

— Bret Swanson

Net ‘Neutrality’ or Net Dynamism? Easy Choice.

Tuesday, May 14th, 2013

Consumers beware. A big content company wants to help pay for the sports you love to watch.

ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, long time net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”

The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that, had net neutrality existed at the time, the outlawing of pay-for-prioritization would have banned the rise of content delivery networks (CDNs), which have fundamentally improved the user experience for viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that we would continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.

So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.

So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has given the FCC no authority to regulate the Internet. The FCC invented this job for itself and is now being challenged in court.

Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.

— Bret Swanson

U.S. Mobile: Effectively competitive? Probably. Positively healthy? Absolutely.

Tuesday, March 26th, 2013

Each year the Federal Communications Commission is required to report on competition in the mobile phone market. Following Congress’s mandate to determine the level of industry competition, the FCC, for many years, labeled the industry “effectively competitive.” Then, starting a few years ago, the FCC declined to make such a determination. Yes, there had been some consolidation, it was acknowledged, yet the industry was healthier than ever — more subscribers, more devices, more services, lots of innovation. The failure to achieve the “effectively competitive” label was thus a point of contention.

This year’s “CMRS” — commercial mobile radio services — report again fails to make a designation, one way or the other. Yet whatever the report lacks in official labels, it more than makes up in impressive data.

For example, it shows that as of October 2012, 97.2% of Americans have access to three or more mobile providers, and 92.8% have access to four or more. As for mobile broadband data services, 97.8% have access to two or more providers, and 91.6% have access to three or more.

Rural America is also doing well. The FCC finds 87% of rural consumers have access to three or more mobile voice providers, and 69.1% have access to four or more. For mobile broadband, 89.9% have access to two or more providers, while 65.4% enjoy access to three or more.

Call this what you will — to most laypeople, these choices count as robust competition. Yet the FCC has a point when it

refrain[s] from providing any single conclusion because such an assessment would be incomplete and possibly misleading in light of the variations and complexities we observe.

The industry has grown so large, with so many interconnected and dynamic players, it may have outgrown Congress’s request for a specific label.

14. Given the Report’s expansive view of mobile wireless services and its examination of competition across the entire mobile wireless ecosystem, we find that the mobile wireless ecosystem is sufficiently complex and multi-faceted that it would not be meaningful to try to make a single, all-inclusive finding regarding effective competition that adequately encompasses the level of competition in the various interrelated segments, types of services, and vast geographic areas of the mobile wireless industry.

Or as economist George Ford of the Phoenix Center put it,

The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it [is] the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”

I’m in good company. Outgoing FCC Chairman Julius Genachowski lists among his proudest achievements that “the U.S. is now the envy of the world in advanced wireless networks, devices, applications, among other areas.”

The report shows that in the last decade, U.S. mobile connections have nearly tripled. The U.S. now has more mobile connections than people.

The report also shows per user data consumption more than doubling year to year.

More important, the proliferation of smartphones, which are powerful mobile computers, is the foundation for a new American software industry widely known as the App Economy. We detailed the short but amazing history of the app and its impact on the economy in our report “Soft Power: Zero to 60 Billion in Four Years.” Likewise, these devices and software applications are changing industries that need changing. Last week, experts testified before Congress about mobile health, or mHealth, and we wrote about the coming health care productivity revolution in “The App-ification of Medicine.”

One factor that still threatens to limit mobile growth is the availability of spectrum. The report details past spectrum allocations that have borne fruit, but the pipeline of future spectrum allocations is uncertain. A more robust commitment to spectrum availability and a free-flowing spectrum market would ensure continued investment in networks, content, and services.

What Congress once called the mobile “phone” industry is now a sprawling global ecosystem and a central driver of economic advance. By most measures, the industry is effectively competitive. By any measure, it’s positively healthy.

— Bret Swanson

The Broadband Rooster

Tuesday, March 12th, 2013

FCC chairman Julius Genachowski opens a new op-ed with a bang:

As Washington continues to wrangle over raising revenue and cutting spending, let’s not forget a crucial third element for reining in the deficit: economic growth. To sustain long-term economic health, America needs growth engines, areas of the economy that hold real promise of major expansion. Few sectors have more job-creating innovation potential than broadband, particularly mobile broadband.

Private-sector innovation in mobile broadband has been extraordinary. But maintaining the creative momentum in wireless networks, devices and apps will need an equally innovative wireless policy, or jobs and growth will be left on the table.

Economic growth is indeed the crucial missing link to employment, opportunity, and healthier government budgets. Technology is the key driver of long term growth, and even during the downturn the broadband economy has delivered. Michael Mandel estimates the “app economy,” for example, has created more than 500,000 jobs in less than five short years of existence.

We emphatically do need policies that will facilitate the next wave of digital innovation and growth. Chairman Genachowski’s top line assessment — that U.S. broadband is a success — is important. It rebuts the many false but persistent claims that U.S. broadband lags the world. Chairman Genachowski’s diagnosis of how we got here and his prescriptions for the future, however, are off the mark.

For example, he suggests U.S. mobile innovation is newer than it really is.

Over the past few years, after trailing Europe and Asia in mobile infrastructure and innovation, the U.S. has regained global leadership in mobile technology.

This American mobile resurgence did not take place in just the last “few years.” It began a little more than a decade ago with smart decisions to:

(1) allow reasonable industry consolidation and relatively free spectrum allocation, after years of forced “competition,” which mandated network duplication and thus underinvestment in coverage and speed (we did in fact trail Europe in some important mobile metrics in the late 1990s and briefly into the 2000s);

(2) refrain from any but the most basic regulation of broadband in general and the mobile market in particular, encouraging experimental innovation; and

(3) finally implement the digital TV / 700 MHz transition in 2007, which put more of the best spectrum into the market.

These policies, among others, encouraged some $165 billion in mobile capital investment between 2001 and 2008 and launched a wave of mobile innovation. Development on the iPhone began in 2004, the iPhone itself arrived in 2007, and the app store in 2008. Google’s Android mobile OS came along in 2009, the year Mr. Genachowski arrived at the FCC. By this time, the American mobile juggernaut had already been in full flight for years, and the foundation was set — the U.S. topped the world in 3G mobile networks and device and software innovation. Wi-Fi, meanwhile, surged from 2003 onward, creating an organic network of tens of millions of wireless nodes in homes, offices, and public spaces. Mr. Genachowski gets some points for not impeding the market as aggressively as some other more zealous regulators might have. But taking credit for America’s mobile miracle smacks of the rooster proudly puffing his chest at sunrise.

More important than who gets the credit, however, is determining what policies led to the current success . . . and which are likely to spur future growth. Chairman Genachowski is right to herald the incentive auctions that could unleash hundreds of megahertz of un- and under-used spectrum from the old TV broadcasters. Yet wrangling over the rules of the auctions could stretch on, delaying the process. Worse, the rules themselves could restrict who can bid on or buy new spectrum, effectively allowing the FCC to favor certain firms, technologies, or friends at the expense of the best spectrum allocation. We’ve seen before that centrally planned spectrum allocations don’t work. The fact that the FCC is contemplating such an approach is worrisome. It runs counter to the policies that led to today’s mobile success.

The FCC also has a bad habit of changing the metrics and the rules in the middle of the game. For example, the FCC has been caught changing its “spectrum screen” to fit its needs. The screen attempts to show how much spectrum mobile operators hold in particular markets. During M&A reviews, however, the FCC has changed its screen procedures to make the data fit its opinion.

In a more recent example, Fred Campbell shows that the FCC alters its count of total available commercial spectrum to fit the argument it wants to make from day to day. We’ve shown that the U.S. trails other nations in the sum of currently available spectrum plus spectrum in the pipeline. Below, see a chart from last year showing how the U.S. compares favorably in existing commercially available spectrum but trails severely in pipeline spectrum. Translation: the U.S. did a pretty good job unleashing spectrum from the 1990s through the mid-2000s. But, contrary to Chairman Genachowski’s implication, it has stalled in the last few years.

When the FCC wants to argue that particular companies shouldn’t be allowed to acquire more spectrum (whether through merger or secondary markets), it adopts this view that the U.S. trails in spectrum allocation. Yet when challenged on the more general point that the U.S. lags other nations, the FCC turns around and includes an extra 139 MHz of spectrum in the 2.5 GHz range to avoid the charge that it’s fallen behind the curve.

Next, Chairman Genachowski heralds a new spectrum “sharing” policy where private companies would be allowed to access tiny portions of government-owned airwaves. This really is weak tea. The government, depending on how you measure, controls between 60% and 85% of the best spectrum for wireless broadband. It uses very little of it. Yet it refuses to part with meaningful portions, even though it would still be left with more than enough for its important uses — military and otherwise. If they can make it work (I’m skeptical), sharing may offer a marginal benefit. But it does not remotely fit the scale of the challenge.

Along the way, the FCC has been whittling away at mobile’s incentives for investment and its environment of experimentation. Chairman Genachowski, for example, imposed price controls on “data roaming,” even though it’s highly questionable he had the legal authority to do so. The Commission has also, with varied degrees of “success,” been attempting to impose its extralegal net neutrality framework to wireless. And of course the FCC has blocked, altered, and/or discouraged a number of important wireless mergers and secondary spectrum transactions.

Chairman Genachowski’s big picture is a pretty one: broadband innovation is key to economic growth. Look at the brush strokes, however, and there are reasons to believe sloppy and overanxious regulators are threatening to diminish America’s mobile masterpiece.

— Bret Swanson

Broadband Bullfeathers

Friday, December 14th, 2012

Several years ago, some American lawyers and policymakers were looking for ways to boost government control of the Internet. So they launched a campaign to portray U.S. broadband as a pathetic patchwork of tin-cans-and-strings from the 1950s. The implication was that broadband could use a good bit of government “help.”

They initially had some success with a gullible press. The favorite tools were several reports that measured, nation by nation, the number of broadband connections per 100 inhabitants. The U.S. emerged from these reports looking very mediocre. How many times did we read, “The U.S. is 16th in the world in broadband”? Upon inspection, however, the reports weren’t very useful. Among other problems, they were better at measuring household size than broadband health. America, with its larger households, would naturally have fewer residential broadband subscriptions (not broadband users) than nations with smaller households (and thus more households per capita). And as the Phoenix Center demonstrated, rather hilariously, if the U.S. and other nations achieved 100% residential broadband penetration, America would actually fall to 20th from 15th.
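
The household-size effect is easy to see with a little arithmetic. In the hypothetical sketch below (the household sizes are illustrative, not drawn from any actual report), every household subscribes, yet the country with larger households still reports fewer connections per 100 inhabitants:

```python
# Illustrative household sizes only; not the figures behind the
# penetration-rate reports discussed above.
avg_household_size = {
    "Country with larger households": 2.6,
    "Country with smaller households": 2.1,
}

for country, persons_per_household in avg_household_size.items():
    # Assume 100% residential penetration: one subscription per household.
    subscriptions_per_100 = 100 / persons_per_household
    print(f"{country}: {subscriptions_per_100:.1f} subscriptions per 100 inhabitants")
```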

In the fall of 2009, a voluminous report from Harvard’s Berkman Center tried to stitch the supposedly ominous global evidence into a case-closed indictment of U.S. broadband. The Berkman report, however, was a complete bust (see, for example, these thorough critiques: 1, 2, and 3 as well as my brief summary analysis).

Berkman’s statistical analyses had failed on their own terms. Yet it was still important to think about the broadband economy in a larger context. We asked the question, how could U.S. broadband be so backward if so much of the world’s innovation in broadband content, services, and devices was happening here?

To name just a few: cloud computing, YouTube, Twitter, Facebook, Netflix, iPhone, Android, ebooks, app stores, iPad. We also showed that the U.S. generates around 60% more network traffic per capita and per Internet user than Western Europe, the supposed world broadband leader. The examples multiply by the day. As FCC chairman Julius Genachowski likes to remind us, the U.S. now has more 4G LTE wireless subscribers than the rest of the world combined.

Yet here comes a new book with the same general thrust — that the structure of the U.S. communications market is delivering poor information services to American consumers. In several new commentaries summarizing the forthcoming book’s arguments, author Susan Crawford’s key assertion is that U.S. broadband is slow. It’s so bad, she thinks broadband should be a government utility. But is U.S. broadband slow?

According to actual network throughput measured by Akamai, the world’s largest content delivery network, the U.S. ranks in the top ten or 15 across a range of bandwidth metrics. It is ninth in average connection speed, for instance, and 13th in average peak speed. Looking at proportions of populations who enjoy speeds above a certain threshold, Akamai finds the U.S. is seventh in the percentage of connections exceeding 10 megabits per second (Mbps) and 13th in the percentage exceeding 4 Mbps. (See the State of the Internet report, 2Q 2012.)

You may not be impressed with rankings of seventh or 13th. But did you look at the top nations on the list? Hong Kong, South Korea, Latvia, Switzerland, the Netherlands, Japan, etc.

Each one of them is a relatively small, densely populated country. The national rankings are largely artifacts of geography and the size of the jurisdictions observed. Small nations with high population densities fare well. It is far more economical to build high-speed communications links in cities and other relatively dense populations. Accounting for this size factor, the U.S. actually looks amazingly good. Only Canada comes close to the U.S. among geographically larger nations.

But let’s look even further into the data. Akamai also supplies speeds for individual U.S. states. If we merge the tables of nations and states, the U.S. begins to look not like a broadband backwater or even a middling performer but an overwhelming success. Here are the two sets of Akamai data combined into tables that directly compare the successful small nations with their more natural counterparts, the U.S. states (shaded in blue).

Average Broadband Connection Speed — Nine of the top 15 entities are U.S. states.

Average Peak Connection Speed — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 10 Megabits per Second — Ten of the top 15 entities are U.S. states.

Percent of Connections Over 4 Megabits per Second — Ten of the top 16 entities are U.S. states.

Among the 61 ranked entities on these four measures of broadband speed, 39, or almost two-thirds, are U.S. states. American broadband is not “pitifully slow.” In fact, if we were to summarize U.S. broadband, we’d have to say, compared to the rest of the world, it is very fast.
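
The merge behind these tables is straightforward. A minimal sketch in Python, with made-up speeds standing in for the actual Akamai data, shows the idea: combine the nation and state lists into a single ranking and count how many of the top entries are U.S. states.

```python
# Made-up speeds for illustration; not the Akamai 2Q 2012 figures.
nations = {"Nation X": 14.2, "Nation Y": 12.1, "Nation Z": 10.3}    # avg Mbps
us_states = {"State A": 13.5, "State B": 12.8, "State C": 9.9}      # avg Mbps

# Combine both tables and rank by speed, remembering which rows are states.
combined = [(name, mbps, name in us_states)
            for name, mbps in {**nations, **us_states}.items()]
combined.sort(key=lambda row: row[1], reverse=True)

top = combined[:5]
state_count = sum(1 for _, _, is_state in top if is_state)
print(f"{state_count} of the top {len(top)} entities are U.S. states")
for rank, (name, mbps, is_state) in enumerate(top, start=1):
    label = " (U.S. state)" if is_state else ""
    print(f"{rank}. {name}{label}: {mbps} Mbps")
```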

It is true that not every state or region in the U.S. enjoys top speeds. Yes, we need more, better, faster, wider coverage of wired and wireless broadband, in underserved neighborhoods as well as in our already advanced areas. We need constant improvement both to accommodate today’s content and services and to drive tomorrow’s innovations. We should not, however, be making broad policy under the illusion that U.S. broadband, taken as a whole, is deficient. The quickest way to make U.S. broadband deficient is probably to enact policies that discourage investment and innovation — such as trying to turn a pretty successful and healthy industry that invests $60 billion a year into a government utility.

— Bret Swanson

The $66-billion Internet Expansion

Thursday, November 8th, 2012

Sixty-six billion dollars over the next three years. That’s AT&T’s new infrastructure plan, announced yesterday. It’s a bold commitment to extend fiber optics and 4G wireless to most of the country and thus dramatically expand the key platform for growth in the modern U.S. economy.

The company specifically will boost its capital investments by an additional $14 billion over previous estimates. This should enable coverage of 300 million Americans (around 97% of the population) with LTE wireless and 75% of AT&T’s residential service area with fast IP broadband. It’s adding 10,000 new cell towers, a thousand distributed antenna systems, and 40,000 “small cells” that augment and extend the wireless network to, for example, heavily trafficked public spaces. Also planned are fiber optic connections to an additional 1 million businesses.

As the company expands its fiber optic and wireless networks — to drive and accommodate the type of growth seen in the chart above — it will be retiring parts of its hundred-year-old copper telephone network. To do this, it will need cooperation from federal and state regulators. This is the end of the phone network, the transition to all Internet, all the time, everywhere.

FCC’s 706 Broadband Report Does Not Compute

Wednesday, August 22nd, 2012

Yesterday the Federal Communications Commission issued 181 pages of metrics demonstrating, to any fair reader, the continuing rapid rise of the U.S. broadband economy — and then concluded, naturally, that “broadband is not yet being deployed to all Americans in a reasonable and timely fashion.” A computer, being fed the data and the conclusion, would, unable to process the logical contradictions, crash.

The report is a response to section 706(b) of the 1996 Telecom Act that asks the FCC to report annually whether broadband “is being deployed . . . in a reasonable and timely fashion.” From 1999 to 2008, the FCC concluded that yes, it was. But now, as more Americans than ever have broadband and use it to an often maniacal extent, the FCC has concluded for the third year in a row that no, broadband deployment is not “reasonable and timely.”

The FCC finds that 19 million Americans, mostly in very rural areas, don’t have access to fixed line terrestrial broadband. But Congress specifically asked the FCC to analyze broadband deployment using “any technology.”

“Any technology” includes DSL, cable modems, fiber-to-the-x, satellite, and of course fixed wireless and mobile. If we include wireless broadband, the unserved number falls to 5.5 million from the FCC’s headline 19 million. Five and a half million is 1.74% of the U.S. population. Not exactly a headline-grabbing figure.

Even if we stipulate the FCC’s framework, data, and analysis, we’re still left with the FCC’s own admission that between June 2010 and June 2011, an additional 7.4 million Americans gained access to fixed broadband service. That dropped the portion of Americans without access to 6% in 2011 from around 8.55% in 2010 — a 30% drop in the unserved population in one year. Most Americans have had broadband for many years, and the rate of deployment will necessarily slow toward the tail end of any build-out. When most American households are served, there just aren’t very many left to reach, and those that have yet to gain access are likely to be in the most difficult-to-serve areas (e.g., “on tops of mountains in the middle of nowhere”). The fact that we still added 7.4 million newly served Americans in the last year, lowering the unserved population by 30%, even using the FCC’s faulty framework, demonstrates in any rational world that broadband “is being deployed” in a “reasonable and timely fashion.”
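For what it’s worth, the arithmetic behind those percentages is easy to check. A minimal sketch follows; the population figure is an assumption chosen to match the post’s 1.74%, and everything else comes from the FCC numbers quoted above.

```python
# Quick arithmetic check of the deployment figures cited above.

unserved_2010 = 0.0855   # share of Americans without fixed broadband access, 2010 (~8.55%)
unserved_2011 = 0.06     # share without access, 2011 (~6%)

relative_drop = (unserved_2010 - unserved_2011) / unserved_2010
print(f"One-year decline in the unserved population: {relative_drop:.0%}")  # ~30%

headline_unserved = 19e6   # FCC headline figure: fixed terrestrial broadband only
incl_wireless = 5.5e6      # unserved once wireless broadband is counted
print(f"Counting wireless shrinks the unserved figure by {1 - incl_wireless / headline_unserved:.0%}")  # ~71%

population = 316e6         # assumption: rough U.S. population implied by the post's 1.74%
print(f"Unserved share, wireless included: {incl_wireless / population:.2%}")  # ~1.74%
```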

But this is not the rational world — it’s D.C. in the perpetual political silly season.

One might conclude that because the vast majority of these unserved Americans live in very rural areas — Alaska, Montana, West Virginia — the FCC would, if anything, suggest policies tailored to boost infrastructure investment in these hard-to-reach geographies. We could debate whether these are sound investments and whether the government would do a good job expanding access, but if rural deployment is a problem, then presumably policy should attempt to target and remediate the rural underserved. Commissioner McDowell, however, knows the real impetus for the FCC’s tortured no-confidence vote — its regulatory agenda.

McDowell notes that the report repeatedly mentions the FCC’s net neutrality rules (now being contested in court), which are as far from a pro-broadband policy, let alone a targeted one, as you could imagine. If anything, net neutrality is an impediment to broader, faster, better broadband. But the FCC is using its thumbs-down on broadband deployment to prop up its intrusions into a healthy industry. As McDowell concluded, “the majority has used this process as an opportunity to create a pretext to justify more regulation.”

Misunderstanding the Mobile Ecosystem

Thursday, August 9th, 2012

Mobile communications and computing are among the most innovative and competitive markets in the world. They have created a new world of software and offer dramatic opportunities to improve productivity and creativity across the industrial spectrum.

Last week we published a tech note documenting the rapid growth of mobile and the importance of expanding wireless spectrum availability. More clean spectrum is necessary both to accommodate fast-rising demand and to drive future innovations. Expanding spectrum availability might seem uncontroversial. In the report, however, we noted that one obstacle to expanding spectrum availability has been a cramped notion of what constitutes competition in the Internet era. As we wrote:

Opponents of open spectrum auctions and flexible secondary markets often ignore falling prices, expanding choices, and new features available to consumers. Instead they sometimes seek to limit new spectrum availability, or micromanage its allocation or deployment characteristics, charging that a few companies are set to dominate the market. Although the FCC found that 77% of the U.S. population has access to three or more 3G wireless providers, charges of a coming “duopoly” are now common.

This view, however, relies on the old analysis of static utility or commodity markets and ignores the new realities of broadband communications. The new landscape is one of overlapping competitors with overlapping products and services, multi-sided markets, network effects, rapid innovation, falling prices, and unpredictability.

Sure enough, yesterday Sprint CEO Dan Hesse made the duopoly charge and helped show why getting spectrum policy right has been so difficult.

Q: You were a vocal opponent of the AT&T/T-Mobile merger. Are you satisfied you can compete now that the merger did not go through?

A: We’re certainly working very hard. There’s no question that the industry does have an issue with the size of the duopoly of AT&T and Verizon. I believe that over time we’ll see more consolidation in the industry outside of the big two, because the gap in size between two and three is so enormous. Consolidation is healthy for the industry as long as it’s not AT&T and Verizon getting larger.

Hesse goes even further.

Hesse also seemed to be likening Sprint’s struggles in competing with AT&T-Rex and Big Red as a fight against good and evil. Sprint wants to wear the white hat, according to Hesse. “At Sprint, we describe it internally as being the good guys, of doing the right thing,” he said.

This type of thinking is always a danger if you’re trying to make sound policy. Picking winners and losers is inevitably — at best — an arbitrary exercise. Doing so based on some notion of corporate morality is plain silly, but even more reasonable sounding metrics and arguments — like those based on market share — are often just as misleading and harmful.

The mobile Internet ecosystem is growing so fast and changing with such rapidity and unpredictability that making policy based on static and narrow market definitions will likely yield poor policy. As we noted in our report:

It is, for example, worth emphasizing: Google and Apple were not in this business just a few short years ago.

Yet by the fourth quarter of 2011 Apple could boast an amazing 75% of the handset market’s profits. Apple’s iPhone business, it was widely noted after Apple’s historic 2011, is larger than all of Microsoft. In fact, Apple’s non-iPhone products are also larger than Microsoft.

Android, the mobile operating system of Google, has been growing even faster than Apple’s iOS. In December 2011, Google was activating 700,000 Android devices a day, and now, in the summer of 2012, it estimates 900,000 activations per day. From a nearly zero share at the beginning of 2009, Android today boasts roughly a 55% share of the global smartphone OS market.

. . .

Apple’s iPhone changed the structure of the industry in several ways, not least the relationships between mobile service providers and handset makers. Mobile operators used to tell handset makers what to make, how to make it, and what software and firmware could be loaded on it. They would then slap their own brand label on someone else’s phone.

Apple’s quick rise to mobile dominance has been matched by Blackberry maker Research In Motion’s fall. RIM dominated the 2000s with its email software, its qwerty keyboard, and its popularity with enterprise IT departments. But it  couldn’t match Apple’s or Android’s general purpose computing platforms, with user-friendly operating systems, large, bright touch-screens, and creative and diverse app communities.

Sprinkled among these developments were the rise, fall, and resurgence of Motorola, and then its sale to Google; the rise and fall of Palm; the rise of HTC; and the decline of once dominant Nokia.

Apple, Google, Amazon, Microsoft, and others are building cloud ecosystems, sometimes complemented with consumer devices, often tied to Web apps and services, multimedia content, and retail stores. Many of these products and services compete with each other, but they also compete with broadband service providers. Some of these business models rely primarily on hardware, some software, some subscriptions, some advertising. Each of the companies listed above — a computer company, a search company, an ecommerce company, and a software company — is now a major Internet infrastructure company.

As Jeffrey Eisenach concluded in a pathbreaking analysis of the digital ecosystem (“Theories of Broadband Competition”), there may be market concentration in one (or more) layer(s) of the industry (broadly considered), yet prices are falling, access is expanding, products are proliferating, and innovation is as rapid as in any market we know.

The Real Deal on U.S. Broadband

Monday, June 11th, 2012

Is American broadband broken?

Tim Lee thinks so. Where he once leaned against intervention in the broadband marketplace, Lee says four things are leading him to rethink and tilt toward more government control.

First, Lee cites the “voluminous” 2009 Berkman Report. Which is surprising. The report published by Harvard’s Berkman Center may have been voluminous, but it lacked accuracy in its details and persuasiveness in its big-picture take-aways. Berkman used every trick in the book to claim “open access” regulation around the world boosted other nations’ broadband economies and that the lack of such regulation in the U.S. harmed ours. But the report’s data and methodology were so thoroughly discredited (especially in two detailed reports issued by economists Robert Crandall, Everett Ehrlich, and Jeff Eisenach and Robert Hahn) that the FCC, which commissioned the report, essentially abandoned it. Here was my summary of the economists’ critiques:

The [Berkman] report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland.

. . .

Berkman’s qualitative analysis was, if possible, just as misleading. It passed along faulty data on broadband speeds and prices. It asserted South Korea’s broadband boom was due to open access regulation, but in fact most of South Korea’s surge happened before it instituted any regulation. The study said Japanese broadband, likewise, is a winner because of regulation. But regulated DSL is declining fast even as facilities-based (unshared, proprietary) fiber-to-the-home is surging.

Berkman also enjoyed comparing broadband speeds of tiny European and Asian countries to the whole U.S. But if we examine individual American states — New York or Arizona, for example — we find many of them outrank most European nations and Europe as a whole. In fact, applying the same Speedtest.com data Berkman used, the U.S. as a whole outpaces Europe as a whole! Comparing small islands of excellence to much larger, more diverse populations or geographies is bound to skew your analysis.

The Berkman report twisted itself in pretzels trying to paint a miserable picture of the U.S. Internet economy and a glowing picture of heavy regulation in foreign nations. Berkman, however, ignored the prima facie evidence of a vibrant U.S. broadband marketplace, manifest in the boom in Web video, mobile devices, the App Economy, cloud computing, and on and on.

How could the bulk of the world’s best broadband apps, services, and sites be developed and achieve their highest successes in the U.S. if American broadband were so slow and thinly deployed? We came up with a metric that seemed to refute the notion that U.S. broadband was lagging, namely, how much network traffic Americans generate vis-à-vis the rest of the world. It turned out the U.S. generates more network traffic per capita and per Internet user than any nation but South Korea and generates about two-thirds more per-user traffic than the closest advanced economy of comparable size, Western Europe.

Berkman based its conclusions almost solely on (incorrect) measures of “broadband penetration” — the number of broadband subscriptions per capita — but that metric turned out to be a better indicator of household size than broadband health. Lee acknowledges the faulty analysis but still assumes “broadband penetration” is the sine qua non measure of Internet health. Maybe we’re not awful, as Berkman claimed, Lee seems to be saying, but even if we correct for their methodological mistakes, U.S. broadband penetration is still just OK. “That matters,” Lee writes,

because a key argument for America’s relatively hands-off approach to broadband regulation has been that giving incumbents free rein would give them incentive to invest more in their networks. The United States is practically the only country to pursue this policy, so if the incentive argument was right, its advocates should have been able to point to statistics showing we’re doing much better than the rest of the world. Instead, the argument has been over just how close to the middle of the pack we are.

No, I don’t agree that the argument has consisted of bickering over whether the U.S. is more or less mediocre. Not at all. I do agree that advocates of government regulation have had to adjust their argument, from “U.S. broadband is awful” to “U.S. broadband is mediocre.” Yet they still hang their hat on “broadband penetration” because most other evidence on the health of the U.S. digital economy is even less supportive of their case.

In each of the last seven years, U.S. broadband providers have invested between $60 and $70 billion in their networks. Overall, the U.S. leads the world in info-tech investment — totaling nearly $500 billion last year. The U.S. now boasts more than 80 million residential broadband links and 200+ million mobile broadband subscribers. U.S. mobile operators have deployed more 4G mobile network capacity than anyone, and Verizon just announced its FiOS fiber service will offer 300 megabit-per-second residential connections — perhaps the fastest large-scale deployment in the world.

Eisenach and Crandall followed up their critique of the Berkman study with a fresh March 2012 analysis of “open access” regulation around the world (this time with Allan Ingraham). They found:

  • “it is clear that copper loop unbundling did not accelerate the deployment or increase the penetration of first-generation broadband networks, and that it had a depressing effect on network investment”
  • “By contrast, it seems clear that platform competition was very important in promoting broadband deployment and uptake in the earlier era of DSL and cable modem competition.”
  • “to the extent new fiber networks are being deployed in Europe, they are largely being deployed by unregulated, non-ILEC carriers, not by the regulated incumbent telecom companies, and not by entrants that have relied on copper-loop unbundling.”

Lee mentions neither the incisive criticisms of the Berkman study nor the voluminous literature, including this latest example, showing that open access policies are ineffective at best and more likely harmful.

In coming posts, I’ll address Lee’s three other worries.

— Bret Swanson

New iPad, Fellow Bandwidth Monsters Hungry for More Spectrum

Tuesday, March 13th, 2012

Last week Apple unveiled its third-generation iPad. Yesterday the company said the LTE versions of the device, which can connect via Verizon and AT&T mobile broadband networks, are sold out.

It took 15 years for laptops to reach 50 million units sold in a year. It took smartphones seven years. For tablets (not including Microsoft’s clunky attempt a decade ago), just two years. Mobile device volumes are astounding. In each of the last five years, global mobile phone sales topped a billion units. Last year smartphones outsold PCs for the first time – 488 million versus 432 million. This year well over 500 million smartphones and perhaps 100 million tablets could be sold.

Smartphones and tablets represent the first fundamentally new consumer computing platforms since the PC, which arrived in the late ’70s and early ’80s. Unlike mere mobile phones, they’ve got serious processing power inside. But their game-changing potency is really based on their capacity to communicate via the Internet. And this power is, of course, dependent on the cloud infrastructure and wireless networks.

But are wireless networks today prepared for this new surge of bandwidth-hungry mobile devices? Probably not. When we started to build 3G mobile networks in the middle of last decade, many thought it was a huge waste. Mobile phones were used for talking, and some texting. They had small low-res screens and were terrible at browsing the Web. What in the world would we do with all this new wireless capacity? Then the iPhone came, and, boom — in big cities we went from laughable overcapacity to severe shortage seemingly overnight. The iPhone’s brilliant screen, its real Web browsing experience, and the world of apps it helped us discover totally changed the game. Wi-Fi helped supply the burgeoning iPhone with bandwidth, and Wi-Fi will continue to grow and play an important role. Yet Credit Suisse, in a 2011 survey of the industry, found that mobile networks overall were running at 80% of capacity and that many network nodes were tapped out.

Today, we are still expanding 3G networks and launching 4G in most cities. Verizon says it offers 4G LTE in 196 cities, while AT&T says it offers 4G LTE in 28 markets (and combined with its HSPA+ networks offers 4G-like speeds to 200 million people in the U.S.). Lots of things affect how fast we can build new networks — from cell site permitting to the fact that these things are expensive ($20 billion worth of wireless infrastructure in the U.S. last year). But another limiting factor is spectrum availability.

Do we have enough radio waves to efficiently and cost-effectively serve these hundreds of millions of increasingly powerful mobile devices, which generate and consume increasingly rich content, with ever more stringent latency requirements, and which depend upon robust access to cloud storage and computing resources?

Capacity is a function of money, network nodes, technology, and radio waves. But spectrum is grossly misallocated. The U.S. government owns 61% of the best airwaves, while mobile broadband providers — where all the action is — own just 10%. Another portion is controlled by the old TV broadcasters, where much of this beachfront spectrum lies fallow or underused.

The key is allowing spectrum to flow to its most valuable uses. Last month Congress finally authorized the FCC to conduct incentive auctions to free up some unused and underused TV spectrum. Good news. But other recent developments discourage us from too much optimism on this front.

In December the FCC and Justice Department vetoed AT&T’s attempt to augment its spectrum and cell-site position via merger with T-Mobile. Now the FCC and DoJ are questioning Verizon’s announced purchase of Spectrum Co. — valuable but unused spectrum owned by a consortium of cable TV companies. The FCC has also threatened to tilt any spectrum auctions so that it decides who can bid, how much bidders can buy, and what buyers may or may not do with their spectrum — pretending Washington knows exactly how this fast-changing industry should be structured, thus reducing the value of spectrum and probably delaying availability of new spectrum and possibly reducing the sector’s pace of innovation.

It’s very difficult to see how it’s at all productive for the government to block companies who desperately need more spectrum from buying it from those who don’t want it, don’t need it, or can’t make good use of it. The big argument against AT&T and Verizon’s attempted spectrum purchases is “competition.” But T-Mobile wanted to sell to AT&T because it admitted it didn’t have the financial (or spectrum) wherewithal to build a super expensive 4G network. Apparently the same for the cable companies, who chose to sell to Verizon. Last week Dish Network took another step toward entering the 4G market with the FCC’s approval of spectrum transfers from two defunct companies, TerreStar and DBSD.

Some people say the proliferation of Wi-Fi or the increased use of new wireless technologies that economize on spectrum will make more spectrum availability unnecessary. I agree Wi-Fi is terrific and will keep growing and that software radios, cognitive radios, mesh networks and all the other great technologies that increase the flexibility and power of wireless will make big inroads. So fine, let’s stipulate that perhaps these very real complements will reduce the need for more spectrum at the margin. Then the joke is on the big companies that want to overpay for unnecessary spectrum. We still allow big, rich companies to make mistakes, right? Why, then, do proponents of these complementary technologies still oppose allowing spectrum to flow to its highest use?

Spectrum auctions free of micromanaged restrictions would allow lots of companies to access spectrum: upstarts, the middle tier, and yes, the big boys, who desperately need more capacity to serve the new iPad.

— Bret Swanson

Is the FCC serious about more wireless spectrum? Apparently not.

Friday, January 13th, 2012

For the third year in a row, FCC chairman Julius Genachowski used his speech at the Consumer Electronics Show in Las Vegas to push for more wireless spectrum. He wants Congress to pass the incentive auction law that would unleash hundreds of megahertz of spectrum to new and higher uses. Most of Congress agrees: we need lots more wireless capacity and spectrum auctions are a good way to get there.

Genachowski, however, wants overarching control of the new spectrum and, by extension, the mobile broadband ecosystem. The FCC wants the authority to micromanage the newly available radio waves — who can buy them, how much they can buy, how they can use them, what content flows over them, what business models can be employed with them. But this is an arena that is growing wildly fast, where new technologies appear every day, and where experimentation is paramount to see which business models work. Auctions are supposed to be a way to get more spectrum into the marketplace, where lots of companies and entrepreneurs can find the best ways to use it to deliver new communications services. “Any restrictions” by Congress on the FCC “would be a real mistake,” said Genachowski. In other words, he doesn’t want Congress to restrict his ability to restrict the mobile business. It seems the liberty of regulators to act without restraint is a higher virtue than the liberty of private actors.

At the end of 2011, the FCC and Justice Department vetoed AT&T’s proposed merger with T-Mobile, a deal that would have immediately expanded 3G mobile capacity across the nation and accelerated AT&T’s next generation 4G rollout by several years. That deal was all about a more effective use of spectrum, more cell towers, more capacity to better serve insatiable smart-phone and tablet equipped consumers. Now the FCC is holding hostage the spectrum auction bill with its my-way-or-the-highway approach. And one has to ask: Is the FCC really serious about spectrum, mobile capacity, and a healthy broadband Internet?

— Bret Swanson

Why is the FCC playing procedural games?

Wednesday, November 30th, 2011

America is in desperate need of economic growth. But as the U.S. economy limps along, with unemployment stuck at 9%, the Federal Communications Commission is playing procedural tiddlywinks with the nation’s largest infrastructure investor, in the sector of the economy that offers the most promise for innovation and 21st century jobs. In normal times, we might chalk this up to clever Beltway maneuvering. But do we really have the time or money to indulge bureaucratic gamesmanship?

On Thanksgiving Eve, the FCC surprised everyone. It hadn’t yet completed its investigation into the proposed AT&T-T-Mobile wireless merger, and the parties had not had a chance to discuss or rebut the agency’s initial findings. Yet the FCC preempted the normal process by announcing it would send the case to an administrative law judge — essentially a vote of no-confidence in the deal. I say “vote,” but  the FCC commissioners hadn’t actually voted on the order.

FCC Chairman Julius Genachowski called AT&T CEO Randall Stephenson, who, on Thanksgiving Day, had to tell investors he was setting aside $4 billion in case Washington blocked the deal.

The deal is already being scrutinized by the Department of Justice, which sued to block the merger last summer. The fact that telecom mergers and acquisitions must negotiate two levels of federal scrutiny, at DoJ and FCC, is already an extra burden on the Internet industry. But when one agency on this dual-track games the system by trying to influence the other track — maybe because the FCC felt AT&T had a good chance of winning its antitrust case — the obstacles to promising economic activity multiply.

After the FCC’s surprise move, AT&T and T-Mobile withdrew their merger application at the FCC. No sense in preparing for an additional hearing before an administrative law judge when they are already deep in preparation for the antitrust trial early next year. Moreover, the terms of the merger agreement are likely to have changed after the companies (perhaps) negotiate conditions with the DoJ. They’d have to refile an updated application anyway. Not so fast, said the FCC. We’re not going to allow AT&T and T-Mobile to withdraw their application. Or if we do allow it, we will do so “with prejudice,” meaning the parties can’t refile a revised application at a later date. On Tuesday the FCC relented — the law is clear: an applicant has the right to withdraw an application without consent from the FCC. But the very fact the FCC initially sought to deny the withdrawal is itself highly unusual. Again, more procedural gamesmanship.

If that weren’t enough, the FCC then said it would release its “findings” in the case — another highly unusual (maybe unprecedented) action. The agency hadn’t completed its process, and there had been no vote on the matter. So the FCC instead released what it calls a “staff report” — a highly critical internal opinion that hadn’t been reviewed by the parties nor approved by the commissioners. We’re eager to analyze the substance of this “staff report,” but the fact the FCC felt the need to shove it out the door was itself remarkable.

It appears the FCC is twisting legal procedure any which way to fit its desired outcome, rather than letting the normal merger process play out. Indeed, “twisting legal procedure” may be too kind. It has now thrown law and procedure out the window and is in full public relations mode. These extralegal PR games tilt the playing field against the companies, against investment and innovation, and against the health of the U.S. economy.

— Bret Swanson

World Broadband Update

Tuesday, June 28th, 2011

The OECD published its annual Communications Outlook last week, and the 390 pages offer a wealth of information on all-things-Internet — fixed line, mobile, data traffic, price comparisons, etc. Among other remarkable findings, OECD notes that:

In 1960, only three countries — Canada, Sweden and the United States — had more than one phone for every four inhabitants. For most of what would become OECD countries a year later, the figure was less than 1 for every 10 inhabitants, and less than 1 in 100 in a couple of cases. At that time, the 84 million telephones in OECD countries represented 93% of the global total. Half a century later there are 1.7 billion telephones in OECD countries and a further 4.1 billion around the world. More than two in every three people on Earth now have a mobile phone.

Very useful stuff. But in recent times the report has also served as a chance for some to misrepresent the relative health of international broadband markets. The common refrain the past several years was that the U.S. had fallen way behind many European and Asian nations in broadband. The mantra that the U.S. is “15th in the world in broadband” — or 16th, 21st, 24th, take your pick — became a sort of common lament. Except it wasn’t true.

As we showed here, the second half of the two-thousand-aughts saw an American broadband boom. The Phoenix Center and others showed that the most cited stat in those previous OECD reports — broadband connections per 100 inhabitants — actually told you more about household size than broadband. And we developed metrics to better capture the overall health of a nation’s Internet market — IP traffic per Internet user and per capita.

Below you’ll see an update of the IP traffic per Internet user chart, built upon Cisco’s most recent (June 1, 2011) Visual Networking Index report. The numbers, as they did last year, show the U.S. leads every region of the world in the amount of IP traffic we generate and consume both in per user and per capita terms. Among nations, only South Korea tops the U.S., and only Canada matches the U.S.

Although Asia contains broadband stalwarts like Korea, Japan, and Singapore, it also has many laggards. If we compare the U.S. to the most uniformly advanced region, Western Europe, we find the U.S. generates 62% more traffic per user. (These figures are based on Cisco’s 2010 traffic estimates and the ITU’s 2010 Internet user numbers.)
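For readers who want to reproduce the comparison, the per-user figure is just a division of two public data sets. The sketch below uses placeholder totals, not the actual Cisco VNI or ITU numbers, scaled so the gap lands near the 62% cited above.

```python
# Illustrative traffic-per-user comparison of the kind described above.
# The totals are placeholders, NOT the real Cisco VNI or ITU figures.

regions = {
    # region: (monthly IP traffic in petabytes, Internet users in millions) -- hypothetical
    "United States":  (7_000, 240),
    "Western Europe": (5_900, 328),
}

per_user = {name: traffic / users for name, (traffic, users) in regions.items()}
gap = per_user["United States"] / per_user["Western Europe"] - 1
print(f"U.S. traffic per user exceeds Western Europe's by {gap:.0%}")  # ~62%
```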

As we noted last year, it’s not possible for the U.S. to both lead the world by a large margin in Internet usage and lag so far behind in broadband. We think these traffic per user and per capita figures show that our residential, mobile, and business broadband networks are among the world’s most advanced and ubiquitous.

Lots of other quantitative and qualitative evidence — from our smart-phone adoption rates to the breakthrough products and services of world-leading device (Apple), software (Google, Apple), and content companies (Netflix) — reaffirms the fairly obvious fact that the U.S. Internet ecosystem is in fact healthy, vibrant, and growing. Far from lagging, it leads the world in most of the important digital innovation indicators.

— Bret Swanson

The Slow Walk to a Reregulated Communications Market

Tuesday, May 24th, 2011

The generally light-touch regulatory approach to America’s Internet industry has been a big success story. Broadband, wireless, digital devices, Internet content and apps — these technology sectors have exploded over the last half-dozen years, even through the Great Recession.

So why are Washington regulators gradually encroaching on the Net’s every nook and cranny? Perhaps the explanation is a paraphrased line about Washington’s upside-down ways: If it fails, subsidize it. If it succeeds, tax it. And if it succeeds wildly, regulate it.

Whatever the reason, we should watch out and speak up, lest D.C. do-gooders slow the growth of our most dynamic economic engine.

Last December, the FCC imposed a watered down version of Net Neutrality. A few weeks ago the FCC asserted authority to regulate prices and terms in the data roaming market for mobile phones. There are endless Washington proposals to regulate digital advertising markets and impose strict new rules to (supposedly) protect consumer privacy. The latest new idea (but surely not the last) is to regulate prices and terms of “special access,” or Internet connectivity in the middle of the network.

Special access refers to high-speed links that connect, say, cell phone towers to the larger network, or an office building to a metro fiber ring. Another common name for these network links is “backhaul.” Washington lobbyists have for years been trying to get the FCC to dictate terms in this market, without success. But now, as part of the proposed AT&T-T-Mobile merger, they are pushing harder than ever to incorporate regulation of these high-speed Internet lines into the government’s prospective approval of  the acquisition.

As the chief opponent of the merger, Sprint especially is lobbying for the new regulations. Sprint claims that just a few companies control most of the available backhaul links to its cell phone towers and wants the FCC to set rates and terms for its backhaul leases. But from the available information, it’s clear that many companies — not just Verizon and AT&T — provide these Special Access backhaul services. It’s not clear why an AT&T-T-Mobile combination should have a big effect on the market, nor why the FCC should use the event to regulate a well-functioning market.

Sprint is a majority owner and major partner of 4G mobile network Clearwire, which uses its own microwave wireless links for 90% of its backhaul capacity. Sprint used Clearwire backhaul for its Xohm Wi-Max network beginning in 2008 and will pay Clearwire around a billion dollars over the next two years to lease backhaul capacity.

T-Mobile, meanwhile, uses mostly non-AT&T, non-Verizon backhaul for its towers. Recent estimates say something like 80% of T-Mobile sites are linked by smaller Special Access providers like Bright House, FiberNet, Zayo Bandwidth, and IP Networks. Lots of other providers exist, from the large cable companies like Comcast, Cox, and TimeWarner to smaller specialty firms like FiberTower and TowerCloud to large backbone providers like Level 3. The cable companies all report fast growing cell site backhaul sales, accounting for large shares of their wholesale revenue.

One of the rationales for AT&T’s purchase of T-Mobile was that the two companies’ cell sites are complementary, not duplicative, meaning AT&T may not have links to many or most of T-Mobile’s sites. So at least in the short term it’s likely the T-Mobile cells will continue to use their existing backhaul providers, who are, again, mostly not Verizon or AT&T. It’s possible over time AT&T would expand its network and use its own links to serve the sites, but the backhaul business by then will only be more competitive than today.

This is a mostly unseen part of the Internet. Few of us ever think about Special Access or backhaul when we fire up our Blackberry, Android, or iPhone. But these lines are key components of the mobile ecosystem, essential to delivering the voices and bits to and from our phones, tablets, and laptops. The wireless industry, moreover, is in the midst of a massive upgrade of its backhaul lines to accommodate first 3G and now 4G networks that will carry ever richer multimedia content. This means replacing the old T-1 and T-3 copper phone lines with new fiber optic lines and high-speed radio links. These are big investments in a very competitive market.

Given the Internet industry’s overwhelming contribution to the U.S. economy — not just as an innovative platform but as a leading investor in the capital base of the nation — one might think we wouldn’t lightly trifle with success. The chart below, compiled by economist Michael Mandel, shows that the top two — and three out of the top seven — domestic investors are communications companies. These are huge sums of money supporting hundreds of thousands of jobs directly and many millions indirectly.

via Michael Mandel

We’ve seen the damage micromanagement can cause — in the communications sector no less. The type of regulation of prices and terms on infrastructure leases now proposed for Special Access was, in my view, a key to the 2000 tech/telecom crash. FCC intrusions (remember line sharing, TELRIC, and UNE-P, etc.) discouraged investments in the first generation of broadband. We fell behind nations like Korea. Over the last half-dozen years, however, we righted our communications ship and leapt to the top of the world in broadband and especially mobile services.

I’m not arguing these regulations would crash the sector. But the accumulated costs of these creeping Washington intrusions could disrupt the crucial price mechanisms and investment incentives that are nowhere more important than in the fastest-growing, most dynamic markets, like mobile networks. Time for FCC lawyers to hit the beach — for Memorial Day weekend . . . and beyond. They should sit back and enjoy the stupendous success of the sector they oversee. The market is working.

— Bret Swanson

Akamai CEO Exposes FCC’s Confused “Paid Priority” Prohibition

Tuesday, January 4th, 2011

In the wake of the FCC’s net neutrality Order, published on December 23, several of us have focused on the Commission’s confused and contradictory treatment of “paid prioritization.” In the Order, the FCC explicitly permits some forms of paid priority on the Internet but strongly discourages other forms.

From the beginning — that is, since the advent of the net neutrality concept early last decade — I argued that a strict neutrality regime would have outlawed, among other important technologies, CDNs, which prioritized traffic and made (make!) the Web video revolution possible.

So I took particular notice of this new interview (sub. required) with Akamai CEO Paul Sagan in the February 2011 issue of MIT’s Technology Review:

TR: You’re making copies of videos and other Web content and distributing them from strategic points, on the fly.

Paul Sagan: Or routes that are picked on the fly, to route around problematic conditions in real time. You could use Boston [as an analogy]. How do you want to cross the Charles to, say, go to Fenway from Cambridge? There are a lot of bridges you can take. The Internet protocol, though, would probably always tell you to take the Mass. Ave. bridge, or the BU Bridge, which is under construction right now and is the wrong answer. But it would just keep trying. The Internet can’t ever figure that out — it doesn’t. And we do.

There it is. Akamai and other content delivery networks (CDNs), including Google, which has built its own CDN-like network, “route around” “the Internet,” which “can’t ever figure . . . out” the fastest path needed for robust packet delivery. And they do so for a price. In other words: paid priority. Content companies, edge innovators, basement bloggers, and poor non-profits who don’t pay don’t get the advantages of CDN fast lanes.
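Sagan’s bridge analogy boils down to a simple selection step: an overlay network probes several candidate paths through intermediate relays and sends traffic over whichever is currently performing best, rather than trusting the default route. A toy sketch of that idea, with made-up measurements; nothing here is Akamai’s actual system.

```python
# Toy overlay route selection: prefer whichever measured path is currently fastest.
# Purely illustrative -- not Akamai's implementation.

candidate_routes = {
    # route -> most recent latency probe, in milliseconds (invented numbers)
    "direct (default Internet path)": 74.0,
    "via relay A":                    41.5,
    "via relay B":                    58.2,
}

def pick_route(probes):
    """Return the route with the lowest recently measured latency."""
    return min(probes, key=probes.get)

best = pick_route(candidate_routes)
print(f"Send this request via: {best} ({candidate_routes[best]:.1f} ms)")
```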

Did the FCC order get lots worse in last two weeks?

Tuesday, December 21st, 2010

So, here we are. Today the FCC voted 3-2 to issue new rules governing the Internet. I expect the order to be struck down by the courts and/or Congress. Meantime, a few observations:

  • The order appears to be more intrusive on the topic of “paid prioritization” than was Chairman Genachowski’s outline earlier this month. (Keep in mind, we haven’t seen the text. The FCC Commissioners themselves only got access to the text at 11:42 p.m. last night.)
  • If this is true, if the “nondiscrimination” ban goes further than a simple reasonableness test, which itself would be subject to tumultuous legal wrangling, then the Net Neutrality order could cause more problems than I wrote about in this December 7 column.
  • A prohibition or restriction on “paid prioritization” is a silly rule that belies a deep misunderstanding of how our networks operate today and how they will need to operate tomorrow. Here’s how I described it in recent FCC comments:

In September 2010, a new network company, which had operated in stealth mode digging ditches and boring tunnels for the previous 24 months, emerged on the scene. As Forbes magazine described it, this tiny new company, Spread Networks

“spent the last two years secretly digging a gopher hole from Chicago to New York, usurping the erstwhile fastest paths. Spread’s one-inch cable is the latest weapon in the technology arms race among Wall Street houses that use algorithms to make lightning-fast trades. Every day these outfits control bigger stakes of the markets – up to 70% now. “Anybody pinging both markets  has to be on this line, or they’re dead,” says Jon A. Najarian, cofounder of OptionMonster, which tracks high-frequency trading.

“Spread’s advantage lies in its route, which makes nearly a straight line from a data center  in Chicago’s South Loop to a building across the street from Nasdaq’s servers in Carteret, N.J. Older routes largely follow railroad rights-of-way through Indiana, Ohio and Pennsylvania. At 825 miles and 13.3 milliseconds, Spread’s circuit shaves 100 miles and 3 milliseconds off of the previous route of lowest latency, engineer-talk for length of delay.”

Why spend an estimated $300 million on an apparently duplicative route when numerous seemingly similar networks already exist? Because, Spread says, three milliseconds matters.

Spread offers guaranteed latency on its dark fiber product of no more than 13.33 milliseconds. Its managed wave product is guaranteed at no more than 15.75 milliseconds. It says competitors’ routes between Chicago and New York range from 16 to 20 milliseconds. We don’t know if Spread will succeed financially. But Spread is yet another demonstration that latency is of enormous and increasing importance. From entertainment to finance to medicine, the old saw is truer than ever: time is money. It can even mean life or death.

A policy implication arises. The Spread service is, of course, a form of “paid prioritization.” Companies are paying “eight to 10 times the going rate” to get their bits where they want them, when they want them. It is not only a demonstration of the heroic technical feats required to increase the power and diversity of our networks. It is also a prime example that numerous network users want to and will pay money to achieve better service.

One way to achieve better service is to deploy more capacity on certain links. But capacity is not always the problem. As Spread shows, another way to achieve better service is to build an entirely new 750-mile fiber route through mountains to minimize laser light delay. Or we might deploy a network of server caches that store non-realtime data closer to the end points of networks, as many Content Delivery Networks (CDNs) have done. But when we can’t build a new fiber route or store data – say, when we need to get real-time packets from point to point over the existing network – yet another option might be to route packets more efficiently with sophisticated QoS technologies. Each of these solutions fits a particular situation. They take advantage of, or submit to, the technological and economic trade-offs of the moment or the era. They are all legitimate options. Policy simply must allow for the diversity and flexibility of technical and economic options – including paid prioritization – needed to manage networks and deliver value to end-users.
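As an aside, the latency figures in that passage square with simple propagation arithmetic: light in fiber travels at roughly two-thirds the speed of light in vacuum, so route length sets the latency floor. A rough check follows; the refractive index and the round-trip reading of the 13.3 ms figure are assumptions, not details from the excerpt.

```python
# Rough propagation-delay check on the latency figures quoted above.
# Assumptions: light in fiber travels at ~c/1.47, and the quoted figures are round trips.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBER_INDEX = 1.47                 # typical refractive index of optical fiber

def round_trip_ms(route_miles):
    km = route_miles * 1.609344
    one_way_ms = km / (C_KM_PER_MS / FIBER_INDEX)
    return 2 * one_way_ms

print(f"Spread's 825-mile route: ~{round_trip_ms(825):.1f} ms round trip")   # ~13.0 ms
print(f"A 925-mile legacy route: ~{round_trip_ms(925):.1f} ms round trip")   # ~14.6 ms
```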

Depending on how far the FCC is willing to take these misguided restrictions, it could actually lead to the very outcomes most reviled by “open Internet” fanatics — that is, more industry concentration, more “walled gardens,” more closed networks. Here’s how I described the possible effect of restrictions on the important voluntary network management tools and business partnerships needed to deliver robust multimedia services:

There has also been discussion of an exemption for “specialized services.” Like wireless, it is important that such specialized services avoid the possible innovation-sapping effects of a Net Neutrality regulatory regime. But the Commission should consider several unintended consequences of moving down the path of explicitly defining, and then exempting, particular “specialized” services while choosing to regulate the so-called “basic,” “best-effort,” or “entry level” “open Internet.”

Regulating the “basic” Internet but not “specialized” services will surely push most of the network and application innovation and investment into the unregulated sphere. A “specialized” exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS providers and ISPs to target the “specialized” category and thus shrink the scope of the “open Internet.”

In fact, although specialized services should and will exist, they often will interact with or be based on the “basic” Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not “the Internet” is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing. Moreover, a regime of rigid quarantine would not be good for consumers. If a CAS provider or ISP has to build a new physical or logical network, segregate services and software, or develop new products and marketing for a specifically defined “specialized” service, there would be a very large disincentive to develop and offer simple innovations and new services to customers over the regulated “basic” Internet. Perhaps a consumer does not want to spend the extra money to jump to the next tier of specialized service. Perhaps she only wants the service for a specific event or a brief period of time. Perhaps the CAS provider or ISP can far more economically offer a compelling service over the “basic” Internet with just a small technical tweak, where a leap to a full-blown specialized service would require more time and money, and push the service beyond the reach of the consumer. The transactions costs of imposing a “specialized” quarantine would reduce technical and economic flexibility on both CAS providers and ISPs and, most crucially, on consumers.

Or, as we wrote in our previous Reply Comments about a related circumstance, “A prohibition of the voluntary partnerships that are likely to add so much value to all sides of the market – service provider, content creator, and consumer – would incentivize the service provider to close greater portions of its networks to outside content, acquire more content for internal distribution, create more closely held ‘managed services’ that meet the standards of the government’s ‘exclusions,’ and build a new generation of larger, more exclusive ‘walled gardens’ than would otherwise be the case. The result would be to frustrate the objective of the proceeding. The result would be a less open Internet.”

It is thus possible that a policy seeking to maintain some pure notion of a basic “open Internet” could severely devalue the open Internet the Commission is seeking to preserve.

All this said, the FCC’s legal standing is so tenuous, and this order so rooted in reasoning already rejected by the courts, that I believe today’s Net Neutrality rule will be overturned. Thus despite the numerous substantive and procedural errors committed on this “darkest day of the year,” I still expect the Internet to “survive and thrive.”

The Internet Survives, and Thrives, For Now

Tuesday, December 7th, 2010

See my analysis of the FCC’s new “net neutrality” policy at RealClearMarkets:

Despite the Federal Communications Commission’s “net neutrality” announcement this week, the American Internet economy is likely to survive and thrive. That’s because the new proposal offered by FCC chairman Julius Genachowski is lacking almost all the worst ideas considered over the last few years. No one has warned more persistently than I against the dangers of over-regulating the Internet in the name of “net neutrality.”

In a better world, policy makers would heed my friend Andy Kessler’s advice to shutter the FCC. But back on earth this new compromise should, for the near-term at least, cap Washington’s mischief in the digital realm.

. . .

The Level 3-Comcast clash showed what many of us have said all along: “net neutrality” was a purposely ill-defined catch-all for any grievance in the digital realm. No more. With the FCC offering some definition, however imperfect, businesses will now mostly have to slug it out in a dynamic and tumultuous technology arena, instead of running to the press and politicians.

FCC Proposal Not Terrible. Internet Likely to Survive and Thrive.

Wednesday, December 1st, 2010

The FCC appears to have taken the worst proposals for regulating the Internet off the table. This is good news for an already healthy sector. And given info-tech’s huge share of U.S. investment, it’s good news for the American economy as a whole, which needs all the help it can get.

In a speech this morning, FCC chair Julius Genachowski outlined a proposal he hopes the other commissioners will approve at their December 21 meeting. The proposal, which comes more than a year after the FCC issued its Notice of Proposed Rule Making into “Preserving the Open Internet,” appears mostly to codify the “Four Principles” that were agreed to by all parties five years ago. Namely:

  • No blocking of lawful data, websites, applications, services, or attached devices.
  • Transparency. Consumers should know what the services and policies of their providers are, and what they mean.
  • A prohibition of “unreasonable discrimination,” which essentially means service providers must offer their products at similar rates and terms to similarly situated customers.
  • Importantly, broadband providers can manage their networks and use new technologies to provide fast, robust services. Also, there appears to be even more flexibility for wireless networks, though we don’t yet know the details.

(All the broad-brush concepts outlined today will need closer scrutiny when detailed language is unveiled, and as with every government regulation, implementation and enforcement can always yield unpredictable results. One also must worry about precedent and a new platform for future regulation. Even if today’s proposal isn’t too harmful, does the new framework open a regulatory can of worms?)

So, what appears to be off the table? Most of the worst proposals that have been flying around over the last year, like . . .

  • Reclassification of broadband as an old “telecom service” under Title II of the Communications Act of 1934, which could have pierced the no-government seal on the Internet in a very damaging way, unleashing all kinds of complex and antiquated rules on the modern Net.
  • Price controls.
  • Rigid nondiscrimination rules that would have barred important network technologies and business models.
  • Bans of quality-of-service technologies and techniques (QoS), tiered pricing, or voluntary relationships between ISPs and content/application/service (CAS) providers.
  • Open access mandates, requiring networks to share their assets.

Many of us have long questioned whether formal government action in this arena is necessary. The Internet ecosystem is healthy. It’s growing and generating an almost dizzying array of new products and services on diverse networks and devices. Communications networks are more open than ever. Facebook on your BlackBerry. Netflix on your iPad. Twitter on your TV. The oft-cited world broadband comparisons, which say the U.S. ranks 15th, or even 26th, are misleading. Those reports mostly measure household size, not broadband health. Using new data from Cisco, we estimate the U.S. generates and consumes more network traffic per user and per capita than any nation but South Korea. (Canada and the U.S. are about equal.) American Internet use is twice that of many nations we are told far outpace the U.S. in broadband. Heavy-handed regulation would have severely depressed investment and innovation in a vibrant industry. All for nothing.

Lots of smart lawyers doubt the FCC has the authority to issue even the relatively modest rules it outlined today. They’re probably right, and the question will no doubt be litigated (yet again), if Congress does not act first. But with Congress now divided politically, the case remains that Mr. Genachowski’s proposal is likely the near-term ceiling on regulation. Policy might get better than today’s proposal, but it’s not likely to get any worse. From what I see today, that’s a win for the Internet, and for the U.S. economy.

— Bret Swanson

One Step Forward, Two Steps Back

Monday, November 22nd, 2010

The FCC’s apparent about-face on Net Neutrality is really perplexing.

Over the past few weeks it looked like the Administration had acknowledged economic reality (and bipartisan Capitol Hill criticism) and turned its focus to investment and jobs. Outgoing NEC Director Larry Summers and Commerce Secretary Gary Locke announced a vast expansion of available wireless spectrum, and FCC chairman Julius Genachowski used his speech to the NARUC state regulators to encourage innovation and employment. Gone were mentions of the old priorities — intrusive new regulations such as Net Neutrality and Title II reclassification of modern broadband as an old telecom service. Finally, it appeared, an already healthy and vibrant Internet sector could stop worrying about these big new government impositions — and years of likely litigation — and get on with building the 21st century digital infrastructure.

But then came word at the end of last week that the FCC would indeed go ahead with its new Net Neutrality regs. Perhaps even issuing them on December 22, just as Congress and the nation take off for Christmas vacation [the FCC now says it will hold its meeting on December 15]. When even a rare  economic sunbeam is quickly clouded by yet more heavy-handedness from Washington, is it any wonder unemployment remains so high and growth so low?

Any number of people sympathetic to the economy’s and the Administration’s plight are trying to help. Last week David Leonhardt of the New York Times pointed the way, at least in a broad strategic sense: “One Way to Trim the Deficit: Cultivate Growth.” Yes, economic growth! Remember that old concept? Economist and innovation expert Michael Mandel has suggested a new concept of “countercyclical regulatory policy.” The idea is to lighten regulatory burdens to boost growth in slow times and then, later, when the economy is moving full-steam ahead, apply more oversight to curb excesses. Right now, we should be lightening burdens, Mandel says, not imposing new ones:

it’s really a dumb move to monkey with the vibrant and growing communications sector when the rest of the economy is so weak. It’s as if you have two cars — one running, one in the repair shop — and you decide it’s a good time to rebuild the transmission of the car that actually works because you hear a few squeaks.

Apparently, FCC honchos met with interested parties this morning to discuss what comes next. Unfortunately, at a time when we need real growth, strong growth, exuberant growth! (as Mandel would say), the Administration appears to be saddling an economy-lifting reform (wireless spectrum expansion) with leaden regulation. What’s the point of new wireless spectrum if you massively devalue it with Net Neutrality, open access, and/or Title II?

One step forward, two steps back (ten steps back?) is not an exuberant growth and jobs strategy.

Microsoft Outlines Economics of the Cloud

Friday, November 12th, 2010

In a new white paper:

We believe that large clouds could one day deliver computing power at up to 80% lower cost than small clouds. This is due to the combined effects of three factors: supply-side economies of scale which allow large clouds to purchase and operate infrastructure cheaper; demand-side economies of scale which allow large clouds to run that infrastructure more efficiently by pooling users; and multi-tenancy which allows users to share an application, splitting the cost of managing that application.
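The multi-tenancy point is cost-sharing arithmetic: the fixed cost of managing an application is divided across every tenant who shares it, so per-user cost falls steeply with scale. A toy illustration with invented dollar figures, not Microsoft’s:

```python
# Toy illustration of the multi-tenancy cost split described in the white paper.
# All dollar figures are invented.

app_management_cost = 120_000.0   # annual fixed cost to operate, patch, and manage one application
infra_cost_per_user = 40.0        # annual per-user infrastructure cost

def cost_per_user(tenants):
    return app_management_cost / tenants + infra_cost_per_user

for tenants in (10, 1_000, 100_000):
    print(f"{tenants:>7} tenants -> ${cost_per_user(tenants):,.2f} per user per year")
```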

International Broadband Comparison, continued

Thursday, October 14th, 2010

New numbers from Cisco allow us to update our previous comparison of actual Internet usage around the world. We think this is a far more useful metric than the usual “broadband connections per 100 inhabitants” used by the OECD and others to compile the oft-cited world broadband rankings.

What the per capita metric really measures is household size. And because the U.S. has more people in each household than many other nations, we appear worse in those rankings. But as the Phoenix Center has noted, if each OECD nation reached 100% broadband nirvana — i.e., every household in every nation connected — the U.S. would actually fall from 15th to 20th. Residential connections per capita is thus not a very illuminating measure.
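The household-size effect is easy to see with a small example: at full household penetration, subscriptions per 100 inhabitants is roughly 100 divided by average household size, so a country with smaller households ranks “higher” even when every home in both countries is connected. A sketch with made-up countries:

```python
# Why "broadband subscriptions per 100 inhabitants" rewards small households.
# Both hypothetical countries below have every household connected.

countries = {
    # name: average household size (people per home) -- illustrative values
    "Country A (larger households)": 2.6,
    "Country B (smaller households)": 2.1,
}

for name, household_size in countries.items():
    subs_per_100 = 100 / household_size   # one subscription per household at 100% penetration
    print(f"{name}: {subs_per_100:.1f} subscriptions per 100 inhabitants")
```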

But look at the actual Internet traffic generated and consumed in the U.S.

The U.S. far outpaces every other region of the world. In the second chart, you can see that in fact only one nation, South Korea, generates significantly more Internet traffic per user than the U.S. This is no surprise. South Korea was the first nation to widely deploy fiber-to-the-x and was also the first to deploy 3G mobile, leading to not only robust infrastructure but also a vibrant Internet culture. The U.S. dwarfs most others.

If the U.S. were so far behind in broadband, we could not generate around twice as much network traffic per user compared to nations we are told far exceed our broadband capacity and connectivity. The U.S. has far to go in a never-ending buildout of its communications infrastructure. But we invest more than other nations, we’ve got better broadband infrastructure overall, and we use broadband more — and more effectively (see the Connectivity Scorecard and The Economist’s Digital Economy rankings) — than almost any other nation.

The conventional wisdom on this one is just plain wrong.

The Regulatory Threat to Web Video

Monday, May 17th, 2010

See our commentary at Forbes.com, responding to Revision3 CEO Jim Louderback’s calls for Internet regulation.

What we have here is “mission creep.” First, Net Neutrality was about an “open Internet” where no websites were blocked or degraded. But as soon as the whole industry agreed to these perfectly reasonable Open Web principles, Net Neutrality became an exercise in micromanagement of network technologies and broadband business plans. Now, Louderback wants to go even further and regulate prices. But there’s still more! He also wants to regulate the products that broadband providers can offer.

“In the Matter of Preserving the Open Internet”

Thursday, April 29th, 2010

Here were my comments in the FCC’s Notice of Proposed Rule Making on “Preserving the Open Internet” — better known as “Net Neutrality”:

A Net Neutrality regime will not make the Internet more “open.” The Internet is already very open. More people create and access more content and applications than ever before. And with the existing Four Principles in place, the Internet will remain open. In fact, a Net Neutrality regime could close off large portions of the Internet for many consumers. By intruding in technical infrastructure decisions and discouraging investment, Net Neutrality could decrease network capacity, connectivity, and robustness; it could increase prices; it could slow the cycle of innovation; and thus shut the window to the Web on millions of consumers. Net Neutrality is not about openness. It is far more accurate to say it is about closing off experimentation, innovation, and opportunity.

A Victory For the Free Web

Wednesday, April 7th, 2010

After yesterday’s federal court ruling against the FCC’s overreaching net neutrality regulations, which we have spent considerable time and effort combating over the last seven years, Holman Jenkins says it well:

Hooray. We live in a nation of laws and elected leaders, not a nation of unelected leaders making up rules for the rest of us as they go along, whether in response to besieging lobbyists or the latest bandwagon circling the block hauled by Washington’s permanent “public interest” community.

This was the reassuring message yesterday from the D.C. Circuit Court of Appeals aimed at the Federal Communications Commission. Bottom line: The FCC can abandon its ideological pursuit of the “net neutrality” bogeyman, and get on with making the world safe for the iPad.

The court ruled in considerable detail that there’s no statutory basis for the FCC’s ambition to annex the Internet, which has grown and thrived under nobody’s control.

. . .

So rather than focusing on new excuses to mess with network providers, the FCC should tackle two duties unambiguously before it: Figure out how to liberate the nation’s wireless spectrum (over which it has clear statutory authority) to flow to more market-oriented uses, whether broadband or broadcast, while also making sure taxpayers get adequately paid as the current system of licensed TV and radio spectrum inevitably evolves into something else.

Second: Under its media ownership hat, admit that such regulation, which inhibits the merger of TV stations with each other and with newspapers, is disastrously hindering our nation’s news-reporting resources and brands from reshaping themselves to meet the opportunities and challenges of the digital age. (Willy nilly, this would also help solve the spectrum problem as broadcasters voluntarily redeployed theirs to more profitable uses.)

Chronically Critical Broadband Country Comparisons

Friday, March 26th, 2010

With the release of the FCC’s National Broadband Plan, we continue to hear all sorts of depressing stories about the sorry state of American broadband Internet access. But is it true?

International comparisons in such a fast-moving arena as tech and communications are tough. I don’t pretend it is easy to boil down a hugely complex topic to one right answer, but I did have some critical things to say about a major recent report that got way too many things wrong. A new article by that report’s author singled out France as far more advanced than the U.S. To cut through all the clutter of conflicting data and competing interpretations on broadband deployment, access, adoption, prices, and speeds, however, maybe a simple chart will help.

Here we compare network usage. Not advertised speeds, which are suspect. Not prices which can be distorted by the use of purchasing power parity (PPP). Not “penetration,” which is largely a function of income, urbanization, and geography. No, just simply, how much data traffic do various regions create and consume.

If U.S. networks were so backward — too sparse, too slow, too expensive — would Americans be generating 65% more network traffic per capita than their Western European counterparts?

Washington liabilities vs. innovative assets

Friday, March 12th, 2010

Our new article at RealClearMarkets:

As Washington and the states pile up mountainous liabilities — $3 trillion for unfunded state pensions, $10 trillion in new federal deficits through 2019, and $38 trillion (or is it $50 trillion?) in unfunded Medicare promises — the U.S. needs once again to call on its chief strategic asset: radical innovation.

One laboratory of growth will continue to be the Internet. The U.S. began the 2000’s with fewer than five million residential broadband lines and zero mobile broadband. We begin the new decade with 71 million residential lines and 300 million portable and mobile broadband devices. In all, consumer bandwidth grew almost 15,000%.

Even a thriving Internet, however, cannot escape Washington’s eager eye. As the Federal Communications Commission contemplates new “network neutrality” regulation and even a return to “Title II” telephone regulation, we have to wonder where growth will come from in the 2010’s . . . .

Quote of the Day

Thursday, March 11th, 2010

“No moment in technology history has ever been more exciting or dangerous than now. The Internet is like a new computer running a flashy, exciting demo. We have been entranced by this demo for fifteen years. But now it is time to get to work, and make the Internet do what we want it to . . . .

“Practical business: who will win the tug of war between private machines and the Cloud? Will you store your personal information on your own personal machines, or on nameless servers far away in the Cloud, or both? Answer: in the Cloud. The Cloud (or the Internet Operating System, IOS — ‘Cloud 1.0’) will take charge of your personal machines. It will move the information you need at any given moment onto your own cellphone, laptop, pad, pod — but will always keep charge of the master copy. When you make changes to any document, the changes will be reflected immediately in the Cloud. Many parts of this service are available already.”

— David Gelernter, “Time to Start Taking the Internet Seriously”

Common Sense of Amazonian Proportions

Monday, January 18th, 2010

Amazon’s Paul Misener gets all reasonable in his comments on the FCC’s proposed net neutrality rules:

With this win-win-win goal in mind, and consistent with the principle of maintaining an open Internet, Amazon respectfully suggests that the FCC’s proposed rules be extended to allow broadband Internet access service providers to favor some content so long as no harm is done to other content.

Importantly, we note that the Internet has long been interconnected with private networks and edge caches that enhance the performance of some Internet content in comparison with other Internet content, and that these performance improvements are paid for by some but not all providers of content.  The reason why these arrangements are acceptable from a public policy perspective is simple:  the performance of other content is not disfavored, i.e., other content is not harmed.

Google and the Meddling Kingdom

Wednesday, January 13th, 2010

Here are a few good perspectives on Google’s big announcement that it will no longer censor search results for google.cn in China, a move it says could lead to a pull-out from the Middle Kingdom.

“Google’s Move: Does it Make Sense?” by Larry Dignan

“The Google News” by James Fallows

I agree with Dignan of ZDNet that this move was probably less about China and more about policy and branding in the U.S. and Europe.

UPDATE: Much more detail on the mechanics of the attack from Wired’s Threat Level blog.

Collective vs. Creative: The Yin and Yang of Innovation

Tuesday, January 12th, 2010

Later this week the FCC will accept the first round of comments in its “Open Internet” rule making, commonly known as Net Neutrality. Never mind that the Internet is already open and it was never strictly neutral. Openness and neutrality are two appealing buzzwords that serve as the basis for potentially far reaching new regulation of our most dynamic economic and cultural sector — the Internet.

I’ll comment on Net Neutrality from several angles over the coming days. But a terrific essay by Berkeley’s Jaron Lanier impelled me to begin by summarizing some of the big meta-arguments that have been swirling the last few years and which now broadly define the opposing sides in the Net Neutrality debate. After surveying these broad categories, I’ll get into the weeds on technology, business, and policy.

The thrust behind Net Neutrality is a view that the Internet should conform to a narrow set of technology and business “ideals” — “open,” “neutral,” “non-discriminatory.” Wonderful words. Often virtuous. But these aren’t the only traits important to economic and cultural systems. In fact, Net Neutrality sets up a false dichotomy — a manufactured war — between open and closed, collaborative versus commercial, free versus paid, content versus conduit. I’ve made a long list of the supposed opposing forces. Net Neutrality favors only one side of the table below. It seeks to cement in place one model of business and technology. It is intensely focused on the left-hand column and is either oblivious or hostile to the right-hand column. It thinks the right-hand items are either bad (prices) or assumes they appear magically (bandwidth).

We skeptics of Net Neutrality, on the other hand, do not favor one side or the other. We understand that there are virtues all around. Here’s how I put it on my blog last autumn:

Suggesting we can enjoy Google’s software innovations without the network innovations of AT&T, Verizon, and hundreds of service providers and technology suppliers is like saying that once Microsoft came along we no longer needed Intel.

No, Microsoft and Intel built upon each other in a virtuous interplay. Intel’s microprocessor and memory inventions set the stage for software innovation. Bill Gates exploited Intel’s newly abundant transistors by creating radically new software that empowered average businesspeople and consumers to engage with computers. The vast new PC market, in turn, dramatically expanded Intel’s markets and volumes and thus allowed it to invest in new designs and multi-billion dollar chip factories across the globe, driving Moore’s law and with it the digital revolution in all its manifestations.

Software and hardware. Bits and bandwidth. Content and conduit. These things are complementary. And yes, like yin and yang, often in tension and flux, but ultimately interdependent.

Likewise, we need the ability to charge for products and set prices so that capital can be rationally allocated and the hundreds of billions of dollars in network investment can occur. It is thus these hard prices that yield so many of the “free” consumer surplus advantages we all enjoy on the Web. No company or industry can capture all the value of the Web. Most of it comes to us as consumers. But companies and content creators need at least the ability to pursue business models that capture some portion of this value so they can not only survive but continually reinvest in the future. With a market moving so fast, with so many network and content models so uncertain during this epochal shift in media and communications, these content and conduit companies must be allowed to define their own products and set their own prices. We need to know what works, and what doesn’t.

When the “network layers” regulatory model, as it was then known, was first proposed back in 2003-04, my colleague George Gilder and I prepared testimony for the U.S. Senate. Although the layers model was little more than an academic notion, we thought then this would become the next big battle in Internet policy. We were right. Even though the “layers” proposal was (and is!) an ill-defined concept, the model we used to analyze what Net Neutrality would mean for networks and Web business models still applies. As we wrote in April of 2004:

Layering proponents . . . make a fundamental error. They ignore ever changing trade-offs between integration and modularization that are among the most profound and strategic decisions any company in any industry makes. They disavow Harvard Business professor Clayton Christensen’s theorems that dictate when modularization, or “layering,” is advisable, and when integration is far more likely to yield success. For example, the separation of content and conduit—the notion that bandwidth providers should focus on delivering robust, high-speed connections while allowing hundreds of millions of professionals and amateurs to supply the content—is often a sound strategy. We have supported it from the beginning. But leading edge undershoot products (ones that are not yet good enough for the demands of the marketplace) like video-conferencing often require integration.

Over time, the digital and photonic technologies at the heart of the Internet lead to massive integration — of transistors, features, applications, even wavelengths of light onto fiber optic strands. This integration of computing and communications power flings creative power to the edges of the network. It shifts bottlenecks. Crystalline silicon and flawless fiber form the low-entropy substrate that carry the world’s high-entropy messages — news, opinions, new products, new services. But these feats are not automatic. They cannot be legislated or mandated. And just as innovation in the core of the network unleashes innovation at the edges, so too more content and creativity at the edge create the need for ever more capacity and capability in the core. The bottlenecks shift again. More data centers, better optical transmission and switching, new content delivery optimization, the move from cell towers to femtocell wireless architectures. There is no final state of equilibrium where one side can assume that the other is a stagnant utility, at least not in the foreseeable future.

I’ll be back with more analysis of the Net Neutrality debate, but for now I’ll let Jaron Lanier (whose book You Are Not a Gadget was published today) sum up the argument:

Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.

If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.

There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code—like the page-rank algorithms in the top search engines or Adobe’s Flash—always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.

Actually, Silicon Valley is remarkably good at not making collectivization mistakes when our own fortunes are at stake. If you suggested that, say, Google, Apple and Microsoft should be merged so that all their engineers would be aggregated into a giant wiki-like project—well you’d be laughed out of Silicon Valley so fast you wouldn’t have time to tweet about it. Same would happen if you suggested to one of the big venture-capital firms that all the start-ups they are funding should be merged into a single collective operation.

But this is exactly the kind of mistake that’s happening with some of the most influential projects in our culture, and ultimately in our economy.

Berkman’s Broadband Bungle

Tuesday, December 22nd, 2009

Professors at a leading research unit put suspect data into a bad model, fail to include crucial variables, and even manufacture the most central variable to deliver the hoped-for outcome.

Climate-gate? No, call it Berkman’s broadband bungle.

In October, Harvard’s Berkman Center for the Internet and Society delivered a report, commissioned by the Federal Communications Commission, comparing international broadband markets and policies. The report was to be a central component of the Administration’s new national broadband Internet policy, arriving in February 2010.

Just one problem. Actually many problems. The report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland . . . .

See my critique of this big report on international broadband at RealClearMarkets.

New York and Net Neutrality

Friday, November 20th, 2009

This morning, the Technology Committee of the New York City Council convened a large hearing on a resolution urging Congress to pass a robust Net Neutrality law. I was supposed to testify, but our narrowband transportation system prevented me from getting to New York. Here, however, is the testimony I prepared. It focuses on investment, innovation, and the impact Net Neutrality would have on both.

“Net Neutrality’s Impact on Internet Innovation” – by Bret Swanson – 11.20.09

Must Watch Web Debate

Thursday, November 19th, 2009

If you’re interested in Net Neutrality regulation and have some time on your hands, watch this good debate at the Web 2.0 conference. The resolution was “A Network Neutrality law is necessary,” and the two opposing sides were:

Against

  • James Assey – Executive Vice President, National Cable and Telecommunications Association
  • Robert Quinn – Senior Vice President, Federal Regulatory, AT&T
  • Christopher Yoo – Professor of Law and Communication; Director, Center for Technology, Innovation, and Competition, UPenn Law

For

  • Tim Wu – Coined the term “Network Neutrality”; Professor of Law, Columbia Law
  • Brad Burnham – VC, Union Square Ventures
  • Nicholas Economides – Professor of Economics, Stern School of Business, New York University.

I think the side opposing the resolution wins, hands down — no contest really — but see for yourself.

Quote of the Day

Wednesday, October 28th, 2009

“I hope that they (government regulators) leave it alone . . . The Internet is working beautifully as it is.”

— Tim Draper, Silicon Valley venture capitalist, who, along with many other Silicon Valley investors and executives, signed a letter advocating new Internet regulations, apparently unaware of its true content.

Arbor’s new Net traffic report: “This is just the beginning…”

Monday, October 19th, 2009

See this comprehensive new Web traffic study from Arbor Networks — “the largest study of global Internet traffic since the start of the commercial Internet.” 

Conclusion

  • Internet is at an inflection point
  • Transition from a focus on connectivity to content
  • Old global Internet economic models are evolving
  • New entrants are reshaping the definition / value of connectivity
  • New technologies are reshaping the definition of the network: “Web” / desktop applications, cloud computing, CDNs
  • Changes mean significant new commercial, security, and engineering challenges
  • This is just the beginning…

These conclusions and the data Arbor tracked and reported largely followed our findings, projections, and predictions from two years ago:

And an update from this spring:

Also see our analysis from last winter highlighting the evolution of content delivery networks — what my colleague George Gilder dubbed “storewidth” back in 1999 — and which Arbor now says is the fastest growing source/transmitter of Net traffic.

An Exa-Prize for “Masters of Light”

Wednesday, October 7th, 2009

Holy Swedish silica/on. It’s an exa-prize!

Calling them “Masters of Light,” the Royal Swedish Academy awarded the 2009 Nobel Prize in Physics to Charles Kao, for discoveries central to the development of optical fiber, and to Willard Boyle and George Smith of Bell Labs, for the invention of the charge-coupled device (CCD) digital imager.

Perhaps more than any two discoveries, these technologies are responsible for our current era of dramatically expanding cultural content and commercial opportunities across the Internet. I call this torrent of largely visual data gushing around the Web the “exaflood.” Exa means 10^18, and today monthly Internet traffic in the U.S. tops two exabytes. For all of 2009, global Internet traffic should reach 100 exabytes, equal to the contents of around 5,000,000 Libraries of Congress. By 2015, the U.S. might transmit 1,000 exabytes, the equivalent of two Libraries of Congress every second for the entire year.
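
The “two Libraries of Congress every second” line is simple arithmetic, checked below. The only assumed constant is the size of a Library of Congress, taken as roughly 20 terabytes — the figure implied by the 100 exabytes ≈ 5,000,000 LoC equivalence above.

    # Back-of-the-envelope check of the exaflood arithmetic.
    EXABYTE = 10**18                      # bytes
    TERABYTE = 10**12
    LIBRARY_OF_CONGRESS = 20 * TERABYTE   # assumption: ~20 TB, implied by 100 EB = 5,000,000 LoC

    yearly_us_traffic_2015 = 1000 * EXABYTE   # projected U.S. traffic for 2015
    seconds_per_year = 365 * 24 * 3600

    per_second = yearly_us_traffic_2015 / seconds_per_year
    print(per_second / TERABYTE)              # ~31.7 terabytes every second
    print(per_second / LIBRARY_OF_CONGRESS)   # ~1.6 Libraries of Congress per second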

Almost all this content is transmitted via fiber optics, where laser light pulsing billions of times a second carries information thousands of miles through astoundingly pure glass (silica). And much of this content is created using CCD imagers, the silicon microchips that turn photons into electrons in your digital cameras, camcorders, mobile phones, and medical devices. The basic science of the breakthroughs involves mastering the delicate but powerful reflective, refractive, and quantum photoelectric properties of both light and one of the world’s simplest and most abundant materials — sand. Also known in different forms as silica and silicon.

The innovations derived from Kao, Boyle, and Smith’s discoveries will continue cascading through global society for decades to come.

Neutrality for thee, but not for me

Sunday, October 4th, 2009

In Monday’s Wall Street Journal, I address the once-again raging topic of “net neutrality” regulation of the Web. On September 21, new FCC chair Julius Genachowski proposed more formal neutrality regulations. Then on September 25, AT&T accused Google of violating the very neutrality rules the search company has sought for others. The gist of the complaint was that the new Google Voice service does not connect all phone calls the way other phone companies are required to do. Not an earthshaking matter in itself, but a good example of the perils of neutrality regulation.

As the Journal wrote in its own editorial on Saturday:

Our own view is that the rules requiring traditional phone companies to connect these calls should be scrapped for everyone rather than extended to Google. In today’s telecom marketplace, where the overwhelming majority of phone customers have multiple carriers to choose from, these regulations are obsolete. But Google has set itself up for this political blowback.

Last week FCC Chairman Julius Genachowski proposed new rules for regulating Internet operators and gave assurances that “this is not about government regulation of the Internet.” But this dispute highlights the regulatory creep that net neutrality mandates make inevitable. Content providers like Google want to dabble in the phone business, while the phone companies want to sell services and applications.

The coming convergence will make it increasingly difficult to distinguish among providers of broadband pipes, network services and applications. Once net neutrality is unleashed, it’s hard to see how anything connected with the Internet will be safe from regulation.

Several years ago, all sides agreed to broad principles that prohibit blocking Web sites or applications. But I have argued that more detailed and formal regulations governing such a dynamic arena of technology and changing business models would stifle innovation.

Broadband to the home, office, and to a growing array of diverse mobile devices has been a rare bright spot in this dismal economy. Since net neutrality regulation was first proposed in early 2004, consumer bandwidth per capita in the U.S. grew to 3 megabits per second from just 262 kilobits per second, and monthly U.S. Internet traffic increased to two billion gigabytes from 170 million gigabytes — both 10-fold leaps. New wired and wireless innovations and services are booming.

All without net neutrality regulation.

The proposed FCC regulations could go well beyond the existing (and uncontroversial) non-blocking principles. A new “Fifth Principle,” if codified, could prohibit “discrimination” not just among applications and services but even at the level of data packets traversing the Net. But traffic management of packets is used across the Web to ensure robust service and security.

As network traffic, content, and outlets proliferate and diversify, Washington wants to apply rigid, top-down rules. But the network requirements of email and high-definition video are very different. Real time video conferencing requires more network rigor than stored content like YouTube videos. Wireless traffic patterns are more unpredictable than residential networks because cellphone users are, well, mobile. And the next generation of video cloud computing — what I call the exacloud — will impose the most severe constraints yet on network capacity and packet delay.

Or if you think entertainment unimportant, consider the implications for cybersecurity. The very network technologies that ensure a rich video experience are used to kill dangerous “botnets” and combat cybercrime.

And what about low-income consumers? If network service providers can’t partner with content companies, offer value-added services, or charge high-end users more money for consuming more bandwidth, low-end consumers will be forced to pay higher prices. Net neutrality would thus frustrate the Administration’s goal of 100% broadband.

Health care, energy, jobs, debt, and economic growth are rightly earning most of the policy attention these days. But regulation of the Net would undermine the key global platform that underlay better performance on each of these crucial economic matters. Washington may be bailing out every industry that doesn’t work, but that’s no reason to add new constraints to one that manifestly does.

— Bret Swanson

The first day of the rest of the Internet

Thursday, October 1st, 2009

Yesterday, the Joint Project Agreement between the U.S. Department of Commerce and ICANN expired. Today, a new “Affirmation of Commitments” goes into effect.

Key points from the new Affirmation:

  • ICANN will remain an independent, private-sector led organization.
  • Nations from around the world will have new input through the Government Advisory Committee (GAC).
  • Overall transparency and global involvement should improve.
  • But this Affirmation should extinguish any notions that the UN, EU, or other international players might gain new power over ICANN.
  • ICANN must focus its efforts on ensuring three core objectives, namely that the Internet is:
  1. always on
  2. free and open
  3. secure and stable

More big issues coming down the pike. But for now, I think, a fortuitous development.

Does Google Voice violate neutrality?

Saturday, September 26th, 2009

This is the ironic but very legitimate question AT&T is asking.

As Adam Thierer writes,

Whatever you think about this messy dispute between AT&T and Google about how to classify web-based telephony apps for regulatory purposes — in this case, Google Voice — the key issue not to lose sight of here is that we are inching ever closer to FCC regulation of web-based apps!  Again, this is the point we have stressed here again and again and again and again when opposing Net neutrality mandates: If you open the door to regulation on one layer of the Net, you open up the door to the eventual regulation of all layers of the Net.

George Gilder and I made this point in Senate testimony five and a half years ago. Advocates of big new regulations on the Internet should be careful what they wish for.

End-to-end? Or end to innovation?

Friday, September 25th, 2009

In what is sure to be a substantial contribution to both the technical and policy debates over Net Neutrality, Richard Bennett of the Information Technology and Innovation Foundation has written a terrific piece of technology history and forward-looking analysis. In “Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate,” Bennett concludes:

Arguments for freezing the Internet into a simplistic regulatory straightjacket often have a distinctly emotional character that frequently borders on manipulation.

The Internet is a wonderful system. It represents a new standard of global cooperation and enables forms of interaction never before possible. Thanks to the Internet, societies around the world reap the benefits of access to information, opportunities for collaboration, and modes of communication that weren’t conceivable to the public a few years ago. It’s such a wonderful system that we have to strive very hard not to make it into a fetish object, imbued with magical powers and beyond the realm of dispassionate analysis, criticism, and improvement.

At the end of the day, the Internet is simply a machine. It was built the way it was largely by a series of accidents, and it could easily have evolved along completely different lines with no loss of value to the public. Instead of separating TCP from IP in the way that they did, the academics in Palo Alto who adapted the CYCLADES architecture to the ARPANET infrastructure could have taken a different tack: They could have left them combined as a single architectural unit providing different retransmission policies (a reliable TCP-like policy and an unreliable UDP-like policy) or they could have chosen a different protocol such as Watson’s Delta-t or Pouzin’s CYCLADES TS. Had the academics gone in either of these directions, we could still have a World Wide Web and all the social networks it enables, perhaps with greater resiliency.

The glue that holds the Internet together is not any particular protocol or software implementation: first and foremost, it’s the agreements between operators of Autonomous Systems to meet and share packets at Internet Exchange Centers and their willingness to work together. These agreements are slowly evolving from a blanket pact to cross boundaries with no particular regard for QoS into a richer system that may someday preserve delivery requirements on a large scale. Such agreements are entirely consistent with the structure of the IP packet, the needs of new applications, user empowerment, and “tussle.”

The Internet’s fundamental vibrancy is the sandbox created by the designers of the first datagram networks that permitted network service enhancements to be built and tested without destabilizing the network or exposing it to unnecessary hazards. We don’t fully utilize the potential of the network to rise to new challenges if we confine innovations to the sandbox instead of moving them to the parts of the network infrastructure where they can do the most good once they’re proven. The real meaning of end-to-end lies in the dynamism it bestows on the Internet by supporting innovation not just in applications but in fundamental network services. The Internet was designed for continual improvement: There is no reason not to continue down that path.

A QoS primer

Wednesday, September 23rd, 2009

In case my verses attempting an analysis of Quality-of-Service and “net neutrality” regulation need supplementary explanation, here’s a terrifically lucid seven-minute Internet packet primer — in prose and pictures — from George Ou. Also, a longer white paper on the same topic:

Seven-minute Flash presentation: The need for a smarter prioritized Internet

White paper: Managing Broadband Networks: A Policymaker’s Guide
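
For readers who prefer code to slides, here is a minimal sketch of the idea behind prioritized delivery: latency-sensitive packets (voice, real-time video) get drained before bulk traffic. This is a toy strict-priority queue of my own, not a description of any particular vendor’s implementation, and real networks use subtler schemes (weighted fair queuing, DiffServ markings) so that bulk traffic is never starved.

    import heapq

    # Lower number = higher priority. Illustrative traffic classes only.
    PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}

    class PriorityScheduler:
        """Toy strict-priority packet scheduler."""
        def __init__(self):
            self._queue = []
            self._seq = 0  # preserves arrival order within a class

        def enqueue(self, traffic_class, packet):
            heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
            self._seq += 1

        def dequeue(self):
            _, _, packet = heapq.heappop(self._queue)
            return packet

    sched = PriorityScheduler()
    sched.enqueue("bulk", "p2p chunk")
    sched.enqueue("voice", "VoIP frame")
    sched.enqueue("web", "HTTP response")
    print(sched.dequeue())  # "VoIP frame" goes first, even though it arrived second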

Leviathan Spam

Wednesday, September 23rd, 2009

Leviathan Spam

Send the bits with lasers and chips
See the bytes with LED lights

Wireless, optical, bandwidth boom
A flood of info, a global zoom

Now comes Lessig
Now comes Wu
To tell us what we cannot do

The Net, they say,
Is under attack
Stop!
Before we can’t turn back

They know best
These coder kings
So they prohibit a billion things

What is on their list of don’ts?
Most everything we need the most

To make the Web work
We parse and label
We tag the bits to keep the Net stable

The cloud is not magic
It’s routers and switches
It takes a machine to move exadigits

Now Lessig tells us to route is illegal
To manage Net traffic, Wu’s ultimate evil (more…)

A New Leash on the Net?

Monday, September 21st, 2009

Today, FCC chairman Julius Genachowski proposed new regulations on communications networks. We were among the very first opponents of these so-called “net neutrality” rules when they were first proposed in concept back in 2004. Here are a number of our relevant articles over the past few years:

Political Noise On the Net

Friday, September 18th, 2009

With an agreement between the U.S. Department of Commerce and ICANN (the nonprofit Internet Corp. for Assigned Names and Numbers, headquartered in California) expiring on September 30, global bureaucrats salivate. As I write today in Forbes, they like to criticize ICANN leadership — hoping to gain political control — but too often ignore the huge success of the private-sector-led system.

How has the world fared under the existing model?

In the 10 years of the Commerce-ICANN relationship, Web users around the globe have grown from 300 million to almost 2 billion. World Internet traffic blossomed from around 10 million gigabytes per month to almost 10 billion, a near 1,000-fold leap. As the world economy grew by approximately 50%, Internet traffic grew by 100,000%. Under this decade of private sector leadership, moreover, the number of Internet users in North America grew around 150% while the number of users in the rest of the world grew almost 600%. World growth outpaced U.S. growth.

Can we really digest this historic shift? In this brief period, the portion of the globe’s population that communicates electronically will go from negligible to almost total. From a time when even the elite accessed relative spoonfuls of content, to a time in the near future when the masses will access all recorded information.

These advances do not manifest a crisis of Internet governance.

As for a real crisis? See what happens when politicians take the Internet away from the engineers who, in a necessarily cooperative fashion, make the whole thing work. Criticism of mild U.S. government oversight of ICANN is hardly reason to invite micromanagement by an additional 190 governments.

What price, broadband?

Thursday, September 3rd, 2009

See this new paper from economists Rob Shapiro and Kevin Hassett showing how artificial limits on varied pricing of broadband could severely forestall broadband adoption.

To the extent that lower-income and middle-income consumers are required to pay a greater share of network costs, we should expect a substantial delay in achieving universal broadband access. Our simulations suggest that spreading the costs equally among all consumers — the minority who use large amounts of bandwidth and the majority who use very little — will significantly slow the rate of adoption at the lower end of the income scale and extend the life of the digital divide.

If costs are shifted more heavily to those who use the most bandwidth and, therefore, are most responsible for driving up the cost of expanding network capabilities, the digital divergence among the races and among income groups can be eliminated much sooner.
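
A toy calculation makes the cost-allocation point concrete. The numbers below are illustrative assumptions, not figures from the Shapiro-Hassett paper; the point is only that flat cost-sharing shifts network costs toward light users.

    # Illustrative: how a network cost gets shared under two pricing rules.
    monthly_network_cost = 1_000_000.0        # assumed cost to serve 20,000 subscribers
    light_users, heavy_users = 18_000, 2_000  # assumed split of the subscriber base
    light_gb, heavy_gb = 5, 200               # assumed monthly usage per subscriber

    total_gb = light_users * light_gb + heavy_users * heavy_gb

    # Rule 1: spread costs equally across all subscribers
    flat_price = monthly_network_cost / (light_users + heavy_users)

    # Rule 2: allocate costs in proportion to usage
    cost_per_gb = monthly_network_cost / total_gb
    light_price = cost_per_gb * light_gb
    heavy_price = cost_per_gb * heavy_gb

    print(f"Flat pricing:        everyone pays ${flat_price:.2f}")
    print(f"Usage-based pricing: light users ${light_price:.2f}, heavy users ${heavy_price:.2f}")

With these made-up numbers, flat pricing charges the 90% of subscribers who use little bandwidth $50 a month; usage-sensitive pricing charges them about $10. That gap is the mechanism behind the adoption effect the paper models.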

Dept. of Modern Afflictions

Tuesday, September 1st, 2009

Do you suffer from “network deprivation”? I hope so. I do.

Can Microsoft Grasp the Internet Cloud?

Saturday, August 1st, 2009

See my new Forbes.com commentary on the Microsoft-Yahoo search partnership:

Ballmer appears now to get it. “The more searches, the more you learn,” he says. “Scale drives knowledge, which can turn around and drive innovation and relevance.”

Microsoft decided in 2008 to build 20 new data centers at a cost of $1 billion each. This was a dramatic commitment to the cloud. Conceived by Bill Gates’s successor, Ray Ozzie, the global platform would serve up a new generation of Web-based Office applications dubbed Azure. It would connect video gamers on its Xbox Live network. And it would host Microsoft’s Hotmail and search applications.

The new Bing search engine earned quick acclaim for relevant searches and better-than-Google pre-packaged details about popular health, transportation, location and news items. But with just 8.4% of the market, Microsoft’s $20 billion infrastructure commitment would be massively underutilized. Meanwhile, Yahoo, which still leads in news, sports and finance content, could not remotely afford to build a similar new search infrastructure to compete with Google and Microsoft. Thus, the combination. Yahoo and Microsoft can share Ballmer’s new global infrastructure.

Broadband benefit = $32 billion

Tuesday, July 14th, 2009

We recently estimated the dramatic gains in “consumer bandwidth” — our ability to communicate and take advantage of the Internet. So we note this new study from the Internet Innovation Alliance, written by economists Mark Dutz, Jonathan Orszag, and Robert Willig, that estimates a consumer surplus from U.S. residential broadband Internet access of $32 billion. “Consumer surplus” is the net benefit consumers enjoy, basically the additional value they receive from a product compared to what they pay.

Jackson’s traffic spike

Monday, June 29th, 2009

Om Malik surveys the Net traffic spike after Michael Jackson’s death:

Around 6:30 p.m. EST, Akamai’s Net Usage Index for News spiked all the way to 4,247,971 global visitors per minute vs. normal traffic of 2,000,000, a 112 percent gain.

Bandwidth Boom: Measuring Communications Capacity

Wednesday, June 24th, 2009

See our new paper estimating the growth of consumer bandwidth – or our capacity to communicate – from 2000 to 2008. We found:

  • a huge 5,400% increase in residential bandwidth;
  • an astounding 54,200% boom in wireless bandwidth; and
  • an almost 100-fold increase in total consumer bandwidth

[Chart: U.S. consumer bandwidth, 2000–2008, residential and wireless]

U.S. consumer bandwidth at the end of 2008 totaled more than 717 terabits per second, yielding, on a per capita basis, almost 2.4 megabits per second of communications power.
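
The per-capita figure is just the total divided by population; a quick check, assuming a U.S. population of roughly 300 million at the end of 2008:

    total_consumer_bandwidth_bps = 717e12   # 717 terabits per second, from the paper
    us_population_2008 = 300e6              # assumption: ~300 million people

    per_capita_bps = total_consumer_bandwidth_bps / us_population_2008
    print(per_capita_bps / 1e6)             # ~2.4 megabits per second per person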

“Code” at 10

Tuesday, May 12th, 2009

Check out Cato Unbound’s symposium on Lawrence Lessig’s 1999 book Code and Other Laws of Cyberspace. Declan McCullagh leads off, with Harvard’s Jonathan Zittrain and my former colleague Adam Thierer next, and then a response from Lessig himself.

Here’s Thierer’s bottom line:

Luckily for us, Lessig’s lugubrious predictions proved largely unwarranted. Code has not become the great regulator of markets or enslaver of man; it has been a liberator of both. Indeed, the story of the past digital decade has been the exact opposite of the one Lessig envisioned in Code. Cyberspace has proven far more difficult to “control” or regulate than any of us ever imagined. More importantly, the volume and pace of technological innovation we have witnessed over the past decade has been nothing short of stunning.

Had there been anything to Lessig’s “code-is-law” theory, AOL’s walled-garden model would still be the dominant web paradigm instead of search, social networking, blogs, and wikis. Instead, AOL — a company Lessig spent a great deal of time fretting over in Code — was forced to tear down those walls years ago in an effort to retain customers, and now Time Warner is spinning it off entirely. Not only are walled gardens dead, but just about every proprietary digital system is quickly cracked open and modified or challenged by open source and free-to-the-world Web 2.0 alternatives. How can this be the case if, as Lessig predicted, unregulated code creates a world of “perfect control”?

Getting the exapoint. Creating the future.

Friday, May 1st, 2009

Lots of commentators continue to misinterpret the research I and others have done on Internet traffic and its interplay with network infrastructure investment and communications policy.

I think that new video applications require lots more bandwidth — and, equally or even more important, that more bandwidth drives creative new applications. Two sides of the innovation coin. And I think investment friendly policies are necessary both to encourage deployment of new wireline and wireless broadband and also boost innovative new applications and services for consumers and businesses.

But this article, as one of many examples, mis-summarizes my view. It uses scary words like “apocalypse,” “catastrophe,” and, well, “scare mongering,” to describe my optimistic anticipation of an exaflood of Internet innovations coming our way. I don’t think that

the world will simply run out of bandwidth and we’ll all be weeping over our clogged tubes.

Not unless we block the expansion of new network capacity and capability. (more…)

Climbing the knowledge automation ladder

Thursday, April 30th, 2009

Check out Stephen Wolfram’s new project called Alpha, which moves beyond searching for information on the Web and toward the integration of knowledge into more useful, higher level patterns. I find the prospect of offloading onto Stephen Wolfram lots of data mining and other drudgery immediately appealing for my own research and analysis work. Quicker research would yield more — and one might hope, better — analysis. One can imagine lots of hiccups in getting to a real product, but this video demo offers a fun and enticing beginning.

Bandwidth caps: One hundred and one distractions

Thursday, April 30th, 2009

When Cablevision of New York announced this week it would begin offering broadband Internet service of 101 megabits per second for $99 per month, lots of people took notice. Which was the point.

Maybe the 101-megabit product is a good experiment. Maybe it will be successful. Maybe not. One hundred megabits per second is a lot, given today’s applications (and especially given cable’s broadcast tree-and-branch shared network topology). A hundred megabits, for example, could accommodate more than five full-bitrate broadcast HD channels (roughly 19 Mbps each), or ten or more heavily compressed HD streams. It’s difficult to imagine too many households finding a way today to consume that much bandwidth. Tomorrow is another question. The bottom line is that in addition to making a statement, Cablevision is probably mostly targeting the small business market with this product.
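
The stream arithmetic is easy to check. The per-stream bitrates below are my own rough assumptions for the period, not Cablevision’s figures:

    service_mbps = 101            # Cablevision's advertised downstream speed
    broadcast_hd_mbps = 19        # assumption: full-bitrate MPEG-2 broadcast HD
    compressed_hd_mbps = 9        # assumption: more aggressively compressed HD

    print(service_mbps // broadcast_hd_mbps)    # 5 full-bitrate HD channels
    print(service_mbps // compressed_hd_mbps)   # 11 compressed HD streams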

Far more perplexing than Cablevision’s strategy, however, was the reaction from groups like the reflexively critical Free Press:

We are encouraged by Cablevision’s plan to set a new high-speed bar of service for the cable industry. . . . this is a long overdue step in the right direction.

Free Press usually blasts any decision whatever by any network or media company. But by praising the 101-megabit experiment, Free Press is acknowledging the perfect legitimacy of charging variable prices for variable products. Pay more, get more. Pay less, get more affordably the type of service that will meet your needs the vast majority of the time. (more…)

Bandwidth and QoS: Much ado about something

Friday, April 24th, 2009

The supposed top finding of a new report commissioned by the British telecom regulator Ofcom is that we won’t need any QoS (quality of service) or traffic management to accommodate next generation video services, which are driving Internet traffic at consistently high annual growth rates of between 50% and 60%. TelecomTV One headlined, “Much ado about nothing: Internet CAN take video strain says UK study.” 

But the content of the Analysys Mason (AM) study, entitled “Delivering High Quality Video Services Online,” does not support either (1) the media headline — “Much ado about nothing,” which implies next generation services and brisk traffic growth don’t require much in the way of new technology or new investment to accommodate them — or (2) its own “finding” that QoS and traffic management aren’t needed to deliver these next generation content and services.

For example, AM acknowledges in one of its five key findings in the Executive Summary: 

innovative business models might be limited by regulation: if the ability to develop and deploy novel approaches was limited by new regulation, this might limit the potential for growth in online video services.

In fact, the very first key finding says:

A delay in the migration to 21CN-based bitstream products may have a negative impact on service providers that use current bitstream products, as growth in consumption of video services could be held back due to the prohibitive costs of backhaul capacity to support them on the legacy core network. We believe that the timely migration to 21CN will be important in enabling significant take-up of online video services at prices that are reasonable for consumers.

So very large investments in new technologies and platforms are needed, and new regulations that discourage this investment could delay crucial innovations on the edge. Sounds like much ado about something, something very big.  (more…)

Apples and Oranges

Friday, April 10th, 2009

Saul Hansell has done some good analysis of the broadband market (as I noted here), and I’m generally a big fan of the NYT’s Bits blog. But this item mixes cable TV apples with switched Internet oranges. And beyond that just misses the whole concept of products and prices.

Questioning whether Time Warner will be successful in its attempt to cap bandwidth usage on its broadband cable modem service — effectively raising the bandwidth pricing issue — Hansell writes:

I tried to explore the marginal costs with [Time Warner's] Mr. Hobbs. When someone decides to spend a day doing nothing but downloading every Jerry Lewis movie from BitTorrent, Time Warner doesn’t have to write a bigger check to anyone. Rather, as best as I can figure it, the costs are all about building the network equipment and buying long-haul bandwidth for peak capacity.

If that is true, the question of what is “fair” is somewhat more abstract than just saying someone who uses more should pay more. After all, people who watch more hours of cable television don’t pay more than those who don’t.

It’s also true that a restaurant patron who finishes his meal doesn’t pay more than someone who leaves half the same menu item on his plate. If he orders two bowls of soup, he gets more soup. He can’t order one bowl of soup and demand each of his five dining partners also be served for free. Pricing decisions depend upon the product and the granularity that is being offered. (more…)

Flameout, meltdown, starburst

Wednesday, April 8th, 2009

Sun Microsystems was the first to know — and to boldly say — that “the network is the computer.” And yet they couldn’t capitalize on this deep and early insight. Here are six reasons why.

40 years ago today

Tuesday, April 7th, 2009

Some good history on the evolution of the Internet. I especially liked Steve Crocker’s story about how he and his fellow early Internet developers would share ideas — not by email but by mail. Here’s Crocker’s first “Request for Comments” detailing networking protocols. Today there are some 5,000 RFCs.

Broadband bridges to where?

Wednesday, April 1st, 2009

See my new commentary on the new $7.2 billion broadband program in the Federal stimulus bill. I conclude that if we’re going to spend taxpayer money at all, we should take advantage of local knowledge:

Many states have already pinpointed the areas most in need of broadband infrastructure. Local companies and entrepreneurs are likely to best know where broadband needs to be deployed – and to aggressively deliver it with the most appropriate, cost-effective technology that meets the needs of the particular market. Using the  states as smart conduits is also likely to get the money to network builders more quickly.

And that

After falling seriously behind foreign nations in broadband and in our favored measure of “bandwidth-per-capita” in the early 2000s, the U.S. got its act together and is now on the right path. In the last decade, total U.S. broadband lines grew from around 5 million to over 120 million, while residential broadband grew from under 5 million to 75 million. By far the most important broadband policy point is not to discourage or distort the annual $60+ billion that private companies already invest.

Cyber-security: Let’s get serious

Friday, March 20th, 2009

As the Internet’s power grows — dominating our public discourse and driving deeper into every industry and commercial realm — it becomes a bigger target. As the key platform of our knowledge economy, it invites mischief, or worse.

Most digital citizens know the hassles and even dangers of viruses, phishing, and all manner of mal-ware. But cybersecurity is a much broader and deeper topic, and will grow ever more so. The Center for Strategic and International Studies published a good report on December 8 detailing the threats and offering recommendations to “the 44th Presidency.” CSIS suggested a number of specific actions, among them:

(1) developing a comprehensive national strategy; (2) having the White House lead the effort; (3) “regulating cyberspace”; (4) authenticating identities; (5) modernizing old laws not suited to the digital networked world; (6) building secure government systems; and (7) not starting over, given what they saw as the previous administration’s productive start.

Yesterday the Senate Commerce Committee moved the ball forward with a hearing on the topic, including the head of the CSIS study James Lewis, nuclear engineer Joseph Weiss, cyber-guru Ed Amoroso of AT&T, and Eugene Spafford of Purdue University’s Center for Education and Research in Information Assurance and Security — better known by its brilliant acronym, CERIAS. (more…)

Wireless wonders

Tuesday, March 17th, 2009

As the wireless world continues to churn out terrific new hardware and software innovations and network speeds, Engadget talks about the industry in depth with AT&T’s wireless chief Ralph de la Vega.

Rare reason in the broadband debate

Thursday, March 12th, 2009

Calm and reasoned discussion in debates over broadband and Internet policy are rare. But Saul Hansell, in a series of posts at the NYTimes Bits blog, does an admirable job surveying international broadband comparisons. Here are parts I and II, with part III on the way. [Update: Here's part III. And here's a good previous post on "broadband stimulus."]

So far Hansell has asked two basic questions: Why is theirs faster? And why is theirs cheaper? “Theirs” being non-American broadband.

His answers: “Their” broadband is not too much faster than American broadband, at least not anymore. And their broadband is cheaper for a complicated set of reasons, but mostly because of government price controls that could hurt future investment and innovation in the nations that practice them.

Ask America. We already tried it. But more on that later.

Hansell makes several nuanced points: (1) broadband speeds depend heavily on population density. The performance and cost of communications technologies are distance-sensitive. It’s much cheaper to deliver fast speeds in Asia’s big cities and Europe’s crowded plains than across America’s expanse. (2) Hansell also points to studies showing some speed-inflation in Europe and Asia. In other words, advertised speeds are often overstated. But most importantly, (3) Hansell echoes my basic point over the last couple years:

. . . Internet speeds in the United States are getting faster. Verizon is wiring half its territory with its FiOS service, which strings fiber optic cable to people’s homes. FiOS now offers 50 Mbps service and has the capacity to offer much faster speeds. As of the end of 2008, 4.1 million homes in the United States had fiber service, which puts the United States right behind Japan, which has brought fiber directly to 8.2 million homes, according to the Fiber to the Home Council. Much of what is called fiber broadband in Korea, Sweden and until recently Japan, only brings the fiber to the basement of apartment buildings or street-corner switch boxes.

AT&T is building out that sort of network for its U-Verse service, running fiber to small switching stations in neighborhoods, so that it can offer much faster DSL with data speeds of up to 25 Mbps and Internet video as well. And cable systems, which cover more than 90 percent of the country, are starting to deploy the next generation of Internet technology called Docsis 3.0. It can offer speeds of 50 Mbps. . . .

(more…)

“Innovation isn’t dead.”

Friday, February 20th, 2009

See this fun interview with the energetic early Web innovator Marc Andreessen. Andreessen is on the Facebook board, has his own social networking company called Ning, and is just launching a new venture fund. He talks about Kindles, iPhones, social nets, the theory of cascading innovation, and says we should create new “virtual banks” to get past the financial crisis.

Silicon Shift

Friday, February 6th, 2009

Take a look at this 40 minute interview with Jen-Hsun Huang, CEO of graphics chip maker Nvidia. It’s a non-technical discussion of a very important topic in the large world of computing and the Internet. Namely, the rise of the GPU — the graphics processing unit.

Almost 40 years ago the CPU — or central processing unit — burst onto the scene and enabled the PC revolution, which was mostly about word processing (text) and simple spreadsheets (number crunching). But today, as Nvidia and AMD’s ATI division add programmability to their graphics chips, the GPU becomes the next generation general purpose processor. (Huang briefly describes the CUDA programmability architecture, which he compares to the x86 architecture of the CPU age.) With its massive parallelism and ability to render the visual applications most important to today’s consumers — games, photos, movies, art, photoshop, YouTube, GoogleEarth, virtual worlds — the GPU rises to match the CPU’s “centrality” in the computing scheme.
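
A rough way to feel the difference: GPUs apply the same operation to huge arrays of data in parallel. The NumPy sketch below is only a stand-in — it runs data-parallel, SIMD-style code on the CPU — but it shows the programming shift from looping over elements to operating on whole arrays at once, which is why graphics-style workloads map so well to the GPU.

    import numpy as np

    # 100,000 "pixels" with red/green/blue channels.
    pixels = np.random.rand(100_000, 3)

    # Scalar style: visit each pixel in a Python loop (serial, CPU-style thinking).
    luma_loop = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]

    # Data-parallel style: one whole-array expression (GPU-style thinking),
    # computing the same standard luma (brightness) value for every pixel at once.
    luma_vec = pixels @ np.array([0.299, 0.587, 0.114])

    assert np.allclose(luma_loop, luma_vec)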

Less obvious, the GPU’s attributes also make it useful for all sorts of non-consumer applications like seismic geographic imaging for energy exploration, high-end military systems, and even quantitative finance.

Perhaps the most exciting shift unleashed by the GPU, however, is in cloud computing. At the January 2009 Consumer Electronics Show in Las Vegas, AMD and a small but revolutionary start-up called LightStage/Otoy announced they are building the world’s fastest petaflops supercomputer at LightStage/Otoy’s Burbank, CA, offices. But this isn’t just any supercomputer. It’s based on GPUs, not CPUs. And it’s not just really, really fast. Designed for the Internet age, this “render cloud” will enable real-time photorealistic 3D gaming and virtual worlds across the Web. It will compress the power of the most advanced motion picture CGI (computer generated imaging) techniques, which can consume hours to render one movie frame and months to produce movie sequences, into real-time . . . and link this power to the wider world over the Net. 

Watch this space. The GPU story is big.

Pulling rabbits out of hats

Tuesday, February 3rd, 2009

How has debt-laden Level 3 survived (at least so far) two crashes now? Dennis Berman tells the story.

Since 2001, it has been paying $500 million to $600 million a year in interest. Yet it has never been able to cut its long-term debt load below $5 billion. Even worse, Level 3 hasn’t made a penny of profit since 1999. Its stock has traded below $8 for the past eight years. It closed at 98 cents on Monday.

Level 3 seemed a prime target to get pulled into today’s great credit maw, where decent but otherwise cash-strapped companies go to die. By November, bond investors were seriously doubting the company’s ability to pay off $1.1 billion in debt coming due this year and next, valuing its bonds as low as 30 cents on the dollar.

But the company has convinced large equity holders to bail it out of its crushing debt load, at least for the next year. 2010 and beyond will still be rough. Berman's article did not mention the angel who saved Level 3 during the tech/telecom crash of 2000-02: Warren Buffett.

The nuts & bolts of the Net

Wednesday, December 17th, 2008

For those who found the Google-net-neutrality-edge-caching story confusing, here’s a terrifically lucid primer by my PFF colleague Adam Marcus explaining “edge caching” and content delivery networks (CDNs) and, even more basically, the concepts of bandwidth and latency.
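
If you want the gist in a dozen lines, here is a minimal sketch of my own (in Python, with made-up latency numbers) of why edge caching helps: the first request for an object makes the long round trip to the origin server, and every later request is served from a cache sitting near the user.

```python
# Minimal sketch of edge caching. The latency figures are illustrative
# assumptions, not measurements of any real network or CDN.
ORIGIN_LATENCY_MS = 80   # round trip to a distant origin server
EDGE_LATENCY_MS = 10     # round trip to a nearby edge cache

edge_cache = {}

def fetch(url):
    """Return (content, latency_ms) for a user's request."""
    if url in edge_cache:
        return edge_cache[url], EDGE_LATENCY_MS   # hit: served from the edge
    content = f"<bytes of {url}>"                 # pretend origin response
    edge_cache[url] = content                     # keep a copy at the edge
    return content, ORIGIN_LATENCY_MS             # miss: long trip to origin

for n in range(3):
    _, ms = fetch("example.com/popular-video")
    print(f"request {n + 1}: {ms} ms")
# request 1: 80 ms (miss); requests 2 and 3: 10 ms (hits)
```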

When Nerds Attack!

Tuesday, December 16th, 2008

Yesterday’s Wall Street Journal story on the supposed softening of Google’s “net neutrality” policy stance, which I posted about here, predictably got all the nerds talking. 

Here was my attempt, over at the Technology Liberation Front, to put this topic in perspective:

_______________________ 

Bandwidth, Storewidth, and Net Neutrality

Very happy to see the discussion over The Wall Street Journal’s Google/net neutrality story. Always good to see holes poked and the truth set free.

But let's not allow the eruptions, backlashes, recriminations, and "debunkings" ("This topic has been debunked. End of story. Over. Sit down!") to obscure the still-fundamental issues. This is a terrific starting point for debate, not an end.

Content delivery networks (CDNs) and caching have always been a part of my analysis of the net neutrality debate. Here was testimony that George Gilder and I prepared for a Senate Commerce Committee hearing almost five years ago, in April 2004, where we predicted that a somewhat obscure new MCI “network layers” proposal, as it was then called, would be the next big communications policy issue. (At about the same time, my now-colleague Adam Thierer was also identifying this as an emerging issue/threat.)

Gilder and I tried to make the point that this “layers” — or network neutrality — proposal would, even if attractive in theory, be very difficult to define or implement. Networks are a dynamic realm of ever-shifting bottlenecks, where bandwidth, storage, caching, and peering, in the core, edge, and access, in the data center, on end-user devices, from the heavens and under the seas, constantly require new architectures, upgrades, and investments, thus triggering further cascades of hardware, software, and protocol changes elsewhere in this growing global web. It seemed to us at the time, ill-defined as it was, that this new policy proposal was probably a weapon for one group of Internet companies, with one type of business model, to bludgeon another set of Internet companies with a different business model. 

We wrote extensively about storage, caching, and content delivery networks in the pages of the Gilder Technology Report, first laying out the big conceptual issues in a 1999 article, "The Antediluvian Paradigm." [Correction: "The Post-Diluvian Paradigm"] Gilder coined a word for this nexus of storage and bandwidth: Storewidth. Gilder and I even hosted a conference, also dubbed "Storewidth," dedicated to these storage, memory, and content delivery network technologies. See, for instance, this press release for the 2001 conference with all the big players in the field, including Akamai, EMC, Network Appliance, Mirror Image, and one Eric Schmidt, chief executive officer of . . . Novell. In 2002, Google's Larry Page spoke, as did Jay Adelson, founder of the big data-center and network-peering company Equinix, along with Yahoo! and many of the big network and content companies. (more…)

Hey, Sergey and Larry, thanks

Monday, December 15th, 2008

As perhaps the earliest opponent of “net neutrality” regulation, it feels good to know I’m no longer “evil.”

Net Neutrality forever! Wait, never mind…

Monday, December 15th, 2008

When you’ve written as much as I have about the weird Web topic known as “network neutrality,” this is big news indeed.

The celebrated openness of the Internet — network providers are not supposed to give preferential treatment to any traffic — is quietly losing powerful defenders.

Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers.

What some innocuously call "equal network access," others call meddlesome regulation. Net neutrality could provide a platform for Congress and the FCC to micromanage everything on the Net, from wires and switches to applications and services to the bits and bytes themselves. It is a potentially monstrous threat to dynamic innovation on the fast-growing Net, where experimentation still reigns.

But now Google, a newly powerful force in Washington and Obamaland, may be reversing course 180 degrees. The regulatory threat level may have just dropped from orange to yellow.

Update: Richard Bennett expertly comments here.

Yep, confirmed, it’s a sham

Wednesday, December 10th, 2008

An investigation confirms that FCC Chairman Kevin Martin’s crusade to force à la carte pricing and unbundling of cable TV channels was indeed a sham, as many of us have been saying for years.

My colleague Ken Ferree comments here.

Web Wars

Wednesday, December 10th, 2008

A new report says we’re not prepared for the cyberwars to come and need a White House office to address emerging threats.

Straw Men Can’t Swim

Friday, December 5th, 2008

The venerable Economist magazine has made a hash of my research on the growth of the Internet, which examines the rich media technologies now flooding onto the Web and projects Internet traffic over the coming decade. This “exaflood” of new applications and services represents a bounty of new entertainment, education, and business applications that can drive productivity and economic growth across all our industries and the world economy. 

But somehow, The Economist was convinced that my research represents some "gloomy prophecy," that I am "doom-mongering" about an Internet "overload" that could "crash" the Internet. Where does The Economist find any evidence for these silly charges?

In a series of reports, articles (here and here), and presentations around the globe — and in a long, detailed, nuanced, very pleasant interview with The Economist, in which I thought the reporter grasped the key points — I have consistently said the exaflood is an opportunity, an embarrassment of riches.

I've also said it will take a lot of investment in networks (both wired and wireless), data centers, and other cloud infrastructure both to drive and to accommodate this exaflood. Some have questioned this rather mundane statement, but for the life of me I can't figure out why they deny that building this amazingly powerful global Internet might cost a good bit of money.

One critic of mine has said he thinks we might need to spend $5-10 billion on new Net infrastructure over the next five years. What? We already spend some $70 billion a year on all communications infrastructure in the U.S., with an ever-greater portion of that going toward what we might consider the Net. Google invests more than $3 billion a year in its cloud infrastructure, Verizon is building a $25-billion fiber-to-the-home network, and AT&T is investing another $10 billion, just for starters. Over the last 10 years, the cable TV companies invested some $120 billion. And Microsoft just yesterday said its new cloud computing infrastructure will consist of 20 new "super data centers," at $1 billion apiece.
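
To make the arithmetic explicit, here is a quick back-of-envelope tally in Python. It simply adds up the round numbers cited above; the five-year horizon and the assumption that roughly half of annual communications capex counts as "Net" spending are mine, for illustration only.

```python
# Back-of-envelope tally using the round figures cited in the post above.
years = 5
us_comm_capex_per_year = 70e9      # total U.S. communications infrastructure spend per year
google_cloud_per_year = 3e9        # Google cloud infrastructure, per year
verizon_fiber_build = 25e9         # fiber-to-the-home network, multi-year total
att_build = 10e9                   # AT&T investment, "just for starters"
microsoft_data_centers = 20 * 1e9  # 20 "super data centers" at roughly $1 billion each

named_projects = (google_cloud_per_year * years + verizon_fiber_build
                  + att_build + microsoft_data_centers)
half_of_all_capex = 0.5 * us_comm_capex_per_year * years  # assumption: ~half is "Net"

print(f"named projects alone: ~${named_projects / 1e9:.0f} billion over {years} years")
print(f"half of all comm capex: ~${half_of_all_capex / 1e9:.0f} billion over {years} years")
# Either way, the five-year figure lands in the tens to hundreds of billions,
# nowhere near $5-10 billion.
```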

I'm glad The Economist quoted my line that "today's networks are not remotely prepared to handle this exaflood." Which is absolutely, unambiguously, uncontroversially true. Can you get all the HD video you want over your broadband connection today? Do all your remote applications work as fast as you'd like? Are your mobile phone and Wi-Fi access as widespread and flawless as you'd like? Do videos or applications always work instantly, without ever a hint of buffering or delay? Are today's metro switches prepared for a jump from voice-over-IP to widespread high-resolution video conferencing? No, not even close.

But as we add capacity and robustness to many of these access networks, usage and traffic will surge, and the bottlenecks will shift to other parts of the Net. Core, edge, metro, access, data center — the architecture of the Net is ever-changing, with technologies and upgrades and investment happening in different spots at varying pace. This is not a debate about whether the Internet will “crash.” It’s a discussion about how the Net will evolve and grow, about what its capabilities and architecture will be, and about how much it will cost and how we will govern it, but mostly about how much it will yield in new innovation and economic growth.

The Economist and the myriad bloggers, who every day try to kill some phantom catastrophe theory I do not recognize, are engaging in the old and very tedious practice of setting up digital straw men, which they then heroically strike down with a bold punch of the delete button. Ignoring the real issues and the real debate doesn't take much effort, nor much thought.

Clouds are expensive

Thursday, December 4th, 2008

Microsoft, having a couple weeks ago finally capitulated to the Web with the announcement of Ray Ozzie's new Net-based strategy, now says it will build 20 new data centers at $1 billion apiece. Google is already investing some $3 billion a year in its cloud infrastructure.

Lots of people have criticized my rough estimates of a couple hundred billion in new Net investment over the next five years, saying it’s closer to $5-10 billion, and I wonder what the heck they are thinking.

“Googlephobia”: An Unholy Alliance

Sunday, November 30th, 2008

My colleague Adam Thierer with an excellent post warning of the coming war on Google:

So, here we have Wu raising the specter of search engine bias and Lessig raising the specter of Google-as-panopticon. And this comes on top of groups like EPIC and CDT calling for more regulation of the online advertising marketplace in the name of protecting privacy. Alarm bells must be going off at the Googleplex. But we all have reason to be concerned because greater regulation of Google would mean greater regulation of the entire code/application layer of the Net. It's bad enough that we likely have greater regulation of the infrastructure layer on the way thanks to Net neutrality mandates. We need to work hard to contain the damage of increased calls for government to get its hands all over every other layer of the Net.

Speeding the Cloud

Sunday, November 23rd, 2008

For someone like me who studies cloud computing and Internet traffic, which is measured in tera-, peta-, and exabytes, Google’s terabyte sorting record is interesting:

we were able to sort 1TB (stored on the Google File System as 10 billion 100-byte records in uncompressed text files) on 1,000 computers in 68 seconds. By comparison, the previous 1TB sorting record is 209 seconds on 910 computers.
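
A quick back-of-envelope on those figures (my own arithmetic, nothing official) shows what they mean per machine:

```python
# Throughput implied by the quoted results: 1 TB sorted end to end.
TB = 1e12  # bytes

results = [("Google, 2008", 68, 1000),    # label, seconds, machines
           ("previous record", 209, 910)]

for label, seconds, machines in results:
    aggregate = TB / seconds        # bytes sorted per second across the cluster
    per_machine = aggregate / machines
    print(f"{label}: {aggregate / 1e9:.1f} GB/s aggregate, "
          f"{per_machine / 1e6:.1f} MB/s per machine")
# Google, 2008: 14.7 GB/s aggregate, 14.7 MB/s per machine
# previous record: 4.8 GB/s aggregate, 5.3 MB/s per machine
```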

But I suppose I can see how you might not feel the same way.

The big bad media monopoly

Thursday, November 20th, 2008

In the context of potential federal media regulation known as “à la carte,” my colleague Adam Thierer comments on new Internet video technologies and content here and here, and expertly reiterates an old theme of mine, namely that the Internet *is* à la carte.

Cloudy Forecast

Thursday, October 30th, 2008

Coincident with the news that Microsoft is embracing the Web even for its longtime PC-centric OS and apps, The Economist has a big special report on “cloud computing,” including articles on:

- “The Evolution of Data Centres”
- “Software as a Service”
- “Connecting to the Cloud”
- “The Economics of the Cloud”
- “The Effect on Business”; and
- “Computers without Borders”