Category Archives: Net Neutrality

Common Sense of Amazonian Proportions

Amazon’s Paul Misener gets all reasonable in his comments on the FCC’s proposed net neutrality rules:

With this win-win-win goal in mind, and consistent with the principle of maintaining an open Internet, Amazon respectfully suggests that the FCC’s proposed rules be extended to allow broadband Internet access service providers to favor some content so long as no harm is done to other content.

Importantly, we note that the Internet has long been interconnected with private networks and edge caches that enhance the performance of some Internet content in comparison with other Internet content, and that these performance improvements are paid for by some but not all providers of content.  The reason why these arrangements are acceptable from a public policy perspective is simple:  the performance of other content is not disfavored, i.e., other content is not harmed.

Collective vs. Creative: The Yin and Yang of Innovation

Later this week the FCC will accept the first round of comments in its “Open Internet” rulemaking, commonly known as Net Neutrality. Never mind that the Internet is already open and was never strictly neutral. Openness and neutrality are two appealing buzzwords that serve as the basis for potentially far-reaching new regulation of our most dynamic economic and cultural sector – the Internet.

I’ll comment on Net Neutrality from several angles over the coming days. But a terrific essay by Berkeley’s Jaron Lanier impelled me to begin by summarizing some of the big meta-arguments that have been swirling these last few years and which now broadly define the opposing sides in the Net Neutrality debate. After surveying these broad categories, I’ll get into the weeds on technology, business, and policy.

The thrust behind Net Neutrality is a view that the Internet should conform to a narrow set of technology and business “ideals” – “open,” “neutral,” “non-discriminatory.” Wonderful words. Often virtuous. But these aren’t the only traits important to economic and cultural systems. In fact, Net Neutrality sets up a false dichotomy – a manufactured war – between open and closed, collaborative and commercial, free and paid, content and conduit. I’ve made a long list of these supposedly opposing forces, and Net Neutrality favors only one side of each pairing. It seeks to cement in place one model of business and technology. It is intensely focused on the first trait in each pair and is either oblivious or hostile to the second, treating those traits as either bad (prices) or assuming they appear magically (bandwidth).

We skeptics of Net Neutrality, on the other hand, do not favor one side or the other. We understand that there are virtues all around. Here’s how I put it on my blog last autumn:

Suggesting we can enjoy Google’s software innovations without the network innovations of AT&T, Verizon, and hundreds of service providers and technology suppliers is like saying that once Microsoft came along we no longer needed Intel.

No, Microsoft and Intel built upon each other in a virtuous interplay. Intel’s microprocessor and memory inventions set the stage for software innovation. Bill Gates exploited Intel’s newly abundant transistors by creating radically new software that empowered average businesspeople and consumers to engage with computers. The vast new PC market, in turn, dramatically expanded Intel’s markets and volumes and thus allowed it to invest in new designs and multi-billion dollar chip factories across the globe, driving Moore’s law and with it the digital revolution in all its manifestations.

Software and hardware. Bits and bandwidth. Content and conduit. These things are complementary. And yes, like yin and yang, often in tension and flux, but ultimately interdependent.

Likewise, we need the ability to charge for products and set prices so that capital can be rationally allocated and the hundreds of billions of dollars in network investment can occur. These hard prices, in fact, are what yield so many of the “free” consumer surplus advantages we all enjoy on the Web. No company or industry can capture all the value of the Web. Most of it comes to us as consumers. But companies and content creators need at least the ability to pursue business models that capture some portion of this value so they can not only survive but continually reinvest in the future. With a market moving so fast, and with so many network and content models so uncertain during this epochal shift in media and communications, these content and conduit companies must be allowed to define their own products and set their own prices. We need to know what works, and what doesn’t.

When the “network layers” regulatory model, as it was then known, was first proposed back in 2003-04, my colleague George Gilder and I prepared testimony for the U.S. Senate. Although the layers model was little more than an academic notion, we thought then that this would become the next big battle in Internet policy. We were right. Even though the “layers” proposal was (and is!) an ill-defined concept, the model we used to analyze what Net Neutrality would mean for networks and Web business models still applies. As we wrote in April of 2004:

Layering proponents . . . make a fundamental error. They ignore ever changing trade-offs between integration and modularization that are among the most profound and strategic decisions any company in any industry makes. They disavow Harvard Business professor Clayton Christensen’s theorems that dictate when modularization, or “layering,” is advisable, and when integration is far more likely to yield success. For example, the separation of content and conduit – the notion that bandwidth providers should focus on delivering robust, high-speed connections while allowing hundreds of millions of professionals and amateurs to supply the content—is often a sound strategy. We have supported it from the beginning. But leading edge undershoot products (ones that are not yet good enough for the demands of the marketplace) like video-conferencing often require integration.

Over time, the digital and photonic technologies at the heart of the Internet lead to massive integration – of transistors, features, applications, even wavelengths of light onto fiber optic strands. This integration of computing and communications power flings creative power to the edges of the network. It shifts bottlenecks. Crystalline silicon and flawless fiber form the low-entropy substrate that carries the world’s high-entropy messages – news, opinions, new products, new services. But these feats are not automatic. They cannot be legislated or mandated. And just as innovation in the core of the network unleashes innovation at the edges, so too more content and creativity at the edge create the need for ever more capacity and capability in the core. The bottlenecks shift again. More data centers, better optical transmission and switching, new content delivery optimization, the move from cell towers to femtocell wireless architectures. There is no final state of equilibrium where one side can assume that the other is a stagnant utility, at least not in the foreseeable future.

I’ll be back with more analysis of the Net Neutrality debate, but for now I’ll let Jaron Lanier (whose book You Are Not a Gadget was published today) sum up the argument:

Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.

If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.

There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code—like the page-rank algorithms in the top search engines or Adobe’s Flash—always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.

Actually, Silicon Valley is remarkably good at not making collectivization mistakes when our own fortunes are at stake. If you suggested that, say, Google, Apple and Microsoft should be merged so that all their engineers would be aggregated into a giant wiki-like project—well, you’d be laughed out of Silicon Valley so fast you wouldn’t have time to tweet about it. Same would happen if you suggested to one of the big venture-capital firms that all the start-ups they are funding should be merged into a single collective operation.

But this is exactly the kind of mistake that’s happening with some of the most influential projects in our culture, and ultimately in our economy.

New York and Net Neutrality

This morning, the Technology Committee of the New York City Council convened a large hearing on a resolution urging Congress to pass a robust Net Neutrality law. I was supposed to testify, but our narrowband transportation system prevented me from getting to New York. Here, however, is the testimony I prepared. It focuses on investment, innovation, and the impact Net Neutrality would have on both.

“Net Neutrality’s Impact on Internet Innovation” – by Bret Swanson – 11.20.09

Must Watch Web Debate

If you’re interested in Net Neutrality regulation and have some time on your hands, watch this good debate at the Web 2.0 conference. The resolution was “A Network Neutrality law is necessary,” and the two opposing sides were:

Against

  • James Assey – Executive Vice President, National Cable and Telecommunications Association
  • Robert Quinn – Senior Vice President, Federal Regulatory, AT&T
  • Christopher Yoo – Professor of Law and Communication; Director, Center for Technology, Innovation, and Competition, UPenn Law

For

  • Tim Wu – Coined the term “Network Neutrality”; Professor of Law, Columbia Law
  • Brad Burnham – VC, Union Square Ventures
  • Nicholas Economides – Professor of Economics, Stern School of Business, New York University

I think the side opposing the resolution wins, hands down — no contest really — but see for yourself.

Quote of the Day

“I hope that they (government regulators) leave it alone . . . The Internet is working beautifully as it is.”

— Tim Draper, Silicon Valley venture capitalist, who, along with many other SV investors and executives, signed a letter advocating new Internet regulations, apparently unaware of its true content.

Neutrality for thee, but not for me

In Monday’s Wall Street Journal, I address the once-again raging topic of “net neutrality” regulation of the Web. On September 21, new FCC chair Julius Genachowski proposed more formal neutrality regulations. Then on September 25, AT&T accused Google of violating the very neutrality rules the search company has sought for others. The gist of the complaint was that the new Google Voice service does not connect all phone calls the way other phone companies are required to do. Not an earthshaking matter in itself, but a good example of the perils of neutrality regulation.

As the Journal wrote in its own editorial on Saturday:

Our own view is that the rules requiring traditional phone companies to connect these calls should be scrapped for everyone rather than extended to Google. In today’s telecom marketplace, where the overwhelming majority of phone customers have multiple carriers to choose from, these regulations are obsolete. But Google has set itself up for this political blowback.

Last week FCC Chairman Julius Genachowski proposed new rules for regulating Internet operators and gave assurances that “this is not about government regulation of the Internet.” But this dispute highlights the regulatory creep that net neutrality mandates make inevitable. Content providers like Google want to dabble in the phone business, while the phone companies want to sell services and applications.

The coming convergence will make it increasingly difficult to distinguish among providers of broadband pipes, network services and applications. Once net neutrality is unleashed, it’s hard to see how anything connected with the Internet will be safe from regulation.

Several years ago, all sides agreed to broad principles that prohibit blocking Web sites or applications. But I have argued that more detailed and formal regulations governing such a dynamic arena of technology and changing business models would stifle innovation.

Broadband to the home, the office, and a growing array of diverse mobile devices has been a rare bright spot in this dismal economy. Since net neutrality regulation was first proposed in early 2004, consumer bandwidth per capita in the U.S. grew to 3 megabits per second from just 262 kilobits per second, and monthly U.S. Internet traffic increased to two billion gigabytes from 170 million gigabytes — both more than tenfold leaps. New wired and wireless innovations and services are booming.

All without net neutrality regulation.
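For the record, here is a quick back-of-the-envelope check of those two figures, a minimal sketch in Python using only the start and end points cited above:

```python
# Rough check of the growth multiples cited above (figures from the text).
bandwidth_growth = 3000 / 262                  # 262 kbps -> 3 Mbps per capita
traffic_growth = 2_000_000_000 / 170_000_000   # 170M GB -> 2B GB per month

print(f"Bandwidth per capita: {bandwidth_growth:.1f}x")  # ~11.5x
print(f"Monthly U.S. traffic: {traffic_growth:.1f}x")    # ~11.8x
```

Both multiples comfortably exceed tenfold.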

The proposed FCC regulations could go well beyond the existing (and uncontroversial) non-blocking principles. A new “Fifth Principle,” if codified, could prohibit “discrimination” not just among applications and services but even at the level of data packets traversing the Net. But traffic management of packets is used across the Web to ensure robust service and security.

As network traffic, content, and outlets proliferate and diversify, Washington wants to apply rigid, top-down rules. But the network requirements of email and high-definition video are very different. Real-time video conferencing requires more network rigor than stored content like YouTube videos. Wireless traffic patterns are more unpredictable than those of residential networks because cellphone users are, well, mobile. And the next generation of video cloud computing — what I call the exacloud — will impose the most severe constraints yet on network capacity and packet delay.
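To make the traffic-management point concrete, here is a minimal sketch, in Python, of strict-priority packet scheduling: latency-sensitive packets jump ahead of bulk transfers. The traffic classes and priority values are invented for illustration; real routers rely on standard markings such as DSCP.

```python
import heapq

# Hypothetical traffic classes; lower number = higher priority.
PRIORITY = {"voip": 0, "videoconf": 1, "streaming": 2, "bulk": 3}

class StrictPriorityScheduler:
    """Dequeue latency-sensitive packets before bulk traffic."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves arrival order within a class

    def enqueue(self, kind, payload):
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, kind, payload))
        self._seq += 1

    def dequeue(self):
        priority, seq, kind, payload = heapq.heappop(self._heap)
        return kind, payload

    def __len__(self):
        return len(self._heap)

q = StrictPriorityScheduler()
q.enqueue("bulk", "file chunk 1")
q.enqueue("videoconf", "video frame")
q.enqueue("bulk", "file chunk 2")
q.enqueue("voip", "audio sample")

while q:
    print(q.dequeue())  # voip, then videoconf, then the bulk chunks in order
```

Under this discipline a video-conference frame never waits behind a long file transfer, which is precisely the kind of packet “discrimination” a strict non-discrimination rule could forbid.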

Or if you think entertainment unimportant, consider the implications for cybersecurity. The very network technologies that ensure a rich video experience are used to kill dangerous “botnets” and combat cybercrime.

And what about low-income consumers? If network service providers can’t partner with content companies, offer value-added services, or charge high-end users more money for consuming more bandwidth, low-end consumers will be forced to pay higher prices. Net neutrality would thus frustrate the Administration’s goal of 100% broadband.

Health care, energy, jobs, debt, and economic growth are rightly earning most of the policy attention these days. But regulation of the Net would undermine the key global platform that underpins better performance on each of these crucial economic matters. Washington may be bailing out every industry that doesn’t work, but that’s no reason to add new constraints to one that manifestly does.

— Bret Swanson

Does Google Voice violate neutrality?

This is the ironic but very legitimate question AT&T is asking.

As Adam Thierer writes,

Whatever you think about this messy dispute between AT&T and Google about how to classify web-based telephony apps for regulatory purposes — in this case, Google Voice — the key issue not to lose sight of here is that we are inching ever closer to FCC regulation of web-based apps!  Again, this is the point we have stressed here again and again and again and again when opposing Net neutrality mandates: If you open the door to regulation on one layer of the Net, you open up the door to the eventual regulation of all layers of the Net.

George Gilder and I made this point in Senate testimony five and a half years ago. Advocates of big new regulations on the Internet should be careful what they wish for.

End-to-end? Or end to innovation?

In what is sure to be a substantial contribution to both the technical and policy debates over Net Neutrality, Richard Bennett of the Information Technology and Innovation Foundation has written a terrific piece of technology history and forward-looking analysis. In “Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate,” Bennett concludes:

Arguments for freezing the Internet into a simplistic regulatory straitjacket often have a distinctly emotional character that frequently borders on manipulation.

The Internet is a wonderful system. It represents a new standard of global cooperation and enables forms of interaction never before possible. Thanks to the Internet, societies around the world reap the benefits of access to information, opportunities for collaboration, and modes of communication that weren’t conceivable to the public a few years ago. It’s such a wonderful system that we have to strive very hard not to make it into a fetish object, imbued with magical powers and beyond the realm of dispassionate analysis, criticism, and improvement.

At the end of the day, the Internet is simply a machine. It was built the way it was largely by a series of accidents, and it could easily have evolved along completely different lines with no loss of value to the public. Instead of separating TCP from IP in the way that they did, the academics in Palo Alto who adapted the CYCLADES architecture to the ARPANET infrastructure could have taken a different tack: They could have left them combined as a single architectural unit providing different retransmission policies (a reliable TCP-like policy and an unreliable UDP-like policy) or they could have chosen a different protocol such as Watson’s Delta-t or Pouzin’s CYCLADES TS. Had the academics gone in either of these directions, we could still have a World Wide Web and all the social networks it enables, perhaps with greater resiliency.

The glue that holds the Internet together is not any particular protocol or software implementation: first and foremost, it’s the agreements between operators of Autonomous Systems to meet and share packets at Internet Exchange Centers and their willingness to work together. These agreements are slowly evolving from a blanket pact to cross boundaries with no particular regard for QoS into a richer system that may someday preserve delivery requirements on a large scale. Such agreements are entirely consistent with the structure of the IP packet, the needs of new applications, user empowerment, and “tussle.”

The Internet’s fundamental vibrancy is the sandbox created by the designers of the first datagram networks that permitted network service enhancements to be built and tested without destabilizing the network or exposing it to unnecessary hazards. We don’t fully utilize the potential of the network to rise to new challenges if we confine innovations to the sandbox instead of moving them to the parts of the network infrastructure where they can do the most good once they’re proven. The real meaning of end-to-end lies in the dynamism it bestows on the Internet by supporting innovation not just in applications but in fundamental network services. The Internet was designed for continual improvement: There is no reason not to continue down that path.

A QoS primer

In case my verses attempting an analysis of Quality-of-Service and “net neutrality” regulation need supplementary explanation, here’s a terrifically lucid seven-minute Internet packet primer — in prose and pictures — from George Ou. Also, a longer white paper on the same topic:

Seven-minute Flash presentation: The need for a smarter prioritized Internet

White paper: Managing Broadband Networks: A Policymaker’s Guide

Leviathan Spam

Leviathan Spam

Send the bits with lasers and chips
See the bytes with LED lights

Wireless, optical, bandwidth boom
A flood of info, a global zoom

Now comes Lessig
Now comes Wu
To tell us what we cannot do

The Net, they say,
Is under attack
Stop!
Before we can’t turn back

They know best
These coder kings
So they prohibit a billion things

What is on their list of don’ts?
Most everything we need the most

To make the Web work
We parse and label
We tag the bits to keep the Net stable

The cloud is not magic
It’s routers and switches
It takes a machine to move exadigits

Now Lessig tells us to route is illegal
To manage Net traffic, Wu’s ultimate evil

A New Leash on the Net?

Today, FCC chairman Julius Genachowski proposed new regulations on communications networks. We were among the very first opponents of these so-called “net neutrality” rules when they were first proposed in concept back in 2004, and we have written a number of relevant articles over the past few years.

Getting the exapoint. Creating the future.

Lots of commentators continue to misinterpret the research I and others have done on Internet traffic and its interplay with network infrastructure investment and communications policy.

I think that new video applications require lots more bandwidth — and, equally or even more important, that more bandwidth drives creative new applications. Two sides of the innovation coin. And I think investment-friendly policies are necessary both to encourage deployment of new wireline and wireless broadband and to boost innovative new applications and services for consumers and businesses.

But this article, as one of many examples, mis-summarizes my view. It uses scary words like “apocalypse,” “catastrophe,” and, well, “scare mongering,” to describe my optimistic anticipation of an exaflood of Internet innovations coming our way. I don’t think that

the world will simply run out of bandwidth and we’ll all be weeping over our clogged tubes.

Not unless we block the expansion of new network capacity and capability.

Bandwidth and QoS: Much ado about something

The supposed top finding of a new report commissioned by the British telecom regulator Ofcom is that we won’t need any QoS (quality of service) or traffic management to accommodate next generation video services, which are driving Internet traffic at consistently high annual growth rates of between 50% and 60%. TelecomTV One headlined, “Much ado about nothing: Internet CAN take video strain says UK study.” 

But the content of the Analysys Mason (AM) study, entitled “Delivering High Quality Video Services Online,” does not support either (1) the media headline — “Much ado about nothing,” which implies next generation services and brisk traffic growth don’t require much in the way of new technology or new investment to accommodate them — or (2) its own “finding” that QoS and traffic management aren’t needed to deliver these next generation content and services.

For example, AM acknowledges in one of its five key findings in the Executive Summary: 

innovative business models might be limited by regulation: if the ability to develop and deploy novel approaches was limited by new regulation, this might limit the potential for growth in online video services.

In fact, the very first key finding says:

A delay in the migration to 21CN-based bitstream products may have a negative impact on service providers that use current bitstream products, as growth in consumption of video services could be held back due to the prohibitive costs of backhaul capacity to support them on the legacy core network. We believe that the timely migration to 21CN will be important in enabling significant take-up of online video services at prices that are reasonable for consumers.

So very large investments in new technologies and platforms are needed, and new regulations that discourage this investment could delay crucial innovations on the edge. Sounds like much ado about something, something very big.
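To put those growth rates in perspective, here is a small sketch compounding the 50% and 60% annual figures cited above; the five-year horizon is my own assumption:

```python
# Compound traffic growth at the annual rates cited in the study.
YEARS = 5  # assumed horizon for illustration

for rate in (0.50, 0.60):
    multiple = (1 + rate) ** YEARS
    print(f"{rate:.0%} per year -> {multiple:.1f}x traffic in {YEARS} years")

# 50% per year -> 7.6x traffic in 5 years
# 60% per year -> 10.5x traffic in 5 years
```

A network facing roughly eight to ten times today’s traffic within five years will need exactly the new platforms and investment the study describes.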

The nuts & bolts of the Net

For those who found the Google-net-neutrality-edge-caching story confusing, here’s a terrifically lucid primer by my PFF colleague Adam Marcus explaining “edge caching” and content delivery networks (CDNs) and, even more basically, the concepts of bandwidth and latency.
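The bandwidth-versus-latency distinction is easy to see in a toy model: delivery time is roughly one round trip plus the object’s size divided by the line rate. The numbers below are invented for illustration, but they show why an edge cache helps even when it adds no bandwidth at all.

```python
# Toy model of fetching a Web object: one round trip plus transfer time.
# All numbers are made up for illustration.
def delivery_time_ms(size_kb, bandwidth_mbps, rtt_ms):
    kilobits = size_kb * 8
    transfer_ms = kilobits / bandwidth_mbps  # kilobits / (megabits/s) = ms
    return rtt_ms + transfer_ms

# A 50 KB page element over a 10 Mbps connection:
distant = delivery_time_ms(50, 10, rtt_ms=120)  # origin server far away
nearby = delivery_time_ms(50, 10, rtt_ms=10)    # edge cache near the user

print(f"From distant origin: {distant:.0f} ms")  # ~160 ms
print(f"From edge cache:     {nearby:.0f} ms")   # ~50 ms
```

For small objects the round trip dominates, which is why CDNs place copies of popular content near users: it shortens the trip rather than fattening the pipe.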

When Nerds Attack!

Yesterday’s Wall Street Journal story on the supposed softening of Google’s “net neutrality” policy stance, which I posted about here, predictably got all the nerds talking. 

Here was my attempt, over at the Technology Liberation Front, to put this topic in perspective:

_______________________ 

Bandwidth, Storewidth, and Net Neutrality

Very happy to see the discussion over The Wall Street Journal‘s Google/net neutrality story. Always good to see holes poked and the truth set free.

But let’s not allow the eruptions, backlashes, recriminations, and “debunkings” — This topic has been debunked. End of story. Over. Sit down! — obscure the still-fundamental issues. This is a terrific starting point for debate, not an end.

Content delivery networks (CDNs) and caching have always been a part of my analysis of the net neutrality debate. Here was testimony that George Gilder and I prepared for a Senate Commerce Committee hearing almost five years ago, in April 2004, where we predicted that a somewhat obscure new MCI “network layers” proposal, as it was then called, would be the next big communications policy issue. (At about the same time, my now-colleague Adam Thierer was also identifying this as an emerging issue/threat.)

Gilder and I tried to make the point that this “layers” — or network neutrality — proposal would, even if attractive in theory, be very difficult to define or implement. Networks are a dynamic realm of ever-shifting bottlenecks, where bandwidth, storage, caching, and peering, in the core, edge, and access, in the data center, on end-user devices, from the heavens and under the seas, constantly require new architectures, upgrades, and investments, thus triggering further cascades of hardware, software, and protocol changes elsewhere in this growing global web. It seemed to us at the time, ill-defined as it was, that this new policy proposal was probably a weapon for one group of Internet companies, with one type of business model, to bludgeon another set of Internet companies with a different business model. 

We wrote extensively about storage, caching, and content delivery networks in the pages of the Gilder Technology Report, first laying out the big conceptual issues in a 1999 article, “The Antediluvian Paradigm.” [Correction: “The Post-Diluvian Paradigm”] Gilder coined a word for this nexus of storage and bandwidth: Storewidth. Gilder and I even hosted a conference, also dubbed “Storewidth,” dedicated to these storage, memory, and content delivery network technologies. See, for instance, this press release for the 2001 conference with all the big players in the field, including Akamai, EMC, Network Appliance, Mirror Image, and one Eric Schmidt, chief executive officer of . . . Novell. In 2002, Google’s Larry Page spoke, as did Jay Adelson, founder of the big data-center-network-peering company Equinix, along with speakers from Yahoo! and many of the big network and content companies.

Hey, Sergey and Larry, thanks

As perhaps the earliest opponent of “net neutrality” regulation, I have to say it feels good to no longer be “evil.”

Net Neutrality forever! Wait, never mind…

When you’ve written as much as I have about the weird Web topic known as “network neutrality,” this is big news indeed.

The celebrated openness of the Internet — network providers are not supposed to give preferential treatment to any traffic — is quietly losing powerful defenders.

Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers.

What some innocuously call “equal network access,” others call meddlesome regulation. Net neutrality could potentially provide a platform for Congress and the FCC to micromanage everything on the Net, from wires and switches to applications and services to the bits and bytes themselves. It is a potentially monstrous threat to dynamic innovation on the fast-growing Net, where experimentation still reigns. 

But now Google, a newly powerful force in Washington and Obamaland, may be reversing course. The regulatory threat level may have just dropped from orange to yellow.

Update: Richard Bennett expertly comments here.

“Googlephobia”: An Unholy Alliance

My colleague Adam Thierer with an excellent post warning of the coming war on Google:

So, here we have Wu raising the specter of search engine bias and Lessig raising the specter of Google-as-panopticon. And this comes on top of groups like EPIC and CDT calling for more regulation of the online advertising marketplace in the name of protecting privacy.  Alarm bells must be going off at the Googleplex. But we all have reason to be concerned because greater regulation of Google would mean greater regulation of the entire code / application layer of the Net.  It’s bad enough that we likely have greater regulation of the infrastructure layer on the way thanks to Net neutrality mandates. We need to work hard to contain the damage of increased calls for government to get its hands all over every other layer of the Net.
