Posts Tagged ‘bandwidth’

Akamai CEO Exposes FCC’s Confused “Paid Priority” Prohibition

Tuesday, January 4th, 2011

In the wake of the FCC’s net neutrality Order, published on December 23, several of us have focused on the Commission’s confused and contradictory treatment of “paid prioritization.” In the Order, the FCC explicitly permits some forms of paid priority on the Internet but strongly discourages other forms.

From the beginning — that is, since the advent of the net neutrality concept early last decade — I argued that a strict neutrality regime would have outlawed, among other important technologies, CDNs, which prioritized traffic and made (make!) the Web video revolution possible.

So I took particular notice of this new interview (sub. required) with Akamai CEO Paul Sagan in the February 2011 issue of MIT’s Technology Review:

TR: You’re making copies of videos and other Web content and distributing them from strategic points, on the fly.

Paul Sagan: Or routes that are picked on the fly, to route around problematic conditions in real time. You could use Boston [as an analogy]. How do you want to cross the Charles to, say, go to Fenway from Cambridge? There are a lot of bridges you can take. The Internet protocol, though, would probably always tell you to take the Mass. Ave. bridge, or the BU Bridge, which is under construction right now and is the wrong answer. But it would just keep trying. The Internet can’t ever figure that out — it doesn’t. And we do.

There it is. Akamai and other content delivery networks (CDNs), including Google, which has built its own CDN-like network, “route around” “the Internet,” which “can’t ever figure . . . out” the fastest path needed for robust packet delivery. And they do so for a price. In other words: paid priority. Content companies, edge innovators, basement bloggers, and poor non-profits who don’t pay don’t get the advantages of CDN fast lanes. (more…)
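Sagan's bridge analogy can be sketched in a few lines of code. This is purely illustrative (the route names and latency figures are invented, and real CDN route selection is far more sophisticated), but it captures the difference between a static default route and a choice made on current conditions:

```python
# Illustrative sketch, not Akamai's actual algorithm: a plain IP-style
# forwarder keeps using its preconfigured path, while a CDN-style chooser
# re-measures candidate paths and picks the currently fastest one.
# Route names and latencies (in minutes) are invented for the example.

routes = {
    "Mass. Ave. bridge": 12.0,   # the static default
    "BU Bridge": 45.0,           # under construction, badly congested
    "Longfellow Bridge": 9.0,
}

def ip_style_route(routes):
    """Plain routing: always return the preconfigured default path."""
    return "Mass. Ave. bridge"

def cdn_style_route(routes):
    """CDN-style: probe current conditions and take the fastest path."""
    return min(routes, key=routes.get)

print(ip_style_route(routes))   # Mass. Ave. bridge
print(cdn_style_route(routes))  # Longfellow Bridge
```

The point is not the code but the economics: the measurement infrastructure behind the second function is what CDN customers pay for.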

International Broadband Comparison, continued

Thursday, October 14th, 2010

New numbers from Cisco allow us to update our previous comparison of actual Internet usage around the world. We think this is a far more useful metric than the usual “broadband connections per 100 inhabitants” used by the OECD and others to compile the oft-cited world broadband rankings.

What the per capita metric really measures is household size. And because the U.S. has more people in each household than many other nations, we appear worse in those rankings. But as the Phoenix Center has noted, if each OECD nation reached 100% broadband nirvana — i.e., every household in every nation connected — the U.S. would actually fall from 15th to 20th. Counting residential connections per capita is thus not a very illuminating measure.

But look at the actual Internet traffic generated and consumed in the U.S.

The U.S. far outpaces every other region of the world. In the second chart, you can see that in fact only one nation, South Korea, generates significantly more Internet traffic per user than the U.S. This is no surprise. South Korea was the first nation to widely deploy fiber-to-the-x and was also the first to deploy 3G mobile, leading to not only robust infrastructure but also a vibrant Internet culture. The U.S. dwarfs most others.

If the U.S. were so far behind in broadband, we could not generate around twice as much network traffic per user as nations we are told far exceed our broadband capacity and connectivity. The U.S. has far to go in a never-ending buildout of its communications infrastructure. But we invest more than other nations, we’ve got better broadband infrastructure overall, and we use broadband more — and more effectively (see the Connectivity Scorecard and The Economist’s Digital Economy rankings) — than almost any other nation.

The conventional wisdom on this one is just plain wrong.

A Victory For the Free Web

Wednesday, April 7th, 2010

After yesterday’s federal court ruling against the FCC’s overreaching net neutrality regulations, which we have spent considerable time and effort combating for the last seven years, Holman Jenkins says it well:

Hooray. We live in a nation of laws and elected leaders, not a nation of unelected leaders making up rules for the rest of us as they go along, whether in response to besieging lobbyists or the latest bandwagon circling the block hauled by Washington’s permanent “public interest” community.

This was the reassuring message yesterday from the D.C. Circuit Court of Appeals aimed at the Federal Communications Commission. Bottom line: The FCC can abandon its ideological pursuit of the “net neutrality” bogeyman, and get on with making the world safe for the iPad.

The court ruled in considerable detail that there’s no statutory basis for the FCC’s ambition to annex the Internet, which has grown and thrived under nobody’s control.

. . .

So rather than focusing on new excuses to mess with network providers, the FCC should tackle two duties unambiguously before it: Figure out how to liberate the nation’s wireless spectrum (over which it has clear statutory authority) to flow to more market-oriented uses, whether broadband or broadcast, while also making sure taxpayers get adequately paid as the current system of licensed TV and radio spectrum inevitably evolves into something else.

Second: Under its media ownership hat, admit that such regulation, which inhibits the merger of TV stations with each other and with newspapers, is disastrously hindering our nation’s news-reporting resources and brands from reshaping themselves to meet the opportunities and challenges of the digital age. (Willy nilly, this would also help solve the spectrum problem as broadcasters voluntarily redeployed theirs to more profitable uses.)

Chronically Critical Broadband Country Comparisons

Friday, March 26th, 2010

With the release of the FCC’s National Broadband Plan, we continue to hear all sorts of depressing stories about the sorry state of American broadband Internet access. But is it true?

International comparisons in such a fast-moving arena as tech and communications are tough. I don’t pretend it is easy to boil down a hugely complex topic to one right answer, but I did have some critical things to say about a major recent report that got way too many things wrong. A new article by that report’s author singled out France as especially advanced compared with the U.S. To cut through all the clutter of conflicting data and competing interpretations on broadband deployment, access, adoption, prices, and speeds, however, maybe a simple chart will help.

Here we compare network usage. Not advertised speeds, which are suspect. Not prices, which can be distorted by the use of purchasing power parity (PPP). Not “penetration,” which is largely a function of income, urbanization, and geography. No, just simply: how much data traffic do various regions create and consume?

If U.S. networks were so backward — too sparse, too slow, too expensive — would Americans be generating 65% more network traffic per capita than their Western European counterparts?

Washington liabilities vs. innovative assets

Friday, March 12th, 2010

Our new article at RealClearMarkets:

As Washington and the states pile up mountainous liabilities — $3 trillion for unfunded state pensions, $10 trillion in new federal deficits through 2019, and $38 trillion (or is it $50 trillion?) in unfunded Medicare promises — the U.S. needs once again to call on its chief strategic asset: radical innovation.

One laboratory of growth will continue to be the Internet. The U.S. began the 2000’s with fewer than five million residential broadband lines and zero mobile broadband. We begin the new decade with 71 million residential lines and 300 million portable and mobile broadband devices. In all, consumer bandwidth grew almost 15,000%.

Even a thriving Internet, however, cannot escape Washington’s eager eye. As the Federal Communications Commission contemplates new “network neutrality” regulation and even a return to “Title II” telephone regulation, we have to wonder where growth will come from in the 2010’s . . . .

Collective vs. Creative: The Yin and Yang of Innovation

Tuesday, January 12th, 2010

Later this week the FCC will accept the first round of comments in its “Open Internet” rule making, commonly known as Net Neutrality. Never mind that the Internet is already open and it was never strictly neutral. Openness and neutrality are two appealing buzzwords that serve as the basis for potentially far-reaching new regulation of our most dynamic economic and cultural sector — the Internet.

I’ll comment on Net Neutrality from several angles over the coming days. But a terrific essay by Berkeley’s Jaron Lanier impelled me to begin by summarizing some of the big meta-arguments that have been swirling over the last few years and which now broadly define the opposing sides in the Net Neutrality debate. After surveying these broad categories, I’ll get into the weeds on technology, business, and policy.

The thrust behind Net Neutrality is a view that the Internet should conform to a narrow set of technology and business “ideals” — “open,” “neutral,” “non-discriminatory.” Wonderful words. Often virtuous. But these aren’t the only traits important to economic and cultural systems. In fact, Net Neutrality sets up a false dichotomy — a manufactured war — between open and closed, collaborative versus commercial, free versus paid, content versus conduit. I’ve made a long list of the supposed opposing forces. Net Neutrality favors only one side of the table below. It seeks to cement in place one model of business and technology. It is intensely focused on the left-hand column and is either oblivious or hostile to the right-hand column. It treats the right-hand items either as bad (prices) or assumes they appear magically (bandwidth).

We skeptics of Net Neutrality, on the other hand, do not favor one side or the other. We understand that there are virtues all around. Here’s how I put it on my blog last autumn:

Suggesting we can enjoy Google’s software innovations without the network innovations of AT&T, Verizon, and hundreds of service providers and technology suppliers is like saying that once Microsoft came along we no longer needed Intel.

No, Microsoft and Intel built upon each other in a virtuous interplay. Intel’s microprocessor and memory inventions set the stage for software innovation. Bill Gates exploited Intel’s newly abundant transistors by creating radically new software that empowered average businesspeople and consumers to engage with computers. The vast new PC market, in turn, dramatically expanded Intel’s markets and volumes and thus allowed it to invest in new designs and multi-billion dollar chip factories across the globe, driving Moore’s law and with it the digital revolution in all its manifestations.

Software and hardware. Bits and bandwidth. Content and conduit. These things are complementary. And yes, like yin and yang, often in tension and flux, but ultimately interdependent.

Likewise, we need the ability to charge for products and set prices so that capital can be rationally allocated and the hundreds of billions of dollars in network investment can occur. It is thus these hard prices that yield so many of the “free” consumer surplus advantages we all enjoy on the Web. No company or industry can capture all the value of the Web. Most of it comes to us as consumers. But companies and content creators need at least the ability to pursue business models that capture some portion of this value so they can not only survive but continually reinvest in the future. With a market moving so fast, with so many network and content models so uncertain during this epochal shift in media and communications, these content and conduit companies must be allowed to define their own products and set their own prices. We need to know what works, and what doesn’t.

When the “network layers” regulatory model, as it was then known, was first proposed back in 2003-04, my colleague George Gilder and I prepared testimony for the U.S. Senate. Although the layers model was little more than an academic notion, we thought then this would become the next big battle in Internet policy. We were right. Even though the “layers” proposal was (and is!) an ill-defined concept, the model we used to analyze what Net Neutrality would mean for networks and Web business models still applies. As we wrote in April of 2004:

Layering proponents . . . make a fundamental error. They ignore ever changing trade-offs between integration and modularization that are among the most profound and strategic decisions any company in any industry makes. They disavow Harvard Business professor Clayton Christensen’s theorems that dictate when modularization, or “layering,” is advisable, and when integration is far more likely to yield success. For example, the separation of content and conduit—the notion that bandwidth providers should focus on delivering robust, high-speed connections while allowing hundreds of millions of professionals and amateurs to supply the content—is often a sound strategy. We have supported it from the beginning. But leading edge undershoot products (ones that are not yet good enough for the demands of the marketplace) like video-conferencing often require integration.

Over time, the digital and photonic technologies at the heart of the Internet lead to massive integration — of transistors, features, applications, even wavelengths of light onto fiber optic strands. This integration of computing and communications power flings creative power to the edges of the network. It shifts bottlenecks. Crystalline silicon and flawless fiber form the low-entropy substrate that carries the world’s high-entropy messages — news, opinions, new products, new services. But these feats are not automatic. They cannot be legislated or mandated. And just as innovation in the core of the network unleashes innovation at the edges, so too more content and creativity at the edge create the need for ever more capacity and capability in the core. The bottlenecks shift again. More data centers, better optical transmission and switching, new content delivery optimization, the move from cell towers to femtocell wireless architectures. There is no final state of equilibrium where one side can assume that the other is a stagnant utility, at least not in the foreseeable future.

I’ll be back with more analysis of the Net Neutrality debate, but for now I’ll let Jaron Lanier (whose book You Are Not a Gadget was published today) sum up the argument:

Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.

If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.

There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code—like the page-rank algorithms in the top search engines or Adobe’s Flash—always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.

Actually, Silicon Valley is remarkably good at not making collectivization mistakes when our own fortunes are at stake. If you suggested that, say, Google, Apple and Microsoft should be merged so that all their engineers would be aggregated into a giant wiki-like project—well you’d be laughed out of Silicon Valley so fast you wouldn’t have time to tweet about it. Same would happen if you suggested to one of the big venture-capital firms that all the start-ups they are funding should be merged into a single collective operation.

But this is exactly the kind of mistake that’s happening with some of the most influential projects in our culture, and ultimately in our economy.

Berkman’s Broadband Bungle

Tuesday, December 22nd, 2009

Professors at a leading research unit put suspect data into a bad model, fail to include crucial variables, and even manufacture the most central variable to deliver the hoped-for outcome.

Climate-gate? No, call it Berkman’s broadband bungle.

In October, Harvard’s Berkman Center for the Internet and Society delivered a report, commissioned by the Federal Communications Commission, comparing international broadband markets and policies. The report was to be a central component of the Administration’s new national broadband Internet policy, arriving in February 2010.

Just one problem. Actually many problems. The report botched its chief statistical model in half a dozen ways. It used loads of questionable data. It didn’t account for the unique market structure of U.S. broadband. It reversed the arrow of time in its country case studies. It ignored the high-profile history of open access regulation in the U.S. It didn’t conduct the literature review the FCC asked for. It excommunicated Switzerland . . . .

See my critique of this big report on international broadband at RealClearMarkets.

Wireless Crunch

Sunday, November 22nd, 2009

Adam Thierer makes important points about the wireless data boom . . . and the wireless spectrum crunch.

Neutrality for thee, but not for me

Sunday, October 4th, 2009

In Monday’s Wall Street Journal, I address the once-again raging topic of “net neutrality” regulation of the Web. On September 21, new FCC chair Julius Genachowski proposed more formal neutrality regulations. Then on September 25, AT&T accused Google of violating the very neutrality rules the search company has sought for others. The gist of the complaint was that the new Google Voice service does not connect all phone calls the way other phone companies are required to do. Not an earthshaking matter in itself, but a good example of the perils of neutrality regulation.

As the Journal wrote in its own editorial on Saturday:

Our own view is that the rules requiring traditional phone companies to connect these calls should be scrapped for everyone rather than extended to Google. In today’s telecom marketplace, where the overwhelming majority of phone customers have multiple carriers to choose from, these regulations are obsolete. But Google has set itself up for this political blowback.

Last week FCC Chairman Julius Genachowski proposed new rules for regulating Internet operators and gave assurances that “this is not about government regulation of the Internet.” But this dispute highlights the regulatory creep that net neutrality mandates make inevitable. Content providers like Google want to dabble in the phone business, while the phone companies want to sell services and applications.

The coming convergence will make it increasingly difficult to distinguish among providers of broadband pipes, network services and applications. Once net neutrality is unleashed, it’s hard to see how anything connected with the Internet will be safe from regulation.

Several years ago, all sides agreed to broad principles that prohibit blocking Web sites or applications. But I have argued that more detailed and formal regulations governing such a dynamic arena of technology and changing business models would stifle innovation.

Broadband to the home, office, and to a growing array of diverse mobile devices has been a rare bright spot in this dismal economy. Since net neutrality regulation was first proposed in early 2004, consumer bandwidth per capita in the U.S. grew to 3 megabits per second from just 262 kilobits per second, and monthly U.S. Internet traffic increased to two billion gigabytes from 170 million gigabytes — both 10-fold leaps. New wired and wireless innovations and services are booming.

All without net neutrality regulation.
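The roughly tenfold figures cited above are easy to verify with a back-of-envelope check, using only the numbers quoted in the text:

```python
# Back-of-envelope check of the growth figures above (values from the text).
bandwidth_2004_kbps = 262          # per-capita consumer bandwidth, early 2004
bandwidth_2009_kbps = 3_000        # 3 megabits per second today
traffic_2004_gb = 170_000_000      # monthly U.S. Internet traffic, 2004
traffic_2009_gb = 2_000_000_000    # monthly U.S. Internet traffic, today

bandwidth_growth = bandwidth_2009_kbps / bandwidth_2004_kbps
traffic_growth = traffic_2009_gb / traffic_2004_gb
print(round(bandwidth_growth, 1))  # 11.5
print(round(traffic_growth, 1))    # 11.8 -- both roughly tenfold leaps
```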

The proposed FCC regulations could go well beyond the existing (and uncontroversial) non-blocking principles. A new “Fifth Principle,” if codified, could prohibit “discrimination” not just among applications and services but even at the level of data packets traversing the Net. But traffic management of packets is used across the Web to ensure robust service and security.

As network traffic, content, and outlets proliferate and diversify, Washington wants to apply rigid, top-down rules. But the network requirements of email and high-definition video are very different. Real-time video conferencing requires more network rigor than stored content like YouTube videos. Wireless traffic patterns are more unpredictable than those of residential networks because cellphone users are, well, mobile. And the next generation of video cloud computing — what I call the exacloud — will impose the most severe constraints yet on network capacity and packet delay.

Or if you think entertainment unimportant, consider the implications for cybersecurity. The very network technologies that ensure a rich video experience are used to kill dangerous “botnets” and combat cybercrime.

And what about low-income consumers? If network service providers can’t partner with content companies, offer value-added services, or charge high-end users more money for consuming more bandwidth, low-end consumers will be forced to pay higher prices. Net neutrality would thus frustrate the Administration’s goal of 100% broadband.

Health care, energy, jobs, debt, and economic growth are rightly earning most of the policy attention these days. But regulation of the Net would undermine the key global platform that underlies better performance on each of these crucial economic matters. Washington may be bailing out every industry that doesn’t work, but that’s no reason to add new constraints to one that manifestly does.

— Bret Swanson

Leviathan Spam

Wednesday, September 23rd, 2009

Leviathan Spam

Send the bits with lasers and chips
See the bytes with LED lights

Wireless, optical, bandwidth boom
A flood of info, a global zoom

Now comes Lessig
Now comes Wu
To tell us what we cannot do

The Net, they say,
Is under attack
Stop!
Before we can’t turn back

They know best
These coder kings
So they prohibit a billion things

What is on their list of don’ts?
Most everything we need the most

To make the Web work
We parse and label
We tag the bits to keep the Net stable

The cloud is not magic
It’s routers and switches
It takes a machine to move exadigits

Now Lessig tells us to route is illegal
To manage Net traffic, Wu’s ultimate evil (more…)

Biting the handsets that connect the world

Tuesday, July 7th, 2009

Over the July 4 weekend, relatives and friends kept asking me: Which mobile phone should I buy? There are so many choices.

I told them I love my iPhone, but all kinds of new devices from BlackBerries and Samsungs to Palm’s new Pre make strong showings, and the less well-known HTC, one of the biggest innovators of the last couple years, is churning out cool phones across the price-point and capability spectrum. Several days before, on Wednesday, July 1, I had made a mid-afternoon stop at the local Apple store. It was packed. A short line formed at the entrance where a salesperson was taking names on a clipboard. After 15 minutes of browsing, it was my turn to talk to a salesman, and I asked: “Why is the store so crowded? Some special event?”

“Nope,” he answered. “This is pretty normal for a Wednesday afternoon, especially since the iPhone 3G S release.”

So, to set the scene: The retail stores of Apple Inc., a company not even in the mobile phone business two short years ago, are jammed with people craving iPhones and other networked computing devices. And competing choices from a dozen other major mobile device companies are proliferating and leapfrogging each other technologically so fast as to give consumers headaches.

But amid this avalanche of innovative alternatives, we hear today that:

The Department of Justice has begun looking into whether large U.S. telecommunications companies such as AT&T Inc. and Verizon Communications Inc. are abusing the market power they have amassed in recent years . . . .

. . . The review is expected to cover all areas from land-line voice and broadband service to wireless.

One area that might be explored is whether big wireless carriers are hurting smaller rivals by locking up popular phones through exclusive agreements with handset makers. Lawmakers and regulators have raised questions about deals such as AT&T’s exclusive right to provide service for Apple Inc.’s iPhone in the U.S. . . .

The department also may review whether telecom carriers are unduly restricting the types of services other companies can offer on their networks . . . .

On what planet are these Justice Department lawyers living?

Most certainly not the planet where consumer wireless bandwidth rocketed by a factor of 542 (or 54,200%) over the last eight years. The chart below, taken from our new research, shows that by 2008, U.S. consumer wireless bandwidth — a good proxy for the power of the average citizen to communicate using mobile devices — grew to 325 terabits per second from just 600 gigabits per second in 2000. This 500-fold bandwidth expansion enabled true mobile computing, changed industries and cultures, and connected billions across the globe. Perhaps the biggest winners in this wireless boom were low-income Americans, and their counterparts worldwide, who gained access to the Internet’s riches for the first time.

[Chart: Total U.S. wireless bandwidth, 2000–2008]
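The growth factor cited above is a straightforward calculation from the two endpoint figures:

```python
# Check of the wireless bandwidth growth factor
# (600 gigabits per second in 2000 to 325 terabits per second in 2008).
bandwidth_2000_gbps = 600
bandwidth_2008_tbps = 325

factor = (bandwidth_2008_tbps * 1_000) / bandwidth_2000_gbps  # Tbps -> Gbps
print(round(factor))  # 542, i.e. roughly 54,200% of the 2000 level
```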

Meanwhile, Sen. Herb Kohl of Wisconsin is egging on Justice and the FCC with a long letter full of complaints right out of the 1950s. He warns of consolidation and stagnation in the dynamic, splintering communications sector; of dangerous exclusive handset deals even as mobile computers are perhaps the world’s leading example of innovative diversity; and of rising prices as communications costs plummet.

Kohl cautioned in particular that text message prices are rising and could severely hurt wireless consumers. But this complaint does not square with the numbers: the top two U.S. mobile phone carriers now transmit more than 200 billion text messages per calendar quarter.

It’s clear: consumers love paid text messaging even though similar applications, such as email, Skype calling, and instant messaging (IM, or chat), are mostly free. A couple weeks ago I was asking a family babysitter about the latest teenage trends in text messaging and mobile devices, and I noted that I’d just seen highlights on SportsCenter of the National Texting Championship. Yes, you heard right. A 15-year-old girl from Iowa, who had only been texting for eight months, won the speed texting contest and a prize of $50,000. I told the babysitter that ESPN reported this young Iowan used a crazy-sounding 14,000 texts per month. “Wow, that’s a lot,” the babysitter said. “I only do 8,000 a month.”

I laughed. Only eight thousand.

In any case, Sen. Kohl’s complaint of a supposed rise in per-text-message pricing from $0.10 to $0.20 is mostly irrelevant. Few people pay these per-text prices. A quick scan of the latest plans of one carrier, AT&T, shows three offerings: 200 texts for $5.00; 1,500 texts for $15.00; or unlimited texts for $20. These plans correspond to per-text prices, respectively, of 2.5 cents, 1 cent, and, in the case of our 8,000-text teen, just 0.25 cents. Not anywhere close to 20 cents.
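The per-text arithmetic is easy to check, using the plan prices quoted above:

```python
# Effective per-text prices (in cents) for the three AT&T plans cited above:
# (monthly price in dollars, texts actually used per month).
plans = [(5.00, 200), (15.00, 1_500), (20.00, 8_000)]

prices_cents = [round(dollars * 100 / texts, 2) for dollars, texts in plans]
print(prices_cents)  # [2.5, 1.0, 0.25] -- nowhere near 20 cents per text
```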

The criticism of exclusive handset deals — like the one between AT&T and Apple’s iPhone or Sprint and Palm’s new Pre — is bizarre. Apple wasn’t even in the mobile business two years ago. And after its Treo success several years ago, Palm, originally a maker of PDAs (remember those?), had fallen far behind. Remember, too, that RIM’s popular BlackBerry devices were, until recently, just email machines. Then there is Amazon, which created a whole new business and publishing model with its wireless Kindle book- and Web-reader that runs on the Sprint mobile network. These four companies made cooperative deals with service providers to help them launch risky products into an intensely competitive market with longtime global standouts like Nokia, Motorola, Samsung, LG, Sanyo, SonyEricsson, and others.

As The Wall Street Journal noted today:

More than 30 devices have been introduced to compete with the iPhone since its debut in 2007. The fact that one carrier has an exclusive has forced other companies to find partners and innovate. In response, the price of the iPhone has steadily fallen. The earliest iPhones cost more than $500; last month, Apple introduced a $99 model.

If this is a market malfunction, let’s have more of them. Isn’t Washington busy enough re-ordering the rest of the economy?

These new devices, with their high-resolution screens, fast processors, and substantial 3G mobile and Wi-Fi connections to the cloud have launched a new era in Web computing. The iPhone now boasts more than 50,000 applications, mostly written by third-party developers and downloadable in seconds. Far from closing off consumer choice, the mobile phone business has never been remotely as open, modular, and dynamic.

There is no reason why 260 million U.S. mobile customers should be blocked from this onslaught of innovation in a futile attempt to protect a few small wireless service providers. Those providers might not, at this very moment, have access to every new device in the world, but tomorrow they will no doubt be offering a range of similar devices that far eclipse the most powerful and popular devices of just a year or two ago.

Bret Swanson

Bandwidth Boom: Measuring Communications Capacity

Wednesday, June 24th, 2009

See our new paper estimating the growth of consumer bandwidth – or our capacity to communicate – from 2000 to 2008. We found:

  • a huge 5,400% increase in residential bandwidth;
  • an astounding 54,200% boom in wireless bandwidth; and
  • an almost 100-fold increase in total consumer bandwidth

[Chart: U.S. consumer bandwidth, 2000–2008, residential and wireless]

U.S. consumer bandwidth at the end of 2008 totaled more than 717 terabits per second, yielding, on a per capita basis, almost 2.4 megabits per second of communications power.
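A back-of-envelope check of the per-capita figure. The population number is my assumption for the check, not a figure from the paper:

```python
# Per-capita check: 717 terabits per second spread across the U.S. population.
# The ~304 million 2008 population figure is an assumption for this check.
total_bandwidth_bps = 717e12
us_population_2008 = 304e6

per_capita_mbps = total_bandwidth_bps / us_population_2008 / 1e6
print(round(per_capita_mbps, 1))  # 2.4 megabits per second per person
```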

Bandwidth and QoS: Much ado about something

Friday, April 24th, 2009

The supposed top finding of a new report commissioned by the British telecom regulator Ofcom is that we won’t need any QoS (quality of service) or traffic management to accommodate next generation video services, which are driving Internet traffic at consistently high annual growth rates of between 50% and 60%. TelecomTV One headlined, “Much ado about nothing: Internet CAN take video strain says UK study.” 

But the content of the Analysys Mason (AM) study, entitled “Delivering High Quality Video Services Online,” does not support either (1) the media headline — “Much ado about nothing,” which implies next generation services and brisk traffic growth don’t require much in the way of new technology or new investment to accommodate them — or (2) its own “finding” that QoS and traffic management aren’t needed to deliver these next generation content and services.

For example, AM acknowledges in one of its five key findings in the Executive Summary: 

innovative business models might be limited by regulation: if the ability to develop and deploy novel approaches was limited by new regulation, this might limit the potential for growth in online video services.

In fact, the very first key finding says:

A delay in the migration to 21CN-based bitstream products may have a negative impact on service providers that use current bitstream products, as growth in consumption of video services could be held back due to the prohibitive costs of backhaul capacity to support them on the legacy core network. We believe that the timely migration to 21CN will be important in enabling significant take-up of online video services at prices that are reasonable for consumers.

So very large investments in new technologies and platforms are needed, and new regulations that discourage this investment could delay crucial innovations on the edge. Sounds like much ado about something, something very big.  (more…)

Apples and Oranges

Friday, April 10th, 2009

Saul Hansell has done some good analysis of the broadband market (as I noted here), and I’m generally a big fan of the NYT’s Bits blog. But this item mixes cable TV apples with switched Internet oranges. And beyond that just misses the whole concept of products and prices.

Questioning whether Time Warner will be successful in its attempt to cap bandwidth usage on its broadband cable modem service — effectively raising the bandwidth pricing issue — Hansell writes:

I tried to explore the marginal costs with [Time Warner's] Mr. Hobbs. When someone decides to spend a day doing nothing but downloading every Jerry Lewis movie from BitTorrent, Time Warner doesn’t have to write a bigger check to anyone. Rather, as best as I can figure it, the costs are all about building the network equipment and buying long-haul bandwidth for peak capacity.

If that is true, the question of what is “fair” is somewhat more abstract than just saying someone who uses more should pay more. After all, people who watch more hours of cable television don’t pay more than those who don’t.

It’s also true that a restaurant patron who finishes his meal doesn’t pay more than someone who leaves half the same menu item on his plate. If he orders two bowls of soup, he gets more soup. He can’t order one bowl of soup and demand each of his five dining partners also be served for free. Pricing decisions depend upon the product and the granularity that is being offered. (more…)

The nuts & bolts of the Net

Wednesday, December 17th, 2008

For those who found the Google-net-neutrality-edge-caching story confusing, here’s a terrifically lucid primer by my PFF colleague Adam Marcus explaining “edge caching” and content delivery networks (CDNs) and, even more basically, the concepts of bandwidth and latency.