Category Archives: Exaflood

U.S. Share of Internet Traffic Grows

Over the last half decade, during a protracted economic slump, we’ve documented the persistent successes of Digital America — for example, the rise of the App Economy. Measuring the health of our tech sectors is important, in part because policy agendas are often based on assertions of market failure (or regulatory failure) and often include comparisons with other nations. Several years ago we developed a simple new metric that we thought better reflected the health of broadband in international comparisons. Instead of measuring broadband using “penetration rates,” or the number of connections per capita, we thought a much better indicator was actual Internet usage. So we started looking at Internet traffic per capita and per Internet user (see here, here, here, and, for more context, here).

We’ve updated the numbers here, using Cisco’s Visual Networking Index for traffic estimates and Internet user figures from the International Telecommunication Union. And the numbers suggest the U.S. digital economy, and its broadband networks, are healthy and extending their lead internationally. (Patrick Brogan of USTelecom has also done excellent work on this front; see his new update.)
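
For readers who want to reproduce the basic metric, here is a minimal sketch of the calculation, assuming regional traffic totals (Cisco VNI style) and user and population counts (ITU style). All of the figures in the sketch are placeholders for illustration, not the actual published numbers.

    # Minimal sketch of the traffic-per-user / traffic-per-capita metric.
    # All numbers below are illustrative placeholders, NOT Cisco VNI or ITU figures.
    TRAFFIC_PB_PER_MONTH = {"North America": 13_000, "Western Europe": 7_000}
    INTERNET_USERS_M = {"North America": 280, "Western Europe": 330}
    POPULATION_M = {"North America": 350, "Western Europe": 410}

    def gb_per_person(petabytes_per_month, millions_of_people):
        """Convert a regional PB/month total into GB per person per month."""
        return petabytes_per_month * 1_000_000 / (millions_of_people * 1_000_000)

    for region, traffic in TRAFFIC_PB_PER_MONTH.items():
        per_user = gb_per_person(traffic, INTERNET_USERS_M[region])
        per_capita = gb_per_person(traffic, POPULATION_M[region])
        print(f"{region}: {per_user:.1f} GB/user/mo, {per_capita:.1f} GB/capita/mo")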

If we look at regional comparisons of traffic per person, we see North America generates and consumes nearly seven times the world average and around two and a half times that of Western Europe.

Looking at individual nations, and switching to the metric of traffic per user, we find that the U.S. is actually pulling away from the rest of the world. In our previous reports, the U.S. trailed only South Korea, was essentially tied with Canada, and generated around 60-70% more traffic than Western European nations. Now, the U.S. has separated itself from Canada and is generating two to three times the traffic per user of Western Europe and Japan.

Perhaps the most remarkable fact, as Brogan notes, is that the U.S. has nearly caught up with South Korea, which, for the last decade, was a real outlier — far and away the worldwide leader in Internet infrastructure and usage.

Traffic is difficult to measure and its nature and composition can change quickly. There are a number of factors we’ll talk more about later, such as how much of this traffic originates in the U.S. but is destined for foreign lands. Yet these are some of the best numbers we have, and the general magnitudes reinforce the idea that the U.S. digital economy, under a relatively light-touch regulatory model, is performing well.

Net ‘Neutrality’ or Net Dynamism? Easy Choice.

Consumers beware. A big content company wants to help pay for the sports you love to watch.

ESPN is reportedly talking with one or more mobile service providers about a new arrangement in which the sports giant might agree to pay the mobile providers so that its content doesn’t count against a subscriber’s data cap. People like watching sports on their mobile devices, but web video consumes lots of data and is especially tough on bandwidth-constrained mobile networks. The mobile providers and ESPN have noticed usage slowing as consumers approach their data subscription ceilings, after which they are commonly charged overage fees. ESPN doesn’t like this. It wants people to watch as much as possible. This is how it sells advertising. ESPN wants to help people watch more by, in effect, boosting the amount of data a user may consume — at no cost to the user.

As good a deal as this may be for consumers (and the companies involved), the potential arrangement offends some people’s very particular notion of “network neutrality.” They often have trouble defining what they mean by net neutrality, but they know rule breakers when they see them. Sure enough, longtime net neutrality advocate Public Knowledge noted, “This is what a network neutrality violation looks like.”

The basic notion is that all bits on communications networks should be treated the same. No prioritization, no discrimination, and no partnerships between content companies and conduit companies. Over the last decade, however, as we debated net neutrality in great depth and breadth, we would point out that such a notional rule would likely result in many perverse consequences. For example, we noted that, had net neutrality existed at the time, outlawing pay-for-prioritization would have prevented the rise of content delivery networks (CDNs), which have fundamentally improved the user experience of viewing online content. When challenged in this way, the net neutrality proponents would often reply, Well, we didn’t mean that. Of course that should be allowed. We also would point out that yesterday’s and today’s networks discriminate among bits in all sorts of ways, and that they would continue doing so in the future. Their arguments often deteriorated into a general view that Bad things should be banned. Good things should be allowed. And who do you think would be the arbiter of good and evil? You guessed it.

So what is the argument in the case of ESPN? The idea that ESPN would pay to exempt its bits from data caps apparently offends the abstract all-bits-equal notion. But why is this bad in concrete terms? No one is talking about blocking content. In fact, by paying for a portion of consumers’ data consumption, such an arrangement can boost consumption and consumer choice. Far from blocking content, consumers will enjoy more content. Now I can consume my 2 gigabytes of data plus all the ESPN streaming I want. That’s additive. And if I don’t watch ESPN, then I’m no worse off. But if the mobile company were banned from such an arrangement, it may be forced to raise prices for everyone. Now, because ESPN content is popular and bandwidth-hungry, I, especially as an ESPN non-watcher, am worse off.

So the critics’ real worry is, I suppose, that ESPN, by virtue of its size, could gain an advantage on some other sports content provider who chose not to offer a similar uncapped service. But this is NOT what government policy should be — the micromanagement of prices, products, the structure of markets, and relationships among competitive and cooperative firms. This is what we warned would happen. This is what we said net neutrality was really all about — protecting some firms and punishing others. Where is the consumer in this equation?

These practical and utilitarian arguments about technology and economics are important. Yet they ignore perhaps the biggest point of all: the FCC has no authority to regulate the Internet. The Internet is perhaps the greatest free-flowing, fast-growing, dynamic engine of cultural and economic value we’ve known. The Internet’s great virtue is its ability to change and grow, to foster experimentation and innovation. Diversity in networks, content, services, apps, and business models is a feature, not a bug. Regulation necessarily limits this freedom and diversity, making everything more homogeneous and diminishing the possibilities for entrepreneurship and innovation. Congress has given the FCC no authority to regulate the Internet. The FCC invented this job for itself and is now being challenged in court.

Possible ESPN-mobile partnerships are just the latest reminder of why we don’t want government limiting our choices — and all the possibilities — on the Internet.

— Bret Swanson

FedEx vs. Broadband: The Big Bio-Data Dilemma

The New York Times reports today that scientists reading human genomes are generating so much data that they must use snail mail instead of the Internet to send the DNA readouts around the globe.

BGI, based in China, is the world’s largest genomics research institute, with 167 DNA sequencers producing the equivalent of 2,000 human genomes a day.

BGI churns out so much data that it often cannot transmit its results to clients or collaborators over the Internet or other communications lines because that would take weeks. Instead, it sends computer disks containing the data, via FedEx.

“It sounds like an analog solution in a digital age,” conceded Sifei He, the head of cloud computing for BGI, formerly known as the Beijing Genomics Institute. But for now, he said, there is no better way.

The field of genomics is caught in a data deluge. DNA sequencing is becoming faster and cheaper at a pace far outstripping Moore’s law, which describes the rate at which computing gets faster and cheaper.

The result is that the ability to determine DNA sequences is starting to outrun the ability of researchers to store, transmit and especially to analyze the data.

We’ve been talking about the oncoming rush of biomedical data for a while. A human genome consists of some 2.9 billion base pairs, easily stored in around 725 megabytes with standard compression techniques. Two thousand genomes a day, times 725 MB, equals 1,450,000 MB, or 1.45 terabytes. That’s a lot of data for one entity to transmit in a day’s time. Some researchers believe a genome can be losslessly compressed to approximately 4 megabytes. In compressed form, 2,000 genomes would total around 8,000 MB, or just 8 gigabytes. Easily doable for a major institution.
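
As a rough check on that arithmetic, and on why shipping disks can still beat the network, here is a quick sketch. The 100 Mbps sustained link speed is an assumption for illustration, not anything BGI has reported.

    # Check of the genome arithmetic above, plus an illustrative transfer time.
    # The 100 Mbps sustained link speed is an assumed figure, not BGI's actual capacity.
    GENOMES_PER_DAY = 2_000
    RAW_MB_PER_GENOME = 725        # ~2.9 billion base pairs, standard compression
    TIGHT_MB_PER_GENOME = 4        # aggressive lossless-compression estimate

    raw_tb_per_day = GENOMES_PER_DAY * RAW_MB_PER_GENOME / 1_000_000
    tight_gb_per_day = GENOMES_PER_DAY * TIGHT_MB_PER_GENOME / 1_000
    print(f"Raw output: {raw_tb_per_day:.2f} TB/day")                # 1.45 TB/day
    print(f"Tightly compressed: {tight_gb_per_day:.0f} GB/day")      # 8 GB/day

    # Time to move the raw 1.45 TB over an assumed 100 Mbps sustained link.
    link_gbps = 0.1
    hours = raw_tb_per_day * 8_000 / link_gbps / 3_600               # TB -> gigabits: x8,000
    print(f"At 100 Mbps sustained: about {hours:.0f} hours for one day's output")

On those assumptions, a single day of raw output ties up a 100 Mbps link for well over a day, while the tightly compressed version is trivial to move. That is the whole dilemma in miniature.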

We’d be interested to know more.

AT&T’s Exaflood Acquisition Good for Mobile Consumers and Internet Growth

AT&T’s announced purchase of T-Mobile is an exaflood acquisition — a response to the overwhelming proliferation of mobile computers and multimedia content and thus network traffic. The iPhone, iPad, and other mobile devices are pushing networks to their limits, and AT&T literally could not build cell sites (and acquire spectrum) fast enough to meet demand for coverage, capacity, and quality. Buying rather than building new capacity improves service today (or nearly today) — not years from now. It’s a home run for the companies — and for consumers.

We’re nearing 300 million mobile subscribers in the U.S., and Strategy Analytics estimates by 2014 we’ll add an additional 60 million connected devices like tablets, kiosks, remote sensors, medical monitors, and cars. All this means more connectivity, more of the time, for more people. Mobile data traffic on AT&T’s network rocketed 8,000% in the last four years. Remember that just a decade ago there was essentially no wireless data traffic. It was all voice traffic. A few rudimentary text applications existed, but not much more. By year-end 2010, AT&T was carrying around 12 petabytes per month of mobile traffic alone. The company expects another 8 to 10-fold rise over the next five years, when its mobile traffic could reach 150 petabytes per month. (We projected this type of growth in a series of reports and articles over the last decade.)

The two companies’ networks and businesses are so complementary that AT&T thinks it can achieve $40 billion in cost savings. That’s more than the $39-billion deal price. Those huge efficiencies should help keep prices low in a market that already boasts the lowest prices in the world (just $0.04 per voice minute versus, say, $0.16 in Europe).

But those who focus only on the price of existing products (like voice minutes) and traditional metrics of “competition,” like how many national service providers there are, will miss the boat. Pushing voice prices down marginally from already low levels is not the paramount objective. Building fourth generation mobile multimedia networks is. Some wonder whether “consolidation of power could eventually lead to higher prices than consumers would otherwise see.” But “otherwise” assumes a future that isn’t going to happen. T-Mobile doesn’t have the spectrum or financial wherewithal to deploy a full 4G network. So the 4G networks of AT&T, Verizon, and Sprint (in addition to Clearwire and LightSquared) would have been competing against the 3G network of T-Mobile. A 3G network can’t compete on price with a 4G network because it can’t offer the same product. In many markets, inferior products can act as partial substitutes for more costly superior products. But in the digital world, next gen products are so much better and cheaper than the previous versions that older products quickly get left behind. Could T-Mobile have milked its 3G network serving mostly voice customers at bargain basement prices? Perhaps. But we already have a number of low-cost, bare-bones mobile voice providers.

The usual worries from the usual suspects in these merger battles go like this: First, assume a perfect market where all products are commodities, capacity is unlimited yet technology doesn’t change, and competitors are many. Then assume a drastic reduction in the number of competitors with no prospect of new market entrants. Then warn that prices could spike. It’s a story that may resemble some world, but not the one in which we live.

The merger’s boost to cell-site density is hugely important and should not be overlooked. Yes, we will simultaneously be deploying lots of new Wi-Fi nodes and femtocells (little mobile nodes in offices and homes), which help achieve greater coverage and capacity, but we still need more macrocells. AT&T’s acquisition will boost its total number of cell sites by 30%. In major markets like New York, San Francisco, and Chicago, the number of AT&T cell sites will grow by 25%-45%. In many areas, total capacity should double.

It’s not easy to build cell sites. You’ve got to find good locations, get local government approvals, acquire (or lease) the sites, plan the network, build the tower and network base station, connect it to your long-haul network with fiber-optic lines, and of course pay for it. In the last 20 years, the number of U.S. cell sites has grown from 5,000 to more than 250,000, but we still don’t have nearly enough. CEO Randall Stephenson says the T-Mobile purchase will achieve almost immediately a network expansion that would have taken five years through AT&T’s existing organic growth plan. Because of the nature of mobile traffic — i.e., it’s mobile and bandwidth is shared — the combination of the two networks should yield a more-than-linear improvement in quality. The increased cell-site density will give traffic planners much more flexibility to deliver high-capacity services than if the two companies operated separately.

The U.S. today has the most competitive mobile market in the world (second, perhaps, only to tiny Hong Kong). Yes, it’s true, even after the merger, the U.S. will still have a more “competitive” market than most. But “competition” is often not the most — or even a very — important metric in these fast-moving markets. In periods of undershoot, where a technology is not good enough to meet demand on quantity or quality, you often need integration to optimize the interfaces and the overall experience, a la the hand-in-glove pairing of the iPhone’s hardware, software, and network. Streaming a video to a tiny piece of plastic in your pocket moving at 60 miles per hour — with thousands of other devices competing for the same bandwidth — is not a commodity service. It’s very difficult. It requires millions of things across the network to go just right. These services often take heroic efforts and huge sums of capital just to make the systems work at all.

Over time technologies overshoot, markets modularize, and small price differences matter more. Products that seem inferior but which are “good enough” then begin to disrupt state-of-the-art offerings. This is what happened to the voice minute market over the last 20 years. Voice-over-IP, which initially was just “good enough,” made voice into a commodity. Competition played a big part, though Moore’s law was the chief driver of falling prices. Now that voice is close to free (though still not good enough on many mobile links) and data is king, we see the need for more integration to meet the new challenges of the multimedia exaflood. It’s a never-ending, dynamic cycle. (For much more on this view of technology markets, see Harvard Business School’s Clayton Christensen.)

The merger will have its critics, but it seriously accelerates the coming of fourth generation mobile networks and the spread of broadband across America.

— Bret Swanson

Data roaming mischief . . . Another pebble in the digital river?

Mobile communications is among the healthiest of U.S. industries. Through a time of economic peril and now merely uncertainty, mobile innovation hasn’t wavered. It’s been a too-rare bright spot. Huge amounts of infrastructure investment, wildly proliferating software apps, too many devices to count. If anything, the industry is moving so fast on so many fronts that we risk not keeping up with needed capacity.

Mobile, perhaps not coincidentally, has also historically been a quite lightly regulated industry. But a slow boil of many small rules, and proposed rules, is emerging that could threaten the sector’s success. I’m thinking of the “bill shock” proceeding, in which the FCC is looking at billing practices and various “remedies.” And the failure to settle the D block public safety spectrum issue in a timely manner. And now we have a group of rural mobile providers who want the FCC to set prices in the data roaming market.

You remember that “roaming” is when service provider A pays provider B for access to B’s network so that A’s customers can get service when they are outside A’s service area, or where it has capacity constraints, or for redundancy. These roaming agreements are numerous and have always been privately negotiated. The system works fine.

But now a group of provider A’s, who may not want to build large amounts of new network capacity to meet rising demand for mobile data (video, Facebook, Twitter, app downloads, and the like), want the FCC to mandate access to the B’s networks at regulated prices. And in this case, the B’s have spent many tens of billions of dollars on spectrum and network equipment to provide fast data services, though even these investments can barely keep up with blazing demand.

The FCC has never regulated mobile phone rates, let alone data rates, let alone data roaming rates. And of course mobile voice and data rates have been dropping like rocks. These few rural providers are asking the FCC to step in where it hasn’t before. They are asking the FCC to impose old-time common carrier regulation in a modern competitive market – one in which the FCC has no authority to impose common carrier rules and prices.

U.S. info-tech investment in 2010 approached $500 billion. Communications equipment and structures (like cell phone towers) surpassed $105 billion. The fourth generation of mobile networks is just in its infancy. We will need to invest many tens of billions of dollars each year for the foreseeable future both to drive and accommodate Internet innovation, which spreads productivity enhancements and wealth across every sector in the economy.

It is perhaps not surprising that a small number of service providers who don’t invest as much in high-capacity networks might wish to gain artificially cheap access to the networks of the companies who invest tens of billions of dollars per year in their mobile networks alone. Who doesn’t like lower input prices? Who doesn’t like his competitors to do the heavy lifting and surf in his wake? But the equally unsurprising result of such a policy could be to reduce the amount that everyone invests in new networks. And this is simply an outcome the technology industry, and the entire country, cannot afford. The FCC itself has said that “broadband is the great infrastructure challenge of the early 21st century.”

Economist Michael Mandel has offered a useful analogy:

new regulations [are] like tossing small pebbles into a stream. Each pebble by itself would have very little effect on the flow of the stream. But throw in enough small pebbles and you can make a very effective dam.

Why does this happen? The answer is that each pebble by itself is harmless. But each pebble, by diverting the water into an ever-smaller area, creates a ‘negative externality’ that creates more turbulence and slows the water flow.

Similarly, apparently harmless regulations can create negative externalities that add up over time, by forcing companies to spend time and energy meeting the new requirements. That reduces business flexibility and hurts innovation and growth.

It may be true that none of the proposed new rules for wireless could alone bring down the sector. But keep piling them up, and you can dangerously slow an important economic juggernaut. Price controls for data roaming are a terrible idea.

An Economic Solution to the D Block Dilemma

Last month, Cisco reported that wireless data traffic is growing faster than projected (up 159% in 2010 versus its estimate of 149%). YouTube illustrated the point with its own report that mobile views of its videos grew 3x last year to over 200 million per day. Tablets like the Apple iPad were part of the upside surprise.

The very success of smartphones, tablets, and all the new mobile form-factors fuels frustration. They are never fast enough. We always want more capacity, less latency, fewer dropped calls, and ubiquitous access. In a real sense, these are good problems to have. They reflect a fast-growing sector delivering huge value to consumers and businesses. Rapid growth, however, necessarily strains various nodes in the infrastructure. At some point, a lack of resources could stunt this upward spiral. And one of the most crucial resources is wireless spectrum.

There is broad support for opening vast swaths of underutilized airwaves — 300 megahertz (MHz) by 2015 and 500 MHz overall — but we first must dispose of one spectrum scuffle known as the “D block.” Several years ago, in a previous spectrum auction, the FCC offered up 10 MHz for commercial use — with the proviso that the owner would have to share the spectrum with public safety users (police, fire, emergency) nationwide. This “D block” sat next to an additional 10 MHz known as Public Safety Broadband (PSB), which was granted outright to the public safety community. But the D block auction failed. Potential bidders could not reconcile the technical and business complexities of this “encumbered” spectrum. The FCC received a single D block bid of $472 million, far below its minimum acceptable bid of $1.3 billion. So today, three years after the failed auction and almost a decade after 9/11, we still have not resolved the public safety spectrum question.

World Catches On to the Exaflood

Researchers Martin Hilbert and Priscila Lopez add to the growing literature on the data explosion (what we long ago termed the “exaflood”) with a study of analog and digital information storage, transmission, and computation from 1986 through 2007. They found that in 2007, globally, we were able to store 290 exabytes, communicate almost 2 zettabytes, and compute around 6.4 exa-instructions per second (EIPS?) on general-purpose computers. The numbers have gotten much, much larger since then. Here’s the Science paper (subscription), which appears alongside an entire special issue, “Dealing With Data,” and here’s a graphic from the Washington Post.

(Thanks to @AdamThierer for flagging the WashPost article.)

Mobile traffic grew 159% in 2010 . . . Tablets giving big boost

Among other findings in the latest version of Cisco’s always useful Internet traffic updates:

  • Mobile data traffic was even higher in 2010 than Cisco had projected in last year’s report. Actual growth was 159% (2.6x) versus projected growth of 149% (2.5x).
  • By 2015, we should see one mobile device per capita . . . worldwide. That means around 7.1 billion mobile devices compared to 7.2 billion people.
  • Mobile tablets (e.g., iPads) are likely to generate as much data traffic in 2015 as all mobile devices worldwide did in 2010.
  • Mobile traffic should grow at an annual compound rate of 92% through 2015. That would mean 26-fold growth between 2010 and 2015 (a quick check of the arithmetic follows below).
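
The compounding behind that last bullet is easy to verify; this snippet simply reproduces the arithmetic, not Cisco’s underlying forecast model.

    # Quick check of the compounding in the last bullet above.
    cagr = 0.92                            # 92% compound annual growth
    years = 5                              # 2010 -> 2015
    print(f"{(1 + cagr) ** years:.1f}x")   # ~26.1x, i.e., the "26-fold" figure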

NetFlix Boom Leads to Switch

NetFlix is moving its content delivery platform from Akamai back to Level 3. Level 3 is adding 2.9 terabits per second of new capacity specifically to support NetFlix’s booming movie streaming business.

Exa Metrics

Here’s a new exaflood metric for you — tweets per second.

From the Twitter blog:

Folks were tweeting 5,000 times a day in 2007. By 2008, that number was 300,000, and by 2009 it had grown to 2.5 million per day. Tweets grew 1,400% last year to 35 million per day. Today, we are seeing 50 million tweets per day—that’s an average of 600 tweets per second. (Yes, we have TPS reports.)
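
Twitter’s 600-per-second figure is just the daily total averaged over the 86,400 seconds in a day, rounded up a bit:

    # Average tweets per second implied by 50 million tweets per day.
    tweets_per_day = 50_000_000
    seconds_per_day = 24 * 60 * 60                    # 86,400
    print(round(tweets_per_day / seconds_per_day))    # ~579, which Twitter rounds to 600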

Exa News

A number of interesting new articles and forums deal with our exaflood theme of the past few years.

“Striving to Map the Shape-Shifting Net” – by John Markoff – The New York Times – March 2, 2010

“Data, data, everywhere” – The Economist – Special Report on Managing Information – February 25, 2010

“Managing the Exaflood” – American Association for the Advancement of Science – February 19, 2010

“Professors Find Ways to Keep Heads Above ‘Exaflood’ of Data” – Wired Campus – The Chronicle of Higher Education – February 24, 2010

Mobile traffic to grow 39x by 2014

Cisco’s latest Visual Networking Index, this one focusing on mobile data traffic, projects 108% compound growth through 2014.

Finally . . . another HMI? study!

I loved poring through Berkeley’s 2000 and 2003 studies estimating answers to a very big question — How Much Information? How much digital information do we create and consume? Always lots of useful — and trivial — stuff in those reports. But where has HMI? been these last few years? Finally, UC-San Diego has picked up the torch and run with a new version, HMI? 2009.

So, you are asking, HMI? The UCSD team estimates that in 2008, outside the workplace, Americans consumed 3.6 zettabytes of information. That’s 3.6 x 10^21 bytes, or 3,600 billion billion bytes.
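
To put that on a human scale, here is a rough per-person conversion. The 300 million population figure is an assumption for illustration, and the UCSD report’s own per-person numbers may differ slightly.

    # Rough per-person scale of 3.6 zettabytes consumed in a year.
    # The ~300 million U.S. population figure is an assumption for illustration.
    total_bytes = 3.6e21          # 3.6 ZB (UCSD HMI? estimate for 2008)
    population = 3.0e8            # assumed ~300 million Americans
    gb_per_person_per_day = total_bytes / population / 365 / 1e9
    print(f"~{gb_per_person_per_day:.0f} GB per person per day")   # ~33 GB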

Must Watch Web Debate

If you’re interested in Net Neutrality regulation and have some time on your hands, watch this good debate at the Web 2.0 conference. The resolution was “A Network Neutrality law is necessary,” and the two opposing sides were:

Against

  • James Assey – Executive Vice President, National Cable and Telecommunications Association
  • Robert Quinn – Senior Vice President-Federal Regulatory, AT&T
  • Christopher Yoo – Professor of Law and Communication; Director, Center for Technology, Innovation, and Competition, UPenn Law

For

  • Tim Wu – Coined the term “Network Neutrality”; Professor of Law, Columbia Law
  • Brad Burnham – VC, Union Square Ventures
  • Nicholas Economides – Professor of Economics, Stern School of Business, New York University

I think the side opposing the resolution wins, hands down — no contest really — but see for yourself.

“HD”Tube: YouTube moves toward 1080p

YouTube is moving toward a 1080p Hi Def video capability, just as we long predicted.

This video may be “1080p,” but the frame-rate is slow, and the video motion is thus not very smooth. George Ou estimates the bit-rate at 3.7 Mbps, which is not enough for real full-motion HD. But we’re moving quickly in that direction.
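
One rough way to see why 3.7 Mbps feels thin for full-motion 1080p is bits per pixel. This is a ballpark illustration with assumed frame rates and an assumed broadcast-class comparison bitrate, not George Ou’s method.

    # Ballpark bits-per-pixel comparison for 1080p video. Frame rates and the
    # "broadcast-class" 15 Mbps comparison figure are assumptions, not measurements.
    def bits_per_pixel(bitrate_mbps, width, height, fps):
        return bitrate_mbps * 1e6 / (width * height * fps)

    for label, mbps, fps in [("YouTube estimate", 3.7, 24), ("Broadcast-class", 15.0, 30)]:
        print(f"{label}: {bits_per_pixel(mbps, 1920, 1080, fps):.3f} bits/pixel")

Roughly a third of the bits per pixel of a typical broadcast-class stream, on these assumptions, helps explain the soft motion.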

Two-year study finds fast changing Web

See our brief review of Arbor Networks’ new two-year study where they captured and analyzed 264 exabytes of Internet traffic. Highlights:

  • Internet traffic growing at least 45% annually.
  • Web video jumped to 52% of all Internet traffic from 42%.
  • P2P, although still substantial, dropped more than any other application.
  • Google, between 2007 and 2009, jumped from outside the top-ten global ISPs by traffic volume to the number 3 spot.
  • Comcast jumped from outside the top-ten to number 6.
  • Content delivery networks (CDNs) are now responsible for around 10% of global Internet traffic.
  • This fast-changing ecosystem is not amenable to rigid rules imposed from a central authority, as would be the case under “net neutrality” regulation.

Arbor’s new Net traffic report: “This is just the beginning…”

See this comprehensive new Web traffic study from Arbor Networks — “the largest study of global Internet traffic since the start of the commercial Internet.” 

Conclusion

  • Internet is at an inflection point
    • Transition from focus on connectivity to content
    • Old global Internet economic models are evolving
    • New entrants are reshaping definition / value of connectivity
  • New technologies are reshaping definition of network
    • “Web” / Desktop Applications, Cloud computing, CDN
  • Changes mean significant new commercial, security and engineering challenges
  • This is just the beginning…

These conclusions, and the data Arbor tracked and reported, largely followed our findings, projections, and predictions from two years ago, as well as the update we published this spring.

Also see our analysis from last winter highlighting the evolution of content delivery networks — what my colleague George Gilder dubbed “storewidth” back in 1999 — which Arbor now says are the fastest-growing source and transmitter of Net traffic.

Did Cisco just blow $2.9 billion?

Cisco better hope wireless “net neutrality” does not happen. It just bought a company called Starent that helps wireless carriers manage the mobile exaflood.

See this partial description of Starent’s top product:

Intelligence at Work

Key to creating and delivering differentiated services—and meeting subscriber demand—is the ST40’s ability to recognize different traffic flows, which allows it to shape and manage bandwidth, while interacting with applications to a very fine degree. The system does this through its session intelligence that utilizes deep packet inspection (DPI) technology, service steering, and intelligent traffic control to dynamically monitor and control sessions on a per-subscriber/per-flow basis.

The ST40’s interaction with and understanding of key elements within the multimedia call—devices, applications, transport mechanisms, policies—assists in the service creation process by:

  • Providing a greater degree of information granularity and flexibility for billing, network planning, and usage trend analysis
  • Sharing information with external application servers that perform value-added processing
  • Exploiting user-specific attributes to launch unique applications on a per-subscriber basis
  • Extending mobility management information to non-mobility-aware applications
  • Enabling policy, charging, and Quality of Service (QoS) features

Traffic management. QoS. Deep Packet Inspection. Per-service billing. Special features and products. Many of these technologies and features could be outlawed or curtailed under net neutrality. And the whole booming wireless arena could suffer.
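
To make the jargon concrete, here is a toy sketch of per-subscriber, per-flow traffic handling of the sort the excerpt describes: classify a flow, look up a policy, and meter usage for billing. It is purely illustrative, assumes nothing about how Starent’s ST40 actually works, and uses port-based guessing as a stand-in for real deep packet inspection.

    # Toy illustration of per-subscriber / per-flow traffic policy: classify a
    # flow, pick a policy, and meter usage for billing. Not Starent's design;
    # port-based "classification" stands in for real deep packet inspection.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Flow:
        subscriber: str
        dst_port: int
        bytes: int

    def classify(flow: Flow) -> str:
        """Crude stand-in for DPI: guess the application from the port."""
        return {80: "web", 443: "web", 1935: "video", 5060: "voip"}.get(flow.dst_port, "other")

    POLICY = {            # hypothetical per-application handling rules
        "video": {"priority": 1, "rate_limit_mbps": 4.0},
        "voip":  {"priority": 0, "rate_limit_mbps": 0.5},
        "web":   {"priority": 2, "rate_limit_mbps": None},
        "other": {"priority": 3, "rate_limit_mbps": 1.0},
    }

    usage = defaultdict(lambda: defaultdict(int))   # per-subscriber, per-app byte counters

    def handle(flow: Flow):
        app = classify(flow)
        usage[flow.subscriber][app] += flow.bytes   # per-service metering for billing
        return POLICY[app]                          # policy the network would apply

    handle(Flow("sub-001", 1935, 2_500_000))
    handle(Flow("sub-001", 443, 300_000))
    print(dict(usage["sub-001"]))    # {'video': 2500000, 'web': 300000}

Even in this toy form, the point is clear: the value of such gear lies precisely in treating flows differently, which is why blanket all-bits-equal rules collide with it.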

Exa-cation: training the next generation for the exaflood

Google, IBM, and other big technology companies don’t think we’re ready for the exaflood.

It is a rare criticism of elite American university students that they do not think big enough. But that is exactly the complaint from some of the largest technology companies and the federal government.

At the heart of this criticism is data. Researchers and workers in fields as diverse as bio-technology, astronomy and computer science will soon find themselves overwhelmed with information. Better telescopes and genome sequencers are as much to blame for this data glut as are faster computers and bigger hard drives. . . .

Two years ago, I.B.M. and Google set out to change the mindset at universities by giving students broad access to some of the largest computers on the planet. The companies then outfitted the computers with software that Internet companies use to tackle their toughest data analysis jobs.

“It sounds like science fiction, but soon enough, you’ll hand a machine a strand of hair, and a DNA sequence will come out the other side,” said Jimmy Lin, an associate professor at the University of Maryland, during a technology conference held here last week.

The big question is whether the person on the other side of that machine will have the wherewithal to do something interesting with an almost limitless supply of genetic information.

At the moment, companies like I.B.M. and Google have their doubts.

For the most part, university students have used rather modest computing systems to support their studies. They are learning to collect and manipulate information on personal computers or what are known as clusters, where computer servers are cabled together to form a larger computer. But even these machines fail to churn through enough data to really challenge and train a young mind meant to ponder the mega-scale problems of tomorrow.

Correction: Exa-scale.

“If they imprint on these small systems, that becomes their frame of reference and what they’re always thinking about,” said Jim Spohrer, a director at I.B.M.’s Almaden Research Center.

GigaTube

YouTube says it now serves up well over a billion videos a day — far more than previously thought.
