In its effort to regulate the Internet, the Federal Communications Commission is swimming upstream against a flood of evidence. The latest data comes from Fred Campbell and the Internet Innovation Alliance, showing the startling disparities between the mostly unregulated and booming U.S. broadband market, and the more heavily regulated and far less innovative European market. In November, we showed this gap using the measure of Internet traffic. Here, Campbell compares levels of investment and competitive choice (see chart below). The bottom line is that the U.S. invests around four times as much in its wired broadband networks and about twice as much in wireless. It’s not even close. Why would the U.S. want to drop America’s hugely successful model in favor of “President Obama’s plan to regulate the Internet,” which is even more restrictive and intrusive than Europe’s?
See below our post from TechPolicyDaily.com responding to President Obama’s January 14 speech in Iowa. We’ve added some additional notes at the bottom of the post.
Yesterday, President Obama visited Cedar Falls, Iowa, to promote government-run broadband networks. On Tuesday, he gave a preview of the speech from the Oval Office. We need to help cities and towns build their own networks, he said, because the US has fallen behind the rest of the world. He pointed to a chart on his iPad, which showed many big US cities trailing Paris, Tokyo, Hong Kong, and Seoul in broadband speeds. Amazingly, however, some small US towns with government-owned broadband networks matched these world leaders with their taxpayer-funded deployment of gigabit broadband.
I wish I could find a more polite way to say this, but the President’s chart is utter nonsense. Most Parisians do not enjoy gigabit broadband. Neither do most residents of Tokyo, Hong Kong, or Seoul, which do in fact participate in healthy broadband markets. Perhaps most importantly, neither do most of the citizens of American towns, like Cedar Falls, Chattanooga, or Lafayette, which are the supposed nirvanas of government-run broadband.*
The chart, which is based on a fundamentally flawed report, and others like it, deliberately obscures the true state of broadband around the world. As my AEI colleagues and I have shown, by the most important and systematic measures, the US not only doesn’t lag, it leads. The US, for example, generates two to three times the Internet traffic (per capita and per Internet user) of the most advanced European and Asian nations.
Combatants in the Net Neutrality wars often seem to talk past each other. Sometimes it’s legitimate miscommunication. More often, though, it arises from fundamental defects in the concept itself.
On December 2, Commissioner Ajit Pai wrote to Netflix, Inc., saying he “was surprised to learn of allegations that Netflix has been working to effectively secure ‘fast lanes’ for its own content on ISPs’ networks at the expense of its competitors.” Commissioner Pai noted press accounts that suggested Netflix’s Open Connect content delivery platform and its use of specialized video streaming protocols put video from non-Netflix sources at a disadvantage. Commissioner Pai concluded that “these allegations raise an apparent conflict with Netflix’s advocacy of strong net neutrality regulations” and thus asked for an explanation.
In its reply of December 11, Netflix made four basic points. Netflix (1) said it “designed Open Connect content delivery network (CDN) to provide consumers with a high-quality video experience”; (2) insisted “Open Connect is not a fast lane . . . . Open Connect helps ISPs reduce costs and better manage congestion, which results in a better Internet experience for all end users”; (3) said it “uses open-source software and readily-available hardware components”; and (4) applauded other firms for developing open video caching standards but “has focused” on its own proprietary system because it is more efficient and customer friendly than the collaborative industry efforts.
Three of Netflix’s four points are reasonable, as far as they go. The company is developing technologies and architectures to improve customer service and beat the competition. The firm, however, seems not to grasp Commissioner Pai’s central point: Netflix relishes aggressive competition on its own behalf but wants to outlaw similarly innovative behavior from the rest of the Internet economy.
Phone Company Screws Everyone: Forces Rural Simpletons and Elderly Into Broadband, Locks Young Suburbanites in Copper Cage
Big companies must often think, damned if we do, damned if we don’t.
See our coverage of Comcast-Netflix, which really began before any deal was announced. Two weeks ago we wrote about the stories that Netflix traffic had slowed, and we suggested a more plausible explanation (interconnection negotiations) than the initial suspicion (so-called “throttling”). Soon after, we released a short paper, long in the works, describing “How the Net Works” — a brief history of interconnection and peering. And this week we wrote about it all at TechPolicyDaily, Forbes, and USNews.
Netflix, Verizon, and the Interconnection Question – TechPolicyDaily.com – February 13, 2014
How the Net Works: A Brief History of Internet Interconnection – Entropy Economics – February 21, 2014
Comcast, Netflix Prove Internet Is Working – TechPolicyDaily.com – February 24, 2014
Netflix, Comcast Hook Up Sparks Web Drama – Forbes.com – February 26, 2014
Comcast, Netflix and the Future of the Internet – U.S. News & World Report – February 27, 2014
— Bret Swanson
My AEI tech policy colleagues and I discussed today’s net neutrality ruling, which upheld the FCC’s basic ability to oversee broadband but vacated the two major, specific regulations.
Today, the U.S. Court of Appeals for the D.C. Circuit struck down the FCC’s “net neutrality” regulations, arguing the agency cannot regulate the Internet as a “common carrier” (that is, the way we used to regulate telephones). Here, from a pre-briefing several AEI colleagues and I did for reporters yesterday, is a summary of my statement:
[W]e have these big agencies, some of which are outdated, some of which are not designed properly . . . . The White House is just a tiny part of what is a huge, widespread organization with increasingly complex tasks in a complex world.
That was President Obama, last week, explaining Obamacare’s failed launch. We couldn’t have said it better ourselves.
Washington sees this as a reason to give itself more to do, with more resources. We see it as a blaring signal of overreach.
The Administration now says Healthcare.gov is operating with “private sector velocity and effectiveness.” But why seek to further governmentalize one-seventh of the economy if the private sector is faster and more effective than government?
Meanwhile, the New York Times notes that
The technology troubles that plagued the HealthCare.gov website rollout may not have come as a shock to people who work for certain agencies of the government — especially those who still use floppy disks, the cutting-edge technology of the 1980s.
Every day, The Federal Register, the daily journal of the United States government, publishes on its website and in a thick booklet around 100 executive orders, proclamations, proposed rule changes and other government notices that federal agencies are mandated to submit for public inspection.
So far, so good.
It turns out, however, that the Federal Register employees who take in the information for publication from across the government still receive some of it on the 3.5-inch plastic storage squares that have become all but obsolete in the United States.
Floppy disks make us chuckle. But the costs of complexity are all too real.
A Bloomberg study found the six largest U.S. banks, between 2008 and August of this year, spent $103 billion on lawyers and related legal expenses. These costs pale compared to the far larger economic distortions imposed by metastasizing financial regulation. Even Barney Frank is questioning whether his signature law, Dodd-Frank, is a good idea. The bureaucracy’s decision to push regulations intended for big banks onto money managers and mutual funds seems to have tipped his thinking.
This is not an aberration. This is what happens with vast, complex, ambiguous laws, which ask “huge, widespread” bureaucracies to implement them.
It is the norm of today’s sprawling Administrative State and of Congress’s penchant for 2,000-page wish lists, which ineluctably empower that Administrative State.
We resist, however, the idea that the problem is merely “outdated” or “inefficient” bureaucracy.
The problem is not that we need better people to administer these “laws.” Laws and regulations this extensive and ambiguous are inherently political. The best managers would seek efficient and effective outcomes based on common-sense readings and would resist political tampering. Yet even effective implementation of conflicting and economically irrational rules would still yield big problems. Regardless, the goal is not effective management — it is political control.
Agency “reform” is not the answer, although in most cases reform is preferable to no reform. Even reformed agencies do not possess the information to manage a “complex world.” Anyway, “competent” management is not what the political branches want. Agencies routinely evade existing controls — such as procurement rules — when convenient. The largest Healthcare.gov contractor, for example, reportedly got the work without any competing bids. That is not an oversight; it is a decision.
The laws and rules are uninterpretable by the courts. Depending on which judges hear the cases, we get dramatically and unpredictably divergent analyses, or the type of baby-splitting Chief Justice Roberts gave us on Obamacare. Judges thus end up either making their own law or throwing the question back into the political arena.
Infinite complexity of law means there is no law.
“With great power,” Peter Parker’s (aka Spider-Man’s) uncle told us, “comes great responsibility.” For Washington, however, ambiguity and complexity are features, not bugs. Ambiguity and complexity promote control without accountability, power without responsibility.
The only solution to this crisis of complexity is to reform the very laws, rules, scope, and aims of government itself.
In a paper last spring called “Keep It Simple,” we highlighted two instances — one from the labor markets and one from the capital markets — where even the most well-intended rules yielded catastrophic results. We showed how the interactions among these rules and the supporting bureaucracies produced unintended consequences. And we outlined a basic framework for assessing “good rules and bad rules.”
As our motto and objective, we adopted Richard Epstein’s aspiration of “simple rules for a complex world.” Which, you will notice, is just the opposite of the problem so incisively outlined by the President — Washington’s failed attempts to perform “complex tasks in a complex world.”
As we wrote elsewhere,
The private sector is good at mastering complexity and turning it into apparent simplicity — it’s the essence of wealth creation. At its best, the government is a neutral arbiter of basic rules. The Administration says it is ‘discovering’ how these ‘complicated’ things can blow up. We’ll see if government is capable of learning.
My TechPolicyDaily colleague Roslyn Layton has begun a series comparing the European and U.S. broadband markets.
As a complement to her work, I thought I’d address a common misperception — the notion that American broadband networks are “pathetically slow.” Backers of heavier regulation of the communications market have used this line over the past several years, and for a time it achieved a sort of conventional wisdom. But is it true? I don’t think so.
Real-time speed data collected by the Internet infrastructure firm Akamai shows U.S. broadband is the fastest of any large nation, trailing only a few tiny, densely populated countries. Akamai lists the top 10 nations in categories such as average connection speed; average peak speed; percent of connections with “fast” broadband; and percent of connections with broadband. The U.S., for example, ranks eighth among nations in average connection speed. And this is the number that is oft quoted. (This is a bit better than the no-longer-oft-used broadband penetration figures, which perennially showed the U.S. further down the list, at 15th or 26th place, for example.) Nearly all the nations on these speed lists, however, with the exception of the U.S., are small, densely populated countries where it is far easier and more economical to build high-speed networks.
How to fix this? Well, Akamai also lists the top 10 American states in these categories. Because states are smaller, like the small nations that top the global list, they are a more appropriate basis for comparison. Last winter I combined the national and state figures and compiled a more appropriate comparative list. Using the newest data, I’ve updated the tables, which show that U.S. states (highlighted in green) dominate.
- Ten of the top 13 entities for “average connection speed” are U.S. states.
- Ten of the top 15 in “average peak connection speed” are U.S. states.
- Ten of the top 12 in “percent of connections above 10 megabits per second” are U.S. states.
- Ten of the top 20 in “percent of connections above 4 megabits per second” are U.S. states.
U.S. states thus account for 40 of the top 60 slots — or two-thirds — in these measures of actual global broadband speeds.
This is not a comprehensive analysis of the entire U.S. Less populated geographic areas, where it is more expensive to build networks, don’t enjoy speeds this high. But the same is true throughout the world.
Wealth, however, can be a double-edged sword. With wealth comes resilience and thus an increased capacity to take risk. More risk can lead to further riches. Yet greater wealth also increases potential losses. In other words, we have a lot more to gain and a lot more to lose.
Perhaps it is not surprising then that many modern elites and policymakers see danger around every corner—from terrorism to climate change to financial calamity. In one sense, an obsession with risk is a luxury of wealth. It is prudent to identify present shortcomings and contemplate future problems and attempt to avoid them. Preventing hunger, unemployment, bomb plots, wars, and financial panics is a good thing.
What happens, though, when we develop a hyper-focus on shortcomings and potential losses? What happens when we seek a public policy remedy for every perceived problem? This kind of obsession with risk, danger, and downside may be counterproductive. It may exacerbate known problems and unleash dangers never dreamed of. . . . read the entire article.
The statute wants a competitive analysis, but as the Commission correctly points out, competition is not the goal, it is the means. Better performance is the goal. When the evidence presented in the Sixteenth Report is viewed in this way, the conclusion to be reached about the mobile industry, at least to me, is obvious: the U.S. mobile wireless industry is performing exceptionally well for consumers, regardless of whether or not it satisfies someone’s arbitrarily-defined standard of “effective competition.”
— George Ford, Phoenix Center chief economist, commenting on the FCC’s 16th Wireless Competition report.
That’s the question Jim Tankersley asked in a page one Washington Post story this week.
Here is how he summarized the situation:
“In the past three recoveries from recession, U.S. growth has not produced anywhere close to the job and income gains that previous generations of workers enjoyed. The wealthy have continued to do well. But a percentage point of increased growth today simply delivers fewer jobs across the economy and less money in the pockets of middle-class families than an identical point of growth produced in the 40 years after World War II.
That has been painfully apparent in the current recovery. Even as the Obama administration touts the return of economic growth, millions of Americans are not seeing an accompanying revival of better, higher-paying jobs.
The consequences of this breakdown are only now dawning on many economists and have not gained widespread attention among policymakers in Washington. Many lawmakers have yet to even acknowledge the problem. But repairing this link is arguably the most critical policy challenge for anyone who wants to lift the middle class.”
Tankersley cites the historical heuristic that a percentage point of GDP growth usually delivers about a half-point (0.5-0.6%) of employment growth.
“Three and a half years into the recovery that began in 2001 under President George W. Bush, job intensity was stuck at less than 0.2 percent. The recovery under President Obama is now up to an intensity of 0.3 percent, or about half the historical average.”
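To make the intensity arithmetic concrete, here is a minimal sketch of how many jobs a point of GDP growth implies at different intensities. The ~135 million payroll base is an assumed round number for illustration, not a figure from Tankersley’s article:

```python
# Illustrative job-intensity arithmetic (assumed payroll base; not from the article).
# "Intensity" = percentage points of employment growth per point of GDP growth.

PAYROLL_JOBS = 135_000_000  # assumed round number for total nonfarm payrolls
GDP_GROWTH = 2.2            # percent: 2012's actual growth, cited below

def implied_jobs(intensity: float, gdp_growth: float = GDP_GROWTH,
                 payroll: int = PAYROLL_JOBS) -> float:
    """Jobs implied in one year by a given job intensity."""
    employment_growth_pct = intensity * gdp_growth
    return payroll * employment_growth_pct / 100

historical = implied_jobs(0.55)  # historical ~0.5-0.6 intensity
current = implied_jobs(0.3)      # this recovery's ~0.3 intensity

print(f"historical intensity: {historical:,.0f} jobs")
print(f"current intensity:    {current:,.0f} jobs")
print(f"annual shortfall:     {historical - current:,.0f} jobs")
```

On these hypothetical inputs, halving the intensity costs roughly three-quarters of a million jobs a year from the same rate of GDP growth.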
If we measure incomes, rather than employment, the situation appears even more dire:
“Middle-class income growth looks even worse for those recoveries. From 1992 to 1994, and again from 2002 to 2004, real median household incomes fell — even though the economy grew more than 6 percent, after adjustments for inflation, in both cases. From 2009 to 2011 the economy grew more than 4 percent, but real median incomes grew by 0.5 percent.”
What’s going on? Is the American middle class really in such bad shape? If so, why? And can we do anything about it? If not, why do these data appear to show a fundamental shift in the link between GDP growth and overall prosperity? These are big, complicated questions, for which I don’t have many concrete answers. I would, however, suggest a number of factors that may help us think about them.
First, our economy does look different from the 1950s or 1960s. It is more complex. Back then, during a recession, factories laid off shifts of workers, leading to sharp employment downturns. Coming out of recessions, factories often hired back those same workers to build the same products. It was a simple process.
Today, although American manufacturing output is larger than ever, it employs a much smaller portion of the economy. The service and knowledge economies now dominate employment. And when jobs are not so closely tied to making widgets and the output is more ambiguous, the simple lay-off/hire-back formula disappears. In other words, we have lots more organizational and human capital today, and less “labor.”
This could be one reason the 1990 and 2001 recessions were shallower, but the job bounce-backs were slower.
Another factor, which everyone points out, is education. The United States may dominate many of the high-end professions in technology and finance because we have large cohorts of highly educated people (and immigrants). During the Great Recession and its aftermath, for example, the new App Economy, based on smartphones, broadband, and software, has created an estimated 500,000-600,000 jobs. An equally large cohort, however, not nearly as well educated or lacking the necessary knowledge skills, has been caught in a two-decade wave of globalization that quickly eliminated the jobs this cohort was used to doing, without offering a quick path into higher-value industries.
The Great Recession, however, was deeper and its employment rebound slower than the 1990 and 2001 recessions.
So we look to other factors that appear to be suppressing employment. In his new book The Redistribution Recession, University of Chicago economist Casey Mulligan argues that a host of well-intended safety-net programs are the chief culprit. Unemployment insurance, disability payments, the minimum wage, Medicaid, the earned income tax credit, food stamps and other programs can create deep disincentives to work and/or hire. Mulligan estimated that the average marginal tax rate on the relevant population increased eight percentage points, from 40% to 48%, during the Great Recession. For many individuals and families, the complex effects of these programs conspire to yield 100% marginal tax rates — that is, an extra dollar earned loses a dollar or more in benefits and taxes.
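As a stylized illustration of how overlapping phase-outs can push effective marginal rates to 100% or more, consider a hypothetical household (the dollar amounts below are invented round numbers, not Mulligan’s estimates):

```python
# Stylized benefit-cliff arithmetic (hypothetical amounts, not Mulligan's figures).
# Effective marginal tax rate = share of extra earnings lost to taxes plus forgone benefits.

def effective_marginal_rate(extra_earnings: float,
                            taxes_paid: float,
                            benefits_lost: float) -> float:
    """Fraction of each additional dollar earned that the household does not keep."""
    return (taxes_paid + benefits_lost) / extra_earnings

# A worker earns an extra $1,000 and, in this hypothetical, pays $250 in
# payroll and income taxes while losing $500 in food-stamp and housing
# benefits and $300 in a health-insurance subsidy as they phase out.
emtr = effective_marginal_rate(1_000, taxes_paid=250, benefits_lost=500 + 300)

print(f"effective marginal tax rate: {emtr:.0%}")
```

When the sum of taxes and phased-out benefits exceeds the extra earnings, the rate tops 100% and working more leaves the family worse off, which is Mulligan’s point about disincentives.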
I would throw out another possible factor: monetary policy. The Fed’s unorthodox zero-interest-rate-plus-bond-buying policy has created free money for large firms and for government. We see government growing and corporate profits at record highs. But for small and medium-sized firms, credit is being rationed by regulators. Low rates are meaningless if credit is unavailable. The slow recovery for small firms, which are often acknowledged to create most jobs, could be part of the equation.
Switching from employment to income, a few factors are commonly mentioned:
- Education and globalization may, as with employment, be boosting income for the top but limiting income prospects for the broad middle.
- Health care and other benefits are rising as a portion of overall compensation, thus limiting the measured portion that we call wages or salaries.
- Immigration has added millions of low-wage workers that may depress average measured incomes. These particular workers may be much better off than they were in their home countries and, by lowering wages for jobs few Americans want to do, may “harm” only a very small number of Americans.
- Many income measures do not account for taxes or for the larger transfer payments of recent years through the EITC, Medicaid, disability, unemployment insurance, food stamps, etc. When these are factored in, the numbers look much different.
Alan Reynolds made the case for these underestimates in his 2006 book, Income and Wealth. And now Bruce D. Meyer of the University of Chicago and James X. Sullivan of Notre Dame find that median income growth has not suffered nearly as much as the conventional wisdom says.
“After appropriately accounting for inflation, taxes, and noncash benefits, we show that median income rose by more than 50 percent over the past three decades. This increase is considerably greater than the gains implied by official statistics—official median income rose by only 14 percent between 1980 and 2009. Our improved measure of income increased in each of the past three decades, although the growth has been much slower since 2000. Median consumption also rose at a similar rate over the whole period but at a faster rate than income over the past decade.”
The real income slowdown in the 2000s is not surprising. The decade included two recessions—including the big one. The decade also saw, for the first time since the 1970s, a good whiff of inflation, especially in food, fuel, and housing. Add in spiraling health care and education costs. So, despite spectacular gains in computers, communications, and consumer goods, the middle class squeeze often seems real.
Mark Perry and Don Boudreaux, however, are even more emphatic than Meyer and Sullivan. They say the “trope” of the stagnant middle class is “spectacularly wrong”:
“It is true enough that, when adjusted for inflation using the Consumer Price Index, the average hourly wage of nonsupervisory workers in America has remained about the same. But not just for three decades. The average hourly wage in real dollars has remained largely unchanged from at least 1964—when the Bureau of Labor Statistics (BLS) started reporting it.
“Moreover, there are several problems with this measurement of wages. First, the CPI overestimates inflation by underestimating the value of improvements in product quality and variety. Would you prefer 1980 medical care at 1980 prices, or 2013 care at 2013 prices? Most of us wouldn’t hesitate to choose the latter.
“Second, this wage figure ignores the rise over the past few decades in the portion of worker pay taken as (nontaxable) fringe benefits. This is no small matter—health benefits, pensions, paid leave and the rest now amount to an average of almost 31% of total compensation for all civilian workers according to the BLS.
“Third and most important, the average hourly wage is held down by the great increase of women and immigrants into the workforce over the past three decades. Precisely because the U.S. economy was flexible and strong, it created millions of jobs for the influx of many often lesser-skilled workers who sought employment during these years.”
Perry and Boudreaux go on to say that no income figures—whether the officially stagnant ones or the higher adjusted figures—can account for the dramatic rise in the quantity and quality of consumption that income yields.
“Bill Gates in his private jet flies with more personal space than does Joe Six-Pack when making a similar trip on a commercial jetliner. But unlike his 1970s counterpart, Joe routinely travels the same great distances in roughly the same time as do the world’s wealthiest tycoons.
What’s true for long-distance travel is also true for food, cars, entertainment, electronics, communications and many other aspects of ‘consumability.’ Today, the quantities and qualities of what ordinary Americans consume are closer to that of rich Americans than they were in decades past. Consider the electronic products that every middle-class teenager can now afford—iPhones, iPads, iPods and laptop computers. They aren’t much inferior to the electronic gadgets now used by the top 1% of American income earners, and often they are exactly the same.”
Despite all the factors in this multifaceted debate, one thing is certain. Economic growth is better for the middle class than is economic stagnation.
It is currently in fashion to say, with great contrarian flair, that federal spending growth is the slowest since the Eisenhower Administration. Or, as someone famous recently put it, “We don’t have a spending problem.”
This assertion is, to put it mildly, debatable. Spending jumped 18% in just one year during the Panic of 2008-09. If the government keeps spending at that level, but starts counting after the jump, then the growth rate will appear modest. Spending as a share of GDP is higher than at any time since World War II, and so is the debt-to-GDP ratio. As the OMB chart below shows, it gets much worse.
Nevertheless, does anyone disagree that we have a growth problem, and a serious one? Yesterday’s negative GDP estimate for the fourth quarter of 2012 (-0.1%) should jolt the nation.
Let’s stipulate the GDP reading’s anomalies — lower than expected inventories and defense spending, which could reverse and add a bit to future growth. Yet economists had expected fourth quarter growth of 1.1% — itself an abysmal projection — and actual growth for the entire year was a barely mediocre 2.2%. Consider, too, that lots of economic activity was moved forward into 2012 to beat the Fiscal Cliff taxman. And don’t forget the Federal Reserve’s extraordinary QE programs, which are supposed to boost growth.
Whatever we’re doing, it’s not working. Not nearly well enough to create jobs. And not nearly well enough to help the budget. Because whatever you think about spending or taxes, the key factor in the health of the budget is economic growth.
OMB projects spending will grow (from today’s historically high level) around 2.96% per year through 2050. It projects annual economic growth over the period of 2.5%. That gets us a debt crisis somewhere down the line, and lots of other economic and social problems along the way.
Last year, however, keep in mind, growth was just 2.2%, following 2011’s even worse reading of 1.8%. If we can’t even match the modest 2.5% long-term projection coming out of a severe downturn, our problems may be worse than we think. Economist Robert Gordon of Northwestern asks “Is U.S. Growth Over?” Outlining seven economic headwinds, he projects growth of around 1.5% over the next few decades. In the chart below, you can see what a budget disaster such a slowdown would produce. Deficits quickly grow from a trillion dollars a year today into the many trillions per year.
Perhaps, many are now suggesting, we can tax our way out of the problem. Almost all academic research, however, suggests higher taxes (in terms of rates and as a portion of the economy) hurt economic growth. The Tax Foundation, for example, surveyed the 26 major studies on the topic going back to the early 1980s. Twenty-three of the studies found that taxes hurt economic growth. No study found higher taxes helped growth. Recent experience in Europe tends to confirm these findings.
Today, most of the policy discussion revolves around debt ceilings, sequesters, and the (fading) possibility of a grand-bargain budget deal. Mostly lost in the equation is economic growth. One question should dominate the thinking of policymakers: What policies would encourage more productive economic activity?
The new possibility of a breakthrough on immigration reform is an encouraging example. A more rational immigration policy for both low-skilled and high-skilled workers could boost economic growth significantly. Can we find more such policies? As you can see in the chart below, higher taxes can’t make up the budget shortfall. Faster growth and modest spending restraint can. This chart once again shows the OMB projected spending path (solid black line). The solid blue line shows what would happen to tax receipts if (1) growth remains mediocre and (2) we somehow find a way to dramatically raise the portion of the economy Washington taxes from the historical 18% to 23%.
That’s a major jump in taxation. Yet it doesn’t get us close to a healthy budget.
Faster growth and modest spending restraint, on the other hand, close the budget gap. And they do so without increasing the share Washington historically takes from the economy. The orange dashed line shows tax receipts under an economy growing at 3.5% with the historic 18% tax-to-GDP ratio. (Growth of 3.5% may sound like an ambitious goal. Keep in mind, however, that we are still far below trend — we’ve never really recovered from the Great Recession. Long term growth of 3.5%, therefore, merely includes a more rapid recovery to trend over the next several years and then a resumption of the long-term average of 3%.) In the medium to long term, a faster growth-lower tax regime generates more tax revenue than a slow growth-high tax regime.
Faster growth alone would be enough to stabilize budget deficits at today’s levels. But that is not enough. Trillion dollar deficits and Washington spending an ever rising share of the economy are not acceptable. Look, however, at the very modest spending restraint that would be required to essentially balance the budget by 2050. If we slowed spending growth from the projected 2.96% annual rate to just 2.7%, we could close the gap.
Does anyone think spending growth of 2.7% per year versus 2.96% is going to tear apart Social Security, Medicare, the military, or other essential government functions? Many of us could imagine responsible ways to reduce projected spending far, far more than that. All this shows is that a little restraint and robust economic growth go a long way.
The slow growth-high tax scenario produces a budget deficit of almost $3.5 trillion in 2050. Under the faster growth-lower tax scenario, with a touch of spending restraint, the 2050 budget deficit would be just $58 billion.
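The logic of the two scenarios can be sketched with simple compounding. The 2012 starting values below are assumed round numbers, and this toy projection holds the tax share and growth rates fixed, so it will not reproduce the OMB-based deficit figures above; it only shows how the compounding cuts in opposite directions:

```python
# Toy 2050 budget projection (assumed 2012 baselines; not the OMB model behind the charts).

START_YEAR, END_YEAR = 2012, 2050
GDP_2012 = 15.8e12        # assumed starting GDP, dollars
SPENDING_2012 = 3.5e12    # assumed starting federal spending, dollars

def deficit_2050(gdp_growth: float, tax_share: float, spending_growth: float) -> float:
    """2050 deficit (spending minus receipts) given annual rates and a tax-to-GDP share."""
    years = END_YEAR - START_YEAR
    gdp = GDP_2012 * (1 + gdp_growth) ** years
    spending = SPENDING_2012 * (1 + spending_growth) ** years
    receipts = gdp * tax_share
    return spending - receipts

# Slow growth, higher taxes, projected spending path
slow = deficit_2050(gdp_growth=0.025, tax_share=0.23, spending_growth=0.0296)
# Faster growth, historical tax share, modest spending restraint
fast = deficit_2050(gdp_growth=0.035, tax_share=0.18, spending_growth=0.027)

print(f"slow-growth/high-tax 2050 gap:  ${slow/1e12:,.2f} trillion")
print(f"fast-growth/restraint 2050 gap: ${fast/1e12:,.2f} trillion")
```

Even in this crude sketch, an extra point of annual growth plus a quarter-point of spending restraint, compounded over 38 years, swamps a five-point increase in the tax share.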
Now, I’m not pretending I know that a higher tax-to-GDP ratio will produce a particular rate of economic growth. The above are just rough scenarios. Lots of factors are in play. And that is precisely the point. Given a complex, uncertain world, we should attempt to align all our policies for economic growth. We know which policies tend to encourage growth, and which tend to stunt it.
That means getting immigration policy right — and it appears we may finally be getting somewhere. It means smart, reasonable regulatory policies in energy, health care, education, communications, and intellectual property. It means a healthy division of powers between the federal and state governments. And, yes, it means sweeping tax reform — both individual and corporate.
What we are doing today isn’t working. We are on a dangerous path. Two percent growth won’t get us anywhere. No matter how much we tax ourselves. Only robust growth fueled by entrepreneurship and investment, with a healthy faith in the unknown possibilities of America’s future, will get us there.
Grab a cup of coffee and check out our new article at The American, the online magazine of the American Enterprise Institute.
See our new report summarizing the short but amazing life of the mobile app: Soft Power: Zero to 60 Billion in Four Years.
What would “the New Normal” of a mere 1% per capita GDP growth mean for the American economy over the next few decades? What if it’s even worse, as many are now predicting? Is there anything we can do about it? If so, what? We address these items in our new article for the Business Horizon Quarterly — “Beyond the New Normal, a New Era of Growth.”
Today, Princeton’s Alan Blinder says things are looking up, that we’re finally traveling the road to prosperity, albeit slowly. It’s a rather timid claim:
there are definitely positive signs. The stock market is near a five-year high. Recent data on consumer spending and confidence show improvement, though we need more data before declaring victory. At long last, the housing market is growing rapidly, albeit from a very low base . . . .
On balance, the U.S. economy is healing its wounds—that’s another fact. But none of this puts us on the verge of an exuberant boom. Still, if the fiscal cliff is avoided and the European debt crisis doesn’t explode in our face, both GDP growth and job growth should be higher in 2013 than in 2012—even under current policies. But that’s a forecast, not a fact.
Stanford’s John Taylor counters some of Blinder’s claims:
First, he admits that real GDP growth—the most comprehensive measure we have of the state of the economy—is declining; that’s not an improvement.
Second, he admits that, according to the payroll survey, job growth isn’t faster in 2012 than 2011; that’s not an improvement either.
Third, he mentions that the household survey shows employment growth is faster, but that growth must be measured relative to a growing population. If you look at the employment to population ratio, it is the same (58.5%) in the 12 month period starting in October 2009 (the month he chooses as the low point) as in the past 12 months. That’s not an improvement.
Fourth, he shows that the unemployment rate is coming down. But much of that improvement is due to the decline in the labor force participation rate as people drop out of the labor force. According to the CBO, unemployment would be 9 percent if that unusual and distressing decline–certainly not an improvement–had not occurred.
He then goes on to consider forecasts, saying that there are promising signs, such as the housing market. The problem here, however, is that growth is weakening even as housing is less of a drag, because other components of GDP are flagging.
Meanwhile, there is Northwestern’s Bob Gordon, who is making a much stronger, longer term forecast — that the next several decades will be pretty awful. Specifically, that real U.S. economic growth is likely to halve — or worse — from its recent and historical trend of about 2% per-capita per-year.
We’ve been emphasizing just how important it is to get the economy moving again, and how important long term growth is for jobs, incomes, overall opportunity, and for governmental budgets. The Gordon scenario is even worse than the so-called New Normal of around 1% per-capita growth (or 2% overall growth). Gordon projects per-capita growth over the next few decades of around 0.7%. (In non-per-capita terms, the way GDP figures are most often reported, that’s about 1.7%). He thinks growth for the “99%” will be far worse — just 0.2% per-capita.
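The gap between Gordon’s projection and the historical trend compounds dramatically over time. A minimal sketch, using the per-capita rates cited above and a 40-year horizon of my own choosing:

```python
# Compound a per-capita growth rate over 40 years to see what it implies
# for average real incomes. Rates are from the text; the horizon is illustrative.
def growth_factor(rate, years=40):
    return (1 + rate) ** years

trend = growth_factor(0.02)     # ~2% historical per-capita trend
gordon = growth_factor(0.007)   # Gordon's ~0.7% projection
print(f"Real income multiple after 40 years: trend {trend:.2f}x vs Gordon {gordon:.2f}x")
```

Roughly a doubling of real incomes versus a gain of about a third. At Gordon’s 0.2% rate for the “99%,” the 40-year gain would be under 10%.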
In the chart below, you can see just how devastating a New Normal scenario would be, let alone Gordon’s even more pessimistic projection. It’s urgent that we implement a sweeping new set of pro-growth reforms on taxes, regulation, immigration, trade, education, and monetary policy.
It was nice of Ball State University’s Digital Policy Institute (@DigitalPolicy) to include me last Friday in a webinar discussion on broadband policy. Joining the virtual discussion were Leslie Marx, of Duke and formerly FCC chief economist; Anna-Maria Kovacs, well-known regulatory analyst and fellow at Georgetown; and Michael Santorelli of New York Law School.
You can find a replay of the webinar here. Our broadband discussion, which begins at 1:55:48, was preceded by a good discussion of consumer online privacy, which you might also enjoy.
The energy boom is an apparent surprise to many. I don’t know why. Here’s the photo, caption, and story right now (Wednesday night) on the front page of The Wall Street Journal:
Here was our take in 2006:
there is no inherent shortage of oil. One tiny shale formation right in America’s backyard — the 1,200 square mile Piceance Basin of western Colorado — contains a trillion barrels, more than all the proven reserves in the world. Vast open spaces across the globe remain unexplored or untapped.
Today, it’s North Dakota, Texas, and Pennsylvania shale that is leading the new boom. As a few smart guys wrote, we have a “bottomless well” of energy, if only we allow ourselves to find, refine, and innovate.
Life’s only certainty is change. Nowhere is that more true than with modern technologies, particularly broadband. The problem is, lots of government rules are not coming along for the ride.
Yesterday the Communications Liberty and Innovation Project (CLIP) hosted regulatory experts to discuss ways the FCC might incent more investment in digital infrastructure.
A fresh voice at the FCC is focusing the agency and the country on such a policy path of abundant wired and wireless broadband. New FCC Commissioner Ajit Pai (@AjitPaiFCC) yesterday called for the creation of an IP Transition Task Force as a way to accelerate the transition from analog networks to faster and more ubiquitous digital networks. Network providers, he said, want to know how IP services will be regulated before making major infrastructure investments. Commissioner Pai also discussed economic growth and job creation, asserting every $1 billion spent on fiber deployment creates between 15,000 and 20,000 jobs. Therefore to pave the way for robust private sector investment in the IP infrastructure, the FCC must signal a clear intention not to apply outdated 20th century regulations to these 21st century technologies.
The follow-up discussion focused on the need for a regulatory framework that will promote competition and economic growth while also maximizing consumer benefits. Jonathan Banks of US Telecom pointed out that the telecommunications industry is investing $65 billion per year, every year, in broadband infrastructure — a huge boost to current and future economic growth. Whoever occupies the White House after November should make it clear that expanding the nation’s “infostructure” with private investment dollars is a key national priority that will generate huge dividends — digital and otherwise.
The central economic problem — one that exacerbates all our other serious challenges, from debt to entitlements to persistently low employment — is a sluggish rate of economic growth. Worse than sluggish, really. At less than 2% per annum real growth, the economy is barely limping along. We are growing at perhaps a third or a fourth the speed (or worse!) of previous recoveries from recessions of similar severity.
One school of thought, however, says that there’s not much we can do about it. The nature of the panic — with housing and financial institutions at its core — makes stagnation all but certain. Nonsense, says John Taylor of Stanford, in this new video (part 2 of 3) hosted by the Hoover Institution’s Russ Roberts:
In the next video, Yale’s Robert Shiller reinforces the point about housing. The author of the Case-Shiller Home Price Index questions whether the Fed can reflate home prices with “one button” and whether its zero-rates-forever policy might not do more harm than good. It’s more about “animal spirits,” Shiller says, which means housing is more a function of economic growth than growth is a function of housing.
For years we’ve been highlighting the need for policies that encourage communications infrastructure investment. Fiber, cell towers, data centers — these are the foundation of our growing digital economy, the tools of which are increasingly integral components of every business in every industry. One of the most crucial inputs that makes the digital economy go, however, is invisible. It’s wireless spectrum, and today we don’t have the right spectrum allocation to ensure continued wireless growth and innovation.
So it was good news to hear that former FCC commissioner Jonathan Adelstein is the new CEO of the Personal Communications Industry Association, also known as the “Wireless Infrastructure Association.” The companies he will represent are the mobile service providers, cell tower operators, and associated service companies that build these often unseen networks.
“The ultimate goal for consumers and the economy is to accommodate the need for more wireless data,” Adelstein told Communications Daily. “More spectrum is sort of the effective means for getting there . . . As more spectrum comes online it will ultimately require new infrastructure to accomplish the goal of meeting the data crunch.”
This gives a boost to the prospects for better spectrum policy.
There’s more to life than economics, but almost nothing matters more to more people than the rate of long-term economic growth. It completely changes the life possibilities for individuals and families and determines the prospects of nations. It also happens to be the central factor in governmental budgets.
We’ve been saying for the last few years that growth is our biggest problem — but also our biggest opportunity. Faster growth would not only put Americans back to work but also help resolve budget impasses and assist in the long-overdue transformations of our entitlement programs. The current recovery, however, is worse than mediocre. It is dangerously feeble. With every passing day, we fall further behind. Investments aren’t made. Risks aren’t taken. Business ideas are shelved. Joblessness persists, and millions of Americans drop out of the labor force altogether. Continued stagnation would of course exacerbate an already dire long-term unemployment problem. It would also, however, turn America’s unattractive habitual overspending into a possible catastrophe of debt.
John Cochrane of the University of Chicago shows, in the chart below, just how far we’ve slipped from our historical growth path. The red line is the 1965-2007 trend line growth of 3.07%, and the thin black line shows the recession and weak recovery.
Recessions are of course downward deviations from a trend line of growth. Trendlines, however, include recessions, and recoveries thus usually exhibit faster-than-trend growth that catches up to trend. To be sure, trends may not continue forever. Historical performance, as they say, is not a guarantee of future results. Perhaps structural factors in the U.S. and world economies have lowered our “potential” growth rate. This possibility is shown in the blue “CBO Potential” line, which depicts the “new normal” of diminished expectations. Yet the current recovery cannot even catch up to this anemic trend line, which supposedly reflects the downgraded potential of the U.S. economy.
Here is another way to visualize today’s stagnation, from Scott Grannis:
Economies are built on expectations. If the “new normal” of 2.35% growth is correct, then we’ve got problems. All our individual, family, business, and government plans will have to downshift. If growth is even lower than that, tomorrow’s problems will tower over today’s. If, on the other hand, we can reignite the American growth engine, then we’ve got a shot to not only reverse today’s decline but also to open the door to a new era of renewed optimism and, yes, rising expectations.
Faster compounding growth over time makes all the difference. One new paper shows how, with a fundamentally new policy direction on taxes and regulation, real GDP in the U.S. could be “between $2.1 and $3.1 trillion higher in 2022 than it would be under a continuation of current slow growth.” Think of that — an American economy perhaps trillions of dollars larger in a single year a decade from now, with better pro-growth policies. That’s a lot of jobs, a lot of higher incomes, a lot of new businesses, and — whether your preference is more or less government spending — much healthier government budgets . . . summed up in one last chart.
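As a sanity check on that range, compare GDP a decade out under roughly two percent versus roughly three percent real growth. The $15.5 trillion baseline below is my own approximation of 2012 U.S. GDP, and the rates are illustrative rather than the paper’s actual assumptions.

```python
# Gap in a single year's GDP after a decade of faster compounding growth.
# Baseline of $15.5T (approx. 2012 U.S. GDP) is an assumption.
base = 15.5  # trillions of dollars
slow = base * 1.02 ** 10  # ~2% real growth
fast = base * 1.03 ** 10  # ~3% real growth
print(f"2022 GDP: ${slow:.1f}T at 2% vs ${fast:.1f}T at 3%; gap ${fast - slow:.1f}T")
```

Even a single extra point of growth opens a gap approaching $2 trillion in one year, consistent in magnitude with the paper’s $2.1 to $3.1 trillion estimate.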
In all the recent debates over deficits, debt, unemployment, entitlements, bond markets, the euro, housing, etc., the absolutely central factor has too often been ignored. A new book, however, deals with nothing but this central factor — economic growth. If we’re going to improve the economic discussion, and the economy itself, The 4% Solution: Unleashing the Economic Growth America Needs is likely to serve as a good foundation.
The book contains chapters by five Nobel economists, including the modern dean of economic growth Robert Lucas, Ed Prescott on marginal tax rates, and Myron Scholes on true innovation; also Bob Litan on “home run” start-up firms, Nick Schulz on intangible assets, David Malpass on monetary policy, and others on entrepreneurs, immigration, debt, and budgets.
I’ve only skimmed many of the chapters, but one thing that jumped out is an important point about the links, and distinctions, between supply and demand. When economic growth has been discussed these last few years, the cause/cure usually cited is a drop in aggregate demand and the “stimulus” measures needed to boost it. It’s of course true that the housing bust and banking troubles caused lots of deleveraging and that government spending and interest rate cuts may help tide over certain consumers and businesses during temporary tough times. Despite substantial Keynesian fiscal and monetary “stimulus,” however — wild deficit spending, four years of zero-interest-rates, and a tripling of the Fed’s balance sheet — businesses, consumers, and the economy-at-large have not responded as hoped. Even if you believe in the efficacy of short term Keynesian growth policies, you ignore at great forecasting peril the array of countervailing anti-growth policies.
Here is how I put it in a Forbes online column last December:
the real problem with demand is supply. Consumption is partly based on current income and needs, sure, but more importantly it is a function of the expected future. Milton Friedman’s version of this idea was the permanent income hypothesis. More generally, we might ask, what are the prospects for prosperity?
We live in a complex, uncertain world. But it’s not unreasonable to believe, even after the Great Recession, that America and the globe still have prodigious potential to create new wealth. It’s also not unreasonable to believe that Washington has severely impaired America’s innovative capacity and our ability to grow.
If you think ObamaCare reinforces and expands many of the worst features of our overpriced, government-heavy health system, then you worry we might not get the productivity revolution we need in one of the largest sectors of our economy. If you think Dodd-Frank and other post-crisis ideas will discourage true financial innovation while preserving “too big to fail,” then you worry more financial disruptions are in store. If you think tax rates on capital and entrepreneurship are going up, then you might downgrade your estimates of the amount of investment and dynamism — and thus good jobs — America will enjoy.
A downgrade of expected long term growth impairs growth today.
In the new book, Lucas makes a similar argument:
imagine that households and businesses were somehow convinced that the United States would soon move toward a European-level welfare state, financed by a European tax structure. These beliefs would naturally be translated into beliefs that labor costs would soon increase and returns on investment decrease. Beliefs of a future GDP reduction of 30% would be brought forward into the present even before these beliefs could be realized (or refuted).
This is just hypothetical, of course, but it is a hypothesis that is entirely consistent with the way that we know economies work, everyone basing current decisions on expectations about future returns. What I have called recovery growth has happened after previous U.S. recessions and depressions and is certainly a worthy and attainable objective for economic policy today, but it would be foolish to take it as a foregone conclusion.
In the next chapter, Ed Prescott reinforces the point:
what people expect policies to be in the future determines what happens now. Bad policies can and often do depress the economy even before they are implemented. People’s actions now depend on what they think policy will be — not what it was.
. . .
The disturbing fact is that, as of the beginning of 2012, the economy has not even partially recovered from this recession. When it will recover is a political question and not an economic question. Only if the Americans making personal economic decisions knew what future policy would be could economists predict when recovery would occur.
This is one reason long term growth policies are often more important, even in the short term, than most short term “growth” policies.
We’ve published a lot of linear and log-scale line charts of Internet traffic growth. Here’s just another way to visualize what’s been happening since 1990. The first image shows 1990-2004.
The second image scales down the first to make room for the next period.
The third image, using the same scale as image 2, shows 2005-2011.
These images use data compiled by MINTS, with our own further analysis and estimations. Other estimates from Cisco and Arbor/Labovitz — and our own analysis based on those studies — show even higher traffic levels, though roughly similar growth rates.
“It is the single worst telecom bill that I have ever seen.”
— Reed Hundt, Jan. 31, 2012
Isn’t this rich?
One of the most zealous regulators America has known says Congress is overstepping its bounds because it wants to unleash lots of new wireless spectrum but also wants to erect a few guardrails so that FCC regulators don’t run roughshod over the booming mobile broadband market.
At a New America Foundation event yesterday, former FCC chairman Reed Hundt said Congress shouldn’t micromanage the FCC’s ability to micromanage the wireless industry. Mr. Congressman, you don’t know anything about how the FCC should regulate the Internet. But the FCC does know how to build networks, run mobile Internet businesses, and perfectly structure a wildly tumultuous economic sector. It’s just the latest remarkable example of the growing hubris of the regulatory state.
In his book, You Say You Want a Revolution, Hundt famously recounted his staff’s interpretation and implementation of the 1996 Telecom Act.
The passage of the new law placed me on a far more public stage. But I felt Congress — in the constitutional sense — had asked me to exercise the full power of all ideas I could summon. And I believed that I and my team had learned, through many failures, how to succeed. Later, I realized that we knew almost nothing of the complexity and importance of the tasks in front of the FCC.
Meeting in several overlapping groups of about a dozen people each . . . we dedicated almost three weeks to studying the possible readings of each word in the 150-page statute. The conference committee compromises had produced a mountain of ambiguity that was generally tilted toward the local phone companies’ advantage. But under the principles of statutory interpretation, we had broad authority to exercise our discretion in writing the implementing regulations. Indeed, like the modern engineers trying to straighten the Leaning Tower of Pisa, we could aspire to provide the new entrants to the local telephone markets a fairer chance to compete than they might find in any explicit provision of the law. In addition, the law gave almost no guidance about how to treat the Internet, data networks, . . . and many other critical issues. (Three years later, Justice Antonin Scalia agreed, on behalf of the Supreme Court, that the law was profoundly ambiguous.)
The more my team studied the law, the more we realized our decisions could determine the winners and losers of the new economy. We did not want to confer advantage on particular companies; that seemed inequitable. But inevitably a decision that promoted entry into the local market would benefit a company that followed such a strategy.
There are so many angles here.
(1) Hundt says he and his team basically stretched the statute to mean whatever they wanted. The law may have been ambiguous — and it was, I’m not going to defend the ’96 Act — yet the Supreme Court still found in a series of early-2000s cases that Hundt’s FCC had wildly overstepped even these flimsy bounds. That’s how aggressive and unconstrained Hundt was.
(2) Hundt’s rules helped crash the tech and telecom sectors in 2000-2002. His rules were so complex and intrusive that, whatever your views about the CLEC wars, the PCS C block spectrum debacle, and other battles, it’s hard to deny that the paralysis caused by the rules hurt broadband and the nascent Net.
(3) Is it surprising that, given the FCC’s poor record of reaching way past its granted powers, some in Congress want to circumscribe FCC regulators by giving them less-than-omnipotent authority? Is the new view of elite regulators that Congress should pass laws, the full text of which might read: “§1. Congress grants to the Internet Agency the authority to regulate the Internet. Go forth and regulate.”
(4) On the other hand, it’s not clear why Hundt would care particularly what Congress says in any new spectrum statute. He didn’t care much for the words or intent of the ’96 Act, and he thinks regulators should “aspire” to grand self-appointed projects. Who knows, maybe all those Supreme Court smack downs in the early 2000s made an impression.
(5) Hundt says he and his team later realized, in effect, how naive they were about “the complexity and importance of the tasks in front of the FCC.” So he’s acknowledging after things didn’t go so well that his FCC underestimated the complexity and thus overestimated their own expertise . . . yet he says today’s FCC deserves comprehensive power to structure the mobile Internet as it sees fit?
(6) Hundt admitted his FCC relished its capacity to pick winners and losers. Not particular companies, mind you — that would be improper — merely the types of companies who win and lose. A distinction without very much of a difference.
(7) We don’t argue that Congress, instead of the FCC, should impose intrusive regulation through statute. We don’t advocate long and complex laws. That’s not the point. Laws should be clear and simple, but stating the boundaries of a regulator’s authority is not a controversial act. No one should be imposing intrusive regulation or overdetermining the structure of an industry. And that’s what Congress — perhaps in a rare case! — is protecting against here.
The U.S. wireless sector has been only mildly regulated over the last decade. We’d argue this is a key reason for its success. But this presumption of mostly unfettered experimentation and dynamism may be changing.
Consider Sprint’s apparent decision to use “roaming” in Oklahoma and Kansas instead of building its own network. Now, roaming is a standard feature of mobile networks worldwide. Company A might not have as much capacity as it would like in some geography, so it pays Company B, which does have capacity there, for access. Company A’s customers therefore get wider coverage, and Company B is paid for use of its network.
The problem comes with the FCC’s 2011 “digital roaming” order. Last spring three FCC commissioners decided that private mobile services — which the Communications Act says “shall not . . . be treated as a common carrier” — are a common carrier. Only D.C. lawyers smarter than you and me can figure out how to transfigure “shall not” into “may.” Anyway, the possible effect is to subject mobile data — one of the fastest growing sectors anywhere on earth — to all sorts of forced access mandates and price controls.
We warned here and here that turning competitive broadband infrastructure into a “common carrier” could discourage all players in the market from building more capacity and covering wider geographies. If company A can piggyback on company B’s network at below market rates, why would it build its own expensive network? And if company B’s network capacity is going to company A’s customers, instead of its own customers, do we think company B is likely to build yet more cell sites and purchase more spectrum?
With 37 million iPhones and 15 million iPads sold last quarter, we need more spectrum, more cell towers, more capacity. This isn’t the way to get it. And what we are seeing with Sprint’s decision to roam instead of build in Oklahoma and Kansas may be the tip of this anti-investment iceberg.
Last spring when the data roaming order came down we began wondering about a possible “slow walk to a reregulated communications market.” Among other items, we cited net neutrality, possible new price controls for Special Access links to cell sites, and a host of proposed regulations affecting things like behavioral advertising and intellectual property (see, PIPA/SOPA). Since then we’ve seen the government block the AT&T-T-Mobile merger. And the FCC is now holding up its own important push for more wireless spectrum because it wants the right to micromanage who gets what spectrum and how mobile carriers can use it.
Many of these items can be thoughtfully debated. But the number of new encroachments onto the communications sector threatens to slow its growth. Many of these encroachments, moreover, are taking place outside any basic legislative authority. In the digital roaming and net neutrality cases, for example, the FCC appears to have granted itself extralegal, if not illegal, authority. These new regulations are now being challenged in court.
We need some restraint across the board on these matters. The Internet is too important. We can’t allow a quiet, gradual reregulation of the sector to slow down our chief engine of economic growth.
— Bret Swanson
“One solution is giving back to bank creditors the job of policing bank risk-taking. Roll back deposit insurance, for instance. We may not be able to see the future, but we can incentivize caution as a general matter. And we can improve the odds that, when banks make mistakes, they won’t all make the same mistake at the same time.”
— Holman Jenkins, The Wall Street Journal, January 18, 2011
“If the Greeks had skimped on the olive oil in a liter bottle, that wouldn’t threaten the metric system.”
— John Cochrane, Bloomberg View, December 21, 2011
More bad news for U.S. economic growth. In the face of multiplying obstacles deployed by Washington regulators, AT&T today abandoned its pursuit of T-Mobile. The most important outcome of the merger would have been a quicker and broader roll-out of 4G mobile broadband services. Now AT&T will have to find other paths to the wireless radio spectrum (and cell towers) it needs to meet growing demand and build tomorrow’s networks. T-Mobile is left in purgatory, short of the spectrum and long-term financial wherewithal to effectively compete.
Some say don’t worry: another U.S. mobile provider will pick up T-Mobile. Not so fast. If Washington disallowed AT&T, it would do the same for Verizon. Sprint was pursuing T-Mobile before AT&T swooped in, but a Sprint-TMo combo makes much less sense. The spectrum-technology-tower infrastructure positions of AT&T and TMo were almost perfectly complementary. Not so for Sprint, which uses mostly higher frequencies, has always been a CDMA company (as opposed to WCDMA), and is already finding it challenging to raise the funds to build its own LTE network, given rocky times with partner Clearwire.
The U.S. mobile industry has been a shining star in an otherwise dark U.S. economy. But with Washington nixing the AT&T-T-Mobile merger, and given recent struggles at Clearwire and engineering disputes with upstart LightSquared, it’s not clear mobile will continue on its steep ascent. The FCC “staff report” opposing the AT&T-TMo deal didn’t even address the elephant in the room – spectrum. It’s odd. The FCC declared a spectrum crisis two years ago and repeatedly emphasized the urgent need for broadband expansion. Then, poof, hardly a mention of either in its report. Not a good sign when the expert agency has taken its eye off the ball.
The industry is still full of potential, but there will be near-term disruptions as companies sort out new spectrum, business, and technology strategies. And as millions of un- and underemployed Americans know, time is money. Regulatory impediments and foot-dragging are especially harmful – and even infuriating – for an industry that desperately wants to grow, an industry that is in many ways the bedrock of the 21st century American knowledge economy.
Beyond the disquieting roller-coaster in the mobile industry, one wonders more broadly about the American economy. Just what kind of business are we allowed to conduct? What investments are preferred – by whom? How far will the tilt of decision-making from private entities to public bureaucracies go?
— Bret Swanson
See our new report “Into the Exacloud” . . . including analysis of:
> Why cloud computing requires a major expansion of wireless spectrum and investment
> An exaflood update: what Mobile, Video, Big Data, and Cloud mean for network traffic
> Plus, a new paradigm for online games, Web video, and cloud software
After the decision to separate its online streaming and DVD-in-the-mail services, Wall St. Cheat Sheet asked, “Is Netflix the new Research In Motion?”
Translation: Will Netflix be just the latest technology titan to suffer a parabolic plunge? We don’t know ourselves. Netflix’s streaming-DVD split is a reaction to the overwhelming popularity of its streaming service. CEO Reed Hastings is trying to avoid complacency and stay ahead of the curve. Maybe he is panicking. Maybe he’s a genius. But that is just the point: the digital curve these days is shifting and steepening faster than ever.
Which makes the government’s attempted damming of this digital river all the more harmful. Wireless spectrum is a central resource in the digital economy, and a chief enabler of services like Netflix. Yet Washington hogs the best airwaves – at last count the government owned 61%, the mobile service providers just 10%. So AT&T, its pipes bursting with iPhone and iPad traffic, tries to add capacity by merging with T-Mobile. Nope. The Department of Justice won’t allow that either.
Something, however, has got to give. New data from wireless infrastructure maker Ericsson shows that mobile data traffic jumped 130% in the first quarter of 2011 from a year earlier. Just four years ago, mobile data traffic was perhaps 1/15th of mobile voice traffic. Today, mobile data is likely three times voice. Credit Suisse, meanwhile, reports that U.S. mobile networks are running at 80% of capacity, meaning many network nodes are tapped out.
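Those two ratios imply a staggering compound growth rate for mobile data. A quick sketch, under the simplifying (and surely wrong in detail) assumption that voice traffic stayed flat over the four years:

```python
# Implied growth of mobile data traffic: from ~1/15th of voice to ~3x voice
# in four years. Holding voice traffic flat is a simplifying assumption.
ratio_then, ratio_now, years = 1 / 15, 3.0, 4
factor = ratio_now / ratio_then        # total growth multiple (~45x)
cagr = factor ** (1 / years) - 1       # implied compound annual growth rate
print(f"Total growth: {factor:.0f}x; implied annual growth of about {cagr:.0%}")
```

An implied annual rate north of 150% squares with the 130% year-over-year jump Ericsson measured.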
More mobile traffic drivers are on the way, like mass adoption of video chat apps and Apple’s imminent iCloud service. iCloud will create an environment of pervasive computing, where all your computers and devices are in continuous communication, integrating your digital life through a virtual presence in the cloud. No doubt too, software app downloads and the rich content they unleash will only grow. As of July, 425,000 distinct Apple apps had been downloaded 15 billion times on 200 million devices. The Android ecosystem of devices and apps has been growing even faster.
Perhaps the iCloud service in particular won’t succeed, but no doubt others like it will, not to mention all the apps and services we haven’t thought of. We do know that more bandwidth and connectivity will encourage more new ideas, and thus more traffic. In all, IDC estimates that by 2015 we will create or replicate around 8 zettabytes (8,000,000,000,000,000,000,000 bytes) of new data each year.
Big Data, in turn, will yield large economic benefits, from medical research to retail. The McKinsey Global Institute estimates that Big Data – the sophisticated exploitation of large sets of fine-grained information – could boost annual economic value in the U.S. health care sector by $300 billion. McKinsey thinks personal geolocation services could expand annual consumer surplus by $600 billion globally.
The wide array of Big Data techniques and services is crucially dependent on robust and capacious networks. U.S. service providers invested $26 billion in 2010 – and $232 billion over the last decade – on wireless infrastructure alone. Total info-tech investment in the U.S. last year was $488 billion. We’ll need more of the same to spur and accommodate Big Data, Cloud, Mobile, Netflix, and the rest. But without more spectrum, the whole enterprise of building the digital infrastructure could slow.
Picocells and femtocells – smaller network nodes that cover less area – can effectively expand capacity for some users by reusing existing wireless spectrum. These mini cells work together as HetNets (heterogeneous networks) and will be a central feature in the next decade of wireless expansion. But the new 4G mobile standard, called LTE, gets the biggest bang for the buck in wider spectrum bands. LTE also is by far the most powerful and flexible standard to manage the complexities and unlock the big potential of HetNets. So we see a virtuous complementarity: more, better spectrum will boost spectrum reuse efficiencies. In other words, spectrum reuse and more spectrum are not either-or alternatives but are mutually helpful and reinforcing.
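The complementarity can be illustrated with a back-of-the-envelope capacity model. All the figures below (cell counts, megahertz, spectral efficiency) are made-up round numbers for illustration, not data from the post; the point is that aggregate capacity scales roughly with cells times spectrum, so densification and new spectrum multiply rather than substitute:

```python
# Illustrative sketch: aggregate capacity ~ cells * spectrum * spectral efficiency.
# All input figures are assumed round numbers chosen for illustration.
def network_capacity(cells, spectrum_mhz, bps_per_hz):
    """Aggregate capacity in Mbit/s, assuming full spectrum reuse in every cell."""
    return cells * spectrum_mhz * bps_per_hz

base       = network_capacity(cells=100, spectrum_mhz=50,  bps_per_hz=1.5)
small_cell = network_capacity(cells=300, spectrum_mhz=50,  bps_per_hz=1.5)  # add HetNet nodes
combined   = network_capacity(cells=300, spectrum_mhz=100, bps_per_hz=1.5)  # nodes + spectrum

print(base, small_cell, combined)  # 7500.0 22500.0 45000.0
```

Tripling the cell count triples capacity; tripling the cells and doubling the spectrum yields six times the capacity. The gains compound, which is the "mutually helpful and reinforcing" point in the paragraph above.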
We don’t know whether the new Netflix strategy will fly, whether iCloud will succeed, how HetNets will evolve, or exactly what the mobile ecosystem will look like. But in such an arena, we do know that maximum flexibility – and LOTS more spectrum – will give a beneficial tilt toward innovation and growth.
— Bret Swanson
A paper out today challenges the assertion that the AT&T-T-Mobile merger will create jobs. AT&T has said it would invest an additional $8 billion in wireless network infrastructure, above and beyond its usual $8-10 billion per year, and the Economic Policy Institute estimated this would result in between 55,000 and 96,000 job-years. The Communications Workers of America has cited the EPI study as one reason it supports the merger.
In a study prepared for Sprint, however, professor David Neumark says the EPI estimate fails to account for the fact that T-Mobile will no longer be investing its normal couple billion dollars per year after it is subsumed by AT&T. He says EPI is only looking at AT&T’s gross increase, not the net industry effect. He thinks the net effect will be negative and will thus cost jobs.
This is a fair point. We should analyze these things in as dynamic and realistic a way as possible. But the Sprint study appears to be relying on its own static, simplistic view of the world. Namely, it assumes an independent T-Mobile would keep investing billions a year on network infrastructure, even though T-Mobile says it has neither the spectrum nor the financial resources from its parent Deutsche Telekom to continue as an effective competitor in the highly dynamic mobile market, where companies must constantly upgrade their networks to exploit all the good stuff offered by Moore’s law. In other words, it’s unlikely T-Mobile would continue investing several billion per year as a stand-alone company.
Another point that needs clarification: Some smart people think the AT&T estimate of $8 billion in additional capex is specific to the merger — connecting the two networks, expanding LTE beyond its previous plans, etc. But if these people are right, it’s still the case that AT&T will have to adopt at least some portion of network upgrades and maintenance that T-Mobile does every day on its own network. So AT&T’s capex spend is likely to go up beyond this additional $8 billion. In a merger scenario, therefore, not all, perhaps not even most, of the existing T-Mobile network investment “goes away.”
A scenario in which a non-AT&T carrier acquired T-Mobile would result in a similar loss of T-Mobile-specific investment to the one Sprint claims under the AT&T-T-Mobile scenario. But the study doesn’t account for this possibility either.
So it seems the new Neumark-Sprint analysis also is not really a net estimate, just another form of gross estimate.
Ultimately, no one knows exactly what will happen in an ever-changing economy in our ever-changing world. But it is pretty safe to say that a healthy, growing, vibrant mobile industry will support more sustainable jobs than an unhealthy industry. The Sprint paper correctly acknowledges that efficiencies from mergers can result in all sorts of economic welfare gains, both for consumers and for workers who move into higher-value jobs.
A stand-alone T-Mobile is not a healthy company, and without T-Mobile, AT&T, although healthy, doesn’t have the spectrum or cell towers it needs to meet current growth and fuel new growth. The proposed merger would result in a major supplier of next-gen 4G broadband mobile services across most of the U.S. The benefits go far beyond the capex it takes to build the network (important though that is) and extend to every citizen and industry that will enjoy ubiquitous go-anywhere broadband. The jobs created across the economy are impossible to tally precisely but are likely to be substantial.
Where to begin? The economy is still in the doldrums some three years after an historic crash, the Administration is having a tough time boosting output and job growth, and so its Justice Department thinks it would be a good idea to discourage one of the nation’s biggest investors and employers from building yet more high-tech infrastructure in a sector of the economy that is manifestly healthy and which serves as a productivity platform for the rest of the economy.
It’s hard to believe, but that’s exactly what’s happening with the DoJ’s attempt to block AT&T’s merger with T-Mobile.
AT&T wants T-Mobile’s wireless spectrum and compatible cell-tower infrastructure so it can more quickly roll out next generation 4G mobile broadband services. It can’t wait for much needed spectrum auctions that will hopefully occur over the next several years. Meanwhile, T-Mobile doesn’t have the spectrum or financial wherewithal (through its parent Deutsche Telekom) to build its own 4G network. Perfect fit, right? Join forces to rapidly deploy new network capacity and coverage for the next iteration of iPads, Androids, Thunderbolts, Galaxy Tabs, and broadband everywhere.
The Communications Workers of America union thinks the merger is a good idea, estimating it will create 96,000 jobs. AT&T even sweetened the pot this morning by announcing – before DoJ’s surprise announcement – that on completion of the merger it would bring back 5,000 call center jobs from overseas and guarantee no job cuts for T-Mobile call center employees.
DoJ says a combination will hurt competition, but T-Mobile itself says it can’t really compete in the next generation of 4G. And DoJ ignores the fact, reported by the FCC, that 90% of the U.S. population has five or more mobile service provider choices, with brand new entrants like Clearwire, LightSquared, and Dish Network coming online and expanding every day. DoJ relies on indirect evidence of current market share to infer that bad things might happen in the future even as it ignores direct evidence of low prices, wild innovation, and widespread consumer choice in networks and devices.
This July 11 paper from economists Gerald Faulhaber, Robert Hahn, and Hal Singer really says it all.
With the economy in crisis, you’d think someone with a bit of business sense would be seeking every way to expand investment and employment, not find creative ways to quash it. Antitrust lawyers imagine themselves guardians of the public good, but there’s a big problem: they usually see the world through a rear-view mirror, wearing blinders, while experiencing tunnel vision.
Was it antitrust that saved the world from big, bad Microsoft? No: the Internet, Google, and Apple, among hundreds of other innovators, diluted Microsoft’s very temporary dominance. Did the AOL-TimeWarner merger kill competition in the online content or broadband markets? No. To remember the alarmism over that merger is to laugh. DoJ did block WorldCom’s bid for Sprint, and of course WorldCom went bankrupt. Did Verizon’s acquisition of Alltel kill innovation in the mobile market? What? Who’s Alltel?
There’s just no way a few attorneys in Washington can decree the proper organization of an industry that is so exceedingly dynamic. Meanwhile, the economy shuffles along slowly, very slowly.
— Bret Swanson
See our new column in Forbes:
As we entered August, a time of family vacations and corporate retreats, a CEO friend, who is a director of several companies, made a darkly humorous observation. “I’m impressed,” he said. “At our upcoming retreat, the CEO is dedicating an entire day to talk about . . . the business.”
This was a break from the new normal, where management is consumed with compliance, legality, accounting, risk mitigation, and political prognostication and manipulation. Carving time out of a business retreat to talk strategy, execution, product, and sales was a welcome novelty. It also revealed a chief challenge of our times – the obsession with and aversion to risk.
Update: Steve Lohr, the excellent New York Times technology reporter, offers his own take on risk-taking through the lens of Steve Jobs. Lohr and I picked the same great quote from Jobs’ Stanford commencement address.
A host of telecom and cable companies today announced a new plan to reform the Universal Service Fund and extend broadband further into rural America. I’ve spent years only partially understanding how USF works. Or how it doesn’t work, as seems the case. I think even in the old days, when it may have made some kind of sense, USF probably retarded investment and new technology in the areas it aimed to support. Unsubsidized potential entrants sporting new technologies couldn’t hope to compete with heavily subsidized incumbents. Even incumbents effectively couldn’t deploy newer, more efficient unsubsidized technologies. The result was probably some extension of phone service in the early days but lots of stagnation for decades after that. In today’s communications market, however, where many companies and many technologies supply many wholesale, commercial, and consumer services — and where broadband, Internet cloud, and wireless complement, compete, and overlap — USF has really broken down. Reform is long overdue, and this consensus industry plan should finally help move USF into the Internet age.
The new proposal — called America’s Broadband Connectivity Plan — also reforms the antiquated and broken intercarrier compensation (ICC) system, which sets the terms for traffic exchange among communications companies. In a broadband-mobile-Internet world, ICC, like USF, no longer works and is often exploited with arbitrage schemes that add no value but shuffle money via clever manipulation of the rules.
For too long wrangling and indecision between industry and government — and among industry players themselves — has delayed action. We now have a good consensus leap on the road to modernization.
I’m no in-the-weeds budget expert — not even close — but it seemed to me that among all the important debates over deficits, entitlements, and debt ceilings, the biggest factor of all is being mostly ignored. That factor is the compound rate of economic growth, and I made the case for “The Growth Imperative” at a Tuesday meeting of the National Chamber Foundation Fellows. Here’s my column at Forbes. See the slides below:
“Over the 10-year budget window, the president plans for Washington to extract $39 trillion in taxes and spend $46 trillion. The debt limit, currently $14.3 trillion, would have to grow to over $26 trillion.
“Making matters worse, these horrendous spending, taxing and debt numbers would be even grimmer if not for the budget’s rosy assumptions. The budget assumes that real growth will climb from an already wishful 4% in 2012 to 4.5% in 2013 and 4.2% in 2014 — despite plans for sweeping tax increases. The assumed GDP growth is well over any growth rate achieved in the Bush expansion. The budget also reflects the unrealistic assumption that the Federal Reserve will be able to keep interest rates very low and generate $476 billion in profits through highly leveraged financial speculation.”
— David Malpass, The Wall Street Journal, February 16, 2011
Tyler Cowen talks to Matt Yglesias about The Great Stagnation . . . . Here was my book review – “Tyler Cowen’s Techno Slump.”
If it’s true, as Nick Schulz notes, that FCC Commissioner Copps and others really think Chairman Genachowski’s proposal today “is the beginning . . . not the end,” then all bets are off. The whole point is to relieve the overhanging regulatory threat so we can all move forward. More — much more, I suspect — to come . . . .
Adam Thierer nicely dissects a bunch of really sloppy arguments by Tim Wu, author of a new book on information industries called The Master Switch. (Scroll down to the comments section.)
Libertarians do NOT believe everything will be all sunshine and roses in a truly free marketplace. There will indeed be short term spells of what many of us would regard as excessive market power. The difference between us comes down to the amount of faith we would place in government actors versus market forces / evolution to better solve that problem. Libertarians would obviously have a lot more patience with markets and technological change, and would be willing to wait and see how things work out. We believe, as I have noted in my previous responses, that it’s often during what critics regard as a market’s darkest hour that innovation is producing some of the most exciting technologies with the greatest potential to disrupt the incumbents whose “master switch” you fear. Again, we are simply more bullish on what I have called experimental, evolutionary dynamism. Innovators and entrepreneurs don’t sit still; they respond to incentives, and for them, short-term spells of “market power” are golden opportunities. Ultimately, that organic, bottom-up approach to addressing “market power” or “market failure” simply makes a lot more sense to us – especially because it lacks the coercive element that your approach would bring to bear preemptively to solve such problems.
For Adam’s comprehensive six-part review of the book, go here.
In what may be the final round of comments in the Federal Communications Commission’s Net Neutrality inquiry, I offered some closing thoughts, including:
- Does the U.S. really rank 15th — or even 26th — in the world in broadband? No.
- The U.S. generates and consumes substantially more IP traffic per Internet user and per capita than any other region of the world.
- Among individual nations, only South Korea generates significantly more IP traffic than the U.S. (Canada and the U.S. are equal.)
- U.S. wired and wireless broadband networks are among the world’s most advanced, and the U.S. Internet ecosystem is healthy and vibrant.
- Latency is increasingly important, as demonstrated by a young company called Spread Networks, which built a new optical fiber route from Chicago to New York to shave mere milliseconds off the existing fastest network offerings. This example shows the importance — and legitimacy — of “paid prioritization.”
- As we wrote: “One way to achieve better service is to deploy more capacity on certain links. But capacity is not always the problem. As Spread shows, another way to achieve better service is to build an entirely new 750-mile fiber route through mountains to minimize laser light delay. Or we might deploy a network of server caches that store non-realtime data closer to the end points of networks, as many Content Delivery Networks (CDNs) have done. But when we can’t build a new fiber route or store data — say, when we need to get real-time packets from point to point over the existing network — yet another option might be to route packets more efficiently with sophisticated QoS technologies.”
- Exempting “wireless” from any Net Neutrality rules is necessary but not sufficient to protect robust service and innovation in the wireless arena.
- “The number of Wi-Fi and femtocell nodes will only continue to grow. It is important that they do, so that we might offload a substantial portion of traffic from our mobile cell sites and thus improve service for users in mobile environments. We will expect our wireless devices to achieve nearly the robustness and capacity of our wired devices. But for this to happen, our wireless and wired networks will often have to be integrated and optimized. Wireline backhaul — whether from the cell site or via a residential or office broadband connection — may require special prioritization to offset the inherent deficiencies of wireless. Already, wireline broadband companies are prioritizing femtocell traffic, and such practices will only grow. If such wireline prioritization is restricted, crucial new wireless connectivity and services could falter or slow.”
- The same goes for “specialized services,” which some suggest be exempted from new Net Neutrality regulations. Again, necessary but not sufficient.
- “Regulating the ‘basic’ Internet but not ‘specialized’ services will surely push most of the network and application innovation and investment into the unregulated sphere. A ‘specialized’ exemption, although far preferable to a Net Neutrality world without such an exemption, would tend to incentivize both CAS providers and ISPs to target the ‘specialized’ category and thus shrink the scope of the ‘open Internet.’ In fact, although specialized services should and will exist, they often will interact with or be based on the ‘basic’ Internet. Finding demarcation lines will be difficult if not impossible. In a world of vast overlap, convergence, integration, and modularity, attempting to decide what is and is not ‘the Internet’ is probably futile and counterproductive. The very genius of the Internet is its ability to connect to, absorb, accommodate, and spawn new networks, applications and services. In a great compliment to its virtues, the definition of the Internet is constantly changing.”
A good conversation between Harry McCracken of Technologizer and Bob Wright of bloggingheads.tv. Topics include Apple’s ascent (and world domination?); iPhone vs. Android; whither Microsoft; Facebook’s privacy flub; etc.
“My guess is that the euro will survive, but no one will trust it like they used to. At the end of the day, it’s an entitlement problem. In Greece, the public sector makes up 40% or more of the work force, with short weeks, lots of vacation and lavish retirement benefits. All of that needs to be paid for with real income, not debt, and the markets are anticipating the day of reckoning. One can only hope European policy makers listen to the market. I wonder if California and Medicare are taking notes.”
— Andy Kessler, May 8, 2010
See our new CircleID commentary on the China-Google dustup and its implications for an open Internet:
China is nowhere near closing for business as it did five centuries ago. One doubts, however, that the Ming emperor knew he was dooming his people for the next couple hundred years, depriving them of the goods and ideas of the coming Industrial Revolution. China’s present day leaders know this history. They know technology. They know turning away from global trade and communication would doom them far more surely than would an open Internet.
Akamai announced a record peak in traffic volume on its content delivery network on April 9.
In addition to reaching a milestone for peak traffic served this past Friday, the Akamai network also hit a new peak during the same day for video streaming, as well as a near-record high for total requests served.
- With online interest in major sporting events – including professional golf and baseball – helping to drive the surge in demand, Akamai delivered its largest ever traffic for high definition video streaming.
- Over the course of the day, Akamai logged over 500 billion requests for content, a sum equal to serving content to every human once every 20 minutes.
- At peak, Akamai supported over 12 million requests per second – a rate roughly equivalent to serving content to the entire population of the United States every 30 seconds.
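Akamai’s two comparisons check out arithmetically. In the sketch below, the world and U.S. population figures (~6.8 billion and ~310 million for 2010) are assumptions supplied for the calculation, not numbers from the announcement:

```python
# Sanity-check Akamai's two comparisons with rough 2010 population figures.
# Both population numbers are assumptions, not from the announcement.
world_pop = 6.8e9          # approximate world population, 2010
us_pop = 310e6             # approximate U.S. population, 2010

# "500 billion requests in a day" vs. "every human once every 20 minutes"
requests_per_day = 500e9
servings_per_person = requests_per_day / world_pop     # ~74 servings per person
minutes_between = 24 * 60 / servings_per_person        # ~20 minutes apart
print(f"One serving per person every {minutes_between:.0f} minutes")

# "12 million requests per second" vs. "the U.S. population every 30 seconds"
peak_rps = 12e6
seconds_for_us = us_pop / peak_rps                     # ~26 seconds
print(f"Entire U.S. population served every {seconds_for_us:.0f} seconds")
```

Both results land close to the figures Akamai quotes (the second is nearer 26 seconds than 30, consistent with “roughly equivalent”).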
If you really want to understand the climate debate, you simply must read this book, by A.W. Montford, about a Canadian scientific detective named Steve McIntyre, who humbly but doggedly pursued the truth about the 1,000-year temperature reconstructions that generated the famed “hockey stick.”
The November 2009 email “hack” of Britain’s Climatic Research Unit that has generated so much recent news is only a brief epilogue. The real story happened day by day over the last decade as McIntyre, a retired mining engineer, and his fellow Canadian Ross McKitrick, an economist, searched for, and then through, shabbily constructed data sets and magical algorithms, with surprising finds on almost every page.
As my friend George Gilder wrote:
The reader should know that the supposed email “scandal,” as described in the book, is in fact a rather trivial and even defensible part of the story. Few people are at their best in emails. What is shocking — and I use the word advisedly as a confirmed sceptic not easily shocked — is the so called science. I never imagined that it was quite this bad. It is shoddy beyond easy belief.
The hockey stick chart mostly reflects a defective algorithm that extends and inflates a few deceptive signals from as few as 20 cherry-picked trees in Colorado and Russia into a hockey stick chart that is replicated repeatedly through reshuffles of the same or similar defective and factitious data to capture and define two thousand years of climate history. These people simply had no plausible case and were pressed by their political sponsors to contrive a series of Potemkin charts.
Almost, but not quite, as surprising, was Montford’s narrative itself. Somehow he turned an esoteric battle over statistical methodology into a captivating “what happens next” mystery. British science writer Matt Ridley agreed:
Montford’s book is written with grace and flair. Like all the best science writers, he knows that the secret is not to leave out the details (because this just results in platitudes and leaps of faith), but rather to make the details delicious, even to the most unmathematical reader. I never thought I would find myself unable to put a book down because — sad, but true — I wanted to know what happened next in an r-squared calculation. This book deserves to win prizes.
Engrossing. Astonishing. Devastating.
The Yo-Yos versus the Distavores. The HurriKeynes versus the Invisible Hands. And the team with more Monetary Madness appearances than any other — Stuff Happens. This was the scientific bracketology that determined the real cause of the Great Panic at the American Economic Association’s recent meetings:
(hat tip: David Warsh)
“…commercial real estate loans should not be marked down because the collateral value has declined. It depends on the income from the property, not the collateral value.”
— Ben Bernanke, Feb. 24, 2009, finally, if tamely, acknowledging the crucial role of mark-to-market accounting in the financial death spiral.
(via Brian Wesbury)
That’s the question I ask in this Huffington Post article today.
Among all the books, articles, and academic papers analyzing the financial meltdown, very few have pinpointed and exposed what I think was the accelerant that turned a problem into an all-out panic: namely, the zealous application of mark-to-market accounting beginning in the autumn of 2007. In this video, two of these very few — Brian Wesbury and Steve Forbes — discuss the meltdown, mark-to-market’s crucial role, and the stock market’s short and mid-term prospects. Wesbury and Forbes have also written two great books explaining the Great Panic, why it’s not as bad as you think, and how capitalism will save us.
Holman Jenkins today also picks up the theme of mark-to-market’s central role in the panic.
Wyoming wireless operator Brett Glass has 20 questions for the FCC on Net Neutrality. Some examples:
1. I operate a public Internet kiosk which, to protect its security and integrity, has no way for the user to insert or connect storage devices. The FCC’s policy statement says that a provider of Internet service must allow users to run applications of their choice, which presumably includes uploading and downloading. Will I be penalized if I do not allow file uploads and downloads on that machine?
4. I operate a wireless hotspot in my coffeehouse. I block P2P traffic to prevent one user from ruining the experience for my other customers. Do the FCC rules say that I must stop doing this?
6. I am a cellular carrier who offers Internet services to users of cell phones. Due to spectrum limitations, multimedia streaming by more than a few users would consume all of the bandwidth we have available not only for data but also for voice calls. May we restrict these protocols to avoid running out of bandwidth and to avoid disruption to telephone calls (some of which may be E911 calls or other urgent traffic)?
7. I am a wireless ISP operating on unlicensed spectrum. Because the bands are crowded and spectrum is scarce, I must limit each user’s bandwidth and duty cycle. Rather than imposing hard limits or overage charges, I would like to set an implicit limit by prohibiting P2P, with full disclosure that I am doing so. Is this permitted under the FCC’s rules?
14. I am an ISP that accelerates users’ Web browsing by rerouting requests for Web pages to a Web cache (a device which speeds up Web browsing, conceived by the same people who developed the World Wide Web) and then to special Internet connections which are asymmetrical (that is, they have more downstream bandwidth than upstream bandwidth). The result is faster and more economical Web browsing for our users. Will the FCC say that our network “discriminates” by handling Web traffic in this special way to improve users’ experience?
15. We are an ISP that improves the quality of VoIP by prioritizing VoIP packets and sending them through a different Internet connection than other traffic. This technique prevents users from experiencing problems with their telephone conversations and ensures that emergency calls will get through. Is this a violation of the FCC’s rules?
18. We’re an ISP that serves several large law offices as well as other customers. We are thinking of renting a direct “fast pipe” to a legal research database to shorten the attorneys’ response times when they search the database. Would accelerating just this traffic for the benefit of these customers be considered “discrimination?”
19. We’re a wireless ISP. Most of our customers are connected to us using “point-to-multipoint” radios; that is, the customers’ connections share a single antenna at our end. However, some high volume customers ask to buy dedicated point-to-point connections to get better performance. Do these connections, which are engineered by virtually all wireless ISPs for high bandwidth customers, run afoul of the FCC’s rules against “discrimination?”
See our new commentary at CircleID:
The Internet has two billion global users, and the developing world is just hitting its growth phase. Mobile data traffic is doubling every year, and soon all four billion mobile phones will access the Net. In 2008, according to a new UC-San Diego study, Americans consumed over 3,600 exabytes of information, or an average of 34 gigabytes per person per day. Microsoft researchers argue in a new book, “The Fourth Paradigm,” that an “exaflood” of real-world and experimental data is changing the very nature of science itself. We need completely new strategies, they write, to “capture, curate, and analyze” these unimaginably large waves of information.
As the Internet expands, deepens, and thrives—growing in complexity and importance—managing this dynamic arena becomes an ever bigger challenge. Iran severs access to Twitter and Gmail. China dramatically restricts individual access to new domain names. The U.S. considers new Net Neutrality regulation. Global bureaucrats seek new power to allocate the Internet address space. All the while, dangerous “botnets” roam the Web’s wild west. Before we grab, restrict, and possibly fragment a unified Web, however, we should stop and think. About the Internet’s pace of growth. About our mostly successful existing model. And about the security and stability of this supreme global resource.
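The per-person figure in the UC-San Diego study follows from its annual total. In this sketch the 2008 U.S. population (~300 million) is an assumption supplied for the arithmetic, and the result lands within rounding of the quoted 34 gigabytes:

```python
# Back out the per-capita daily figure from the UCSD study's annual total.
# The U.S. population figure is an assumption (~300 million in 2008).
total_bytes = 3600e18    # 3,600 exabytes consumed in 2008
us_pop = 300e6
days = 365

gb_per_person_per_day = total_bytes / us_pop / days / 1e9
print(f"~{gb_per_person_per_day:.0f} GB per person per day")  # ~33 GB
```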
Excellent analysis of Google’s plan to build a few experimental fiber networks from my former colleague Barbara Esbin:
NetworkWorld reports that by constructing its own fiber network, Google “is trying to push its vision for how the Internet as a whole should operate.” I wish the company all the success in the world with GoogleNet. Business model experimentation and new entry to the broadband Internet service provider market like this should be encouraged. If this “open access” common carrier network proves to be a viable business model that attracts both customers and followers, it will be a fabulous addition to the domestic Internet ecosystem. But this vision should not be turned into unnecessary government mandates for other Internet network operators who are similarly trying to experiment with their business models in this brave new digital world.
Surprisingly, I also agree with Harold Feld’s analysis:
the telecom world is all abuzz over the news that Google will build a bunch of Gigabit test-beds. I am perfectly happy to see Google want to drop big bucks into fiber test beds. I expect this will have impact on the broadband market in lots of ways, and Google will learn a lot of cool things that will help it make lots of money at its core business — organizing information and selling that service in lots of different ways to people who value it for different reasons. But Google no more wants to be a wireline network operator than it wanted to be a wireless network operator back when it was willing to bid on C Block in the 700 MHz Auction.
So what does Google want? As I noted then: “Google does not want to be a network operator, but it wants to be a network architect.” Oh, it may end up running networks. Google has a history of stepping up to do things that further its core business when no one else wants to step up, as witnessed most recently by their submitting a bid to serve as the database manager for the broadcast white spaces devices. But what it actually wants to do is modify the behavior of the platforms on which it rides to better suit its needs. Happily, since those needs coincide with my needs, I don’t mind a bit.
I do mind.
We’ve been discussing the dramatic growth of the global Internet and the expansion of physical devices and virtual spaces that come with the mobile revolution, social networking, cloud computing, and the larger move of the Net into every business practice and cultural nook.
Last week ICANN, the organization that administers the Internet’s domain space, announced that fewer than 10% of current-generation Internet addresses (IPv4) remain unallocated. In any network realm, a move above 90% capacity is an alarm bell that needs attention. IPv6 is the next generation address space and is being deployed. But the move needs to accelerate to ensure the unabated growth of the Net.
Developed in the 1990s, IPv6 has been available for allocation to ISPs since 1999. An increasing number of ISPs have been deploying IPv6 over the past decade, as have governments and businesses. The biggest attraction of IPv6 is the enormous address space it provides. Instead of just 4 billion IPv4 addresses – fewer than the number of people on the planet – there are 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses. An easier way to think of this number is 340 trillion trillion trillion addresses.
Or, the famous comparison: If IPv4 is a golf ball, IPv6 is the Sun.
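The scale gap is easy to verify directly with exact integer arithmetic; a quick sketch:

```python
# Size of the IPv4 and IPv6 address spaces.
ipv4 = 2 ** 32    # 32-bit addresses: about 4.3 billion
ipv6 = 2 ** 128   # 128-bit addresses

print(f"IPv4: {ipv4:,}")
print(f"IPv6: {ipv6:,}")
# IPv6 offers 2**96 (about 79 octillion) addresses for every IPv4 address.
print(f"Ratio: 2**96 = {ipv6 // ipv4:,}")
```

The second line prints the 39-digit figure quoted above: roughly 340 undecillion, or 340 trillion trillion trillion.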
See our new analysis of Net Neutrality regulation’s possible impact on the U.S. job market.
The Wall Street Journal‘s Digits blog asks, “Could Verizon Handle Apple Tablet Traffic?”
The tablet’s little brother, the iPhone, has already shown how an explosion in data usage can overload a network, in this case AT&T’s. And the iPhone is hardly the kind of data guzzler the tablet is widely expected to be. After all, it’s one thing to squint at movies on a 3.5-inch screen and quite another to watch them in relatively cinematic 10 inches.
“Clearly this is an issue that needs to be fixed,” says Broadpoint Amtech analyst Brian Marshall. “It can grind the networks to a halt.”
Just two more New York Times articles that point out what’s obvious around here: the Internet’s dramatic and unpredictable disruption of the whole “media” space. Isn’t it a case of supreme hubris for Washington to assume it can sort all this out and impose particular business models on the media space through prescriptive Net Neutrality regulation?
For those of you not wishing to sift through 15,000 comments submitted to the FCC for its Net Neutrality proposed rulemaking, let me recommend what — so far — is the best technical filing I’ve read. It comes from Richard Bennett and Rob Atkinson of the Information Technology and Innovation Foundation.
Also very useful is a new post by George Ou on content delivery and paid peering, with important policy implications.
These are among the least discussed — but most important — items in the whole Net Neutrality debate.
“Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.
“If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.
“There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code — like the page-rank algorithms in the top search engines or Adobe’s Flash — always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.”
— Jaron Lanier, author of the new book You Are Not a Gadget.
A bunch of good metrics on the decade that was from Oliver Chiang. Here are a few:
–Number of e-mails sent per day in 2000: 12 billion
–Number of e-mails sent per day in 2009: 247 billion
–Revenues from mobile data services in the first half of 2000: $105 million
–Revenues from mobile data services in the first half of 2009: $19.5 billion
–Number of text messages sent in the U.S. per day in June 2000: 400,000
–Number of text messages sent in the U.S. per day in June 2009: 4.5 billion
–Number of pages indexed by Google in 2000: 1 billion
–Number of pages indexed by Google in 2008: 1 trillion
–Amount of hard-disk space $300 could buy in 2000: 20 to 30 gigabytes
–Amount of hard-disk space $300 could buy in 2009: 2,000 gigabytes (2 terabytes)
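Taking the two hard-disk data points at face value (using 25 GB as the midpoint of the 2000 range), the implied pace of improvement outruns the classic Moore’s Law cadence; a back-of-the-envelope sketch:

```python
import math

# $300 of hard-disk space: ~25 GB in 2000 (midpoint of 20-30), 2,000 GB in 2009.
gb_2000, gb_2009, years = 25, 2000, 9

growth = gb_2009 / gb_2000                 # 80x more capacity per dollar
annual = growth ** (1 / years)             # ~1.63x per year
doubling = math.log(2) / math.log(annual)  # doubles roughly every ~1.4 years

print(f"{growth:.0f}x over {years} years: {annual:.2f}x/yr, "
      f"doubling every {doubling:.1f} years")
```

A doubling time near 1.4 years beats the roughly two-year doubling usually attributed to Moore’s Law.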
Completely missing from the health care debate is a conversation about health care innovation and productivity. But not only are these legitimate factors — they are the most important factors.
Look around the world, however, and see the crucial advances being made.
“Japanese companies reinvented the process of making cars. That’s what we’re doing in health care,” Dr. Shetty says. “What health care needs is process innovation, not product innovation.”
At his flagship, 1,000-bed Narayana Hrudayalaya Hospital, surgeons operate at a capacity virtually unheard of in the U.S., where the average hospital has 160 beds, according to the American Hospital Association.
Narayana’s 42 cardiac surgeons performed 3,174 cardiac bypass surgeries in 2008, more than double the 1,367 the Cleveland Clinic, a U.S. leader, did in the same year. His surgeons operated on 2,777 pediatric patients, more than double the 1,026 surgeries performed at Children’s Hospital Boston.
Before we turn the whole U.S. system into a larger, more rigid and stagnant, less entrepreneurial, more costly version of Medicare, one that “bends the cost curve” up instead of down, shouldn’t we give at least a few minutes consideration to the real solution to our health care problem: technological, process, and business model innovation?
We Hoosiers are lucky:
Perhaps most appreciated was the governor’s overhaul of the Bureau of Motor Vehicles. It’s gone from one of the worst in the country—a place, he says, “where people would take a copy of ‘Crime and Punishment'”—to one of the best, with an “average visit time of seven minutes and 36 seconds.”
I had my own experience about four years ago, before the BMV was overhauled, where I made some seven trips to the license branch and various other government offices over a period of weeks just to renew my driver’s license.
But as Kim Strassel tells us in her interview with Mitch Daniels, this is only the very tip of the iceberg. In a state challenged by our reliance on the automobile industry in particular and manufacturing in general, instead of imploding like Michigan or profligate California, we had a governor whose common sense, hard work, business savvy, and courageous budgeting has left Indiana in a much better spot than many other states. Especially given our special old-economy obstacles.
This is the ironic but very legitimate question AT&T is asking.
As Adam Thierer writes,
Whatever you think about this messy dispute between AT&T and Google about how to classify web-based telephony apps for regulatory purposes — in this case, Google Voice — the key issue not to lose sight of here is that we are inching ever closer to FCC regulation of web-based apps! Again, this is the point we have stressed here again and again and again and again when opposing Net neutrality mandates: If you open the door to regulation on one layer of the Net, you open up the door to the eventual regulation of all layers of the Net.
George Gilder and I made this point in Senate testimony five and a half years ago. Advocates of big new regulations on the Internet should be careful what they wish for.
The Technology Liberation Front is five today. Go check out these courageous defenders of Internet freedom, of which I am a too-infrequent yet proud comrade-in-arms.
Watch Tyler Cowen talk about his new book to Will Wilkinson. The book’s title, Create Your Own Economy, sounds like just another self-help business pamphlet. But you’ll see Cowen isn’t talking about business or money at all — at least not directly. His subjects are autism, “neurodiversity,” Adam Smith’s division of labor, and the Internet’s ability to match more people with their highly specific talents and passions. Far from the latest trope that “Google [or the Web] is making us stupid,” Cowen argues that the Web helps a vast variety of previously undiscovered intelligences, or “neuro profiles,” flourish.
Tough stuff from the always-insightful David Malpass, who warns that slow growth from a lower economic base could yield an historic downgrade of the U.S. experiment:
With the crisis taking a deep toll on our economy, the expectation is for a “new norm” once recovery kicks in. It’s a dismal prospect: slower growth from a lower base, with higher unemployment and bigger government.
Rather than a healthy frugality, the new norm implies an outright decline in median living standards, a disaster for both prosperity and fairness. For President Obama such economic mediocrity spells extended deficits, a “jobless” recovery and, at best, a stiff reelection fight instead of the cakewalk that his perfect timing–inaugurated at the exact bottom of the crisis–deserves.
The U.S. decline isn’t inevitable. Game changers exist. The Fed could improve dollar policy to make the tens of trillions of dollars in new U.S. Treasury debt more salable. It should stop buying Treasurys to make it utterly clear that it will not monetize debt. Buying Treasurys is the monetary equivalent of government workers digging a hole, filling it back up and calling it GDP.
To underscore the new commitment to price stability and creditors the Fed has to stop using core inflation for its report card. It’s a loud signal to the world, proclaiming: “The Fed is not serious. Money should flow to Asia. Sell the dollar.”
In the wake of EC antitrust chief Neelie Kroes’s charge that Intel’s microchips are too tiny, too fast, and too inexpensive, the company has quickly unveiled a new line of huge, power-hungry, slow, overpriced, out-dated products.
Of the Treasury’s long-awaited non-plan bank plan, Andy Kessler writes, “Mr. Geithner should instead use his ‘stress test’ and nationalize the dead banks via the FDIC — but only for a day or so.”
strip out all the toxic assets and put them into a holding tank inside the Treasury. . . . inject $300 billion in fresh equity for both Citi and Bank of America. Create 10 billion new shares of each of the companies to replace the old ones. The book value of each share could be $30. Very quickly, a new board of directors should be created and a new management team hired. Here’s the tricky part: Who owns the shares? Politics will kill a nationalized bank. So spin them out immediately.
Some $6 trillion in income taxes were paid by individuals in 2006, 2007 and 2008. On a pro-forma basis, send out those 10 billion shares of each bank to taxpayers. They paid for the recapitalization.
Each taxpayer would get about $100 worth of stock for each $1,000 of taxes paid. Of course, each taxpayer has the ability to sell these shares on the open market, maybe at $40, maybe $20, maybe $80. It depends on management, their vision, how much additional capital they are willing to raise, the dividend they declare, etc. Meanwhile, the toxic assets sitting inside the Treasury will have residual value and the proceeds from their eventual sale, I believe, will more than offset the capital injected. That would benefit all citizens, not the managements and shareholders who blew up the banking system in the first place.
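Kessler’s back-of-the-envelope numbers do check out, taking the quoted figures at face value (10 billion new shares per bank at $30 book value, distributed pro rata across $6 trillion of income taxes paid); a quick sketch:

```python
# Pro-rata distribution using the figures quoted above.
taxes_paid = 6e12                  # individual income taxes, 2006-2008
shares_per_bank = 10e9             # new shares created per bank
book_value = 30.0                  # dollars per share
banks = 2                          # Citi and Bank of America

total_stock = banks * shares_per_bank * book_value   # $600 billion at book
per_1000_of_taxes = total_stock / taxes_paid * 1000  # stock per $1,000 of taxes

print(f"${total_stock / 1e9:.0f}B of stock; "
      f"${per_1000_of_taxes:.0f} per $1,000 of taxes paid")
```

Ten percent of taxes paid, returned as bank equity at book value; the market, of course, would set the actual price.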
Is Kessler crazy? Well, maybe. In his own creative and boisterous way. But not nearly so crazy as Washington’s fumble-bumble these last few months. I’d much prefer Kessler’s out-of-the-box plan to D.C.’s muddle.
What becomes clearer every day is that all the government’s efforts, from the AIG “bailout” to TARP 1.0 and TARP 2.0 onward, have essentially been efforts to get around the terribly destructive interaction of “mark-to-market” accounting and regulatory capital requirements. A few keen observers — David Malpass (I), Brian Wesbury (I, II, III, IV), Steve Forbes (I, II) — have made this point from the start. But the government and most economists clung stubbornly to “fair value” in an apparent attempt not to “let the banks off the hook.”
But what a time for an attack of conscience, a principled stand for supposed accounting purity! We’ll spend trillions and totally alter the nation’s financial landscape, but a minor (though powerful and free!) accounting change — relaxing mark-to-market — is a bridge too far? Explain that one.
Futurist and The Singularity Is Near author Ray Kurzweil and the very impressive X-Prize Foundation chief Peter Diamandis launch Singularity University with the goal of “preparing humanity for accelerating technological change.”
May your 2009 be less “interesting” than 2008.
‘Twas the night before Christmas
And all through the house
Not a thing was dry
Not even my spouse
The icy rain fell
The water pipe froze
Slipping and sliding
There was no time to doze
We vacuumed and pumped
The whole night through
Yet when we awoke
Santa’s dream had come true
Thank goodness Nancy Pelosi and Harry Reid are reviewing the Big Three’s business plans before “investing” a couple hundred bil’ of taxpayer money. I’m so relieved. Silicon Valley could learn a few things from these bleeding edge venture capitalists…and the CEOs groveling for our money.
Or opportunity of a lifetime? That’s the question economist Scott Grannis asks as he surveys the sickly prices of stocks and corporate bonds.
Any way you look at it, the pricing on corporate bonds and stocks today implies that the next several years will be the most disastrous in the history of the U.S.
In order to fully appreciate why that prediction is unlikely to prove correct, consider that not one of the key ingredients that precipitated the depression exists today.
(Hat tip: Real Clear Markets)
“The point is not to deflate asset bubbles, but to avoid them in the first place.”
– Gerald P. O’Driscoll, Jr., November 17, 2008
After an exhausting political season, Rich Karlgaard relaunches his great Digital Rules blog to focus on innovation, mavericks, and entrepreneurs.
“Hyperbole is not harmless; careless language bewitches the speaker’s intelligence. And falsely shouting ‘socialism!’ in a crowded theater such as Washington causes an epidemic of yawning.”
— George Will, November 15, 2008, wondering what to call existing bail-outs galore and almost $4 trillion of annual Federal spending.
Is Paul Ryan the future?
After two straight electoral defeats, it is time for a substantial party shake-up. We don’t need a feather duster; we need a fire hose.
We need to be honest about the root causes of our current financial crisis: loose money, crony capitalism and a lack of market transparency and information.
The credit crisis almost deep-sixes the company that makes the commemorative M&A toys for Wall Street execs.
This spring came what Kern called “the tiny-globe craze.” Sokoler rolled his eyes. “Oh, don’t start with that,” he said. “You know those things you see in Sharper Image, where there’s a base and a little globe just floats over it?” (They work by means of magnets.) Somebody at Merrill decided to order a hundred of them to celebrate an M. & A. deal, and all of a sudden everybody had to have one. “It was a real pain in the ass,” Sokoler said. “People were calling my cell phone in the middle of the night, saying, ‘It’s not floating!’ And you’d have to, like, walk them through it. You’d say, ‘Yes, it is floating—you just have to hold it in the right place.’”
“Let us bend over and kiss our ass goodbye. Our 28-year conservative opportunity to fix the moral and practical boundaries of government is gone — gone with the bear market and the Bear Stearns and the bear that’s headed off to do you-know-what in the woods on our philosophy.”
— P.J. O’Rourke, November 10, 2008, repeating George Will’s sentiment, but from a . . . shall we say . . . different angle.
(via Don Luskin)
Advancing faster than Moore’s Law, hard disk digital storage technologies are the unsung heroes of the tech revolution. The beat goes on, and a large number of new technologies, from hybrid drives to phase-change ovonics to racetrack memory, promise to match the capacity of digital storage and/or DRAM with the speed of SRAM and other solid state memories. See a big special report from MIT’s Technology Review on all these “next memory” candidates, and more.
Janet Novack writing in Forbes details the long-term U.S. budget and tax realities that will lead to “The Coming Shakedown.” Big taxes and skimpy benefits, she writes, are baked in the cake:
Here’s what sober budget analysts (from both parties) see when they focus on 2020 and beyond: The well-off will pay higher federal taxes, for sure. But ordinary folks will pay more, too. They will pay as tax burdens diffuse into the costs of things they buy. They will likely pay more for fuel and electricity, as the costs of carbon permits and renewable-fuels mandates get built in. They may be asked to pay a European-style value-added tax. And they will pay on the other side of the ledger: Their retirement benefits will get clipped.
The federal government will get bigger, but not big enough to keep all the promises Washington has made. So the normal age to receive Social Security retirement benefits, already rising in steps to 67 for those born in 1960 or later, will increase further, perhaps to 69. High earners will pay more in and get less back in retirement. Call it a “tax” or call it “means testing”—it’s government, and it will make you poorer.
Economists of all stripes think we absolutely need a new value-added tax (a kind of sales tax).
“A VAT has got to happen. We’re at a point where the traditional money-raising options are not going to work,” says Yale law professor Michael J. Graetz, who was a Treasury official during the Administration of President George H.W. Bush and has been pushing a plan to use proceeds from a VAT to reduce corporate income taxes and exempt families earning less than $100,000 from the income tax. A VAT encourages personal savings, which the U.S. needs more of. Plus, it forces retirees to help pay for their government benefits. Says Graetz: “You tax the elderly and you tax the coupon clippers. But no politician is going to say that out loud.”
To be sure, a VAT faces tremendous hurdles. Two decades ago economist Lawrence H. Summers, who later became President Clinton’s Treasury Secretary and is now an Obama adviser, famously observed that the U.S. hadn’t adopted a VAT because “liberals think it’s regressive and conservatives think it’s a money machine.” The country might get a VAT, he went on, when liberals realized it was a money machine and conservatives figured out it was regressive.
Larry Summers is a smart liberal, one of my favorites, and he’s probably right. But let’s get something straight: layering a VAT on top of our current tax system would be catastrophic. Some have suggested trading a VAT for a reduction in the corporate income tax. But that too is a dangerous game. One option I’d consider, versions of which have been suggested by both Art Laffer and Sen. Jim DeMint, is a low-rate flat income tax with a low-rate VAT — say 8% each. That would be a hugely efficient and growth-fueling fundamental tax reform.
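The combined burden under such an 8%/8% plan is easy to estimate. A minimal sketch, under the simplifying assumptions that all after-tax income is spent and the VAT falls on that spending:

```python
# Hypothetical 8% flat income tax plus 8% VAT, per the Laffer/DeMint-style idea above.
income = 100.0
flat_rate = vat_rate = 0.08

after_income_tax = income * (1 - flat_rate)   # 92.00 left to spend
vat_paid = after_income_tax * vat_rate        # 7.36 collected on consumption
total_tax = income * flat_rate + vat_paid     # 15.36 total

print(f"Effective combined rate: {total_tax / income:.1%}")
```

An effective combined rate just over 15% on fully spent income; savings would face only the 8% income-tax leg until consumed.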
But it wouldn’t be the increase in government that the normal VAT backers have in mind. The way to “solve” the Medicare problem is not to gouge American producers in an inevitably futile effort to close the $43 trillion unfunded budget gap. We shouldn’t rack our brains trying to pay for the current bloated system. Much better to transcend the issue altogether by transforming health care with new medical technology and a newly dynamic, entrepreneurial, and consumer-driven market.