The Engines of Economic Growth

A look at different sectors’ contribution to economic growth.

We know that economic growth has been lackluster for several years. Why is this happening? One way to address the question is to look at different economic sectors’ contribution to overall growth. The figure below looks at overall economic growth since 1950 (top panel), and four sectors’ contribution to that growth:

  • Private consumption, which includes purchases of services and consumer goods by non-government actors
  • Private investment, which includes non-government actors’ purchases of equipment, buildings, intellectual property, and residential homes
  • Net Exports, which is total exports minus imports
  • Government Consumption and Investment, which includes federal, state, and local governments’ consumption and investment.

Data come from the Bureau of Economic Analysis.1



Some points of note:

Consumption-Driven Economy. Economic growth is primarily driven by consumption spending by both households and businesses. Until the 2008 crisis, private consumption contributed between two and three percentage points to GDP growth every year. Since 2008, private consumption’s contribution has averaged about one percentage point lower than the historical norm. Our present economic slowdown is substantially driven by the fact that people and businesses are not consuming as much as they once did.

Private Investment Low for Years. The present economic slowdown is also partly attributable to the fact that private investment is lower. People aren’t buying as much equipment or building as many buildings as they had in past decades. Interestingly, private investment has been generally low for the better part of the past 15 years. This dip in private investment preceded the Great Recession.

This low investment level is noteworthy, as many of the early-2000s tax cuts and deregulatory measures were sold to the public as a means of stimulating prosperity by inducing more investment. Investment was a much more important engine of prosperity in the 1990s, possibly due to investments in IT equipment (equipment purchases surged from 1993 to 2000). By the 2000s, private investment had dropped substantially, despite tax cuts, financial deregulation, and other policies that were sold as spurring the economy by promoting investment.

Trade a Minor Contributor. Net exports have generally made only minor contributions to overall economic growth. Trade has not been an engine of economic prosperity at any point in the post-WWII era.

The Effects of Austerity. This series is particularly interesting. During the very prosperous 1950s and 1960s, public sector consumption and investment made large contributions to economic growth, providing roughly one percentage point of growth per year. Government spending’s contribution fell during the 1970s, alongside general economic growth. Since the 2008 crisis, government spending has on average been a source of stagnation – it has lowered GDP growth. This illustrates the argument that austerity is hurting the economic recovery.

The table below presents mean percentage point contributions to overall GDP growth by decade, across sectors, from 1950 to 2014:

Sectoral Contribution to GDP Growth, in Percentage Points, 1950 – 2014

| Sector | 1950s | 1960s | 1970s | 1980s | 1990s | 2000-7 | 2008+ |
|---|---|---|---|---|---|---|---|
| GDP Growth | 4.1% | 4.3% | 3.2% | 3.4% | 3.5% | 2.5% | 1.1% |
| Private Consumption | 2.2% | 2.6% | 2.0% | 2.2% | 2.4% | 2.0% | 0.9% |
| Private Investment | 0.8% | 0.8% | 0.8% | 0.7% | 1.2% | 0.3% | 0.02% |
| Net Exports | -0.08% | 0.04% | 0.3% | -0.2% | 0.4% | -0.4% | 0.3% |
| Government Consumption & Investment | 1.1% | 0.9% | 0.2% | 0.7% | 0.2% | 0.4% | -0.02% |
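As a rough consistency check, the four sectoral contributions in each column should sum to approximately the overall GDP growth figure. A short sketch in Python, using the rounded values transcribed from the table (most columns land within a tenth or two of a point; the 1990s column is further off, presumably reflecting rounding in the decade averages):

```python
# Decade-average contributions to GDP growth, in percentage points,
# in the order: consumption, investment, net exports, government.
contributions = {
    "1950s":  (2.2, 0.8, -0.08, 1.1),
    "1960s":  (2.6, 0.8, 0.04, 0.9),
    "1970s":  (2.0, 0.8, 0.3, 0.2),
    "1980s":  (2.2, 0.7, -0.2, 0.7),
    "1990s":  (2.4, 1.2, 0.4, 0.2),
    "2000-7": (2.0, 0.3, -0.4, 0.4),
    "2008+":  (0.9, 0.02, 0.3, -0.02),
}
gdp_growth = {"1950s": 4.1, "1960s": 4.3, "1970s": 3.2, "1980s": 3.4,
              "1990s": 3.5, "2000-7": 2.5, "2008+": 1.1}

for decade, parts in contributions.items():
    # Each column's sectors should sum close to the reported overall growth.
    print(f"{decade}: sectors sum to {sum(parts):.2f}, reported {gdp_growth[decade]}")
```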

  1. Bureau of Economic Analysis (2015) “Table 1.1.2. Contributions to Percent Change in Real Gross Domestic Product” Data table downloaded June 7, 2015

Download the raw data and Markdown file here

Long-Term Trends in Corporate Profits

Corporate profits fell for decades, but have bounced back since 2000.

Over most of the late-20th century, corporate profits fell in proportion to the overall economy, but they have roared back since 2000. One can interpret these changes in several ways.

Figure 1 (below) describes changes in the ratio of US corporate profits to GDP from 1945 to 2013. Data are from the Federal Reserve:1

US Corporate Profits


The graph suggests that corporate profits fell steadily between World War II and the mid-1990s, then rebounded after 2000. Concretely, this means that corporations collectively took in progressively less operational profit between the end of World War II and 2001, relative to the overall value of what the economy produces. After 2001, corporate profits grew in proportion to overall economic activity. How might we interpret this trend?

One possible interpretation focuses on distributional struggles between corporations and other societal actors. During the 1940s through 1970s, economic policies tended to be more progressively redistributive. They taxed and regulated corporations more heavily, and were quicker to implement policies that redistributed wealth or bestowed bargaining power on workers and consumers. These policies are said to have been widely dismantled under the neoliberalism and globalization of the 1980s through 2000s, which has led corporate profits to rise. This story line involves corporations losing a distributional battle with workers and consumers for decades, then an abrupt reversal of fortunes to our present situation, in which they are now winning that battle.

Other perspectives interpret these changes as a product of businesses working less well under postwar “big government” capitalism, then turning around to do great business after the Reagan Revolution and globalization. This narrative stresses the ways in which government taxes, spending, or regulations cause businesses and society to waste resources or suppress innovation, or make lobbying for political favor more important than being economically competitive. Here, the focus is on profit as a byproduct of fielding successful businesses, rather than a matter of redistribution. On this view, one might infer that neoliberalism and globalization slowed American business’ precipitous fall into decrepitude, and ultimately set it on a path towards renewed profitability.

One explanation that interests me involves the erosion of America’s postwar position of international economic dominance. Most highly developed countries emerged from World War II greatly weakened, while the US economy and society remained largely unscathed, well-resourced, and ready to capitalize on the great technological advancements of the wartime era. American businesses did not face the strong foreign competition to which they are now subject, and America was well-positioned to dictate very favorable international economic terms to other countries. As other Western countries recovered from the war and became better able to assert themselves in the market and in international affairs, America’s competitive advantages eroded and its businesses found it harder to earn outsized profits. This explanation does not, however, account for why profits recovered after 2000.

Other explanations are possible. Whatever the reason, it seems clear that corporate profit shrank in proportion to the overall economy up until the 1980s or 1990s, and has slowly but steadily grown since then.

  1. Federal Reserve Board (2014) Series A464RC1A027NBEA and FYGDP from Federal Reserve Economic Data set. Accessed in Spring 2015.

Download the raw data and Markdown file here

Housing Prices Since 1975

A look at housing prices in the US

Housing is generally the biggest asset on people’s balance sheets. Those with money tend to own homes, and home owners are generally deeply invested in their houses. For many years, housing prices rose, and these price rises made major contributions to the middle and upper-middle class’s asset base.

Those who owned houses during this boom benefited from increased wealth, but the rise in home prices also made it more expensive to buy a home. Up until the 2008 crash, many of the families who were not able to get in on the housing market “at the ground floor” made their way in by using a range of new debt products that a booming financial market was offering. It became easier to purchase a home with no down payment. People could get bigger mortgages if they agreed to gamble on adjustable-rate mortgages. Eventually, some people didn’t even need documentation to get a loan. Once credit got that loose, it was only a matter of time before some major problem emerged.

When the housing bubble finally burst in 2008, lenders became more reluctant to finance home purchases, and consumers more reluctant to buy. The market dried up and housing prices crashed. Presumably, this crash meant that more people could get into the housing market, although it may have ultimately hurt the wealth accumulation of existing home owners.

How much did housing prices fall? How much more affordable did houses become? How much damage did home owners have to absorb? One way to address these questions is to look at changes in housing prices over time. The figure below describes changes in the Case-Shiller price index.1 The index measures home prices in 20 metropolitan areas, expressed relative to the price level that prevailed in January 2000.2



In 2014, housing prices were about 65% higher than in 2000. This represents a modest recovery of about 19% from the 2011 trough in housing prices. However, prices were still about 10% below those that prevailed at the peak of the housing boom in 2006. So housing prices have not fully recovered. Homeowners who bought near the peak of the last bubble are still in the hole, and those who were counting on home values returning to their 2006 levels are still behind in their financial plans.

The graph also imparts a sense of the 2007-9 recession’s impact on home prices. Home prices fell by about 24% from the 2006 peak to the 2011 trough. That is a considerable amount of lost wealth, particularly because homeowners are generally deeply invested in their homes.
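The percentage figures in the two paragraphs above can be cross-checked against one another. Setting January 2000 = 100 and taking the 2014 level as roughly 165 (implied by the “65% higher” figure; an approximation, not the published index value), the implied peak and trough reproduce the stated peak-to-trough fall:

```python
# Case-Shiller-style index arithmetic, with January 2000 = 100.
level_2014 = 165.0                     # about 65% above the 2000 level
peak_2006 = level_2014 / (1 - 0.10)    # 2014 sits ~10% below the 2006 peak
trough_2011 = level_2014 / (1 + 0.19)  # 2014 sits ~19% above the 2011 trough

peak_to_trough = 1 - trough_2011 / peak_2006
print(f"Implied 2006 peak:   {peak_2006:.0f}")
print(f"Implied 2011 trough: {trough_2011:.0f}")
print(f"Peak-to-trough fall: {peak_to_trough:.0%}")  # ~24%, matching the text
```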

Regardless of the ups and downs of recent years, housing is far more expensive than it was decades ago. Prices in 2014 were more than five times their level of forty years earlier. Meanwhile, incomes have not kept pace.

  1. Data from Federal Reserve Board (2015) “S&P/Case-Shiller U.S. National Home Price Index©” Data series CSUSHPISA downloaded June 9, 2015.
  2. S&P Dow Jones Indices (2015) S&P/Case-Shiller Home Price Indices: Methodology Methodological report.

Download the raw data and Markdown file here

Long-Run Trends in the Poverty Rate

What can we glean from long-term changes in the poverty rate?

In 2013, about 14.5% of Americans were officially “poor”, with incomes falling below the federal government’s official poverty line. This represents an increase from the early 2000s, when only about 11% – 13% of Americans registered as poor.

Interestingly, today’s poverty rates sit well within the normal boundaries established since the late-1960s. Over the past forty-plus years, poverty has bounced within an 11% to 15% range, going up during bad times and falling during good ones.

Long-term poverty patterns suggest that we have not made sustained progress in eradicating poverty. The figure below describes the poverty rate since 1959. Data are from the Census Bureau.1



Poverty fell considerably during the early-1960s, and reached its present range by 1966. Since then, we have not experienced any secular decline in the poverty rate.

Conservative critics of the welfare state often cite this fact as evidence that the policies established under Lyndon Johnson’s “War on Poverty” failed to achieve their goals. The signature policies of that “War”, the Food Stamp Act of 1964, the Economic Opportunity Act of 1964, the Social Security Amendments of 1965, and the Elementary and Secondary Education Act of 1965, marked substantial expansions of the social safety net.2 Critics argue that poverty stopped falling soon after these programs were implemented, and infer that the programs were not effective. Moreover, one might note that median wages began to falter only a few years later, and argue that the expansion of the welfare state ultimately hurt people’s ability to secure jobs and earn money.

I’m more skeptical of this view. First, it is not altogether clear that domestic welfare policy was the primary determinant of households’ economic fortunes. One can read this graph and infer that poverty declined more or less steadily until 1973, when the Stagflation Crisis broke out: a range of systemic economic and financial problems rooted as much in a changing geopolitical context as in anything happening domestically.3 There is also the prospect that the dynamism of the mid-20th century’s industry-led economy was nearing exhaustion.4 The country would soon also confront a range of demographic changes, like the Baby Boomers’ coming of age, the rise of divorce, and other assorted social changes that could have ultimately driven households into poverty.

More deeply, it is worth keeping in mind what is meant by “poor.” The official poverty line is pegged to the inflation-adjusted cost of what the USDA determined in 1963 to be a minimal food diet (the original thresholds were set at roughly three times that food budget). Those with incomes above the line are counted as not poor, and those below it as poor. This is a very crude measure of poverty, which to my mind borders on meaninglessness. In effect, a stagnating poverty rate does not imply that the number of poor people stagnated. Rather, it implies that gross household incomes roughly kept pace with the real cost of basic food in 1963.

A different possibility is that these War on Poverty programs have prevented poverty (real or official) from exploding. This is particularly true for America’s burgeoning elderly population, many more of whom would certainly be impoverished without their Social Security checks. Without public health care programs, like Medicaid, Medicare or CHIP, many more people would have much more trouble accessing medical care. Subsidized health insurance, subsidized school lunches, and other facets of the welfare state do not appear in household balance sheets as income, and so they would probably not affect poverty rates. Still, people’s overall wellbeing is likely helped by these programs.

Overall, what we can glean from long-term changes in the poverty rate is limited. Still, the graph is thought-provoking, and unpacking what is happening here might help shed light on whether or not the welfare state actually helps the poor. I would wager it does, but the debate will likely continue for a long time.

  1. US Census Bureau (2014) “Table 2. Poverty Status of People by Family Relationship, Race, and Hispanic Origin: 1959 to 2013” Data table downloaded from
  2. For an accessible overview, see Dylan Matthews (2014) “Everything you need to know about the war on poverty” Blog entry at Wonkblogfrom the Washington Post, January 8
  3. See Fred Block (1977) The Origins of International Economic Disorder: A Study of United States International Monetary Policy from World War II to the Present University of California Press
  4. See Daniel Bell (1973) The Coming of Post-Industrial Society: A Venture in Social Forecasting Basic Books.

The Spectacular Fall, and Very Modest Recovery, of Household Savings

Household savings rates have fallen considerably since the early-1970s

The personal savings rate tries to estimate the proportion of people’s disposable (post-tax) income that is not spent. Presumably, it gives us a sense of how much money people are putting aside for the future. A lower savings rate suggests that more people are financially ill-prepared for the future.

The figure below illustrates changes in US households’ personal savings rates. The data come from the Federal Reserve Board.1 Keep in mind that these rates are derived from aggregate data, and represent means rather than medians. It seems likely that wealthier households with much higher savings rates push these averages up, and that the median household saves less than the average personal savings rate.

During the 1960s and early-1970s, households saved between 10% and 14% of their incomes. At those rates, a household earning a $60,000 yearly salary would put aside between $6,000 and $8,400 a year. Over thirty years of compounding 5% real annual returns, such savings would result in a nest egg of between $400 and $558 thousand.

US Personal Savings Rate

Since 1976, the personal savings rate has declined steadily, eventually reaching near zero right before the 2008 crisis. Since the Great Recession, many observers have celebrated a resurgence in savings, but the magnitude and likely durability of this rebound are easily overstated. When savings rebounded to about 5% in 2013, they were reverting to levels that prevailed in the mid-1990s, not the mid-1960s. The decline is enough to produce a substantial diminishment in long-term wealth accumulation. Were our $60,000-a-year family to save between 2% and 5% of their income (as opposed to 10% – 14%), they would be left with a nest egg of $78 to $199 thousand. The effects of low savings seem moderate on a year-by-year basis, but they render substantial differences in wealth over a lifetime.
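The nest-egg figures above are ordinary future-value-of-an-annuity arithmetic. A minimal sketch, assuming one contribution per year and a constant 5% real return, as in the text:

```python
def nest_egg(annual_saving: float, rate: float = 0.05, years: int = 30) -> float:
    """Future value of equal annual savings compounding at a constant real return."""
    return annual_saving * ((1 + rate) ** years - 1) / rate

income = 60_000  # the $60,000-a-year family discussed in the text

# Saving 10%-14% of income, as households did in the 1960s and early 1970s:
print(round(nest_egg(0.10 * income)))  # ≈ $399,000
print(round(nest_egg(0.14 * income)))  # ≈ $558,000

# Saving 2%-5%, as in recent decades:
print(round(nest_egg(0.02 * income)))  # ≈ $80,000
print(round(nest_egg(0.05 * income)))  # ≈ $199,000
```

The 2% case comes out slightly above the $78 thousand quoted in the text, which may reflect a different compounding convention in the original calculation.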

Why have savings been falling? There are many explanations. Part of the picture probably involves income stagnation. Many analysts argue that people are more spendthrift today, and more amenable to borrowing money. Consumer debt is also cheaper and easier to obtain than in the 1960s or 1970s. In my own research, I also focus on the rising cost of living, particularly the rising cost of essential goods and services like health care or education.

Whatever the cause, it seems likely that people save less money, which suggests that they will have less wealth from which to draw in the future.

  1. Federal Reserve Board (2014) “Personal Saving Rate, Percent, Annual, Seasonally Adjusted Annual Rate” Series PSAVERT from Federal Reserve Economic Data set. Accessed in Spring 2015.

Demographic Predictors of Wealth: Rough Estimates

Which demographics have more or less wealth? Some rough estimates

Who is wealthier, and who is poorer? The question can be answered by looking at data from the Survey of Consumer Finances. Figures are for 2013.

I consider the following predictors:

  • Sex of household head
  • White Non-Hispanic vs. Others
  • Age of head
  • Education of head
  • Household headed by pair/single
  • Children vs. no children

Notes: Wealth bottom-coded at $1 and top-coded at $4MM. Model predicts logged net worth.

| Predictor | exp(Coef) | SE |
|---|---|---|
| Baseline | $662 | |
| Female | -53% | 0.12 |
| Non-White | -77% | 0.09 |
| Age 35-44 | +426% | 0.15 |
| Age 45-54 | +1,327% | 0.16 |
| Age 55-64 | +3,227% | 0.16 |
| Age 65-74 | +13,110% | 0.17 |
| Age 75+ | +22,471% | 0.21 |
| HS | +77% | 0.14 |
| Some College | +109% | 0.13 |
| College | +574% | 0.13 |
| Unpaired | -73% | 0.12 |
| Working | +214% | 0.13 |
| Parents | +34% | 0.10 |

The model does not do a very good job of developing precise estimates, but it still imparts some sense of the magnitude of different predictors’ effects.

The model suggests that age is the strongest predictor of net worth. This makes sense: Not only does it take time to accumulate wealth, but family wealth is more likely to have been passed on to older people, because their ancestors are more likely to have passed away. The effects of age are huge. Holding all other factors constant, a senior is predicted to have several hundred times the wealth of his or her otherwise similar under-35 counterpart.

The effects of sex, race, cohabitation status, and education are also considerable: a household headed by a working, non-Hispanic white, college-graduate man who is married or cohabiting is predicted to have several times the wealth of households that differ on these traits. The effects of the different factors are multiplicative. For example, the model predicts that the net worth of a household headed by a working, married, college-educated, non-white woman, age 40 with no children is $8,267. If she were white, it is predicted to be just under $36 thousand. If she were non-white but 60 years old, her household is predicted to have a net worth of just over $50 thousand. If she were both white and 60, it would be $217,510.
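These predictions can be reproduced, approximately, by multiplying the baseline by each applicable predictor’s exp(coef) from the table (the published coefficients are rounded, so the results differ slightly from the figures quoted above):

```python
# Percentage effects from the table, converted to multipliers.
multipliers = {
    "female": 1 - 0.53,
    "non_white": 1 - 0.77,
    "age_35_44": 1 + 4.26,
    "age_55_64": 1 + 32.27,
    "college": 1 + 5.74,
    "working": 1 + 2.14,
}
baseline = 662  # predicted net worth with every predictor at its reference level

def predict(*traits: str) -> float:
    """Predicted net worth: the baseline times the product of each trait's multiplier."""
    worth = float(baseline)
    for trait in traits:
        worth *= multipliers[trait]
    return worth

# Working, married, college-educated, non-white woman, age 40, no children:
print(round(predict("female", "non_white", "age_35_44", "college", "working")))
# ≈ $8,000, close to the $8,267 quoted in the text
```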

Again, it is important not to take these coefficients too literally. The model only gives a sense of relative magnitudes. What we do learn is that age matters a lot in predicting how much wealth someone holds. While age is of critical importance, sex, race, education, and working status also seem to play an important role.

The Rise and Fall of Unionism in America

America’s workforce was most heavily unionized in the early 1950s. Since then, unions have experienced a long-term decline.

Unions are a persistent point of conflict in economic policy debates. Many observers argue that the economic difficulties faced by America’s middle class are at least partly attributable to the decline of unions. Others see unions as a detrimental force in the economy, and believe that they harm employers, the US economy, and ultimately workers themselves. Unions are portrayed both as powerful, corrupting forces in US society and as dying institutions being squashed by all-powerful business interests.

How strong are unions? One way to answer that question is to look at union density, the ratio of unionized workers to total workers. The figure below depicts changes in union density across all US workers from 1880 to 2013. Data come from Alejandro Donado and Klaus Walde, and from Barry Hirsch and David Macpherson.1

Union Density

The graph suggests that unionization developed slowly between the 1880s and the Great Depression. By the 1880s, struggles to unionize labor were heated and sometimes violent. Larger movements to advance unionism included the Knights of Labor, and later the American Federation of Labor. Through the Great Depression, more workers unionized under the auspices of the Congress of Industrial Organizations. These latter two groups eventually merged to create the AFL-CIO.

Unionization increased dramatically with the institution of the 1935 National Labor Relations Act, which legally protected workers’ rights to organize unions, restricted businesses’ ability to interfere with or fire unionizing workers, and enabled compulsory union membership in workplaces where unions had been established. Between World War II and the mid-1950s, union density peaked at around one-third of the work force. Union density began a secular decline after 1954, and is quickly approaching levels that prevailed before the passage of the NLRA.

The figure depicts the decline of union density, which is the proportion of workers in unions. This decline in density does not necessarily represent a decline in the absolute number of union jobs; it is more a reflection of long-term, faster growth in non-union jobs. After 1980, however, the absolute number of union jobs also began to fall. In 1980, there were approximately 20 million union jobs, whereas there were about 14.5 million in 2013.2 This decline followed several changes, including the rollback of regulations that benefitted unions and the spread of “right to work” legislation, employment declines in traditionally unionized sectors (e.g., automotive, utilities), and increasing cultural antipathy towards unionism.
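The job-count figures above imply a decline of a bit over a quarter in absolute union employment. The sketch below also illustrates why density fell faster than the job count: total employment grew over the same period. The total-employment figures here are round illustrative assumptions, not sourced data:

```python
union_jobs_1980 = 20_000_000   # approximate figures from the text
union_jobs_2013 = 14_500_000

decline = 1 - union_jobs_2013 / union_jobs_1980
print(f"Union jobs fell {decline:.1%} between 1980 and 2013")  # a fall of 27.5%

# Illustrative, assumed totals showing how density can fall faster than job counts:
total_1980, total_2013 = 99_000_000, 144_000_000
print(f"Density: {union_jobs_1980 / total_1980:.1%} -> {union_jobs_2013 / total_2013:.1%}")
```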

Insofar as union membership is concerned, it is clear that the institution of unionism has declined considerably over the past several decades. Union jobs are less prevalent and decreasing in number. There are disagreements about whether or not the decline of unionism is a good or bad thing for workers and society-at-large, but it seems clear that this decline is taking place.

  1. Pre-1973 data from Alejandro Donado and Klaus Walde (2012) “How Trade Unions Increase Welfare” Economic Journal 122(563): 990 – 1009. The set draws strongly from Richard B. Freeman (1998) “Spurts in Union Growth: Moments and Social Processes” in Michael D. Bordo, Claudia Goldin, and Eugene N. White (eds.) The Defining Moment: The Great Depression and the American Economy in the Twentieth Century University of Chicago Press. Post-1973 numbers from Barry Hirsch and David Macpherson “Union Membership, Coverage, Density, and Employment, Among All Wage and Salary Workers, 1973-2014”
  2. Donado and Walde (2012) Op. Cit.

Download the Markdown file and raw data

Family Business in Decline?

Data from the Survey of Consumer Finances suggest that family-owned businesses are in decline. About as many households as before earn some income from proprietary businesses, but the proceeds from these businesses are falling. Fewer such businesses earn a living wage, and the prevalence of high-earning proprietary businesses also seems to be falling.

Proportion of Households Earning Business Income

Consider the figure below, which shows the proportion of US households earning (1) any income, (2) at least a median income, and (3) at least a 90th percentile income from a proprietary business.


Between 8% and 10% of US households receive some business income, but between two-thirds and three-quarters of these households fail to secure the equivalent of a median household income through personally-owned businesses. In 1992, about 3.3% of US households received at least a median household income from personally-owned businesses. By 2013, just over half as many households – 1.8% – received a median income from such businesses. The proportion of high-earning personal businesses has fallen even more sharply, from 1.3% in 1992 to 0.4% in 2013.

On one hand, these reduced earnings could be the byproduct of random variation. For example, business earnings took a bit of a dip in 1995 and 2004, but then recovered. This could be a product of sample variability or a natural minor fluctuation in small business profitability. However, that explanation seems less likely when we look at the wider distribution of personal business earnings.

Distribution of Business Earnings

The figure below shows the distribution of business earnings from 1992 to 2013. It describes the 25th, 50th, 75th, 90th, and 95th percentile earnings from proprietary businesses.


This figure shows a long-term decline in proprietary business earnings, which seems to have begun after the 2001 recession. In 2001, the median household business earned $26,664. This figure fell steadily through 2013, by which point the median stood at $16,400. This represents a fall of about 38%.

A similar decline occurred in the higher ranks of the business income scale. From 2001 to 2013, 75th percentile income fell from $79,200 to $40,000. Ninetieth percentile income fell from $217,800 to $87,000. Ninety-fifth percentile income fell from $370,920 to $156,600. These are staggering losses.
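These declines are straightforward percentage changes between the 2001 and 2013 values quoted above:

```python
# Proprietary-business earnings percentiles: (2001 value, 2013 value).
earnings = {
    "50th": (26_664, 16_400),
    "75th": (79_200, 40_000),
    "90th": (217_800, 87_000),
    "95th": (370_920, 156_600),
}

for percentile, (y2001, y2013) in earnings.items():
    fall = (y2001 - y2013) / y2001
    print(f"{percentile} percentile: fell {fall:.0%} from 2001 to 2013")
```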

What Does It Mean?

If these figures accurately reflect changes in personal business earnings, they suggest that those earnings are falling quickly. Why might this be happening? It may be that small business faces mounting pressures from many quarters. Retailers may have increasing difficulty competing with big box stores and online retailers. Small manufacturing outfits may have trouble competing with foreign producers. It might be that many small businesses are being replaced with automated substitutes (e.g., TurboTax and LegalZoom squeezing small accounting and law practices).

Whatever the cause, it seems quite clear that small businesses (at least unincorporated ones) are doing badly in the US.

The Long Consumption Boom, 1980 – 2014

US households’ finances have been deteriorating for decades. Rising expenditures and consumption are part of the puzzle.

Any attempt to explain US households’ financial struggles must engage the issue of rising spending. While income stagnation is almost certainly part of what is causing households’ money problems, it is only a partial explanation at best. “Stagnation” means not growing quickly – it does not imply that household incomes have necessarily been shrinking (although they often have done so over the past several years). Even if incomes are stagnating, people should be able to maintain their savings by restraining their spending.

The problem is that households generally have not tightened their belts when faced with earnings difficulties – at least not until the Great Recession. The first figure below depicts changes in average personal income and expenditures in the United States from 1921 to 2014. All values are denominated in inflation-adjusted 2014 dollars.

At first glance, the graph suggests that average incomes have generally outpaced average expenditures. However, a closer look reveals that the gap between disposable incomes and expenditures has been shrinking. In other words, the average American was spending a growing share of their take-home pay.

This gap can be seen more clearly in the figure below, which depicts the ratio of mean per capita total expenditures to disposable income. As noted in the previous chapter, the typical household once saved about 10% of its take-home pay, but spending grew steadily relative to income, and households were putting aside pennies on the dollar right before the Great Recession. Household savings rebounded after the Recession, but only to the rates that prevailed in the early 2000s, not the 1970s.

These shifts of 5 to 10 percentage points in household savings rates translated into substantial differences in the amount of money people were putting aside from year to year. In the late 1950s, the average household put aside about $1,750 (at 2014 prices) yearly. With the passage of time, households found themselves able to put aside more money, and by the early 1970s the typical household was putting aside over $3,000 (again, at 2014 prices). However, per capita savings fell steadily over the ensuing decades. Even though the average person earned far more money in 2005 than in 1970, Americans were typically putting aside only about a third as much from year to year. Savings did rebound after the 2008 crisis, but people arguably need to put aside much more money today: retirement, health care, a college education, and many other living costs are much higher than they were 40 years ago.
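The year-to-year arithmetic here is just the savings rate times disposable income. A minimal sketch; the per-capita income figures below are illustrative assumptions chosen to match the savings amounts in the text, not values read off the series:

```python
def annual_savings(disposable_income: float, savings_rate: float) -> float:
    """Dollars put aside per year at a given savings rate."""
    return disposable_income * savings_rate

# Illustrative (assumed) per-capita disposable incomes, in 2014 dollars:
print(annual_savings(25_000, 0.12))  # early-1970s-style rate: about $3,000/yr
print(annual_savings(40_000, 0.02))  # mid-2000s-style rate: about $800/yr
```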

More spending is undoubtedly part of what is causing US households’ money problems. One might argue that, in an age of Walmart, Costco, and cheap Chinese imports, it has never been so easy to save money. Yet people are not saving money. Any endeavor to explain US households’ financial insecurity must engage over-spending.

Download the R Markdown and Data Files Here

Who Pays the Federal Government's Bills?

Who pays the federal government’s bills? Not corporations.

In conservative circles, one often hears that high taxes are unfairly strangling the rich and businesses, while everyone else enjoys a free ride on their tab. Critics often argue that the Obama administration has implemented egregiously expropriatory taxes on these groups. In fact, the government draws less tax (in proportion to the size of the overall economy) from corporations and from income taxes (the primary tax on affluent people’s incomes) than it did over most of the postwar era.

The stacked area plot below shows how the composition and overall level of federal taxes have changed since the mid-1930s. It measures the ratio of government receipts to GDP, which approximates the size of government taxes relative to the overall size of national economic output (a rough proxy for the overall size of the economy). Data come from the US Office of Management and Budget.1



After World War II, the government increased taxes dramatically, with government revenues rising from around 5% of GDP in the mid-1930s to just over 18% by 1952. Public sector revenues more than tripled.

In the early-1950s, the government drew about 42% of its revenue from individual income taxes, and about 32% from corporate income taxes. What I term payroll taxes are taxes related to “Social Insurance and Retirement” programs, like Social Security and Medicare taxes. Excise taxes are taxes on particular products, like alcohol, cigarettes, and gasoline. During the mid-20th century, the government also drew considerable money from trade tariffs.

Beginning in 1953, corporate income taxes decreased in proportion to the government’s overall revenues. Often, these cuts were instituted by implementing and altering rules related to tax deductions. The first cut in corporate taxes is believed to be related to the Eisenhower administration’s passage of more generous capital depreciation rules. Corporate tax deductions of this kind have been implemented almost continuously throughout the postwar era.

With the passage of time, government revenues were increasingly funded through payroll taxes, while the shares contributed by excise and corporate income taxes shrank. By 2000, about 50% of government revenues were drawn from personal income taxes, 10% from corporate taxes, 32% from payroll taxes, and 4% from excise taxes. The burden of funding government operations shifted substantially away from corporate sources and towards household sources.

Under the George W. Bush administration, personal income taxes were cut and overall government revenues fell, with the lost revenue covered by public borrowing. By 2004, income taxes had fallen to 43% of total revenues, while payroll taxes, which were not reduced, rose to 40% of total government revenues.

What can we glean from these findings? First, business taxes seem much more modest than they have been throughout most of the postwar era. The government has been reducing taxes on corporations for decades. Whatever arguments might be made about the high statutory tax rates levied on corporations, the federal government’s low overall take from this source suggests that tax deductions make the amounts actually paid rather low. Insofar as personal taxes are concerned, top income tax rates have decreased considerably over the decades (I’ll post on that another day), while the take from payroll taxes has risen considerably. Payroll taxes fall harder on lower income earners, as the bulk of payroll taxes is levied only on the first $118,000 that someone earns.
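The cap’s regressive effect can be sketched directly. Using the $118,000 cap cited above and the 6.2% employee-side Social Security rate (and ignoring the uncapped Medicare portion for simplicity), the effective rate falls as earnings rise past the cap:

```python
SS_RATE = 0.062     # employee-side Social Security payroll tax rate
WAGE_CAP = 118_000  # taxable-earnings cap cited in the text

def effective_ss_rate(earnings: float) -> float:
    """Effective Social Security tax rate; the cap makes this fall as earnings rise."""
    taxed = min(earnings, WAGE_CAP)
    return SS_RATE * taxed / earnings

print(f"{effective_ss_rate(50_000):.1%}")   # full 6.2% below the cap
print(f"{effective_ss_rate(118_000):.1%}")  # still 6.2% exactly at the cap
print(f"{effective_ss_rate(472_000):.1%}")  # about a quarter of the full rate
```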

Overall, taxes on corporations are very low by modern historical standards. Personal income taxes are not particularly high by historical standards either.

  1. Office of Management and Budget (2015) “Table 2.3: Receipts by Source as Percentages of GDP: 1934-2020” Accessed June 5, 2015.