The Spectacular Fall, and Very Modest Recovery, of Household Savings

Household savings rates have fallen considerably since the early-1970s

The personal savings rate tries to estimate the proportion of people’s disposable (post-tax) income that is not spent. Presumably, it gives us a sense of how much money people are putting aside for the future. A lower savings rate suggests that more people are financially ill-prepared for the future.

The figure below illustrates changes in US households’ personal savings rates. The data comes from the Federal Reserve Board.1 Keep in mind that these rates are derived from aggregate data, and represent means rather than medians. It seems likely that wealthier households with much higher savings rates push these averages up, and that the median household would save below the average personal savings rate.

During the 1960s and early-1970s, households saved between 10% and 14% of their incomes. At those rates, a household earning $60,000 a year would put aside between $6,000 and $8,400 a year. Over thirty years of compounding 5% real annual returns, such savings would result in a nest egg of between $400 and $558 thousand.

US Personal Savings Rate

Since 1976, the personal savings rate has declined steadily, eventually reaching near zero right before the 2008 crisis. Throughout the Great Recession, many observers celebrated a resurgence in savings, but the magnitude and expected durability of this rebound can easily be overstated. When savings rebounded to about 5% in 2013, they were reverting to levels that prevailed in the mid-1990s, not the mid-1960s. This decline is enough to produce a substantial diminishment in long-term wealth accumulation. Were our $60,000-a-year family to save between 2% and 5% of their income (as opposed to 10% to 14%), they would be left with a nest egg of $78 to $199 thousand. The effects of low savings seem modest on a year-by-year basis, but they compound into substantial differences in wealth over a lifetime.
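The nest-egg figures above follow from a standard future-value-of-annuity calculation, assuming a constant 5% real annual return over 30 years (the function name and end-of-year contribution timing are my assumptions; the $60,000 salary and the savings rates come from the text):

```python
def nest_egg(annual_savings, rate=0.05, years=30):
    """Future value of a constant annual contribution compounded at `rate`.

    Contributions are assumed to arrive at the end of each year.
    """
    return annual_savings * ((1 + rate) ** years - 1) / rate

salary = 60_000
for share in (0.02, 0.05, 0.10, 0.14):
    print(f"Saving {share:.0%} of ${salary:,}: ${nest_egg(share * salary):,.0f}")
```

At 10% of $60,000, this yields roughly $399 thousand, in line with the "$400 thousand" figure above; at 2%, roughly $80 thousand.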

Why have savings been falling? There are many explanations. Part of the picture probably involves income stagnation. Many analysts argue that people are more spendthrift today, and more amenable to borrowing money. Consumer debt is also cheaper and easier to obtain than in the 1960s or 1970s. In my own research, I also focus on the rising cost of living, particularly the rising cost of essential goods and services like health care or education.

Whatever the cause, it seems likely that people save less money, which suggests that they will have less wealth from which to draw in the future.

Federal Reserve Board (2014) “Personal Saving Rate, Percent, Annual, Seasonally Adjusted Annual Rate” Series PSAVERT from Federal Reserve Economic Data set. Accessed in Spring 2015.

Demographic Predictors of Wealth: Rough Estimates

Which demographics have more or less wealth? Some rough estimates

Who is wealthier, and who is poorer? The question can be answered by looking at data from the Survey of Consumer Finances. Figures are for 2013.

I consider the following predictors:

  • Sex of household head
  • White Non-Hispanic vs. Others
  • Age of head
  • Education of head
  • Household headed by pair/single
  • Children vs. no children

Notes: Wealth bottom-coded at $1 and top-coded at $4MM. Model predicts logged net worth.

Predictor exp(Coef) SE
Baseline $662
Female -53% 0.12
Non-White -77% 0.09
Age 35-44 +426% 0.15
Age 45-54 +1,327% 0.16
Age 55-64 +3,227% 0.16
Age 65-74 +13,110% 0.17
Age 75+ +22,471% 0.21
HS +77% 0.14
Some College +109% 0.13
College +574% 0.13
Unpaired -73% 0.12
Working +214% 0.13
Parents +34% 0.10

The model does not do a very good job of developing precise estimates, but it still imparts some sense of the magnitude of different predictors’ effects.

The model suggests that age is the strongest predictor of net worth. This makes sense: Not only does it take time to accumulate wealth, but family wealth is more likely to have been passed on to older people, because their ancestors are more likely to have passed away. The effects of age are huge. Holding all other factors constant, a senior is predicted to have well over a hundred times the wealth of his or her otherwise similar under-35 counterpart.

The effects of sex, race, cohabitation status, and education are also considerable: a household headed by a man who is non-Hispanic white, a college graduate, married or cohabiting, and working is predicted to hold several times more wealth. The effects of these different factors are multiplicative. For example, the model predicts that a household headed by a working, married, college-educated, non-white woman, age 40 with no children has a net worth of $8,267. If she were white, it is predicted to be just under $36 thousand. If she were still non-white but were 60 years old, her household is predicted to have a net worth of just over $50 thousand. If she were white and 60, it would be $217,510.
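Because the model predicts logged net worth, the exp(coef) factors combine multiplicatively: predicted wealth is the $662 baseline times the multiplier for each applicable trait. A sketch of that arithmetic using the table's figures (the multipliers are simply the table's percentages converted to ratios, e.g. +426% → 5.26; small rounding gaps versus the text's dollar figures are expected):

```python
# Multipliers implied by the table's exp(coef) column (+426% -> x5.26, -53% -> x0.47, ...)
FACTORS = {
    "female": 0.47, "non_white": 0.23, "age_35_44": 5.26,
    "age_55_64": 33.27, "college": 6.74, "working": 3.14,
}
BASELINE = 662  # predicted net worth with no traits applied

def predicted_wealth(*traits):
    """Multiply the baseline by the factor for each applicable trait."""
    wealth = BASELINE
    for trait in traits:
        wealth *= FACTORS[trait]
    return wealth

# Working, married, college-educated, non-white woman, age 40, no children:
print(round(predicted_wealth("female", "non_white", "age_35_44", "college", "working")))
```

Dropping the non-white factor, or swapping the age band, reproduces the ordering of the predictions discussed above.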

Again, it is important not to take these coefficients too literally. The model only gives a sense of relative magnitudes. What we do learn is that age matters a lot in predicting how much wealth someone holds. While age is of critical importance, sex, race, education, and working status also seem to play an important role.

The Rise and Fall of Unionism in America

America’s workforce was most heavily unionized in the early 1950s. Since then, unions have experienced a long-term decline.

Unions are a persistent point of conflict in economic policy debates. Many observers argue that the economic difficulties faced by America’s middle class are at least partly attributable to the decline of unions. Others see unions as a detrimental force in the economy, and believe that they harm employers, the US economy, and ultimately workers themselves. Unions are portrayed both as powerful, corrupting forces in US society and as dying institutions being squashed by all-powerful business interests.

How strong are unions? One way to answer that question is to look at union density, the ratio of unionized workers to total workers. The figure below depicts changes in union density across all US workers from 1880 to 2013. Data come from Alejandro Donado and Klaus Walde (pre-1973) and from Barry Hirsch and David Macpherson (post-1973).1

Union Density

The graph suggests that unionization developed slowly between the 1880s and the Great Depression. By the 1880s, struggles to unionize labor were heated and sometimes violent. Larger movements to advance unionism included the Knights of Labor, and later the American Federation of Labor. Through the Great Depression, more workers unionized under the auspices of the Congress of Industrial Organizations. These latter two groups eventually merged to create the AFL-CIO.

Unionization increased dramatically with the institution of the 1935 National Labor Relations Act, which legally protected workers’ rights to organize unions, restricted businesses’ ability to interfere with or fire unionizing workers, and enabled compulsory union membership in organizations where unions had been established. Between World War II and the mid-1950s, union density peaked at around one-third of the work force. Union density began a secular decline after 1954, and is quickly approaching levels that prevailed before the passage of the NLRA.

The figure depicts the decline of union density, which is the proportion of workers in unions. This decline in density does not necessarily represent a decline in the absolute number of union jobs; it mainly reflects the faster long-term growth of non-union jobs. However, after 1980, the absolute number of union jobs began to fall. In 1980, there were approximately 20 million union jobs, whereas there were about 14.5 million in 2013.2 This decline followed several changes, including the rollback of regulations that benefited unions and the spread of “right to work” legislation, employment declines in traditionally unionized sectors (e.g., automotive, utilities), and an increasing cultural antipathy towards unionism.

Insofar as union membership is concerned, it is clear that the institution of unionism has declined considerably over the past several decades. Union jobs are less prevalent and decreasing in number. There are disagreements about whether the decline of unionism is good or bad for workers and society at large, but it seems clear that this decline is taking place.

  1. Pre-1973 data from Alejandro Donado and Klaus Walde (2012) “How Trade Unions Increase Welfare” Economic Journal 112(563): 990 – 1009. Set draws strongly from Richard B. Freeman (1998) “Spurts in Union Growth: Moments and Social Processes” in Michael D. Bordo, Claudia Goldin, and Eugene N. White (eds.) The Defining Moment: The Great Depression and the American Economy in the Twentieth Century University of Chicago Press. Post-1973 numbers from Barry Hirsch and David Macpherson “Union Membership, Coverage, Density, and Employment, Among All Wage and Salary Workers, 1973-2014”
  2. Donado and Walde (2012) Op. Cit.

Download the Markdown file and raw data

Family Business in Decline?

Data from the Survey of Consumer Finances suggest that family-owned businesses are experiencing a decline. Roughly as many households as before earn some income from proprietary businesses, but the proceeds from these businesses are falling. Fewer such businesses earn a living wage, and the prevalence of high-earning proprietary businesses also seems to be falling.

Proportion of Households Earning Business Income

Consider the figure below, which shows the proportion of US households earning (1) any income, (2) at least a median income, and (3) at least a 90th percentile income from a proprietary business.


Between 8% and 10% of US households receive some business income, but between two-thirds and three-quarters of these households fail to secure the equivalent of a median household income through personally-owned businesses. In 1992, about 3.3% of US households received at least a median household income from personally-owned businesses. By 2013, just over half as many households – 1.8% – received a median income from such businesses. The proportion of high-earning personal businesses has fallen even more sharply, from 1.3% in 1992 to 0.4% in 2013.

On one hand, these reduced earnings could be the byproduct of random variation. For example, business earnings took a bit of a dip in 1995 and 2004, but then recovered. This could be a product of sample variability or a natural minor fluctuation in small business profitability. However, that seems less likely when we look at the wider distribution of personal business earnings.

Distribution of Business Earnings

The figure below shows the distribution of business earnings from 1992 to 2013. It describes the 25th, 50th, 75th, 90th, and 95th percentile earnings from proprietary businesses.


This figure shows a long-term decline in proprietary business earnings, which seems to have begun after the 2001 recession. In 2001, the median household business earned $26,664. This figure fell steadily through 2013, when the median stood at $16,400 – a fall of about 38%.

A similar decline occurred among the higher ranks of the business income scale. From 2001 to 2013, 75th percentile income fell from $79,200 to $40,000. Ninetieth percentile income fell from $217,800 to $87,000. Ninety-fifth percentile income fell from $370,920 to $156,600. These are staggering losses.
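These declines are simple percentage changes; for instance, the median's fall from $26,664 to $16,400 works out to about -38%. A minimal check, using the figures above:

```python
def pct_change(old, new):
    """Percentage change from `old` to `new`."""
    return (new - old) / old * 100

# Figures from the text, 2001 -> 2013:
print(f"Median:          {pct_change(26_664, 16_400):.0f}%")
print(f"75th percentile: {pct_change(79_200, 40_000):.0f}%")
print(f"90th percentile: {pct_change(217_800, 87_000):.0f}%")
```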

What Does It Mean?

If these figures accurately reflect changes in personal business earnings, they suggest that such earnings are falling quickly. Why might this be happening? Small businesses may face mounting pressures from many quarters. Retailers may have increasing difficulty competing with big box stores and online retailers. Small manufacturing outfits may have trouble competing with foreign producers. Many small businesses may be losing ground to automated substitutes (e.g., TurboTax and LegalZoom are squeezing small accounting and law practices).

Whatever the cause, it seems quite clear that small businesses (at least unincorporated ones) are doing badly in the US.

The Long Consumption Boom, 1980 – 2014

US households’ finances have been deteriorating for decades. Rising expenditures and consumption are part of the puzzle.

Any attempt to explain US households’ financial struggles must engage the issue of rising spending. While income stagnation is almost certainly part of what is causing households’ money problems, it is only a partial explanation at best. “Stagnation” means not growing quickly – it does not imply that household incomes have necessarily been shrinking (although they often have done so over the past several years). Even if incomes are stagnating, people should be able to maintain their savings by restraining their spending.

The problem is that households generally have not tightened their belts when faced with earnings difficulties – at least not until the Great Recession. The first figure below depicts changes in average personal income and expenditures for the United States from 1921 until 2014. All values are denominated in inflation-adjusted 2014 dollars.

At first glance, the graph suggests that average incomes have generally outpaced average expenditures. However, a closer look reveals that the space between disposable incomes and expenditures has been shrinking. In other words, the average American was spending an increasing share of their take-home pay.

This space can be seen more clearly in the figure below, which depicts the ratio of mean per capita total expenditures to disposable incomes. As noted in the previous chapter, the typical household saves about 10% of its take-home pay, but spending grew steadily relative to income, and households were putting aside pennies on the dollar right before the Great Recession. Household savings rebounded after the Recession, but only to rates that prevailed in the early 2000s, not the 1970s.
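The savings rate implied by this ratio is just one minus expenditures over disposable income. A minimal sketch (the dollar figures are hypothetical, for illustration only):

```python
def savings_rate(expenditures, disposable_income):
    """Share of disposable (post-tax) income that is not spent."""
    return 1 - expenditures / disposable_income

# Hypothetical household: $36,000 spent out of $40,000 of take-home pay
print(f"{savings_rate(36_000, 40_000):.0%}")  # a 10% savings rate
```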

These shifts of 5 to 10 percentage points in household savings rates translated into substantial differences in the amount of money people were putting aside from year to year. In the late 1950s, the average household put aside about $1,750 (at 2014 prices) yearly. With the passage of time, households found themselves able to put aside more money, and by the early 1970s the typical household was putting aside over $3,000 (again, at 2014 prices). However, per capita savings fell steadily over the ensuing decades. Even though the average person earned far more money in 2005 than in 1970, Americans were typically putting aside far less from year to year. Savings did rebound after the 2008 crisis, but people arguably need to put aside much more money today – retirement, health care, a college education, and many other living costs are much higher than they were 40 years ago.

More spending is undoubtedly part of what is causing US households’ money problems. One might argue that, in an age of Walmart, Costco, and cheap Chinese imports, it has never been so easy to save money. Yet people are not saving money. Any endeavor to explain US households’ financial insecurity must engage over-spending.

Download the R Markdown and Data Files Here

Who Pays the Federal Government's Bills?

Who pays the federal government’s bills? Not corporations.

In conservative circles, one often hears about the ways in which high taxes are unfairly strangling the rich and businesses, while everyone else enjoys a free ride on their tab. They often argue that the Obama administration has implemented some egregiously expropriatory taxes on them. In fact, the government draws less tax (in proportion to the size of the overall economy) from corporations and from income taxes (the primary tax on affluent people’s incomes) than it did over most of the postwar era.

The stacked area plot below shows how the composition and overall level of federal taxes have changed since the mid-1930s. It measures the ratio of government receipts to GDP, which approximates the size of government taxes relative to the overall size of national economic output (a rough proxy for the overall size of the economy). Data come from the US Office of Management and Budget.1



After World War II, the government increased taxes dramatically, with government revenues rising from around 5% of GDP in the mid-1930s to just over 18% by 1952. Public sector revenues more than tripled.

In the early-1950s, the government drew about 42% of its revenue from individual income taxes, and about 32% from corporate income taxes. What I term payroll taxes are taxes related to “Social Insurance and Retirement” programs, like Social Security and Medicare taxes. Excise taxes are taxes on particular products, like alcohol, cigarettes, and gasoline. During the mid-20th century, the government also drew considerable money from trade tariffs.

Beginning in 1953, corporate income taxes decreased as a proportion of the government’s overall revenues. Often, this was accomplished by implementing and altering rules related to tax deductions. The first cut in corporate taxes is believed to be related to the Eisenhower administration’s passage of more generous capital depreciation rules. These types of corporate tax deductions have been implemented almost continuously throughout the postwar era.

With the passage of time, government revenues were increasingly funded through payroll taxes, while excise and corporate income taxes dwindled. By 2000, about 50% of government revenues were drawn from personal income taxes, 10% from corporate taxes, 32% from payroll taxes, and 4% from excise taxes. The burden of funding government operations shifted substantially away from corporate sources and towards household sources.

Under the George W. Bush administration, personal taxes were cut along with overall government revenues, and the lost revenue was covered by public borrowing. By 2004, income taxes fell to 43% of total revenues, while payroll taxes, which were not reduced, rose to 40% of total government revenues.

What can we glean from these findings? First, business taxes seem much more modest than they have been throughout most of the postwar era. The government has been reducing taxes on corporations for decades. Whatever arguments might be made about the high statutory tax rates levied on corporations, the federal government’s low overall take from this source suggests that tax deductions make the actual amount paid rather low. Insofar as personal taxes are concerned, top income tax rates have decreased considerably over the decades (I’ll post on that another day), while the take from payroll taxes has risen considerably. Payroll taxes fall harder on lower income earners, as they are levied only on the first $118,000 that someone earns.
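The regressivity of a capped payroll tax can be illustrated directly: once earnings pass the cap, the effective rate falls. A sketch using the $118,000 cap cited above and the 6.2% employee-side Social Security rate (the rate is my addition for illustration, not a figure from the text):

```python
CAP = 118_000   # taxable-earnings cap cited in the text
RATE = 0.062    # employee-side Social Security rate, assumed here for illustration

def effective_payroll_rate(earnings):
    """Payroll tax on capped earnings, as a share of total earnings."""
    return RATE * min(earnings, CAP) / earnings

for earnings in (50_000, 118_000, 500_000):
    print(f"${earnings:,}: {effective_payroll_rate(earnings):.2%}")
```

A $50,000 earner pays the full 6.2% on everything, while a $500,000 earner pays an effective rate under 1.5%.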

Overall, taxes on corporations are very low by modern historical standards. Personal income taxes are not particularly high by historical standards either.

  1. Office of Management and Budget (2015) “Table 2.3: Receipts by Source as Percentages of GDP: 1934-2020” Accessed June 5, 2015.


Slowdown in Educational Attainment

Society professes a dedication to promoting higher educational attainment. Are we doing a good job?

We always hear how education is a top priority in today’s economy.  We are told that society needs people to be better educated, and that progress in educating people is slow but steady.  Educational attainment may be rising, but is society doing a good job of ensuring that its young are educated?

The graph below, which is built on Census Bureau data and reproduces a slightly modified version of a Census Bureau graphic,1 describes how educational attainment has changed across society since 1940.



The graph suggests remarkable improvements in educational attainment over the past seventy years. In 1940, roughly three-quarters of the population dropped out before completing high school, about 5% attended some college and 4.6% completed college. By 2014, only 12% of society had less than a high school education, 27% had some college, and 32% completed college.

The figure suggests that US educational attainment has improved continuously over the past seventy or so years. The smooth transition from a less- to more-educated society seems to have continued unabated. It looks like society has been in a continuous march toward more education.

However, the appearance of a smooth transition to a more educated society is partly an artifact of the data. Overall educational attainment figures include people of different generations, who came of age during different periods. The inclusion of older generations obfuscates the ways in which society’s young have been educated at different rates. It also makes change look slower and more incremental.

To get a sense of the changing rates at which the young are being educated, it makes sense to focus on educational attainment among people of an age at which college completion is likely. To do this, the Census figures below look at attainment among those aged 25 to 34.

The figure below shows changing educational enrollment among Americans in this age group. The data source is the same:



This graph provides a different picture from the image of continuous improvement imparted by the first figure. The graph suggests that the pace of increasing educational attainment was much faster from the 1940s through the late-1970s, but slowed afterwards.

For example, high school drop outs fell from about 63% of young adults in 1940 to 15% in 1980. From 1980 to 2014, this proportion fell only to 10%, a marginal improvement. On one hand, society might be forgiven for not being able to eradicate the phenomenon of dropping out. That final 10% to 15% of drop outs might be a particularly tough group to marshal towards high school completion. On the other hand, society has not made a concerted push for universal high school completion, much in the way that it stamped out illiteracy. It is reluctant to make bigger investments in education, and it is much more reluctant to expand social assistance to those who drop out due to economic pressures. It is hard to tell whether our failure to ensure universal completion is a matter of the problem being too difficult or society not caring enough to do the needed work. In any case, the pace at which minimum educational attainment improved has slowed considerably over the past thirty to forty years.

The proportion of young adults whose education ended with high school completion is roughly where it was in 1940. In 1940, this reflected too many people not having gone far enough in school. By 2014, it reflected more people completing high school and moving on to at least some college.

The pace at which college attainment rose accelerated between 1940 and 1977, and then stalled until the mid-1990s. From then to today, the proportion of young adults with college degrees has grown steadily. Still, the proportion of college graduates grew faster from 1940 to 1977 (3.8% average annual growth) than from 1994 to today (2.1% average annual growth). Some-college attainment also grew faster before 1977 (a 2.9% average annual growth rate) than after 1991 (a 1.4% average annual growth rate).
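The "average annual growth" figures here are compound rates; given an attainment share at two dates, the implied rate can be recovered with the standard compound-growth formula (the example shares below are hypothetical, not the Census figures):

```python
def avg_annual_growth(start_share, end_share, years):
    """Compound average annual growth rate between two attainment shares."""
    return (end_share / start_share) ** (1 / years) - 1

# Hypothetical: a college-completion share rising from 5% to 20% over 37 years
print(f"{avg_annual_growth(0.05, 0.20, 37):.1%}")
```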

What can we glean from these figures? Educational attainment rose much more quickly during the mid-20th century than since the mid- to late-1970s. In part, this is probably because the easier work has been done. Conceivably, it is easier to promote high school and college completion when fewer people do it. As rates rise, schools and education policy-makers have to find ways to educate more obstinate cases.

However, I’m not so sure that we should give the past several decades a total pass. Society has shown that it can stamp out obstinate problems if it is sufficiently motivated. We seem to lack that kind of motivation. It is harder to convince society to invest more in education. More people oppose extending economic aid to poor people to help them complete college. We are much less concerned with making college affordable, and more concerned with ensuring that college students don’t get “free rides.”  We might say that we are committed to educating Americans, but this self-image may be at odds with our revealed true preferences.

  1. US Census Bureau (2015) “Table A-1. Years of School Completed by People 25 Years and Over, by Age and Sex: Selected Years 1940 to 2014” Data table downloaded June 2015 from

Download the raw data and Markdown file

Immigration Boom: Back to 19th Century Levels

Immigration has been a hot-button issue in both the United States and much of Europe. Has immigration risen substantially? We examine the issue below.

How Big Is the Immigrant Population?

Compared to previous decades, the immigrant population is large.  The figure below charts out the ratio of migrants to the overall population since 1850.  It uses Census data compiled by the Migration Policy Institute.


Throughout much of the 19th century, America’s borders were largely open to immigration. The country began to restrict immigration at the beginning of the 20th century, and its immigrant stock fell as immigrants died and were not replaced by new arrivals. By the 1970s, immigration reached a low point, after which the country progressively reopened its borders. By the 2010s, the country’s immigrant stock was roughly where it was during the 19th century.

The Changing Composition of Immigration

Not only has the number of immigrants risen, but the composition of the immigrant population has also changed. The Migration Policy Institute’s Jie Zong and Jeanne Batalova note:

In 2014, Mexican immigrants accounted for approximately 28 percent of the 42.4 million foreign born in the United States, making them by far the largest immigrant group in the country. India, closely trailed by China (including Hong Kong but not Taiwan), and the Philippines were the next largest countries of origin, accounting for about 5 percent each. El Salvador, Vietnam, Cuba, and Korea (3 percent each), as well as the Dominican Republic and Guatemala (2 percent each), rounded out the top ten. Together, immigrants from these ten countries represented close to 60 percent of the U.S. immigrant population in 2014.

The predominance of Latin American and Asian immigration in the late 20th and early 21st centuries starkly contrasts with the trend seen in 1960 when immigrants largely originated from Europe. Italian-born immigrants made up 13 percent of the foreign born in 1960, followed by those born in Germany and Canada (about 10 percent each). In the 1960s no single country accounted for more than 15 percent of the total immigrant population.

So, not only have the raw numbers risen, but their composition has changed. In contrast to fifty years ago, immigrants come mainly from non-white or Hispanic countries, and today’s migrant population has one large group from a common origin (Mexico).

Anxiety over Immigration

A rising tide of non-white and Hispanic immigrants, and a perceived large influx from a particular community, evokes anxiety among those who fear or harbor animosity towards these ethnic groups. Immigrants are often blamed for damaging the economic fortunes of native-born Americans and for pushing up crime. The data suggest that there is little basis for arguments linking immigration to crime – immigrants are more law-abiding than natives, and crime has generally been falling during this long growth in the immigrant stock. The argument linking immigration to the native-born population’s economic problems is far more complex, but there is little reason to believe that people would be better off economically if immigration were cut (I will save that for another entry).

Still, the high level of salience attached to immigration in our policy debates reflects the fact that it is a real and major trend. America is certainly becoming a nation of immigrants again.

Entrepreneurial Business Formation over Time

Entrepreneurs come to own their businesses in a variety of ways. Most small businesses are startups, in which the entrepreneur starts the business from scratch. Other businesses are bought. Some are inherited or received as gifts.

How do entrepreneurs come to own their businesses? The figure below, which is built from Survey of Consumer Finances data, summarizes how business owners report having come to own their enterprises:


Over time, businesses acquired through purchase or gift have declined as a proportion of all businesses. In 1989, about 25% of entrepreneurial enterprises had been acquired through purchase, and another 9% or so had been received as gifts or inheritances. About 64% had been started by the entrepreneur operating the business.

The proportions appear to have been changing slowly but steadily over the past several years. By 2013, the proportion of purchased businesses had fallen by half, to about 13%. Gifted businesses stood at about 5%, roughly the range in which they have stood since the early 1990s. In contrast, over three-quarters (76%) of entrepreneurial enterprises had been started by their owners.

The striking decline in purchased businesses suggests that the secondary market for owned businesses is shrinking. Perhaps today’s small businesses are much more likely to close up shop than be sold off. The marketplace may be more competitive, and businesses might not be as profitable and thus valuable on secondary markets, so more entrepreneurs may simply opt to close up rather than sell their businesses to someone else.

The small proportion of businesses acquired as gifts, including inheritances, is striking. Small enterprises are rarely a vehicle for maintaining familial economic dynasties – most businesses live and die with the entrepreneur who started them. Of course, a successful small business may put an entrepreneur in a position to set up future generations, but these cases seem to be more the exception than the rule. Only about 5% of entrepreneurial enterprises – run by less than 1% of society – were neither built nor bought by their operators.

These figures suggest that most business people live off of enterprises that they created and nurtured. Inherited businesses are not very commonplace, and the secondary market for businesses seems to be drying up over the long-term.


Home Prices and Income

Home prices are high, but how much more expensive than in the past?  Are all homes more expensive, or just homes for rich people?

Home Ownership Rates Roughly Similar

The table below describes changes in home ownership rates across the income scale. Although ownership rates shifted over the period, these differences are generally insignificant at the bottom of the income scale. Below the 90th income percentile, home ownership rates in 1989 and 2013 are statistically indistinguishable. At the top of the income scale, ownership rates seem to have shifted slightly, but these changes are at the border of statistical significance.

Income Class % Own Homes (1989) % Own Homes (2013)
Bottom 20% 29% 29%
Second-Lowest 20% 49% 45%
Middle 20% 56% 57%
Second-Highest 20% 73% 78%
80% – 90% 82% 86%
90% – 95% 92% 90%
Top 5% 92% 95%

Any differences in home values across the income scale are likely not the product of changes in home ownership rates.

Prices Have Risen Faster at the Lower and Middle End of the Market

The table below compares the median home value among owned homes in each income category. All values are in 2013 inflation-adjusted dollars.

Income Class Median Value of Owned Home (1989) Median Value of Owned Home (2013) Change 1989 – 2013
Bottom 20% $33,200 $100,000 +201%
Second-Lowest 20% $51,200 $125,000 +144%
Middle 20% $63,000 $133,000 +111%
Second-Highest 20% $75,000 $175,000 +133%
80% – 90% $112,800 $250,000 +121%
90% – 95% $167,600 $335,800 +100%
Top 5% $224,000 $646,000 +188%

Home prices rose fastest for those at the bottom of the income scale. On one hand, this means that poorer home owners got the proportionally greatest returns, relative to those higher on the scale. On the other hand, this means that poorer young people have to bear proportionally higher costs to buy a home. Overall, home values doubled or tripled between 1989 and 2013. Of course, incomes did not rise commensurately.

The table below shows how the ratio of home values to income changed during this period. In 1989, most households’ homes cost the equivalent of one year’s salary. Near the bottom end of the income scale, owned homes were two to three times annual salaries. By 2013, home values were double to triple annual salaries, and even higher at the bottom end of the income scale.

Income Class Median Income (1989) Median Home Value:Income (1989) Median Income (2013) Median Home Value:Income (2013)
Bottom 20% $11,313 2.9 $14,203 7.0
Second-Lowest 20% $26,398 1.9 $28,407 4.4
Middle 20% $47,139 1.3 $45,654 2.9
Second-Highest 20% $74,290 1.0 $76,090 2.9
80% – 90% $111,248 1.0 $121,945 2.1
90% – 95% $158,764 1.1 $183,021 1.8
Top 5% $285,473 0.8 $361,579 1.8
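Each ratio in this table is simply the median home value from the previous table divided by the median income. For the bottom quintile, using the figures from the tables above:

```python
def value_to_income(home_value, income):
    """Ratio of median home value to median household income."""
    return home_value / income

# Bottom 20% of the income scale:
print(round(value_to_income(33_200, 11_313), 1))   # 1989
print(round(value_to_income(100_000, 14_203), 1))  # 2013
```

This reproduces the 2.9 and 7.0 entries in the table's first row.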

Homes have certainly gotten more expensive relative to incomes. In relation to incomes, housing costs have roughly doubled, and in some cases tripled. The rise in home values relative to incomes seems to have been greatest in the lower and middle reaches of the income scale.