

December Payrolls Huge Miss Again, Just 199K Jobs Added, As Unemployment Rate Tumbles


With everyone bulled up on the December jobs print which was expected to more than double the disappointing November print of 210K to 447K with a whisper of more than 500K, moments ago the BLS reported that in December the US job market deteriorated again, as only 199K jobs were added, a huge miss to expectations, and the lowest number since December 2020.

This was the second consecutive month of big misses in the Establishment Survey relative to expectations:

As expected, the November payrolls data was revised higher, but not nearly as much as some had expected: nonfarm payroll employment for October was revised up by 102,000, from +546,000 to +648,000, and the change for November was revised up by 39,000, from +210,000 to +249,000. With these revisions, employment in October and November combined is 141,000 higher than previously reported. Still, this does not explain why the current month continues to print disappointingly lower than expected.

However, in a carbon copy of last month's schism between the Household and the Establishment surveys, in December the former once again showed solid gains: the number of employed workers rose by a whopping 651K to 155.975MM and the number of unemployed slid by almost half a million, from 6.802MM to 6.319MM. As a result, the unemployment rate once again slumped sharply, dropping to just 3.9% from 4.2%, below the consensus estimate of 4.1%, even as Black unemployment rose notably.
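The household survey arithmetic behind those prints is easy to verify. Here is a minimal back-of-the-envelope check in Python, assuming the civilian labor force is simply the sum of the employed and unemployed levels quoted above (the official BLS rate is computed from unrounded levels, so this is approximate):

```python
# Back-of-the-envelope check of the household-survey unemployment rate,
# using the levels quoted above (millions of workers). Figures are rounded,
# so the result is approximate.
employed_dec = 155.975                 # employed, December
unemployed_dec = 6.319                 # unemployed, December
unemployed_nov = 6.802                 # unemployed, November
employed_nov = employed_dec - 0.651    # employed rose by 651K on the month

def u_rate(unemployed, employed):
    """Unemployment rate = unemployed / (employed + unemployed)."""
    return unemployed / (employed + unemployed)

print(f"November: {u_rate(unemployed_nov, employed_nov):.1%}")  # ~4.2%
print(f"December: {u_rate(unemployed_dec, employed_dec):.1%}")  # ~3.9%
```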

Among the major worker groups, the unemployment rates for adult men (3.6 percent), adult women (3.6 percent), and Whites (3.2 percent) declined in December. The jobless rates for teenagers (10.9 percent), Blacks (7.1 percent), Asians (3.8 percent), and Hispanics (4.9 percent) showed little or no change over the month.

As Bloomberg chief economist Carl Riccadonna writes, the biggest news from this report, without doubt, is the fact that the labor market has now arrived at (and surpassed) the Federal Reserve’s estimate of full employment (which the Fed pegs at 4.0%).

“As we are already beyond that level (3.9% reported), this will compel any lingering fence sitters on the FOMC to the view that the threshold for interest rate liftoff has been met -- thereby tilting policy makers’ inclination toward March vs. June liftoff.”

Bottom line: just like one month ago, we got a very weak Establishment survey (which traditionally has been far more reliable) and a strong Household survey, i.e., the plunge in the unemployment rate, which we expect Biden will be touting when he addresses the jobs number at 10:45am ET.

The labor force participation rate was unchanged at 61.9 percent in December but remains 1.5 percentage points lower than in February 2020. The employment-population ratio increased by 0.2 percentage point to 59.5 percent in December but is 1.7 percentage points below its February 2020 level. Over the year, these measures have increased by 0.4 percentage point and 2.1 percentage points, respectively.

Looking at wage growth, average hourly earnings rose 4.7% Y/Y (up 0.6% on the month, above the expected 0.4%), which, while down from the upwardly revised 5.1% Y/Y in November, was a whopping 0.5% above the 4.2% expected. And while the headline print was weak, the wage growth is clearly good news for workers, and another sign of the tightening labor market:

The average workweek for all employees on private nonfarm payrolls was unchanged at 34.7 hours in December. In manufacturing, the average workweek edged down by 0.1 hour to 40.3 hours, and overtime edged down by 0.1 hour to 3.2 hours. The average workweek for production and nonsupervisory employees on private nonfarm payrolls edged up by 0.1 hour to 34.2 hours.

Among the unemployed, the number of permanent job losers, at 1.7 million in December, declined by 202,000 over the month and is down by 1.8 million over the year. The number of persons on temporary layoff was little changed at 812,000 in December but is down by 2.3 million over the year. The number of permanent job losers in December is 408,000 higher than in February 2020, while the number on temporary layoff has essentially returned to its February 2020 level.

Some more details from the report: the number of persons employed part time for economic reasons, at 3.9 million in December, decreased by 337,000 over the month. The over-the-year decline of 2.2 million brings this measure to 461,000 below its February 2020 level. These individuals, who would have preferred full-time employment, were working part time because their hours had been reduced or they were unable to find full-time jobs.

The number of persons not in the labor force who currently want a job was little changed at 5.7 million in December. This measure decreased by 1.6 million over the year but is 717,000 higher than in February 2020. These individuals were not counted as unemployed because they were not actively looking for work during the 4 weeks preceding the survey or were unavailable to take a job.

The number of workers unable to work due to bad weather was 74K, below the historical average for December of 137K.

Among those not in the labor force who wanted a job, the number of persons marginally attached to the labor force was essentially unchanged at 1.6 million in December. These individuals wanted and were available for work and had looked for a job sometime in the prior 12 months but had not looked for work in the 4 weeks preceding the survey. The number of discouraged workers, a subset of the marginally attached who believed that no jobs were available for them, was also essentially unchanged over the month, at 463,000.

Curiously, in December, 3.1 million persons reported that they had been unable to work because their employer closed or lost business due to the pandemic--that is, they did not work at all or worked fewer hours at some point in the 4 weeks preceding the survey due to the pandemic. This measure was down from the level of 3.6 million in November. Among those who reported in December that they were unable to work because of pandemic-related closures or lost business, 15.9 percent received at least some pay from their employer for the hours not worked, little changed from the prior month.

Breaking down job gains by job category:

  • Employment in leisure and hospitality continued to trend up in December (+53,000). Leisure and hospitality has added 2.6 million jobs in 2021, but employment in the industry is down by 1.2 million, or 7.2 percent, since February 2020. Employment in food services and drinking places rose by 43,000 in December but is down by 653,000 since February 2020.
  • Employment in professional and business services continued its upward trend in December (+43,000). Over the month, job gains occurred in computer systems design and related services (+10,000), in architectural and engineering services (+9,000), and in scientific research and development services (+6,000). Employment in professional and business services overall is slightly below (-35,000) its level in February 2020.
  • Manufacturing added 26,000 jobs in December, primarily in durable goods industries. A job gain in machinery (+8,000) reflected the return of workers from a strike. Manufacturing employment is down by 219,000 since February 2020.
  • Construction employment rose by 22,000 in December, following monthly gains averaging 38,000 over the prior 3 months. In December, job gains occurred in nonresidential specialty trade contractors (+13,000) and in heavy and civil engineering construction (+10,000). Construction employment is 88,000 below its February 2020 level.
  • Employment in transportation and warehousing increased by 19,000 in December. Job gains occurred in support activities for transportation (+7,000), in air transportation (+6,000), and in warehousing and storage (+5,000). Employment in couriers and messengers was essentially unchanged. Since February 2020, employment in transportation and warehousing is up by 218,000, reflecting job growth in couriers and messengers (+202,000) and in warehousing and storage (+181,000).
  • Employment in wholesale trade increased by 14,000 in December but is 129,000 lower than in February 2020.
  • Mining employment rose by 7,000 in December. Employment in the industry is down by 81,000 from a peak in January 2019.

So what's going on? Well, blame Covid of course (which no economist could factor into their forecasts apparently): addressing the second consecutive big miss, Bloomberg Intelligence’s chief economist, Carl Riccadonna, said that in thinking about the near-term labor trend, it’s important to consider that through the December employment survey period, the Covid case count was up 50% relative to the relevant November period. In early January, it is already up 440% relative to December, so the Covid drag will be an order of magnitude larger in the January data and could easily push net payrolls into negative territory for the next few months.

CIBC economist Katherine Judge was also quick to latch onto the Omicron excuse: “The leisure and hospitality sector was the single largest contributor to the job gains (+53K). However, that won’t last in the months ahead given the spread of omicron. Indeed, we expect continued softness in employment, and particularly hours worked, over the next couple of months.”

In terms of what this report means for markets, SocGen's head of U.S. rates strategy said that the Federal Reserve will probably overlook a miss in headline employment figures to focus on the declining jobless rate and “eye-popping” wage increases: “This is an inflation story and the curve is responding by bear steepening." This view was echoed by Bloomberg editor Chris Anstey who said that "all in all, there’s nothing here to dissuade the Fed from considering a March interest-rate hike" because of course the Fed has to focus on anything that doesn't show the economy is slowing as it is about to hike.

And sure enough, the March "lift off" chorus is singing: “This is a green light for March. The U3 unemployment rate plunged 0.3ppt to 3.9%, 0.4ppt below the Fed’s Q4 2021 estimate and only 0.4ppt above the Fed’s estimate for year end 2022. Average hourly earnings are coming in firm as the labor force participation rate remains flat,” said Neil Dutta of Renaissance Macro.

The final takeaway from today's report comes from Michael Pierce, an economist at Capital Economics, who effectively said ignore the negative and just focus on the positive: “The key takeaway for the Fed is that, with few signs of a recovery in labor supply, the continued decline in the unemployment rate and surge in wage growth looks set to be sustained over 2022."

Tyler Durden Fri, 01/07/2022 - 08:34



Catastrophic Risk: Investing and Business Implications


    In the context of valuing companies, and sharing those valuations, I do get suggestions from readers on companies that I should value next. While I don't have the time or the bandwidth to value all of the suggested companies, a reader from Iceland, a couple of weeks ago, made a suggestion on a company to value that I found intriguing. He suggested Blue Lagoon, a well-regarded Icelandic spa with a history of profitability, that was finding its existence under threat as a result of volcanic activity in Southwest Iceland. In another story that made the rounds in recent weeks, 23andMe, a genetics testing company that offers its customers genetic and health information based upon a saliva sample, found itself facing the brink after a hacker claimed to have breached the site and accessed the genetic information of millions of its customers. Stepping back a bit, one claim that climate change advocates have made, not just about fossil fuel companies but about all businesses, is that investors are underestimating the effects that climate change will have on economic systems and on value. These are three very different stories, but what they share in common is a fear, imminent or expected, of a catastrophic event that may put a company's business at risk.

Deconstructing Risk

   While we may use statistical measures like volatility or correlation to measure risk in practice, risk is not a statistical abstraction. Its impact is not just financial, but emotional and physical, and it predates markets. The risks that our ancestors faced, in the early stages of humanity, were physical, coming from natural disasters and predators, and physical risks remained the dominant form of risk that humans were exposed to, almost until the Middle Ages. In fact, the separation of risk into physical and financial risk took form just a few hundred years ago, when trade between Europe and Asia required ships to survive storms, disease and pirates to make it to their destinations; shipowners, ensconced in London and Lisbon, bore the financial risk, but the sailors bore the physical risk. It is no coincidence that the insurance business, as we know it, traces its history back to those days as well.

    I have no particular insights to offer on physical risk, other than to note that while taking on physical risks has become a leisure activity for some, I have no desire to climb Mount Everest or jump out of an aircraft. Much of the risk that I think about is related to risks that businesses face, how that risk affects their decision-making and how much it affects their value. If you start enumerating every risk a business is exposed to, you will find yourself overwhelmed by that list, and it is for that reason that I categorize risk into the groupings that I described in an earlier post on risk. I want to focus in this post on the third distinction I drew on risk, where I grouped risk into discrete risk and continuous risk, with the latter affecting businesses all the time and the former showing up infrequently, but often having much larger impact. Another, albeit closely related, distinction is between incremental risk, i.e., risk that can change earnings, growth, and thus value, by material amounts, and catastrophic risk, which is risk that can put a company's survival at risk, or alter its trajectory dramatically.

    There are a multitude of factors that can give rise to catastrophic risk, and it is worth highlighting them, and examining the variations that you will observe across different catastrophic risks. Put simply, a volcanic eruption, a global pandemic, a hack of a company's database and the death of a key CEO are all catastrophic events, but they differ on three dimensions:

  1. Source: I started this post with a mention of a volcanic eruption in Iceland that put an Icelandic business at risk, and natural disasters can still be a major factor determining the success or failure of businesses. It is true that there are insurance products available to protect against some of these risks, at least in some parts of the world, and that may allow companies in Florida (California) to live through the risks from hurricanes (earthquakes), albeit at a cost. Human beings add to nature's catastrophes with wars and terrorism, wreaking havoc not just on human lives, but also on businesses that are in their crosshairs. As I noted in my post on country risk, it is difficult, and sometimes impossible, to build and preserve a business when you operate in a part of the world where violence surrounds you. In some cases, a change in regulatory or tax law can put the business model for a company, or many companies, at risk. I confess that the line between whether nature or man is to blame for some catastrophes is a gray one; to illustrate, consider the COVID crisis in 2020. Even if you believe you know the origins of COVID (a lab leak or a natural zoonotic spillover), it is undeniable that the choices made by governments and people exacerbated its consequences.
  2. Locus of Damage: Some catastrophes create limited damage, perhaps isolated to a single business, but others can create damage that extends across a sector, a geography or the entire economy. The reason that the volcano eruptions in Iceland are not creating market tremors is because the damage is likely to be isolated to the businesses, like Blue Lagoon, in the path of the lava, and more generally to Iceland, an astonishingly beautiful country, but one with a small economic footprint. An earthquake in California will affect a far bigger swath of companies, partly because the state is home to the fifth largest economy in the world, and the pandemic in 2020 caused an economic shutdown that had consequences across all businesses, and was catastrophic for the hospitality and travel businesses.
  3. Likelihood: There is a third dimension on which catastrophic risks can vary, and that is in terms of likelihood of occurrence. Most catastrophic risks are low-probability events, but those low probabilities can become high likelihood events, with the passage of time. Going back to the stories that I started this post with, Iceland has always had volcanos, as have other parts of the world, and until recently, the likelihood that those volcanos would become active was low. In a similar vein, pandemics have always been with us, with a history of wreaking havoc, but in the last few decades, with the advance of medical science, we assumed that they would stay contained. In both cases, the probabilities shifted dramatically, and with it, the expected consequences.

Business owners can try to insulate themselves from catastrophic risk, but as we will see in the next sections, those protections may not exist, and even if they do, they may not be complete. In fact, as the probabilities of catastrophic risk increase, it will become more and more difficult to protect yourself against the risk.

Dealing with catastrophic risk

    It is undeniable that catastrophic risk affects the values of businesses, and their market pricing, and it is worth examining how it plays out in each domain. I will start this section with what, at least for me, is familiar ground, and look at how to incorporate the presence of catastrophic risk when valuing businesses and markets. I will close the section by looking at the equally interesting question of how markets price catastrophic risk, and why pricing and value can diverge (again).

Catastrophic Risk and Intrinsic Value

    Much as we like to dress up intrinsic value with models and inputs, the truth is that intrinsic valuation at its core is built around a simple proposition: the value of an asset or business is the present value of the expected cash flows on it:
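In its generic discounted cash flow form (a sketch of the standard expression, with E(CF_t) the expected cash flow in year t, r the risk-adjusted discount rate and n the life of the asset), that proposition is:

\[
\text{Value of asset} \;=\; \sum_{t=1}^{n} \frac{E(CF_t)}{(1+r)^{t}}
\]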

That equation gives rise to what I term the "It Proposition", which is that for "it" to have value, "it" has to affect either the expected cashflows or the risk of an asset or business. This simplistic proposition has served me well when looking at everything from the value of intangibles, as you can see in this post that I had on Birkenstock, to the emptiness at the heart of the claim that ESG is good for value, in this post. Using that framework to analyze catastrophic risk, in all of its forms, its effects can show in almost every input into intrinsic value:


Looking at this picture, your first reaction might be confusion, since the practical question you will face when you value Blue Lagoon, in the face of a volcanic eruption, and 23andMe, after a data hack, is which of the different paths to incorporating catastrophic risks into value you should adopt. To address this, I created a flowchart that looks at catastrophic risk on two dimensions, with the first built around whether you can buy insurance or protection that insulates the company against its impact and the other around whether it is risk that is specific to a business or one that can spill over and affect many businesses.


As you can see from this flowchart, your adjustments to intrinsic value, to reflect catastrophic risk will vary, depending upon the risk in question, whether it is insurable and whether it will affect one/few companies or many/all companies. 

1.  Insurable Risk: Some catastrophic risks can be insured against, and even if firms choose not to avail themselves of that insurance, the presence of the insurance option can ease the intrinsic valuation process.
  • Intrinsic Value Effect: If the catastrophic risk is fully insurable, as is sometimes the case, your intrinsic valuation becomes simpler, since all you have to do is bring the insurance cost into your expenses, lowering income and cash flows, leave discount rates untouched, and let the valuation play out. Note that you can do this even if the company does not actually buy the insurance, but you will need to find out the cost of that foregone insurance and incorporate it yourself.
  • Pluses: Simplicity and specificity, because all this approach needs is a line item in the income statement (which will either exist already, if the company is buying insurance, or can be estimated). 
  • Minuses: You may not be able to insure against some risks, either because they are uncommon (and actuaries are unable to estimate probabilities well enough, to set premiums) or imminent (the likelihood of the event happening is so high, that the premiums become unaffordable). Thus, Blue Lagoon (the Icelandic spa that is threatened by a volcanic eruption) might have been able to buy insurance against volcanic eruption a few years ago, but will not be able to do so now, because the risk is imminent. Even when risks are insurable, there is a second potential problem. The insurance may pay off, in the event of the catastrophic event, but it may not offer complete protection. Thus, using Blue Lagoon again as an example, and assuming that the company had the foresight to buy insurance against volcanic eruptions a few years ago, all the insurance may do is rebuild the spa, but it will not compensate the company for lost revenues, as customers are scared away by the fear of  volcanic eruptions. In short, while there are exceptions, much of insurance insures assets rather than cash flow streams.
  • Applications: When valuing businesses in developed markets, we tend to assume that these businesses have insured themselves against most catastrophic risks and consequently ignore those risks in valuation. Thus, you see many small Florida-based resorts valued with no consideration given to the hurricanes that they will be exposed to, because you assume that they are fully insured. In the spirit of the “trust, but verify” proposition, you should probably check if that is true, and then follow up by examining how complete the insurance coverage is.
2. Uninsurable Risk, Going-concern, Company-specific: When a catastrophic risk is uninsurable, the follow up questions may lead us to decide that while the risk will do substantial damage, the injured firms will continue in existence. In addition, if the risk affects only one or a few firms, rather than wide swathes of the market, there are intrinsic value implications.
  • Intrinsic Value Effect: If the catastrophic risk is not insurable, but the business will survive its occurrence, even in a vastly diminished state, you should consider doing two going-concern valuations, one assuming the catastrophe does not occur and one assuming it does, and then attaching a probability to the catastrophic event occurring (a short numerical sketch follows this list):
    Expected Value = Value without Catastrophe × (1 – Probability of Catastrophe) + Value with Catastrophe × (Probability of Catastrophe)
    In these intrinsic valuations, much of the change created by the catastrophe will be in the cash flows, with little or no change to costs of capital, at least in companies where investors are well diversified.

  • Pluses: By separating the catastrophic risk scenario from the more benign outcomes, you make the problem more tractable, since trying to adjust expected cash flows and discount rates for widely divergent outcomes is difficult to do.
  • Minuses: Estimating the probability of the catastrophe may require specific skills that you do not have, but consulting those who do have those skills can help, drawing on meteorologists for hurricane prediction and on seismologists for earthquakes. In addition, working through the effect on value of the business, if the catastrophe occurs, will stretch your estimation skills, but what options do you have?
  • Applications: This approach comes into play for many different catastrophic risks that businesses face, including the loss of a key employee in a personal-service business, and I used it in my post on valuing key persons in businesses. You can also use it to assess the effect on value of the loss of a big contract for a small company, where that contract accounts for a significant portion of total revenues. It can also be used to value a company whose business model is built upon the presence or absence of a regulation or law, in which case a change in that regulation or law can change value.

3. Uninsurable Risk, Failure Risk, Company-specific: When a risk is uninsurable and its manifestation can cause a company to fail, it poses a challenge for intrinsic value, which is, at its core, designed to value going concerns. Attempts to increase the discount rate to bring in catastrophic risk, or to apply an arbitrary discount on value, almost never work.
  • Intrinsic Value Effect: If the catastrophic risk is not insurable, and the business will not survive if the risk unfolds, the approach parallels the previous one, with the difference being that the failure value of the business, i.e., what you will generate in cash flows if it fails, replaces the with-catastrophe going-concern value:
    Expected Value = Value without Catastrophe × (1 – Probability of Catastrophe) + Failure Value × (Probability of Catastrophe)
    The failure value will come from liquidating the assets, or what is left of them, after the catastrophe.
  • Pluses: As with the previous approach, separating the going concern from the failure values can help in the estimation process. Trying to estimate cash flows, growth rates and cost of capital for a company across both scenarios (going concern and failure) is difficult to do, and it is easy to double count risk or miscount it. It is fanciful to assume that you can leave the expected cash flows as is, and then adjust the cost of capital upwards to reflect the default risk, because discount rates are blunt instruments, designed more to capture going-concern risk than failure risk. 
  • Minuses: As in the last approach, you still have to estimate a probability that a catastrophe will occur, and, in addition, there can be challenges in estimating the value of the business if the company fails in the face of catastrophic risk.
  • Applications: This is the approach that I use to value highly levered, cyclical or commodity companies that can deliver solid operating and equity values in periods where they operate as going concerns, but face distress or bankruptcy in a severe recession. And for a business like the Blue Lagoon, it may be the only pathway left to estimate value, with the volcano active and erupting, and it may very well be true that the failure value is zero.
4 & 5. Uninsurable Risk, Going Concern or Failure, Market- or Sector-wide: If a risk can affect many or most firms, it does have a secondary impact on the returns investors expect to make, pushing up costs of capital.
  • Intrinsic Value Effect: The calculations for cash flows are identical to those done when the risks are company-specific, with cash flows estimated with and without the catastrophic risk, but since these risks are sector-wide or market-wide, there will also be an effect on discount rates. Investors will see the added risk either in relative risk (beta), if the risks affect an entire sector, or in equity risk premiums, if they are market-wide. Note that these higher discount rates apply in both scenarios.
  • Pluses: The risk that is being built into costs of equity is the risk that cannot be diversified away and there are pathways to estimating changes in relative risk or equity risk premiums. 
  • Minuses: The conventional approaches to estimating betas, where you run a regression of past stock returns against the market, and equity risk premiums, where you trust in historical risk premiums and history, will not work at delivering the adjustments that you need to make.
  • Applications: My argument for using implied equity risk premiums is that they are dynamic and forward-looking. Thus, during COVID, when the entire market was exposed to the economic effects of the pandemic, the implied ERP for the market jumped in the first six weeks of the pandemic, when the concerns about the after effects were greatest, and then subsided in the months after, as the fear waned:

    In a different vein, one reason that I compute betas by industry grouping, and update them every year, is in the hope that risks that cut across a sector show up as changes in the industry averages. In 2009, for instance, when banks were faced with significant regulatory changes brought about in response to the 2008 crisis, the average beta for banks jumped from 0.71 at the end of 2007 to 0.85 two years later.
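To make the scenario weighting in cases 2 and 3 above concrete, here is a minimal numerical sketch in Python, using purely hypothetical cash flows, probabilities and a liquidation value (none of these numbers describe Blue Lagoon, 23andMe or any other company discussed here):

```python
# Illustrative sketch of the probability-weighted valuations described in
# cases 2 and 3 above. All inputs are hypothetical placeholders.

def dcf_value(base_cash_flow, growth, cost_of_capital, years=10, terminal_growth=0.02):
    """Simple going-concern DCF: grow a base cash flow, discount it, add a terminal value."""
    value, cf = 0.0, base_cash_flow
    for t in range(1, years + 1):
        cf *= (1 + growth)
        value += cf / (1 + cost_of_capital) ** t
    terminal = cf * (1 + terminal_growth) / (cost_of_capital - terminal_growth)
    return value + terminal / (1 + cost_of_capital) ** years

p_catastrophe = 0.20   # assumed probability that the catastrophe occurs

# Case 1 (insurable): simply subtract the actual or foregone premium from cash flows.
# Case 2 (uninsurable, survives in diminished form): weight two going-concern values.
value_no_cat   = dcf_value(base_cash_flow=100, growth=0.04, cost_of_capital=0.08)
value_with_cat = dcf_value(base_cash_flow=40,  growth=0.01, cost_of_capital=0.08)
expected_value_case2 = value_no_cat * (1 - p_catastrophe) + value_with_cat * p_catastrophe

# Case 3 (uninsurable, fails if the catastrophe hits): a liquidation value replaces
# the with-catastrophe going-concern value.
failure_value = 150
expected_value_case3 = value_no_cat * (1 - p_catastrophe) + failure_value * p_catastrophe

# Cases 4 & 5 (sector- or market-wide): the cost_of_capital itself would also rise
# in both scenarios, through a higher beta or a higher equity risk premium.

print(round(expected_value_case2), round(expected_value_case3))
```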
Catastrophic Risk and Pricing
    The intrinsic value approach assumes that we, as business owners and investors, look at catastrophic risk rationally, and make our assessments based upon how it will play out in cash flows, growth and risk. In truth, it is worth remembering key insights from psychology on how we, as human beings, deal with threats (financial and physical) that we view as existential.
  • The first response is denial, an unwillingness to think about catastrophic risks. As someone who lives in a home close to one of California's big earthquake faults, and two blocks from the Pacific Ocean, I can attest to this response, and offer the defense that in its absence, I would wither away from anxiety and fear. 
  • The second is panic, when the catastrophic risk becomes imminent, where the response is to flee, leaving much of what you have behind. 
When looking at how the market prices in the expectation of a catastrophe occurring and its consequences, both these human emotions play out, as the overpricing of businesses that face catastrophic risk when it is low-probability and distant, and the underpricing of these same businesses when catastrophic risk looms large.

    To see this process at work, consider again how the market initially reacted to the COVID crisis in terms of repricing companies that were at the heart of the crisis. Between February 14, 2020 and March 23, 2020, when fear peaked, the sectors most exposed to the pandemic (hospitality, airlines) saw a decimation in their market prices, during that period:


With catastrophic risks that are company-specific, you see the same phenomenon play out. The market capitalizations of many young pharmaceutical companies have been wiped out by the failure of a blockbuster drug in trials. PG&E, the utility company that provides power to large portions of California, saw its stock price halved after wildfires swept through California and investors worried about the culpability of the company in starting them.
    The most fascinating twist on how markets deal with risks that are existential is their pricing of fossil fuel companies over the last two decades, as concerns about climate change have taken center stage, with fossil fuels becoming the arch villain. The expectation that many impact investors had, at least early in this game, was that relentless pressure from regulators and backlash from consumers and investors would reduce the demand for oil, reducing the profitability and expected lives of fossil fuel companies.  To examine whether markets reflect this view, I looked at the pricing of fossil fuel stocks in the aggregate, starting in 2000 and going through 2023:

In the graph to the left, I chart out the total market value for all fossil fuel companies, and note an unsurprising link to oil prices. In fact, the one surprise is that fossil fuel stocks did not see surges in market capitalization between 2011 and 2014, even as oil prices surged. While fossil fuel pricing multiples have gone up and down, I have computed the average on both in the 2000-2010 period and again in the 2011-2023 period. If the latter period is the one of enlightenment, at least on climate change, with warnings of climate change accompanied by trillions of dollars invested in combating it, it is striking how little impact it has had on how markets, and investors in the aggregate, view fossil fuel companies. In fact, there is evidence that the business pressure on fossil fuel companies has lessened over time, with fossil fuel stocks rebounding in the last three years, and fossil fuel companies increasing investments and acquisitions in the fossil fuel space.
    Impact investors would point to this as evidence of the market being in denial, and they may be right, but market participants may point back at impact investing, and argue that the markets may be reflecting an unpleasant reality which is that despite all of the talk of climate change being an existential problem, we are just as dependent on fossil fuels today, as we were a decade or two decades ago:

Don’t get me wrong! It is possible, perhaps even likely, that investors are underpricing climate change, not just in fossil fuel stocks, and that there is pain awaiting them down the road. It is also possible that, at least in this case, the market's assessment is that doomsday is not imminent and that humanity will survive climate change, as it has other existential crises in the past.
    
Mr. Market versus Mad Max Thunderdome
    The question posed about fossil fuel investors, and whether they are pricing in the risks of climate change, can be generalized to a whole host of other questions about investor behavior. Should buyers be paying hundreds of millions of dollars for a Manhattan office building, when all of New York may be underwater in a few decades? Lest I be accused of pointing fingers, what will happen to the value of my house, currently two blocks from the beach, given the prediction of rising oceans? The painful truth is that if doomsday events (nuclear war, a mega asteroid hitting the earth, the earth getting too hot for human existence) manifest, it is survival that becomes front and center, not how much money you have in your portfolio. Thus, ignoring Armageddon scenarios when valuing businesses and assets may be completely rational, and taking investors to task for not pricing assets correctly will do little to alter their trajectory! There is a lesson here for policy makers and advocates, which is that preaching that the planet is headed for the apocalypse, even if you believe it is true, will induce behavior that will make it more likely to happen, not less.
    On a different note, you probably know that I am deeply skeptical about sustainability, at least as preached from the Harvard Business School pulpit. It remains ill-defined, morphing into whatever its proponents want it to mean. The catastrophic risk discussion presents perhaps a version of sustainability that is defensible. To the extent that all businesses are exposed to catastrophic risks, some company-level and some having broader effects, there are actions that businesses can take to, if not protect themselves, at least cushion the impact of these risks. A personal-service business, headed by an aging key person, will be well served designing a succession plan for someone to step in when the key person leaves (by his or her choice or an act of God). No global company was ready for COVID in 2020, but some were able to adapt much faster than others because they were built to be adaptable. Embedded in this discussion are also the limits to sustainability, since the notion of sustaining a business at any cost is absurd. Building in adaptability and safeguards against catastrophic risk makes sense only if the costs of doing so are less than the potential benefits, a simple but powerful lesson that many sustainability advocates seem to ignore when they make grandiose prescriptions for what businesses should and should not do to avoid the apocalypse.



Redefining Poverty: Towards a Transpartisan Approach


 

A new report from the National Academies of Sciences, Engineering, and Medicine (NASEM), An Updated Measure of Poverty: (Re)Drawing the Line, has hit Washington with something of a splash. Its proposals deserve a warm welcome across the political spectrum. Unfortunately, they are not always getting it from the conservative side of the aisle.

The AEI’s Kevin Corinth sees the NASEM proposals as a path to adding billions of dollars to federal spending. Congressional testimony by economist Bruce Meyer takes NASEM to task for outright partisan bias. Yet in their more analytical writing, these and other conservative critics offer many of the same criticisms of the obsolete methods that constitute the current approach to measuring poverty. As I will explain below, many of their recommendations for improvements are in harmony with the NASEM report. Examples include the need for better treatment of healthcare costs, the inclusion of in-kind benefits in resource measures, and greater use of administrative data rather than surveys.

After some reading, I have come to think that the disconnect between the critics’ negative political reaction to the NASEM report and their accurate analysis of flaws in current poverty measures has less to do with the conceptual basis of the new proposals and more to do with the way they would be put to work. That comes more clearly into focus if we distinguish between what we might call the tracking and treatment functions, or macro and micro functions, of poverty measurement.

The tracking function has an analytic focus. It is a matter of assessing how many people are poor at a given time and tracing how their number varies in response to changes in policies and economic conditions. The treatment function, in contrast, has an administrative focus. It sets a poverty threshold that can be used to determine who is eligible for specific government programs and what their benefits will be.

There are parallels in the tracking and treatment methods that were developed during the Covid-19 pandemic. By early in 2020, it was clear to public health officials that something big was happening, but slow and expensive testing made it hard to track how and where the SARS-CoV-2 virus was spreading. Later, as tests became faster and more accurate, tracking improved. Wastewater testing made it possible to track the spread of the virus to whole communities even before cases began to show up in hospitals. As time went by, improved testing methods also led to better treatment decisions at the micro level. For example, faster and more accurate home antigen tests enabled effective use of treatments like Paxlovid, which works best if taken soon after symptoms develop.

Poverty measurement, like testing for viruses, also plays essential roles in both tracking and treatment. For maximum effectiveness, what we need is a poverty measure that can be used at both the macro and micro level. The measures now in use are highly flawed in both applications. Both the NASEM report itself and the works of its critics offer useful ideas about where we need to go. The following sections will deal first with the tracking function, then with the treatment function, and then with what needs to be done to devise a poverty measure suitable for both uses.

The tracking function of poverty measurement

As a tracking tool, the purpose of any poverty measure is to improve understanding. Each proposed measure represents the answer to a specific question. To understand poverty fully – what it is, how it has changed, who is poor, and why – we need to ask lots of questions. At this macro level, it is misguided to look for the one best measure of poverty. 

First some basics. All of the poverty measures discussed here consist of two elements, a threshold, or measure of needs, and a measure of resources available to meet those needs. The threshold is based on a basic bundle of goods and services considered essential for a minimum acceptable standard of living. The first step in deriving a threshold is to define the basic bundle and determine its cost. The basic bundle can be defined in absolute terms or relative to median standards of living. If absolute, the cost of the basic bundle can be adjusted from year to year to reflect inflation, and if relative, to reflect changes in the median. Once the cost of the basic bundle is established, poverty thresholds themselves may be adjusted to reflect the cost of essentials not explicitly listed in the basic bundle. Further adjustments in the thresholds may be developed to reflect household size and regional differences. 

Similarly, a poverty measure can include a narrower or broader definition of resources. A narrow definition might consider only a household’s regular inflows of cash. A broader definition might include the cash-equivalent value of in-kind benefits, benefits provided through the tax system, withdrawals from retirement savings, and other sources of purchasing power.

Finally, a poverty measure needs to specify the economic unit to which it applies. In some cases, that may be an individual. In other cases, it may be a family (a group related by kinship, marriage, adoption, or other legal ties) or a household (a group of people who live together and share resources regardless of kinship or legal relationships). Putting it all together, an individual, family, or household is counted as poor if their available resources are less than the applicable poverty threshold.

The current official poverty measure (OPM), which dates from the early 1960s, includes very simple versions of each of these components. It defines the basic bundle in absolute terms as three times the cost of a “thrifty food plan” determined by the U.S. Department of Agriculture. It then converts that to a set of thresholds for family units that vary by family size, with special adjustments for higher cost of living in Alaska and Hawaii. The OPM defines resources as before-tax regular cash income, including wages, salaries, interest, and retirement income; cash benefits such as Temporary Assistance for Needy Families and Supplemental Security Income; and a few other items. Importantly, the OPM does not include tax credits or the cash-equivalent value of in-kind benefits in its resource measure.

The parameters of the OPM were initially calibrated to produce a poverty rate for 1963 of approximately 20 percent. After that, annual inflation adjustments used the CPI-U, which to this day remains the most widely publicized price index. As nominal incomes rose due to economic growth and inflation, and as cash benefits increased, the share of the population living below the threshold fell. By the early 1970s, it had reached 12 percent. Since then, however, despite some cyclical ups and downs, it has changed little.
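A minimal sketch of the OPM mechanics just described, in Python, with placeholder numbers throughout (the food-plan cost, CPI index values and family-size scaling below are illustrative stand-ins, not the official USDA, BLS or Census figures):

```python
# Sketch of the official poverty measure (OPM) logic: the threshold is three
# times the cost of a thrifty food plan, varied by family size and carried
# forward with the CPI-U; resources are before-tax regular cash income only.
# All numbers are illustrative placeholders.

def opm_threshold(food_cost_1963, family_size, cpi_now, cpi_1963):
    base_1963 = 3 * food_cost_1963                        # the original multiplier
    size_adjusted = base_1963 * (family_size / 4) ** 0.7  # crude equivalence-scale stand-in
    return size_adjusted * (cpi_now / cpi_1963)           # inflation adjustment via CPI-U

def is_poor_under_opm(cash_income, threshold):
    # OPM resources: wages, salaries, interest, cash benefits, etc.;
    # no tax credits and no in-kind benefits such as SNAP.
    return cash_income < threshold

threshold = opm_threshold(food_cost_1963=1_000, family_size=4,
                          cpi_now=300.0, cpi_1963=30.0)
print(threshold, is_poor_under_opm(cash_income=24_000, threshold=threshold))
```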

Today, nearly everyone views the OPM as functionally obsolete. Some see it as overstating the poverty rate, in that its measure of resources ignores in-kind benefits like the Supplemental Nutrition Assistance Program (SNAP) and tax credits like the Earned Income Tax Credit (EITC). Others see it as understating poverty on the grounds that three times the cost of food is no longer enough to meet minimal needs for shelter, healthcare, childcare, transportation, and modern communication services. Almost no one sees the OPM as just right.

In a recent paper in the Journal of Political Economy, Richard V. Burkhauser, Kevin Corinth, James Elwell, and Jeff Larrimore provide an excellent overview of the overstatement perspective. The question they ask is what percentage of American households today lack the resources they need to reach the original three-times-food threshold. Simply put, was Lyndon Johnson’s “War on Poverty,” on its own terms, a success or failure?

To answer that question, they develop what they call an absolute full-income poverty measure (FPM). Their first step in creating the FPM was to include more expense categories in the basic bundle, but calibrate it to make the FPM poverty rate for 1963 equal to the OPM rate for that year. Next, they expanded the resource measure to reflect the growth of in-kind transfer programs and the effects of taxes and tax credits. They also adopt the household, rather than the family, as their basic unit of analysis. Burkhauser et al. estimate that adding the full value of the EITC and other tax credits to resources, along with all in-kind transfer programs, cuts the poverty rate in 2019 to just 4 percent, far below the official 10.6 percent. 

Going further, Burkhauser et al. raise the issue of the appropriate measure of the price level to be used in adjusting poverty thresholds over time. They note that many economists consider that the CPI-U overstates the rate of inflation, at least for the economy as a whole. Instead, they prefer the personal consumption expenditure (PCE) index, which the Fed uses as a guide to monetary policy. Replacing the CPI-U with the PCE reduces the FPM poverty rate for 2019 to just 1.6 percent. It is worth noting, however, that some observers maintain that prices in the basket of goods purchased by poor households tend to rise at a faster rate than the average for all households. In that case, an FPM adjusted for inflation using the PCE would not fully satisfy one of the criteria set by Burkhauser et al., namely, that “the poverty measure should reflect the share of people who lack a minimum level of absolute resources.” (For further discussion of this point, see, e.g., this report to the Office of Management and Budget.)
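Because small differences in measured inflation compound over more than half a century, the choice of deflator alone can move an absolute threshold, and therefore the poverty count, substantially. A toy illustration (the 3.0% and 2.6% average annual rates are assumptions chosen only to show the compounding, not the measured CPI-U or PCE averages since 1963):

```python
# How the choice of price index moves an absolute poverty threshold over time.
# The 1963 threshold is roughly the historical figure for a family of four;
# the average inflation rates are illustrative assumptions, not measured values.
base_threshold_1963 = 3_100
years = 2019 - 1963

cpi_u_avg = 0.030   # assumed average annual CPI-U inflation
pce_avg   = 0.026   # assumed average annual PCE inflation

threshold_cpi = base_threshold_1963 * (1 + cpi_u_avg) ** years
threshold_pce = base_threshold_1963 * (1 + pce_avg) ** years

print(f"CPI-U-adjusted threshold: {threshold_cpi:,.0f}")
print(f"PCE-adjusted threshold:   {threshold_pce:,.0f}")
# The lower PCE-based threshold classifies fewer households as poor, which is
# the direction of the drop from 4.0% to 1.6% reported by Burkhauser et al.
```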

Burkhauser et al. do not represent 1.6 percent as the “true” poverty rate. As they put it, although the FPM does point to “the near elimination of poverty based on standards from more than half a century ago,” they see that as “an important but insufficient indication of progress.” For a fuller understanding, measuring success or failure of Johnson’s War on Poverty is not enough. A poverty measure for today should give a better picture of the basic needs of today’s households and the resources available to them. 

The Supplemental Poverty Measure (SPM), which the Census Bureau has published since 2011, is the best known attempt to modernize the official measure. The SPM enlarges the OPM’s basic bundle of essential goods to include not only food, but also shelter, utilities, telephone, and internet. It takes a relative approach, setting the threshold as a percentage of median expenditures and updating it not just for inflation, but also for the growth in real median household income. On the resource side, the SPM adds many cash and in-kind benefits, although not as many as the FPM. It further adjusts measured resources by deducting some necessary expenses, such as childcare and out-of-pocket healthcare costs. Finally, the SPM moves away from the family as its unit of analysis toward a household concept.

Since the SPM adds items to both thresholds and resources, it could, depending on its exact calibration, have a value either higher or lower than the OPM. In practice, as shown in Figure 1, it has tracked a little higher than the OPM. (For comparison, Burkhauser et al. calculate a variant of their FPM that uses a relative rather than an absolute poverty threshold. The relative FPM, not shown in Figure 1, tracks slightly higher than the SPM in most years.)


That brings us back to our starting point, the NASEM report. Its centerpiece is a recommended revision of the SPM that it calls the principal poverty measure (PPM). The PPM directly addresses several acknowledged shortcomings of the SPM. Some of the most important recommendations include:

  • A further movement toward the household as the unit of analysis. To accomplish that, the PPM would place more emphasis on groups of people who live together and less on biological and legal relationships.

  • A change in the treatment of healthcare costs. The SPM treats out-of-pocket healthcare spending as a deduction from resources but does not treat healthcare as a basic need. The PPM adds the cost of basic health insurance to its basic package of needs, adds the value of subsidized insurance (e.g., Medicaid or employer-provided) to its list of resources, and (like the SPM) deducts any remaining out-of-pocket healthcare spending from total resources.

  • A change in the treatment of the costs of shelter that does a better job of distinguishing between the situation faced by renters and homeowners.

  • Inclusion of childcare as a basic need and subsidies to childcare as a resource.

Although Burkhauser et al. do not directly address the PPM, judging by their criticisms of the SPM, it seems likely that they would regard nearly all of these changes as improvements. Many of the changes recommended in the NASEM report make the PPM more similar to the FPM than is the SPM.

Since the NASEM report makes recommendations regarding methodology for the PPM but does not calculate values, the PPM is not shown in Figure 1. In principle, because it modifies the way that both needs and resources are handled, the PPM could, depending on its exact calibration, produce poverty rates either above or below the SPM.

Measurement for treatment

The OPM, FPM, SPM, and PPM are just a few of the many poverty measures that economists have proposed over the years. When we confine our attention to tracking poverty, each of them adds to our understanding. When we turn to treatment, however, things become more difficult. A big part of the reason is that none of the above measures is really suitable for making micro-level decisions regarding the eligibility of particular households for specific programs. 

For the OPM, that principle is laid out explicitly in U.S. Office of Management and Budget Statistical Policy Directive No. 14, first issued in 1969 and revised in 1978:

The poverty levels used by the Bureau of the Census were developed as rough statistical measures to record changes in the number of persons and families in poverty and their characteristics, over time. While they have relevance to a concept of poverty, these levels were not developed for administrative use in any specific program and nothing in this Directive should be construed as requiring that they should be applied for such a purpose.

Despite the directive, federal and state agencies do use the OPM, or measures derived from it, to determine eligibility for many programs, including Medicaid, SNAP, the Women, Infants, and Children nutrition program, Affordable Care Act premium subsidies, Medicare Part D low-income subsidies, Head Start, and the National School Lunch Program. The exact eligibility thresholds and the rules for calculating resources vary from program to program. The threshold at which eligibility ends or phase-out begins for a given household may be equal to the Census Bureau’s official poverty threshold, a fraction of that threshold, or a multiple of it. The resource measure may be regular cash income as defined by the OPM, or a modification based on various deductions and add-backs. Some programs include asset tests as well as income tests in their measures of resources. The exact rules for computing thresholds and resources vary not only from one program to another, but from state to state.

That brings us to what is, perhaps, the greatest source of alarm among conservative critics of the NASEM proposals. That is the concern that a new poverty measure such as the PPM would be used in a way that raised qualification thresholds while resource measures remained unchanged. It is easy to understand how doing so by administrative action could be seen as a backdoor way of greatly increasing welfare spending without proper legislative scrutiny. 

Kevin Corinth articulates this fear in a recent working paper for the American Enterprise Institute. He notes that while the resource measures that agencies should use in screening applications are usually enshrined in statute, the poverty thresholds are not. If the Office of Management and Budget were to put its official stamp of approval on a new poverty measure such as the SPM or PPM, Corinth maintains that agencies would fall in line and recalculate their thresholds based on the new measure without making any change in the way they measure resources. 

Corinth calculates that if the current OPM-based thresholds were replaced by new thresholds derived from the SPM, federal spending for SNAP and Medicaid alone would increase by $124 billion by 2033. No similar calculation can be made for the PPM, since it has not been finalized, but Corinth presumes that it, too, would greatly increase spending if it were used to recalculate thresholds while standards for resources remained unchanged.

Clearly, Corinth is onto a real problem. The whole conceptual basis of the SPM and PPM is to add relevant items in a balanced manner to both sides of the poverty equation, so that they more accurately reflect the balance between the cost of basic needs and the resources available to meet them. Changing one side of the equation while leaving the other untouched makes little sense.

Logically, then, what we need is an approach to poverty measurement that is both conceptually balanced and operationally suitable for use at both the macro and micro levels. Corinth himself acknowledges that at one point, when he notes that “changes could be made to both the SPM resource measure and SPM thresholds.” However, he does not follow up with specific recommendations for doing so. To fill the gap, I offer some ideas of my own in the next section.

Toward a balanced approach to micro-level poverty measurement

To transform the PPM from a tracking tool into one suitable for determining individual households’ eligibility for specific public assistance programs would require several steps.

1. Finalize a package of basic needs. For the PPM, those would include food, clothing, telephone, internet, housing needs based on fair market rents, basic health insurance, and childcare. The NASEM report recommends calibrating costs based on a percentage of median expenditures, but conceptually, they could instead be set in absolute terms, either based on costs in a given year or averaged over a fixed period leading up to the date of implementation of the new approach.

2. Convert the package of needs into thresholds. Thresholds would vary according to the size and composition of the household unit. They could also vary geographically, although there is room for debate on how fine the calibration should be. Thresholds would be updated to reflect changes in the cost of living as measured by changes in median expenditures or (in the absolute version) changes in price levels.

3. Finalize a list of resources. These would include cash income plus noncash benefits that households can use to meet food, clothing, and telecommunication needs; plus childcare subsidies, health insurance benefits and subsidies, rent subsidies, and (for homeowners) imputed rental income; minus tax payments net of refundable tax credits; minus work expenses, nonpremium out-of-pocket medical expenses, homeowner costs if applicable, and child support paid to another household. 

4. Centralize collection of data on resources. Determining eligibility of individual households for specific programs would require assembling data from many sources. It would be highly beneficial to centralize the reporting of total resources as much as possible, so that all resource data would be available from a central PPM database. The IRS could provide data on cash income other than public benefits (wages, salaries, interest, dividends, etc.) and payments made to households through refundable tax credits such as the EITC. The federal or state agencies that administer SNAP, Medicaid and other programs could provide household-by-household data directly to the PPM database. Employers could report the cash-equivalent value of employer-provided health benefits along with earnings and taxable benefits. 

5. Devise a uniform format for reporting eligible deductions from resources. Individual households applying for benefits from specific programs would be responsible for reporting applicable deductions from resources, such as work expenses, out-of-pocket medical expenses, homeowner costs and so on. A uniform format should be developed for reporting those deductions along with uniform standards for documenting them so that the information could be submitted to multiple benefit programs without undue administrative burden.

6. Use net total resources as the basis for all program eligibility. Eligibility decisions for every individual program would then rest on the same figure: net total resources, as determined by steps (4) and (5). A simple sketch of how such a calculation might work follows this list.

7. Harmonize program phaseouts. It would be possible to implement steps (1) through (6) without making major changes to the phase-in and phaseout rules for individual programs. However, once a household-by-household measure of net total resources became available, it would be highly desirable to use it to compute benefit phaseouts for all programs in a harmonized fashion. At present, for example, a household just above the poverty line might face a phaseout rate of 24 percent for SNAP and 21 percent for the EITC, giving it an effective marginal tax rate of 45 percent, not counting other programs or income and payroll taxes. As explained in this previous commentary, such high effective marginal tax rates impose severe work disincentives, especially on families that are just making the transition from poverty to self-sufficiency. Replacing the phaseouts for individual programs with a single harmonized “tax” on total resources, as computed by the PPM formula, could significantly mitigate those disincentives; the arithmetic is sketched below.
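
To make steps (3) through (6) concrete, here is a minimal sketch in Python of how a net-total-resources calculation might look. It is only an illustration of the bookkeeping, not the finalized PPM formula; every field name and figure below is a hypothetical assumption of my own.

```python
# A minimal sketch, not the official PPM formula: add the resource items and
# subtract the deductions listed in step (3), then compare the result to a
# hypothetical threshold as in step (6). All names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class HouseholdFinances:
    cash_income: float                    # wages, salaries, interest, dividends, etc.
    noncash_benefits: float               # benefits usable for food, clothing, telecom
    childcare_subsidies: float
    health_benefits_and_subsidies: float
    rent_subsidies: float
    imputed_rental_income: float          # homeowners only; zero for renters
    taxes_net_of_credits: float           # tax payments minus refundable credits (e.g., EITC)
    work_expenses: float
    oop_medical: float                    # nonpremium out-of-pocket medical expenses
    homeowner_costs: float
    child_support_paid: float             # paid to another household

def net_total_resources(h: HouseholdFinances) -> float:
    additions = (h.cash_income + h.noncash_benefits + h.childcare_subsidies
                 + h.health_benefits_and_subsidies + h.rent_subsidies
                 + h.imputed_rental_income)
    deductions = (h.taxes_net_of_credits + h.work_expenses + h.oop_medical
                  + h.homeowner_costs + h.child_support_paid)
    return additions - deductions

# Hypothetical household and threshold, purely for illustration.
household = HouseholdFinances(28_000, 3_600, 2_400, 6_000, 0, 0,
                              1_500, 1_200, 800, 0, 0)
threshold = 36_000.0
eligible = net_total_resources(household) < threshold  # step (6): one test for all programs
```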
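
The marginal-rate arithmetic behind step (7) can be illustrated the same way. The 24 percent and 21 percent phaseout rates are the ones cited above for SNAP and the EITC; the single 30 percent harmonized rate is an arbitrary placeholder, not a proposal.

```python
# Illustration of the marginal-rate arithmetic in step (7): when phaseouts
# stack, the effective marginal tax rate is (to a first approximation) the
# sum of the individual phaseout rates.
def effective_marginal_tax_rate(phaseout_rates: list[float]) -> float:
    """Share of each additional dollar of earnings lost to stacked phaseouts."""
    return sum(phaseout_rates)

stacked = effective_marginal_tax_rate([0.24, 0.21])   # SNAP + EITC
harmonized = effective_marginal_tax_rate([0.30])      # single PPM-based phaseout (placeholder)

print(f"Stacked program phaseouts:  {stacked:.0%} of each extra dollar earned")
print(f"Single harmonized phaseout: {harmonized:.0%} of each extra dollar earned")
```

Stacking the two program phaseouts reproduces the 45 percent effective marginal tax rate described above, while a single harmonized rate could be set deliberately, with work incentives in mind.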

Implementing all of these steps would clearly be a major administrative and legislative undertaking. However, the result would be a public assistance system that was ultimately simpler, less prone to error, and less administratively burdensome both for government agencies at all levels and for poor households.

Conclusion

In a recent commentary for the Foundation for Research on Equal Opportunity, Michael Tanner points to the potentially far-reaching significance of proposed revisions to the way poverty is measured. “Congress should use this opportunity to debate even bigger questions,” he writes, such as “what is poverty, and how should policymakers measure it?” Ultimately, he continues, “attempts to develop a statistical measure of poverty venture beyond science, reflecting value judgments and biases. Such measures cannot explain everything about how the poor really live, how easily they can improve their situation, or how policymakers can best help them.”

The “beyond science” caveat is worth keeping in mind for all discussions of poverty measurement. A case in point is the issue of whether to use an absolute or relative approach in defining needs. It is not that one is right and the other wrong. Rather, they reflect fundamentally different philosophies as to what poverty is. For example, as noted earlier, Burkhauser et al. compute both absolute and relative versions of their full-income poverty measure. The absolute version is the right one for answering the historical question they pose about the success or failure of the original War on Poverty, but for purposes of policy today, the choice is not so clear. Some might see an absolute measure, when used in a micro context, as producing too many false negatives, that is, failures to help those truly in need. Others might see the relative approach as producing too many false positives, spending hard-earned taxpayer funds on people who could get by on their own if they made the effort. The choice is more a matter of values than of science.

The choice of a price index for adjusting poverty measures over time also involves values as well as science. Should the index be one based on average consumption patterns for all households, such as the PCE or chained CPI-U, or should it be a special index based on the consumption patterns of low-income households? Should the index be descriptive, that is, based on observed consumption patterns for the group in question? Or should it be prescriptive, that is, based on a subjective estimate of the “basics needed to live and work in the modern economy,” as with the ALICE Essentials Index?

In closing, I would like to call attention to four additional reasons why conservative critics of existing poverty policy should welcome the proposed PPM, even in its unfinished state, as a major step in the right direction. 

The PPM is inherently less prone to error. Bruce Meyer is concerned that “the NAS[EM]-proposed changes to poverty measurement would produce a measure of poverty that does a worse job identifying the most disadvantaged, calling poor those who are better off and not including others suffering more deprivation.” In fact, many features of the PPM make it inherently less prone than either the OPM or the SPM to both false positives and false negatives. The most obvious is its move toward a full-income definition of resources. That avoids one of the most glaring flaws in the OPM, namely, the identification of families as poor who in fact receive sufficient resources in the form of in-kind transfers or tax credits. The PPM also addresses some of the flaws of the SPM that Meyer singles out in his testimony, most notably in the treatment of healthcare.  Furthermore, by placing greater emphasis on administrative data sources and less on surveys, the PPM would mitigate underreporting of income and benefits, which Burkhauser et al. identify as a key weakness of the SPM. (See NASEM Recommendations 6.2 and 6.3).

The PPM offers a pathway toward consistency and standardization in poverty policy. In his critique of the PPM, Tanner suggests that “Congress should decouple program eligibility from any single poverty measure, and adopt a broader definition of poverty that examines the totality of the circumstances that low-income people face and their potential to rise above them.” I see that kind of decoupling as exactly the wrong approach. Our existing welfare system is already a  clumsy accretion of mutually inconsistent means-tested programs – as many as 80 of them, by one count. It is massively wasteful and daunting to navigate. In large part that is precisely because each component represents a different view of the “totality of circumstances” of the poor as seen by different policymakers at different times. What we need is not decoupling, but standardization and consistency. The proposed system-wide redefinition of poverty offers a perfect opportunity to make real progress in that direction.

The PPM would be more family-friendly. One pro-family feature of the PPM is its recognition of childcare costs on both the needs and the resources sides of the poverty equation. In addition, by moving toward households (defined by resource-sharing) rather than families (defined by legal relationships), the PPM would mitigate the marriage penalties that are built into some of today’s OPM-based poverty programs.

Properly implemented, the PPM would be more work-friendly. As noted above, the benefit cliffs, disincentive deserts, and high effective marginal tax rates of existing OPM-based poverty programs create formidable work disincentives. Moving toward a harmonized phaseout system based on the PPM’s full-income approach to resources could greatly reduce those disincentives, especially for households just above the poverty line that are struggling to take the last steps toward self-sufficiency.

In short, it is wrong to view the proposed PPM as part of a progressive plot to raid the government budget for the benefit of the undeserving, as some conservative critics seem to have done. Rather, both conservatives and progressives should embrace the PPM as a promising step forward and direct their efforts toward making sure it is properly implemented. 

Based on a version originally published by Niskanen Center


 



Government

Glimpse Of Sanity: Dartmouth Returns Standardized Testing For Admission After Failed Experiment

In response to the virus pandemic and nationwide Black Lives Matter riots in the summer of 2020, some elite colleges and universities shredded testing requirements for admission. Several years later, test-optional admissions have yet to produce the gains in racial and class-based equity that many woke academic institutions had hoped for.

The failure of test-optional admission policies has forced Dartmouth College to reinstate standardized test scores for admission starting next year. This should never have been eliminated, as merit will always prevail. 

"Nearly four years later, having studied the role of testing in our admissions process as well as its value as a predictor of student success at Dartmouth, we are removing the extended pause and reactivating the standardized testing requirement for undergraduate admission, effective with the Class of 2029," Dartmouth wrote in a press release Monday morning. 

"For Dartmouth, the evidence supporting our reactivation of a required testing policy is clear. Our bottom line is simple: we believe a standardized testing requirement will improve—not detract from—our ability to bring the most promising and diverse students to our campus," the elite college said. 

Who would've thought that eliminating standardized tests for admission, because a fringe minority said they were instruments of racism and a biased system, was ever a good idea? 

Also, it doesn't take a rocket scientist to figure this out. More from Dartmouth, which commissioned the research: 

They also found that test scores represent an especially valuable tool to identify high-achieving applicants from low and middle-income backgrounds; who are first-generation college-bound; as well as students from urban and rural backgrounds.

All the colleges and universities that quickly adopted test-optional admissions in 2020 experienced a surge in applications. Perhaps the push for test-optional was under the guise of woke equality but was nothing more than protecting the bottom line for these institutions. 

A glimpse of sanity returns to woke schools: Admit qualified kids. Next up is corporate America and all tiers of the US government. 

Tyler Durden Mon, 02/05/2024 - 17:20


