Using AI to Improve Chronic Disease Outcomes


By Bob Matthews

This article reports the results of a study that follows a multi-year, pragmatic clinical trial in a real-world, community-based primary care setting. What started as a quality project evolved to include the development and deployment of Artificial Intelligence (AI) decision support to guide medication choices when treating hypertension (HTN). Results show that primary care physicians significantly improved HTN outcomes compared to the national average. All patients with a hypertension diagnosis were tracked across three years, including the COVID pandemic period. Of the 13,441 HTN patients, 94% had a blood pressure "at goal" (i.e., less than 140/90) as of their last clinician visit. In the last published study of US blood pressure control, conducted prior to the pandemic, that figure was 44%.

Because the use of AI in primary care is novel as of this writing, the concept of AI is often unfamiliar to many practicing clinicians and medical group leaders. This paper defines a threshold between simpler versions of clinician decision support and AI-level decision support. It also distinguishes between forms of AI, including machine learning and approaches that do not involve machine learning.

AI in medicine

AI is suited to a wide variety of use cases in healthcare (e.g., basic science research, genetics, epidemiology, pharmacological innovation, diagnosis and treatment). This project uses AI as clinical decision support.

Not all decision support qualifies as AI. To date, most computer-based decision support in primary care has been based upon basic computations and "if/then" logic. Electronic Health Records (EHRs) are programmed to remind practitioners that an A1c or colonoscopy is overdue, for example. An EHR may point out that a patient with hypertension has not yet had an ACE/ARB prescribed. In some EHRs, a person must create the reminder; in other instances, the computer system is programmed to track one or more variables and send a message about the result. Generally speaking, basic decision support is not dynamic.

AI begins when decision support is dynamic, in that its actions or recommendations are based upon multiple–potentially hundreds or more–variables, each with potentially many different values. A change in the value of even one variable can change the entire decision model and result. Thus, the relationship of the variables is dynamic: as one changes, the entire solution can change. For example, the AI solution for recommending precise HTN medications includes over 300 million permutations, and the Heart Failure with Reduced Ejection Fraction (HFrEF) solution includes over 850 million. This is far beyond the capabilities of "if/then" logic, in part because no one could map all those decision points.
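The threshold can be illustrated in code. Below is a minimal, hypothetical sketch (the variables, weights, and drug names are invented for illustration, not taken from any real decision model) contrasting a static "if/then" reminder with a dynamic, multi-variable recommendation in which changing a single value can flip the entire result:

```python
def static_reminder(last_a1c_days: int) -> str:
    # Static "if/then" decision support: one variable, one fixed threshold.
    return "A1c overdue" if last_a1c_days > 180 else "A1c current"

def dynamic_recommendation(patient: dict) -> str:
    # Dynamic decision support: every variable contributes to a score for
    # each option, so a change in any one value can change the whole answer.
    scores = {"drug_A": 0.0, "drug_B": 0.0}
    if patient["vasoconstriction"] == "high":
        scores["drug_A"] += 2.0
    if patient["heart_rate"] > 80:
        scores["drug_B"] += 2.0
    if patient["ckd_stage"] >= 4:
        scores["drug_A"] -= 3.0  # a single comorbidity flips the result
    return max(scores, key=scores.get)
```

Here, moving `ckd_stage` from 2 to 4 reverses the recommendation even though every other variable is unchanged; scale the same idea to hundreds of interacting variables and exhaustive "if/then" mapping becomes intractable.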


Machine learning vs. algorithm-based AI

Many believe that all AI involves machine learning, but that is not so. In business and industry there are many problems for which vast amounts of data are available. Data scientists write programs directing computers to process large numbers of data fields, each of which may have very large data sets, searching for patterns that reveal meanings that were not previously obvious. These could include "cause and effect" relationships, image recognition, predictive analytics, or creative efforts to identify elements of a solution opportunity.

Such an approach is called machine learning in that the computer programmer includes instructions to the application to learn how to improve its analysis through “experience.” The human operator may or may not have had a hypothesis at the start of the project.

All machine learning is AI, but not all AI is machine learning. In some use cases, either there isn't enough data available for machine learning and/or the data is so fraught with error, missing elements, and inconsistencies that machine learning cannot reliably work. Medicine has this problem, as evidenced by the vast amount of error in EHR data.

In these instances where machine learning is not yet an option, the alternative is to build sophisticated mathematical algorithms to gather and organize the data and to define parameters and other “cognitive” processes to perform specific analyses as required to improve the precision of solutions. The AI tool described in this paper–which is called the MedsEngine–is an algorithm-based AI application.

Managing chronic diseases

Primary care is organized around three aspects of care: (1) acute care, (2) wellness and preventive care and (3) the management of chronic diseases.

While all three of these are important, the timely diagnosis and effective management of chronic disease has a special bearing on the individual health status of patients and the total cost of care across the US health system. It is well documented that the downstream “costs” to patient mortality and morbidity from poorly controlled hypertension, diabetes, lipids, heart failure, etc. can be devastating to the patient and expensive to the system.  Eighty-five percent (85%) of all US health spending is for patients with chronic diseases1.

Worse, as the COVID pandemic revealed, minority, poor and other underserved patients have less well controlled chronic diseases2 which, in turn, results in greater instances of severe illness and death from viral and other co-morbid threats.

The most common chronic diseases (i.e., hypertension, cholesterol, diabetes, CKD, vascular diseases, asthma, heart failure, COPD, anxiety, depression, arthritis and osteoporosis) are most often managed by primary care providers who may be physicians, nurse practitioners or physicians’ assistants. The goals for treatment are evidence-based standards (EBS) for care. In addition to defining “controlled,” some EBS prescribe a very specific treatment pathway to control. Others have a generalized pathway with various options, and some provide suggestions to consider when selecting therapies.

Given the ubiquity of EHRs, it is now possible to calculate the percent of patients who achieve control for most diseases.

Unfortunately, it is not common practice in most medical groups to do so. There are national studies measuring the success of chronic disease care for some diagnoses. In general, the results are disheartening.

Hypertension remains US healthcare’s biggest failure

The Centers for Disease Control and Prevention (CDC)3 finds that hypertension (HTN) is by far the most common chronic disease, affecting 47% of US adults–116 million Americans. Only 24% of these patients have achieved control. HTN’s sequelae include many debilitating and costly medical problems including death, heart attack, stroke, renal damage or failure, atrial fibrillation, heart failure, etc. Hypertension’s costs are estimated to be $131 billion to $198 billion per year4.

For over ten years there have been a host of efforts to improve HTN outcomes, including the CDC's Million Hearts campaign, the Surgeon General's Call to Action to Control Hypertension, the National Roundtable to Control Hypertension, and various programs sponsored by the American Heart Association, the American Medical Association, and the American College of Cardiology. Despite all these efforts, blood pressure control rates are declining, not improving.

In 2013-14, CDC data showed that 53.9% of HTN patients had a blood pressure of <140/90. An analysis of NHANES5 data reveals that in 2018 only 44% of patients with hypertension had a BP of <140/90. Considering that the American College of Cardiology (ACC) and American Heart Association (AHA) propose a goal of 130/80, this is a sobering result.

America is confronting significant evidence of racial, ethnic and economic inequity in healthcare. Ogunniyi et al.6 recently published damning data showing that African Americans and other minorities are more likely to have HTN, less likely to receive effective treatment, less likely to have their HTN controlled and, therefore, more likely to die or suffer serious health declines.

The chronic disease outcomes problem is not limited to HTN. Only 26% of patients with diabetes7 had blood pressure, LDL and A1c simultaneously controlled on their last visit. The CHAMP-HF study8 showed a striking finding: only 1.1% of heart failure with reduced ejection fraction (HFrEF) patients were prescribed effective doses of all three or four of the key therapeutic agents recommended by the Heart Failure Society of America and the American College of Cardiology (ACC). There is little reason to hope that patients with COPD, asthma or other chronic diseases are properly classified and on the correct medications per the EBS.

The management of hypertension–or any other chronic disease–varies enormously by physician. So, too, does the percent of patients successfully treated to the EBS goal. Physician group leadership may educate and promote the use of decision trees, but success has been elusive.

Defining the problem and solution approach

It is axiomatic among quality experts that the more complex the work is, the greater the need for standard processes to achieve high success rates. Conversely, without solid processes, high levels of quality cannot be maintained, especially across large numbers of operators. There is no reason to believe that physicians are the exception.

Historically quality improvement efforts were measured on two output goals: (1) reliably achieving the quality target or goal metric(s) over time and (2) reducing the variability between operators (in this instance, doctors or other providers).  COVID has raised a third goal in our consciousness: (3) improvements should be effective in minority patients and socio-economically challenged populations.

David Nash9,10 has written extensively about using quality theory and tools in medical care. Nash defines unwarranted variability in how physicians treat patients as a core problem obstructing improvements in healthcare quality. Brent James11, a pioneer in integrating quality theory and practice into clinical medicine, also finds that variability between operators is a source of error in healthcare.

This study measures the results of a quality improvement project which, over time, expanded to an AI effort. The parties are PriMED Physicians, a community-based, independent physician group in Dayton, Ohio with 50 physicians, and MediSync, the Cincinnati, Ohio company that has provided comprehensive management to PriMED for 25 years. This study is about HTN, but the AI solution extends to diabetes, cholesterol and HFrEF. At the outset PriMED set a target that 90% of all patients with a given chronic disease would achieve the evidence-based defined outcome. For HTN, the goal is a BP of <140/90, but that goal is now under review and may be lowered to <130/80 for at least some patients.

An analysis was conducted to identify reasons why blood pressures fail to meet the desired goals. As is common in quality theory and practice, the analysis included both the positive requirements (what must go right) and the negative (what could go wrong).

In quality theory, identifying the most significant sources of error organizes the search for solutions. Obviously, the medical challenges or complexity vary by disease. Some common problems occur across diseases, but one stands out. Precise advice about medication selection always involves gathering and processing a huge number of variables. Given the known limits of human cognition, this complexity is a major obstacle to improvement.

For example, for HTN the medication recommendations now include 13 classes of drugs. The 13 classes are not used equally, but all are used regularly, at least in some specific circumstances. In interviews, most physicians reported using 3 to 6 classes of anti-hypertensive agents–often referred to as their "go to" drugs or "favorites." Not only were the additional classes not used; their mechanisms of action and effects on the pathophysiology of the disease were also unfamiliar to many physicians and APPs. It was determined that this "complexity" had to be solved to help doctors and APPs make better medication choices in order to get better chronic outcomes12.

Seeking focus, two essential steps to improving chronic outcomes were developed:

  1. Assist the provider to identify the best medication option(s) with precision and, when multiple medications are indicated, a specific order; and
  2. Assist in engaging patients in a manner that increases their participation in therapy, including filling prescriptions, regularly taking medicines, lifestyle accommodations, etc.

This paper addresses the first task–assisting providers to select the precise medications–because that is where AI makes its greatest contribution.

Deconstructing blood pressure management

In quality practice, an "outcome metric" is a measure of quality taken at the conclusion of a process. Outcome metrics are, in turn, the result of other, "upstream" processes or variables, which are called driver metrics. High blood pressure is an outcome metric in that it manifests other, upstream "driver" problem(s). The search into the causes of high blood pressure was undertaken in the hope of finding additional levers to improve control.

The upstream variables that cause high blood pressure are vasoconstriction, high heart rate, high stroke volume and elevated fluid levels13. Hypertension can be caused by any one of these "drivers" or by a combination of two, three or all four of them, which is called "mixed hemodynamic" hypertension. When these hemodynamic parameters are properly controlled, the blood pressure is most often controlled as well.

There is rich literature about the effects of various pharmaceutical agents on vasoconstriction, heart rate, stroke volume and fluid status. A relatively inexpensive, FDA-approved, in-office test called Impedance Cardiography (ICG) provided quantification of the hemodynamic parameters (i.e., vasoconstriction, rate, stroke volume and intravascular fluid status). Based on the ICG hemodynamic data, physician focus can be guided to the medications best able to treat each patient's high blood pressure. The first-generation, "paper and pencil" solution mapped the suitability of each major class of medication to the hemodynamic factors underlying blood pressure, but was limited to a static approach.
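As a sketch, that first-generation mapping might be expressed as a static lookup from the dominant hemodynamic driver to candidate drug classes. The class lists below are illustrative assumptions, not the actual PriMED/MediSync mapping and not clinical guidance:

```python
# Illustrative driver-to-class lookup; the pairings are assumptions for
# demonstration only.
DRIVER_TO_CLASSES = {
    "vasoconstriction": ["ACE inhibitor", "ARB", "calcium channel blocker"],
    "high_heart_rate": ["beta blocker"],
    "high_stroke_volume": ["beta blocker", "diuretic"],
    "fluid_overload": ["diuretic", "aldosterone antagonist"],
}

def suggest_classes(dominant_driver: str) -> list:
    # Static lookup: one driver in, a fixed short list of candidates out.
    return DRIVER_TO_CLASSES.get(dominant_driver, [])
```

Its limitation is exactly what the text describes: the mapping is static, and it cannot weigh mixed hemodynamic pictures or co-morbid conditions.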

From quality improvement process to AI

Over time the HTN process was improved by adding additional clinical variables to the selection of best medications for a patient. For example, an algorithm was created to measure the effects of 26 factors, including demographics (e.g., age, African American race) and co-morbid diagnoses (diabetes, BPH, CKD, etc.), to understand how each condition might affect the potential use of a given drug or drug class when treating blood pressure. Co-morbid conditions can increase, decrease, or prevent the use of a drug class, change the order of use, indicate when multiple drugs are needed, and alert to unusual circumstances.
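The role of such modifiers can be sketched as score adjustments applied per co-morbid condition. The conditions, classes, and weights below are hypothetical examples for illustration, not the algorithm's actual 26-factor table:

```python
# Hypothetical comorbidity modifiers: a condition can boost, penalize, or
# veto (None) a drug class. Values are illustrative only.
MODIFIERS = {
    "diabetes": {"ACE inhibitor": +2.0},
    "gout": {"thiazide diuretic": -2.0},
    "pregnancy": {"ACE inhibitor": None},  # contraindicated: remove entirely
}

def adjust_scores(base_scores: dict, conditions: list) -> dict:
    scores = dict(base_scores)  # leave the caller's dict untouched
    for condition in conditions:
        for drug_class, delta in MODIFIERS.get(condition, {}).items():
            if delta is None:
                scores.pop(drug_class, None)  # veto the class outright
            elif drug_class in scores:
                scores[drug_class] += delta   # raise or lower its priority
    return scores
```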

At this point, paper and pencil process tools were no longer helpful as the number of permutations became astronomical. In the era of EHRs physicians complained that the paper and pencil decision tools were a burden. This led to the development of the AI application, the MedsEngine.

The shift to AI added opportunities to add precision. For example, in earlier solutions physicians used an "eyeball" evaluation of graphs showing the ICG results. With AI it was possible to feed in the raw ICG data and develop a mathematical model measuring each ICG parameter against an ideal and against each other. This provided better insight into the many patients whose hypertension involved a mixture of hemodynamic causes.
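One simple way to replace an "eyeball" read of an ICG graph with a mathematical model is to normalize each parameter's deviation from an ideal value. The parameter names, ideal values, and threshold below are assumptions for illustration, not the MedsEngine's actual model:

```python
# Hypothetical ideal values for three ICG-derived parameters.
IDEALS = {"vascular_resistance": 2000.0, "heart_rate": 70.0, "stroke_index": 40.0}

def deviations(icg: dict) -> dict:
    # Fractional deviation of each measured parameter from its ideal.
    return {name: (icg[name] - ideal) / ideal for name, ideal in IDEALS.items()}

def dominant_drivers(icg: dict, threshold: float = 0.15) -> list:
    # Parameters whose deviation exceeds the threshold, worst first; a mixed
    # hemodynamic picture simply returns more than one driver.
    dev = deviations(icg)
    flagged = [name for name, d in dev.items() if abs(d) > threshold]
    return sorted(flagged, key=lambda name: -abs(dev[name]))
```

Quantifying each parameter against an ideal, rather than judging a graph visually, is what makes mixed hemodynamic cases tractable.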

The current AI-driven precision medication recommendation for hypertension includes over 300 million permutations, a level of complexity that is known to be beyond human cognition, especially when physicians and APPs are working in 15-to-30-minute time slots.
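The 300-million figure is specific to the MedsEngine's model, but a back-of-envelope sketch shows how such counts arise. Ordered selections of up to four classes from the 13 drug classes (the four-drug cap is a hypothetical choice for illustration), crossed with a modest set of binary patient factors, already exceed hundreds of millions of states:

```python
import math

# Ordered selections (permutations) of 1 to 4 drug classes out of 13:
# 13 + 13*12 + 13*12*11 + 13*12*11*10 sequences.
ordered_regimens = sum(math.perm(13, k) for k in range(1, 5))  # 19,045

# Crossing those regimens with 15 independent yes/no patient factors:
decision_space = ordered_regimens * 2 ** 15  # well over 600 million states
```

No human can enumerate a space of this size as "if/then" rules, which is the article's point about the limits of static decision support.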

How it works in daily practice

Because the EHRs available today use legacy technologies that are not capable of the kinds of computation that the MedsEngine performs, the AI was developed as a cloud technology using Microsoft's Azure environment. Microsoft Azure is widely used in other economic sectors and in applications by some of the world's biggest companies (e.g., Boeing, Verizon, BMW).

As a cloud technology, no processors or databases need be installed onsite. Rather, a medical group's EHR is connected to the MedsEngine using the federally mandated interoperability standards known as FHIR and SMART on FHIR. Because CMS requires that all EHRs be FHIR and SMART on FHIR enabled, the MedsEngine is EHR agnostic. The FHIR standards now make it possible to link the technology to any EHR in hours, a task that would have taken months prior to FHIR.

Clinicians simply hit a button inside the EHR screen and, without manually signing out of or in to any technology, a vast amount of data specific to the patient in question is passed to the MedsEngine, including diagnosed problems, drug list, test and lab results, allergies, past medical and surgical history, demographics, etc. The MedsEngine presents a "validation" screen where the provider can click to amend any patient information that is missing from or incorrect in the EHR. The MedsEngine then processes the data, and the medication recommendations are returned within 1-2 seconds inside the EHR. The physician, of course, determines whether to follow the MedsEngine's recommendations.
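As a hedged sketch of what such a data pull might look like, the queries below follow the public FHIR R4 REST search conventions (Condition, MedicationRequest, Observation, and AllergyIntolerance resources). The base URL and patient id are placeholders; the MedsEngine's actual integration details are not public:

```python
from urllib.parse import urlencode

def fhir_queries(base_url: str, patient_id: str) -> dict:
    # Standard FHIR R4 search URLs for the data categories the text lists:
    # problems, medications, vitals, and allergies.
    return {
        "problems": f"{base_url}/Condition?{urlencode({'patient': patient_id})}",
        "medications": f"{base_url}/MedicationRequest?"
                       f"{urlencode({'patient': patient_id, 'status': 'active'})}",
        "vitals": f"{base_url}/Observation?"
                  f"{urlencode({'patient': patient_id, 'category': 'vital-signs'})}",
        "allergies": f"{base_url}/AllergyIntolerance?"
                     f"{urlencode({'patient': patient_id})}",
    }
```

In a SMART on FHIR launch, an app receives the FHIR base URL, an OAuth2 access token, and the patient id from the EHR's launch context, then issues these GETs with the token in the Authorization header.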

Results

In quality theory there are two key measures of success: (1) achieving the target goal and (2) reducing variability among operators–physicians and licensed providers, in this instance.  Decreases in variability can be measured by reductions in the standard deviations of success rates across operators. Process capability is the ability of the process to achieve its stated goal(s).  Experienced quality experts believe that it is easier to improve the success rate of a process than to reduce variability across providers but the two must work in tandem. A high rate of variability precludes a high rate of success.
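The two measures can be computed directly from per-physician panels. The sketch below uses synthetic data; a real measurement would follow the NQF/CMS specification described in the next paragraph:

```python
from statistics import stdev

def control_rate(panel: list) -> float:
    # Fraction of a panel's HTN patients at goal
    # (True = BP < 140/90 at last visit).
    return sum(panel) / len(panel)

def group_measures(panels: dict) -> tuple:
    # (1) overall success rate across all patients pooled together,
    # (2) variability between operators: stdev of per-physician rates.
    all_patients = [at_goal for panel in panels.values() for at_goal in panel]
    per_physician = [control_rate(panel) for panel in panels.values()]
    return control_rate(all_patients), stdev(per_physician)
```

A falling standard deviation with a rising pooled rate is exactly the twin signature of process improvement described above.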

As stated earlier, the goal for each chronic disease state is that 90% of all diagnosed patients achieve the evidence-based standard for “control.” In blood pressure this is currently a BP of <140/90 using the National Quality Foundation (NQF) measurement procedures adopted by CMS and NCQA.

Addressing racial and socio-economic disparities of care

COVID has further revealed significantly inferior chronic disease outcomes due to racial and economic disparities in care and life circumstances. By contrast, PriMED’s success rate for controlling blood pressure in African American HTN patients is currently 92.2%.

One PriMED family practice physician's panel is limited to an underserved community that is 85% African American, with 59% of patients on Medicaid or Medicare or uninsured. The percent of this physician's patients with well-controlled HTN, shown in Figure 4, is remarkable. (Note that this physician's results are also included in Figures 1 and 2.)

Figure 1. Group-wide percent of all 13,441 HTN patients with BP of <140/90 at last visit, by month, from January 2019 through November 30, 2021. Note that this period includes the COVID pandemic. These outcomes are exactly 50 percentage points higher–more than double–the last published national average of 44% of HTN patients with BP of <140/90.
Figure 2. Reduced variability: Each physician is represented by a trend line reflecting the percent of that physician's HTN patients with BP of <140/90 as of the last visit, by month. Early in 2019 there is significant variability among physicians' outcomes. This tightens up into late 2019 and early 2020. As the COVID pandemic unfolds, the variability increases notably, only to tighten again toward the latter half of 2021, despite the fact that COVID is an ongoing event. As of November 30, 2021, all but one PriMED physician was above the group mean or within 5% of the group mean. The standard deviation of individual physician outcomes is 3.9%.
Figure 3. The distribution of systolic blood pressures for 31,975 HTN patient encounters from January 1 through November 30, 2021.
Figure 4. Percent of blood pressure-controlled patients in a socio-economically disadvantaged neighborhood of whom 85% are African American and 59% have Medicaid, Medicare, or are uninsured.

Discussion

Many medical groups that have made hypertension improvement efforts have become frustrated at the difficulty of achieving results higher than 70% success across the HTN population.

Studies have found that physicians and APPs are prescribing incorrect medications when treating chronic diseases like hypertension. And no wonder. It is long past time for US healthcare to adopt and deploy contemporary quality theory, insist upon the expansive use of processes to master complexity, stop relying upon human beings to do impossible calculations and manage lengthy decision trees from memory, and embed those processes in AI applications.

This work shows how a combination of processes embedded in an AI technology, such as the MedsEngine, helps physicians achieve nationally remarkable outcomes, do so consistently and with significantly reduced variation between providers. As of November 30, 2021, PriMED’s blood pressure control rate for the entire HTN patient population was 94%. Getting the medications right matters.


National efforts convened by the CDC and other organizations to improve blood pressure outcomes have not been successful to date. Even before the pandemic, the percent of hypertension patients with blood pressures of <140/90 had declined from 53.9% to 44% between 2014 and 2018. Recent studies14 find that COVID has increased average blood pressures in HTN patients and, thus, reduced the percent of HTN patients whose blood pressure is controlled. Based upon the literature, it should be expected that well-controlled hypertension patients have fewer co-morbid complications, maintain a higher standard of health across the population and have a lower total cost of care. In fact, PriMED's total cost of care is approximately 20% less than the regional average total cost per patient.

This paper demonstrates that in a real-world clinical environment, AI technologies improved physician decision making, improved patient outcomes, and lowered total costs – all key ingredients to achieving the triple aim.

 

References
1. Holman HR. The Relation of the Chronic Disease Epidemic to the Health Care Crisis. ACR Open Rheumatol. 2020;2(3):167-173. doi:10.1002/acr2.11114.
2. Fouad MN, Ruffin J, Vickers SM. COVID-19 Is Disproportionately High in African Americans. This Will Come as No Surprise…. Am J Med. 2020;133(10):e544-e545. doi:10.1016/j.amjmed.2020.04.008.
3. The Centers for Disease Control and Prevention. High Blood Pressure: Hypertension Statistics and Maps. Accessed December 30, 2021.
4. CDC, Associate Director for Policy and Strategy, POLARIS, Health Topics – High Blood Pressure. Accessed December 30, 2021.
5. Muntner P, Hardy ST, Fine LJ, et al. Trends in Blood Pressure Control Among US Adults With Hypertension, 1999-2000 to 2017-2018. JAMA. 2020;324(12):1190-1200. doi:10.1001/jama.2020.14545.
6. Ogunniyi MO, Commodore-Mensah Y, Ferdinand KC. Race, Ethnicity, Hypertension, and Heart Disease. J Am Coll Cardiol. 2021;78(24):2460-2470. doi:10.1016/j.jacc.2021.06.017.
7. Chen Y, Rolka D, Xie H, Saydah S. Imputed State-Level Prevalence of Achieving Goals To Prevent Complications of Diabetes in Adults with Self-Reported Diabetes – United States, 2017-2018. MMWR Morb Mortal Wkly Rep. 2020;69:1665-1670. doi:10.15585/mmwr.mm6945a1.
8. Greene SJ, Butler J, Albert NM, et al. Medical Therapy for Heart Failure With Reduced Ejection Fraction: The CHAMP-HF Registry. J Am Coll Cardiol. 2018;72(4):351-366. doi:10.1016/j.jacc.2018.04.070. PMID: 30025570.
9. Nash DB, Joshi M, Ransom ER, Ransom SB, eds. The Healthcare Quality Book: Vision, Strategy and Tools. 4th ed. Chicago, IL: Health Administration Press; 2019.
10. Kumar S, Nash DB. Demand Better. Bozeman, MT: Second River Healthcare Press; 2010.
11. Goitein L, James B. Standardized Best Practices and Individual Craft-Based Medicine: A Conversation About Quality. JAMA Intern Med. 2016;176. doi:10.1001/jamainternmed.2016.1641.
12. The complexity problem is common to all chronic disease management. For example, the 2018 AHA and ACC guidelines for cholesterol management, the American Diabetes Association guidelines for diabetes management and the GOLD standard for COPD management run to hundreds of pages of text, decision trees that go on for up to 17 pages, numerous variables, formulae, etc. It is not possible to apply these complex standards from memory and achieve very high levels of success.
13. Ventura HO, Taler SJ, Strobeck JE. Hypertension as a hemodynamic disease: The role of impedance cardiography in diagnostic, prognostic, and therapeutic decision making. Am J Hypertens. 2005;18(S2):26S-43S. doi:10.1016/j.amjhyper.2004.11.002.
14. Laffin LJ, Kaufman HW, Chen Z, et al. Rise in Blood Pressure Observed Among US Adults During the COVID-19 Pandemic. Circulation. 2021 Dec 6. doi:10.1161/CIRCULATIONAHA.121.057075. Epub ahead of print. PMID: 34865499.

 

Bob Matthews is a leader of physician groups and is Black Belt trained in Six Sigma quality methods, which he and his team use to create new methods and processes that help patients achieve better health outcomes at a lower total cost of care. Bob co-leads a team creating AI solutions to help physicians achieve outstanding chronic disease outcomes.

The post Using AI to Improve Chronic Disease Outcomes appeared first on Inside Precision Medicine.

Read More

Continue Reading

Government

Low Iron Levels In Blood Could Trigger Long COVID: Study

Low Iron Levels In Blood Could Trigger Long COVID: Study

Authored by Amie Dahnke via The Epoch Times (emphasis ours),

People with inadequate…

Published

on

Low Iron Levels In Blood Could Trigger Long COVID: Study

Authored by Amie Dahnke via The Epoch Times (emphasis ours),

People with inadequate iron levels in their blood due to a COVID-19 infection could be at greater risk of long COVID.

(Shutterstock)

A new study indicates that problems with iron levels in the bloodstream likely trigger chronic inflammation and other conditions associated with the post-COVID phenomenon. The findings, published on March 1 in Nature Immunology, could offer new ways to treat or prevent the condition.

Long COVID Patients Have Low Iron Levels

Researchers at the University of Cambridge pinpointed low iron as a potential link to long-COVID symptoms thanks to a study they initiated shortly after the start of the pandemic. They recruited people who tested positive for the virus to provide blood samples for analysis over a year, which allowed the researchers to look for post-infection changes in the blood. The researchers looked at 214 samples and found that 45 percent of patients reported symptoms of long COVID that lasted between three and 10 months.

In analyzing the blood samples, the research team noticed that people experiencing long COVID had low iron levels, contributing to anemia and low red blood cell production, just two weeks after they were diagnosed with COVID-19. This was true for patients regardless of age, sex, or the initial severity of their infection.

According to one of the study co-authors, the removal of iron from the bloodstream is a natural process and defense mechanism of the body.

But it can jeopardize a person’s recovery.

When the body has an infection, it responds by removing iron from the bloodstream. This protects us from potentially lethal bacteria that capture the iron in the bloodstream and grow rapidly. It’s an evolutionary response that redistributes iron in the body, and the blood plasma becomes an iron desert,” University of Oxford professor Hal Drakesmith said in a press release. “However, if this goes on for a long time, there is less iron for red blood cells, so oxygen is transported less efficiently affecting metabolism and energy production, and for white blood cells, which need iron to work properly. The protective mechanism ends up becoming a problem.”

The research team believes that consistently low iron levels could explain why individuals with long COVID continue to experience fatigue and difficulty exercising. As such, the researchers suggested iron supplementation to help regulate and prevent the often debilitating symptoms associated with long COVID.

It isn’t necessarily the case that individuals don’t have enough iron in their body, it’s just that it’s trapped in the wrong place,” Aimee Hanson, a postdoctoral researcher at the University of Cambridge who worked on the study, said in the press release. “What we need is a way to remobilize the iron and pull it back into the bloodstream, where it becomes more useful to the red blood cells.”

The research team pointed out that iron supplementation isn’t always straightforward. Achieving the right level of iron varies from person to person. Too much iron can cause stomach issues, ranging from constipation, nausea, and abdominal pain to gastritis and gastric lesions.

1 in 5 Still Affected by Long COVID

COVID-19 has affected nearly 40 percent of Americans, with one in five of those still suffering from symptoms of long COVID, according to the U.S. Centers for Disease Control and Prevention (CDC). Long COVID is marked by health issues that continue at least four weeks after an individual was initially diagnosed with COVID-19. Symptoms can last for days, weeks, months, or years and may include fatigue, cough or chest pain, headache, brain fog, depression or anxiety, digestive issues, and joint or muscle pain.

Tyler Durden Sat, 03/09/2024 - 12:50

Read More

Continue Reading

Government

Walmart joins Costco in sharing key pricing news

The massive retailers have both shared information that some retailers keep very close to the vest.

Published

on

As we head toward a presidential election, the presumed candidates for both parties will look for issues that rally undecided voters. 

The economy will be a key issue, with Democrats pointing to job creation and lowering prices while Republicans will cite the layoffs at Big Tech companies, high housing prices, and of course, sticky inflation.

The covid pandemic created a perfect storm for inflation and higher prices. It became harder to get many items because people getting sick slowed down, or even stopped, production at some factories.

Related: Popular mall retailer shuts down abruptly after bankruptcy filing

It was also a period where demand increased while shipping, trucking and delivery systems were all strained or thrown out of whack. The combination led to product shortages and higher prices.

You might have gone to the grocery store and not been able to buy your favorite paper towel brand or find toilet paper at all. That happened partly because of the supply chain and partly due to increased demand, but at the end of the day, it led to higher prices, which some consumers blamed on President Joe Biden's administration.

Biden, of course, was blamed for the price increases, but as inflation has dropped and grocery prices have fallen, few companies have been up front about it. That's probably not a political choice in most cases. Instead, some companies have chosen to lower prices more slowly than they raised them.

However, two major retailers, Walmart (WMT) and Costco, have been very honest about inflation. Walmart Chief Executive Doug McMillon's most recent comments validate what Biden's administration has been saying about the state of the economy. And they contrast with the economic picture being painted by Republicans who support their presumptive nominee, Donald Trump.

Walmart has seen inflation drop in many key areas.

Image source: Joe Raedle/Getty Images

Walmart sees lower prices

McMillon does not talk about lower prices to make a political statement. He's communicating with customers and potential customers through the analysts who cover the company's quarterly-earnings calls.

During Walmart's fiscal-fourth-quarter-earnings call, McMillon was clear that prices are going down.

"I'm excited about the omnichannel net promoter score trends the team is driving. Across countries, we continue to see a customer that's resilient but looking for value. As always, we're working hard to deliver that for them, including through our rollbacks on food pricing in Walmart U.S. Those were up significantly in Q4 versus last year, following a big increase in Q3," he said.

He was specific about where the chain has seen prices go down.

"Our general merchandise prices are lower than a year ago and even two years ago in some categories, which means our customers are finding value in areas like apparel and hard lines," he said. "In food, prices are lower than a year ago in places like eggs, apples, and deli snacks, but higher in other places like asparagus and blackberries."

McMillon said that in other areas prices were still up but have been falling.

"Dry grocery and consumables categories like paper goods and cleaning supplies are up mid-single digits versus last year and high teens versus two years ago. Private-brand penetration is up in many of the countries where we operate, including the United States," he said.

Costco sees almost no inflation impact

McMillon avoided the word "inflation" in his comments. Costco (COST) Chief Financial Officer Richard Galanti, who steps down on March 15, has been very transparent on the topic.

The CFO commented on inflation during his company's fiscal-first-quarter-earnings call.

"Most recently, in the last fourth-quarter discussion, we had estimated that year-over-year inflation was in the 1% to 2% range. Our estimate for the quarter just ended, that inflation was in the 0% to 1% range," he said.

Galanti made clear that inflation (and even deflation) varied by category.

"A bigger deflation in some big and bulky items like furniture sets due to lower freight costs year over year, as well as on things like domestics, bulky lower-priced items, again, where the freight cost is significant. Some deflationary items were as much as 20% to 30% and, again, mostly freight-related," he added.

