
Robot Generals: Will They Make Better Decisions Than Humans Or Worse?

Tyler Durden Thu, 08/27/2020 - 21:45

Authored by Michael Klare via TomDispatch.com,

With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields.

Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level — by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff — including battle planning, intelligence-gathering, logistics, communications, and decision-making — is, according to the Pentagon’s latest plans, to be turned over to complex arrangements of sensors, computers, and software. All these will then be integrated into a “system of systems,” now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

The notion of using machines to make command-level decisions is not, of course, an entirely new one. It has, in truth, been a long time coming. During the Cold War, following the introduction of intercontinental ballistic missiles (ICBMs) with extremely short flight times, both military strategists and science-fiction writers began to imagine mechanical systems that would control such nuclear weaponry in the event of human incapacity.

In Stanley Kubrick’s satiric 1964 movie Dr. Strangelove, for example, the fictional Russian leader Dimitri Kissov reveals that the Soviet Union has installed a “doomsday machine” capable of obliterating all human life that would detonate automatically should the country come under attack by American nuclear forces. Efforts by crazed anti-Soviet U.S. Air Force officers to provoke a war with Moscow then succeed in triggering that machine and so bring about human annihilation. In reality, fearing that they might experience a surprise attack of just this sort, the Soviets later did install a semi-automatic retaliatory system they dubbed “Perimeter,” designed to launch Soviet ICBMs in the event that sensors detected nuclear explosions and all communications from Moscow had been silenced. Some analysts believe that an upgraded version of Perimeter is still in operation, leaving us in an all-too-real version of a Strangelovian world.

In yet another sci-fi version of such automated command systems, the 1983 film WarGames, starring Matthew Broderick as a teenage hacker, portrayed a supercomputer called the War Operations Plan Response, or WOPR (pronounced “whopper”), installed at North American Aerospace Defense Command (NORAD) headquarters in Colorado. When the Broderick character hacks into it and starts playing what he believes is a game called “Global Thermonuclear War,” the computer concludes an actual Soviet attack is underway and launches a nuclear retaliatory response. Although fictitious, the movie accurately depicts many aspects of the U.S. nuclear command-control-and-communications (NC3) system, which was then and still remains highly automated.

Such devices, both real and imagined, were relatively primitive by today’s standards, being capable solely of determining that a nuclear attack was under way and ordering a catastrophic response. Now, as a result of vast improvements in artificial intelligence (AI) and machine learning, machines can collect and assess massive amounts of sensor data, swiftly detect key trends and patterns, and potentially issue orders to combat units as to where to attack and when.

Time Compression and Human Fallibility

The substitution of intelligent machines for humans at senior command levels is becoming essential, U.S. strategists argue, because an exponential growth in sensor information combined with the increasing speed of warfare is making it nearly impossible for humans to keep track of crucial battlefield developments. If future scenarios prove accurate, battles that once unfolded over days or weeks could transpire in the space of hours, or even minutes, while battlefield information will be pouring in as multitudinous data points, overwhelming staff officers. Only advanced computers, it is claimed, could process so much information and make informed combat decisions within the necessary timeframe.

Such time compression and the expansion of sensor data may apply to any form of combat, but especially to the most terrifying of them all, nuclear war. When ICBMs were the principal means of such combat, decision-makers had up to 30 minutes between the time a missile was launched and the moment of detonation in which to determine whether a potential attack was real or merely a false satellite reading (as sometimes occurred during the Cold War). That may not sound like much time, but with the recent introduction of hypersonic missiles, such assessment times could shrink to as little as five minutes. Under such circumstances, it’s a lot to expect even the most alert decision-makers to reach an informed judgment on the nature of a potential attack. Hence the appeal (to some) of automated decision-making systems.

“Attack-time compression has placed America’s senior leadership in a situation where the existing NC3 system may not act rapidly enough,” military analysts Adam Lowther and Curtis McGiffin argued at War on the Rocks, a security-oriented website.

“Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

This notion, that an artificial intelligence-powered device — in essence, a more intelligent version of the doomsday machine or the WOPR — should be empowered to assess enemy behavior and then, on the basis of “predetermined response options,” decide humanity’s fate, has naturally produced some unease in the community of military analysts (as it should for the rest of us as well). Nevertheless, American strategists continue to argue that battlefield assessment and decision-making — for both conventional and nuclear warfare — should increasingly be delegated to machines.

“AI-powered intelligence systems may provide the ability to integrate and sort through large troves of data from different sources and geographic locations to identify patterns and highlight useful information,” the Congressional Research Service noted in a November 2019 summary of Pentagon thinking.

“As the complexity of AI systems matures,” it added, “AI algorithms may also be capable of providing commanders with a menu of viable courses of action based on real-time analysis of the battlespace, in turn enabling faster adaptation to complex events.”

The key wording there is “a menu of viable courses of action based on real-time analysis of the battlespace.” This might leave the impression that human generals and admirals (not to speak of their commander-in-chief) will still be making the ultimate life-and-death decisions for both their own forces and the planet. Given such anticipated attack-time compression in future high-intensity combat with China and/or Russia, however, humans may no longer have the time or ability to analyze the battlespace themselves and so will come to rely on AI algorithms for such assessments. As a result, human commanders may simply find themselves endorsing decisions made by machines — and so, in the end, become superfluous.

Creating Robot Generals

Despite whatever misgivings they may have about their future job security, America’s top generals are moving swiftly to develop and deploy that JADC2 automated command mechanism. Overseen by the Air Force, it’s proving to be a computer-driven amalgam of devices for collecting real-time intelligence on enemy forces from vast numbers of sensor devices (satellites, ground radars, electronic listening posts, and so on), processing that data into actionable combat information, and providing precise attack instructions to every combat unit and weapons system engaged in a conflict — whether belonging to the Army, Navy, Air Force, Marine Corps, or the newly formed Space Force and Cyber Command.

What, exactly, the JADC2 will consist of is not widely known, partly because many of its component systems are still shrouded in secrecy and partly because much of the essential technology is still in the development stage. Charged with overseeing the project, the Air Force is working with Lockheed Martin and other large defense contractors to design and develop key elements of the system.

One such building block is its Advanced Battle Management System (ABMS), a data-collection and distribution system intended to provide fighter pilots with up-to-the-minute data on enemy positions and help guide their combat moves. Another key component is the Army’s Integrated Air and Missile Defense Battle Command System (IBCS), designed to connect radar systems to anti-aircraft and missile-defense launchers and provide them with precise firing instructions. Over time, the Air Force and its multiple contractors will seek to integrate ABMS and IBCS into a giant network of systems connecting every sensor, shooter, and commander in the country’s armed forces — a military “internet of things,” as some have put it.

To test this concept and provide an example of how it might operate in the future, the Army conducted a live-fire artillery exercise this August in Germany using components (or facsimiles) of the future JADC2 system. In the first stage of the test, satellite images of (presumed) Russian troop positions were sent to an Army ground terminal, where an AI software program called Prometheus combed through the data to select enemy targets. Next, another AI program called SHOT computed the optimal match of available Army weaponry to those intended targets and sent this information, along with precise firing coordinates, to the Army’s Advanced Field Artillery Tactical Data System (AFATDS) for immediate action, where human commanders could choose to implement it or not. In the exercise, those human commanders had the mental space to give the matter a moment’s thought; in a shooting war, they might just leave everything to the machines, as the system’s designers clearly intend them to do.

In the future, the Army is planning even more ambitious tests of this evolving technology under an initiative called Project Convergence. From what’s been said publicly about it, Convergence will undertake ever more complex exercises involving satellites, Air Force fighters equipped with the ABMS system, Army helicopters, drones, artillery pieces, and tactical vehicles. Eventually, all of this will form the underlying “architecture” of the JADC2, linking every military sensor system to every combat unit and weapons system — leaving the generals with little to do but sit by and watch.

Why Robot Generals Could Get It Wrong

Given the complexity of modern warfare and the challenge of time compression in future combat, the urge of American strategists to replace human commanders with robotic ones is certainly understandable. Robot generals and admirals might theoretically be able to process staggering amounts of information in brief periods of time, while keeping track of both friendly and enemy forces and devising optimal ways to counter enemy moves on a future battlefield. But there are many good reasons to doubt the reliability of robot decision-makers and the wisdom of using them in place of human officers.

To begin with, many of these technologies are still in their infancy, and almost all are prone to malfunctions that can neither be easily anticipated nor understood. And don’t forget that even advanced algorithms can be fooled, or “spoofed,” by skilled professionals.

In addition, unlike humans, AI-enabled decision-making systems will lack an ability to assess intent or context. Does a sudden enemy troop deployment, for example, indicate an imminent attack, a bluff, or just a normal rotation of forces? Human analysts can use their understanding of the current political moment and the actors involved to help guide their assessment of the situation. Machines lack that ability and may assume the worst, initiating military action that could have been avoided.

Such a problem will only be compounded by the “training” such decision-making algorithms will undergo as they are adapted to military situations. Just as facial-recognition software has proved to be tainted by an over-reliance on images of white males in the training process — making it less adept at recognizing, say, African-American women — military decision-making algorithms are likely to be distorted by an over-reliance on the combat-oriented scenarios selected by American military professionals for training purposes. “Worst-case thinking” is a natural inclination of such officers — after all, who wants to be caught unprepared for a possible enemy surprise attack? — and such biases will undoubtedly become part of the “menus of viable courses of action” provided by decision-making robots.

Once integrated into decision-making algorithms, such biases could, in turn, prove exceedingly dangerous in any future encounters between U.S. and Russian troops in Europe or American and Chinese forces in Asia. A clash of this sort might, after all, arise at any time, thanks to some misunderstanding or local incident that rapidly gains momentum — a sudden clash between U.S. and Chinese warships off Taiwan, for example, or between American and Russian patrols in one of the Baltic states. Neither side may have intended to ignite a full-scale conflict and leaders on both sides might normally move to negotiate a cease-fire. But remember, these will no longer simply be human conflicts. In the wake of such an incident, the JADC2 could detect some enemy move that it determines poses an imminent risk to allied forces and so immediately launch an all-out attack by American planes, missiles, and artillery, escalating the conflict and foreclosing any chance of an early negotiated settlement.

Such prospects become truly frightening when what’s at stake is the onset of nuclear war. It’s hard to imagine any conflict among the major powers starting out as a nuclear war, but it’s far easier to envision a scenario in which the great powers — after having become embroiled in a conventional conflict — reach a point where one side or the other considers the use of atomic arms to stave off defeat. American military doctrine, in fact, has always held out the possibility of using so-called tactical nuclear weapons in response to a massive Soviet (now Russian) assault in Europe. Russian military doctrine, it is widely assumed, incorporates similar options. Under such circumstances, a future JADC2 could misinterpret enemy moves as signaling preparation for a nuclear launch and order a pre-emptive strike by U.S. nuclear forces, thereby igniting World War III.

War is a nasty, brutal activity and, given almost two decades of failed conflicts that have gone under the label of “the war on terror,” causing thousands of American casualties (both physical and mental), it’s easy to understand why robot enthusiasts are so eager to see another kind of mentality take over American war-making. As a start, they contend, especially in a pandemic world, that it’s only humane to replace human soldiers on the battlefield with robots and so diminish human casualties (at least among combatants). This claim does not, of course, address the argument that robot soldiers and drone aircraft lack the ability to distinguish between combatants and non-combatants on the battlefield and so cannot be trusted to comply with the laws of war or international humanitarian law — which, at least theoretically, protect civilians from unnecessary harm — and so should be banned.

Fraught as all of that may be on future battlefields, replacing generals and admirals with robots is another matter altogether. Not only do legal and moral arguments arise with a vengeance, as the survival of major civilian populations could be put at risk by computer-derived combat decisions, but there’s no guarantee that American GIs would suffer fewer casualties in the battles that ensued. Maybe it’s time, then, for Congress to ask some tough questions about the advisability of automating combat decision-making before this country pours billions of additional taxpayer dollars into an enterprise that could, in fact, lead to the end of the world as we know it.

Maybe it’s time as well for the leaders of China, Russia, and this country to limit or ban the deployment of hypersonic missiles and other weaponry that will compress life-and-death decisions for humanity into just a few minutes, thereby justifying the automation of such fateful judgments.

Are Voters Recoiling Against Disorder?

Authored by Michael Barone via The Epoch Times (emphasis ours),

The headlines coming out of the Super Tuesday primaries have got it right. Barring cataclysmic changes, Donald Trump and Joe Biden will be the Republican and Democratic nominees for president in 2024.

(Left) President Joe Biden delivers remarks on canceling student debt at Culver City Julian Dixon Library in Culver City, Calif., on Feb. 21, 2024. (Right) Republican presidential candidate and former U.S. President Donald Trump stands on stage during a campaign event at Big League Dreams Las Vegas in Las Vegas, Nev., on Jan. 27, 2024. (Mario Tama/Getty Images; David Becker/Getty Images)

With Nikki Haley’s withdrawal, there will be no more significantly contested primaries or caucuses—the earliest both parties’ races have been over since something like the current primary-dominated system was put in place in 1972.

The primary results have spotlighted some of both nominees’ weaknesses.

Donald Trump lost high-income, highly educated constituencies, including the entire metro area—aka the Swamp. Many but by no means all Haley votes there were cast by Biden Democrats. Mr. Trump can’t afford to lose too many of the others in target states like Pennsylvania and Michigan.

Majorities and large minorities of voters in overwhelmingly Latino counties in Texas’s Rio Grande Valley and some in Houston voted against Joe Biden, and even more against Senate nominee Rep. Colin Allred (D-Texas).

Returns from Hispanic precincts in New Hampshire and Massachusetts show the same thing. Mr. Biden can’t afford to lose too many Latino votes in target states like Arizona and Georgia.

When Mr. Trump rode down that escalator in 2015, commentators assumed he’d repel Latinos. Instead, Latino voters nationally, and especially the closest eyewitnesses of Biden’s open-border policy, have been trending heavily Republican.

High-income liberal Democrats may sport lawn signs proclaiming, “In this house, we believe ... no human is illegal.” The logical consequence of that belief is an open border. But modest-income folks in border counties know that flows of illegal immigrants result in disorder, disease, and crime.

There is plenty of impatience with increased disorder in election returns below the presidential level. Consider Los Angeles County, America’s largest county, with nearly 10 million people, more people than 40 of the 50 states. It voted 71 percent for Mr. Biden in 2020.

Current returns show county District Attorney George Gascon winning only 21 percent of the vote in the nonpartisan primary. He’ll apparently face Republican Nathan Hochman, a critic of his liberal policies, in November.

Gascon, elected after the May 2020 death of counterfeit-passing suspect George Floyd in Minneapolis, is one of many county prosecutors supported by billionaire George Soros. His policies include not charging juveniles as adults, not seeking higher penalties for gang membership or use of firearms, and bringing fewer misdemeanor cases.

The predictable result has been increased car thefts, burglaries, and personal robberies. Some 120 assistant district attorneys have left the office, and there’s a backlog of 10,000 unprosecuted cases.

More than a dozen other Soros-backed and similarly liberal prosecutors have faced strong opposition or have left office.

St. Louis prosecutor Kim Gardner resigned last May amid lawsuits seeking her removal, Milwaukee’s John Chisholm retired in January, and Baltimore’s Marilyn Mosby was defeated in July 2022 and convicted of perjury in September 2023. Last November, Loudoun County, Virginia, voters (62 percent Biden) ousted liberal Buta Biberaj, who declined to prosecute a transgender student for assault, and in June 2022 voters in San Francisco (85 percent Biden) recalled famed radical Chesa Boudin.

Similarly, this Tuesday, voters in San Francisco passed ballot measures strengthening police powers and requiring treatment of drug-addicted welfare recipients.

In retrospect, it appears the Floyd video, appearing after three months of COVID-19 confinement, sparked a frenzied, even crazed reaction, especially among the highly educated and articulate. One fatal incident was seen as proof that America’s “systemic racism” was worse than ever and that police forces should be defunded and perhaps abolished.

2020 was “the year America went crazy,” I wrote in January 2021, a year in which police funding was actually cut by Democrats in New York, Los Angeles, San Francisco, Seattle, and Denver. A year in which young New York Times (NYT) staffers claimed they were endangered by the publication of Sen. Tom Cotton’s (R-Ark.) opinion article advocating calling in military forces if necessary to stop rioting, as had been done in Detroit in 1967 and Los Angeles in 1992. A craven NYT publisher even fired the editorial page editor for running the article.

Evidence of visible and tangible discontent with increasing violence and its consequences—barren and locked shelves in Manhattan chain drugstores, skyrocketing carjackings in Washington, D.C.—is as unmistakable in polls and election results as it is in daily life in large metropolitan areas. Maybe 2024 will turn out to be the year even liberal America stopped acting crazy.

Chaos and disorder work against incumbents, as they did in 1968 when Democrats saw their party’s popular vote fall from 61 percent to 43 percent.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.

Tyler Durden Sat, 03/09/2024 - 23:20

Veterans Affairs Kept COVID-19 Vaccine Mandate In Place Without Evidence

Authored by Zachary Stieber via The Epoch Times (emphasis ours),

The U.S. Department of Veterans Affairs (VA) reviewed no data when deciding in 2023 to keep its COVID-19 vaccine mandate in place.

Doses of a COVID-19 vaccine in Washington in a file image. (Jacquelyn Martin/Pool/AFP via Getty Images)

VA Secretary Denis McDonough said on May 1, 2023, that the end of many other federal mandates “will not impact current policies at the Department of Veterans Affairs.”

He said the mandate was remaining for VA health care personnel “to ensure the safety of veterans and our colleagues.”

Mr. McDonough did not cite any studies or other data. A VA spokesperson declined to provide any data that was reviewed when deciding not to rescind the mandate. The Epoch Times submitted a Freedom of Information Act request for “all documents outlining which data was relied upon when establishing the mandate when deciding to keep the mandate in place.”

The agency searched for such data and did not find any.

“The VA does not even attempt to justify its policies with science, because it can’t,” Leslie Manookian, president and founder of the Health Freedom Defense Fund, told The Epoch Times.

“The VA just trusts that the process and cost of challenging its unfounded policies is so onerous, most people are dissuaded from even trying,” she added.

The VA’s mandate remains in place to this day.

The VA’s website claims that vaccines “help protect you from getting severe illness” and “offer good protection against most COVID-19 variants,” pointing in part to observational data from the U.S. Centers for Disease Control and Prevention (CDC) that estimate the vaccines provide poor protection against symptomatic infection and transient shielding against hospitalization.

There have also been increasing concerns among outside scientists about confirmed side effects like heart inflammation—the VA hid a safety signal it detected for the inflammation—and possible side effects such as tinnitus, which shift the benefit-risk calculus.

President Joe Biden imposed a slate of COVID-19 vaccine mandates in 2021. The VA was the first federal agency to implement a mandate.

President Biden rescinded the mandates in May 2023, citing a drop in COVID-19 cases and hospitalizations. His administration maintains the choice to require vaccines was the right one and saved lives.

“Our administration’s vaccination requirements helped ensure the safety of workers in critical workforces including those in the healthcare and education sectors, protecting themselves and the populations they serve, and strengthening their ability to provide services without disruptions to operations,” the White House said.

Some experts said requiring vaccination meant many younger people were forced to get a vaccine despite the risks potentially outweighing the benefits, leaving fewer doses for older adults.

“By mandating the vaccines to younger people and those with natural immunity from having had COVID, older people in the U.S. and other countries did not have access to them, and many people might have died because of that,” Martin Kulldorff, a professor of medicine on leave from Harvard Medical School, told The Epoch Times previously.

The VA was one of just a handful of agencies to keep its mandate in place following the removal of many federal mandates.

“At this time, the vaccine requirement will remain in effect for VA health care personnel, including VA psychologists, pharmacists, social workers, nursing assistants, physical therapists, respiratory therapists, peer specialists, medical support assistants, engineers, housekeepers, and other clinical, administrative, and infrastructure support employees,” Mr. McDonough wrote to VA employees at the time.

“This also includes VA volunteers and contractors. Effectively, this means that any Veterans Health Administration (VHA) employee, volunteer, or contractor who works in VHA facilities, visits VHA facilities, or provides direct care to those we serve will still be subject to the vaccine requirement at this time,” he said. “We continue to monitor and discuss this requirement, and we will provide more information about the vaccination requirements for VA health care employees soon. As always, we will process requests for vaccination exceptions in accordance with applicable laws, regulations, and policies.”

The version of the shots cleared in the fall of 2022, and available through the fall of 2023, did not have any clinical trial data supporting them.

A new version was approved in the fall of 2023 because there were indications not only that the shots offered merely temporary protection but also that the level of protection was lower than what was observed during earlier stages of the pandemic.

Ms. Manookian, whose group has challenged several of the federal mandates, said that the mandate “illustrates the dangers of the administrative state and how these federal agencies have become a law unto themselves.”

Tyler Durden Sat, 03/09/2024 - 22:10

Low Iron Levels In Blood Could Trigger Long COVID: Study

Authored by Amie Dahnke via The Epoch Times (emphasis ours),

People with inadequate iron levels in their blood due to a COVID-19 infection could be at greater risk of long COVID.

(Shutterstock)

A new study indicates that problems with iron levels in the bloodstream likely trigger chronic inflammation and other conditions associated with the post-COVID phenomenon. The findings, published on March 1 in Nature Immunology, could offer new ways to treat or prevent the condition.

Long COVID Patients Have Low Iron Levels

Researchers at the University of Cambridge pinpointed low iron as a potential link to long-COVID symptoms thanks to a study they initiated shortly after the start of the pandemic. They recruited people who tested positive for the virus to provide blood samples for analysis over a year, which allowed the researchers to look for post-infection changes in the blood. The researchers looked at 214 samples and found that 45 percent of patients reported symptoms of long COVID that lasted between three and 10 months.

In analyzing the blood samples, the research team noticed that people experiencing long COVID had low iron levels, contributing to anemia and low red blood cell production, just two weeks after they were diagnosed with COVID-19. This was true for patients regardless of age, sex, or the initial severity of their infection.

According to one of the study co-authors, the removal of iron from the bloodstream is a natural process and defense mechanism of the body.

But it can jeopardize a person’s recovery.

“When the body has an infection, it responds by removing iron from the bloodstream. This protects us from potentially lethal bacteria that capture the iron in the bloodstream and grow rapidly. It’s an evolutionary response that redistributes iron in the body, and the blood plasma becomes an iron desert,” University of Oxford professor Hal Drakesmith said in a press release. “However, if this goes on for a long time, there is less iron for red blood cells, so oxygen is transported less efficiently, affecting metabolism and energy production, and for white blood cells, which need iron to work properly. The protective mechanism ends up becoming a problem.”

The research team believes that consistently low iron levels could explain why individuals with long COVID continue to experience fatigue and difficulty exercising. As such, the researchers suggested iron supplementation to help regulate and prevent the often debilitating symptoms associated with long COVID.

“It isn’t necessarily the case that individuals don’t have enough iron in their body, it’s just that it’s trapped in the wrong place,” Aimee Hanson, a postdoctoral researcher at the University of Cambridge who worked on the study, said in the press release. “What we need is a way to remobilize the iron and pull it back into the bloodstream, where it becomes more useful to the red blood cells.”

The research team pointed out that iron supplementation isn’t always straightforward. Achieving the right level of iron varies from person to person. Too much iron can cause stomach issues, ranging from constipation, nausea, and abdominal pain to gastritis and gastric lesions.

1 in 5 Still Affected by Long COVID

COVID-19 has affected nearly 40 percent of Americans, with one in five of those still suffering from symptoms of long COVID, according to the U.S. Centers for Disease Control and Prevention (CDC). Long COVID is marked by health issues that continue at least four weeks after an individual was initially diagnosed with COVID-19. Symptoms can last for days, weeks, months, or years and may include fatigue, cough or chest pain, headache, brain fog, depression or anxiety, digestive issues, and joint or muscle pain.

Tyler Durden Sat, 03/09/2024 - 12:50
