
From Hype to Utility

Artificial intelligence is emerging from its overhyped past to provide analytical muscle in precision medicine.
The post From Hype to Utility appeared first on Clinical OMICs – Molecular Diagnostics in Precision Medicine.


Artificial intelligence (AI) was perhaps overhyped when it was first applied to improving diagnostics, clinical decision support, and therapeutic target identification. Its reputation as a game-changer was likely sullied by the promise—and subsequent lack of meaningful results—of IBM’s Watson. Nevertheless, AI today is used in myriad ways to improve diagnostic and therapeutic pathways for patients, and its uses are expanding. But can it really help to achieve widespread application of precision medicine, and how far are we from reaching that goal?

We have more sources of information today than ever before. This is particularly true in medicine. In the past a clinician saw a patient, talked to them, and wrote down any symptoms that they were experiencing, storing this information in a paper medical record.

In the post-genomic era, in addition to standard symptom data, blood biomarkers, tissue histology, and more traditional imaging data, a patient may also have genomic, metabolomic, proteomic, and transcriptomic information available for analysis by their doctor, among other measures. This information glut is difficult, if not impossible, for individual clinicians to analyze and use without some form of automated analysis. This is where artificial intelligence (AI) comes in.

“There is a huge amount of data out there. We can learn much more about patients than we ever could even when we imagined the Human Genome Project 20 years ago. The challenge is how best to use it to benefit patients and lower the cost of health care and that is AI, it’s analytics, it’s new methodologies around that,” Steve Gardner, CEO of PrecisionLife, a company applying AI and advanced mathematical models to the problem of treating complex diseases, told Clinical Omics.

Use of AI for medical applications such as digital pathology, prognosis prediction, and drug design has gradually crept into the clinic over the last decade, but its use is still far from mainstream. For the use of this technology to become routine, there are a number of challenges that still need to be overcome.

Nathan Buchbinder, co-founder, Proscia

“The technology is at a point today where there is massive potential for the technology as it stands to make a huge impact… even without some of the incredible advances that we’re expecting to see in AI and machine learning over the next decade,” commented Nathan Buchbinder, co-founder and chief product officer at Proscia, which has a focus on AI and digital pathology.

“What needs to happen more urgently than anything else is demonstrating that what we have today works in practice and scale it up from there,” he added.

Realizing AI’s potential

AI as a field of study first began in the 1950s. It progressed slowly, with many techniques attempted and failures recorded, leading to a so-called ‘AI Winter’ of low interest and funding that began in the early 1990s and lasted nearly 20 years.

Since 2010, interest and funding in AI have increased dramatically. Significant advances in AI methods, such as advanced image recognition, have been developed that can be applied to improve patient outcomes. Machine learning, which was originally derived from AI and allows programs and software to ‘learn’ over time, has also improved dramatically.

Over the last few years, several related technologies have advanced rapidly, making it possible for companies and researchers to start using AI in medicine to greater effect.

“The technology that’s powering these computational applications has improved over the last three years. Even more recently than that, it’s opening up a whole new avenue of approaches that you can take to build technology that finds hidden patterns that weren’t detectable before,” said Buchbinder.

Other advances, such as high-scale and affordable cloud computing, and high-throughput, low-cost biological data acquisition technologies, like next-generation sequencing, have also come together to boost AI-based medical approaches and make them more achievable for companies to develop.

Last year was a bumper year for AI in medicine and digital health more generally, with investment increasing 45% compared with 2019. The technology also advanced significantly. For example, at the end of 2020, Google’s AI spinoff DeepMind achieved a breakthrough, using AI to determine a protein’s 3D shape from its amino-acid sequence.

Arie Baak, co-founder and president, Euretos

“Not so long ago, this was seen as one of the major challenges in computational biology,” commented Arie Baak, co-founder and president of Euretos, an AI company helping biotech and pharma companies design better drugs.

Most approaches using AI in medicine fall into two groups: those that improve patient diagnosis and outcome prediction, and those that improve drug development by making it easier to find suitable targets and to predict how specific individuals will respond to therapy.

Digital pathology is a good example of a diagnostic area using AI that has advanced dramatically in the last few years. Proscia and companies such as Paige.ai, Path.ai, and others are making good use of advances in image recognition to improve diagnoses for patients with cancer and other conditions that require pathology assessment.

“What we saw when we started the company was that the field of pathology, for the last 150 years, really hadn’t changed substantially, with pathologists looking at tissue embedded on glass slides under the microscope,” said Buchbinder.

“While that practice has fueled an amazing amount of innovation and discovery, there’s a whole opportunity that comes once slides are digital, once you overcome some of the limitations that you get working with physical tissue.”

Use of this kind of imaging tool can help pathologists pick up cellular changes that are hard to see by eye alone. Using AI, millions of reference images can be scanned almost instantly for similarities or differences to an image from a specific patient, which has the potential to improve and speed up diagnoses while also improving treatment for diseases like cancer.
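To make the mechanics concrete, this kind of reference-library scan is typically done on compact feature vectors ("embeddings") rather than raw pixels. The sketch below is a minimal, assumed implementation: the random vectors stand in for the output of a trained image model, and the sizes and names are invented, not taken from any company's actual system.

```python
import numpy as np

# Hypothetical reference library: each row stands in for the embedding a
# trained model would produce for one archived pathology image.
rng = np.random.default_rng(0)
reference_embeddings = rng.normal(size=(50_000, 128)).astype(np.float32)

def most_similar(query, library, k=5):
    """Indices of the k reference images most similar to the query,
    ranked by cosine similarity between embedding vectors."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = lib @ q  # one dot product per reference image
    return np.argsort(scores)[-k:][::-1]

# A query that is a slightly perturbed copy of reference image 42
query = reference_embeddings[42] + 0.01 * rng.normal(size=128).astype(np.float32)
print(most_similar(query, reference_embeddings)[0])  # 42 ranks first
```

At clinical scale the exhaustive dot product is usually replaced by an approximate nearest-neighbour index, but the ranking idea is the same.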

Greater access to advanced single-cell technology has led to cell-based approaches being adopted. For example, Cytoreason focuses on cell-level models of human disease and claims to have one of the world’s largest libraries of human molecular data. The company has partnerships with many big pharma companies like Roche, Sanofi and Pfizer, which are using its models to develop better and more targeted drugs.

Deepcell, in contrast, uses microfluidic single-cell technology and machine learning to analyze and sort cells without bias both for diagnostic and research purposes. “The first problem we are tackling is the lack of adequate markers for specific applications. Using supervised learning techniques, Deepcell can train AI models to identify specific cells and cell types at high performance by just imaging them,” the company’s Chief Medical Officer, Tom Musci, told Clinical Omics.
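The supervised-learning idea Musci describes can be sketched in a few lines. The data below is synthetic, and the model (nearest centroid) is deliberately the simplest possible classifier; Deepcell's actual models and features are not described in this article.

```python
import numpy as np

# Toy stand-in for labeled cell images: two synthetic cell "types" whose
# feature vectors cluster around different centers. In practice the
# features would come from imaging, and the labels from a curated set.
rng = np.random.default_rng(1)
type_a = rng.normal(loc=0.0, scale=0.5, size=(200, 16))
type_b = rng.normal(loc=2.0, scale=0.5, size=(200, 16))
X = np.vstack([type_a, type_b])
y = np.array([0] * 200 + [1] * 200)

# "Training": compute one centroid per cell type from the labeled examples.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(cells):
    """Assign each cell to the type whose centroid is nearest."""
    dists = np.linalg.norm(cells[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

accuracy = (classify(X) == y).mean()
print(accuracy)  # 1.0 on this cleanly separated toy data
```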

“A second problem we are solving is the pressing need to improve the cost and complexity of single cell analysis,” Musci added. “There are known technical issues associated with staining and labeling cells, as well as inevitable errors in human judgment.”

AI is being used extensively by companies such as Exscientia, Sema4, and Euretos, among others, to develop new therapeutics through a variety of methods. These include finding new drug targets, repurposing drugs originally developed for other purposes, and improving the precision medicine approach by developing advanced models of how patients respond to different diseases and available treatments. Exscientia has already brought three drugs to clinical testing using its technology, including a treatment for Alzheimer’s disease psychosis, which recently entered Phase I clinical trials in the U.S.

Some areas of medicine lend themselves to AI-based applications more than others. For example, oncologists are already applying precision medicine technologies, such as genetically targeted therapies, more than any other specialty. The large amount of data already available and the enormous need for new drugs and diagnostic tools means many AI-based approaches are already being applied in this space.

PrecisionLife is taking on the challenge of creating models of complex, chronic diseases such as dementia, schizophrenia, and metabolic disease to analyze disease risk and help predict therapy response.

“We’re trying to take the premise of precision medicine out of being something that is aligned to oncology and rare diseases and apply it to all of those more complex chronic diseases that actually cost the health systems so much money to treat, and where there are still huge pools of unmet medical need within the patient populations,” said Gardner.

Overcoming challenges

Although AI is already helping to make precision medicine a reality, there are several challenges that need to be overcome before AI-based technology can become more widespread in clinics and medical laboratories.

A problem with any system using AI is that it is only as good as the data it uses and how it has been designed to use that data. AI works well when it can access a high volume of good-quality data in a well-designed, unbiased model, but that is not always the case.

“Labs only really started going digital a few years ago and so there isn’t that massive trove of pathology information sitting at every site, or every laboratory that’s already digital,” said Buchbinder. “What that means is that if you’re looking to develop a new AI application, you have to find sites that are already digital, or that can digitize very quickly.”

Proscia recognized this shortcoming and is addressing it by helping pathology labs to digitize their collections.

Helping clinicians and life scientists accept and understand AI-based tools is another potential challenge to overcome, as AI is complex and persuading people to trust the technology has proved difficult in the past.

“There are cultural issues. If you are going to present a tool to clinicians, it has to be able to explain how it’s come to a decision,” notes Gardner.

Genome-wide association studies and, to a related extent, polygenic risk scores have a known problem with diversity, as most of the original populations that were tested were from a white, European background. This can also be a problem with AI-based applications, depending on where the data is sourced from.

“One of the tendencies of traditional AI and machine learning approaches is to replicate signals that they’ve seen before, and that can lead to unintentional biases in the models,” said Gardner. “So they work better in one population, or one gender, than they do in others.”
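A first, crude audit for the population-specific gap Gardner describes is simply to stratify a model's accuracy by subgroup. The arrays below are invented for illustration; a real audit would use a model's actual predictions on held-out patients.

```python
import numpy as np

# Hypothetical evaluation data: a demographic group tag, the true label,
# and the model's prediction for each patient (all made up here).
groups = np.array(["A"] * 6 + ["B"] * 4)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0, 0, 1])

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy: a simple check for models that work better
    in one population than another."""
    return {
        str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```

A gap like the one printed here (perfect in group A, coin-flip in group B) is exactly the kind of bias that only shows up when performance is broken out by population.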

Many clinicians are used to relying on medical regulatory agencies such as the FDA and EMA to judge whether a drug or device is of good enough quality to guide their treatment decisions, but the fast pace of development in AI and other advanced computing technologies has tended to run ahead of the regulators.

However, over the last couple of years the FDA has made a commitment to research and investigate how these kinds of AI-based tools can be regulated. Scott Gottlieb, the former commissioner of the FDA, formed a set of principles to help assess and evaluate AI-based applications—including those that learn over time—before he left the agency.

“It’s something which is a radical departure from the traditional regulatory thinking. But which today is an absolute must have,” said Buchbinder.

David Harel, CEO, Cytoreason

David Harel, CEO of Cytoreason, agrees and adds that the COVID-19 pandemic has also helped to make regulatory agencies more open to the use of AI-based tools.

“CytoReason built a model of acute respiratory distress syndrome (ARDS) in the lung and we took all the drugs of our customers and we gave them the option to match the response of their anti-inflammatories on ARDS for free to see if there was a match,” he explained.

“Six months before COVID-19, nobody believed that the FDA would approve something using efficacy data that was generated in a computer simulation. We got an emergency approval, and a clinical trial was initiated based on that data.”

Another hurdle for companies developing AI-based tools is that although the technology has the potential to save a lot of money in the long run, it can be, at least initially, an expensive investment for laboratories or drug developers.

Reimbursement from insurance companies can also be problematic for tools or tests that have a preventative purpose.

“The reimbursement model pays for a test and the test is oriented to a treatment. The incentives are not really well aligned to prevent disease and provide early interventions that might actually keep people well,” says Gardner.

Can AI help realize better precision medicine?

Despite the challenges to widespread adoption, there is little doubt that AI in medicine is here to stay.

“In the last three years, we’ve gone from bench to bedside, we’ve gone from applications that were in discovery to applications being evaluated and deployed into routine clinical practice. And this is a huge deal for the industry,” says Buchbinder.

AI has the ability to identify multiple characteristics in a pathology slide and tag them accordingly using a color code. [Source: Proscia]

It’s important to see through the hype and understand that AI can be a useful tool to help healthcare providers, researchers and drug developers to do their jobs better and faster, but that it’s not a replacement for their work.

“My take is that this is moving in the right direction,” added Harel. “But let’s not rush it, because there’s so much work to be done on the R&D side.”

A report published in June by the World Health Organization, entitled Ethics and Governance of Artificial Intelligence for Health, advises that AI “holds great promise” for improving healthcare around the world, as long as ethics and human rights are taken into account in its design.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said WHO Director-General Tedros Adhanom Ghebreyesus, in a press statement about the report.

Looking to the future, it seems likely AI will play a role in more integrated healthcare where multiple datasets are incorporated into connected electronic patient records and diagnoses and treatment decisions are made based on analysis of this data.

“That’s where I think AI has potentially the biggest impact to play,” emphasized Buchbinder. “Because you start to find new trends. When you combine a pathology image, a radiology image, and other imaging and non-imaging data, you find new trends that no individual specialist is trained to look at.”

AI certainly has potential to make patient diagnoses more precise, both by making them more accurate and avoiding false positives.

Applying AI in reading pathology slides promises to significantly improve the efficiency and accuracy of pathology labs.

“Take the case of mammograms,” said Musci. “False-positive mammograms can cause anxiety and often lead to unnecessary tests such as MRIs, ultrasounds, and interventions.  About half of the women getting annual mammograms over a 10-year period will have a false-positive finding at some point. Reducing this would definitely save time, money, and alleviate anxiety.”
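The cumulative-risk arithmetic behind that "about half" figure is straightforward: if each screen independently carries a false-positive probability p, the chance of at least one false positive over ten annual screens is 1 - (1 - p)**10. The 7% per-screen rate below is an assumed round number for illustration, not a figure from the article.

```python
# If each annual screen has false-positive probability p (assumed 7% here),
# the chance of at least one false positive across 10 screens is:
p = 0.07
ten_year_risk = 1 - (1 - p) ** 10
print(round(ten_year_risk, 2))  # 0.52, i.e. roughly "about half"
```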

In drug development, we are just at the beginning. “There is a whole raft of novel drug discovery programs that can be initiated, because we’ve identified novel targets,” said Gardner. “There are lots of different areas that it will impact. I think in five or 10 years’ time, we’ll be asking ‘why are we still using tools that aren’t using some level of AI in precision medicine?’”



“I Can’t Even Save”: Americans Are Getting Absolutely Crushed Under Enormous Debt Load

While Joe Biden insists that Americans are doing great - suggesting in his State of the Union Address last week that "our economy is the envy of the world," Americans are being absolutely crushed by inflation (which the Biden admin blames on 'shrinkflation' and 'corporate greed'), and of course - crippling debt.

The signs are obvious. Last week we noted that banks' charge-offs are accelerating, and are now above pre-pandemic levels.

...and leading this increase are credit card loans - with delinquencies that haven't been this high since Q3 2011.

On top of that, while credit cards and nonfarm, nonresidential commercial real estate loans drove the quarterly increase in the noncurrent rate, residential mortgages drove the quarterly increase in the share of loans 30-89 days past due.

And while Biden and crew can spin all they want, an average of polls from RealClear Politics shows that just 40% of people approve of Biden's handling of the economy.

Crushed

On Friday, Bloomberg dug deeper into the effects of Biden's "envious" economy on Americans - specifically, how massive debt loads (credit cards and auto loans especially) are absolutely crushing people.

Two years after the Federal Reserve began hiking interest rates to tame prices, delinquency rates on credit cards and auto loans are the highest in more than a decade. For the first time on record, interest payments on those and other non-mortgage debts are as big a financial burden for US households as mortgage interest payments.

According to the report, this presents a difficult reality for millions of consumers who drive the US economy - "The era of high borrowing costs — however necessary to slow price increases — has a sting of its own that many families may feel for years to come, especially the ones that haven’t locked in cheap home loans."

The Fed, meanwhile, doesn't appear poised to cut rates until later this year.

According to a February paper from the IMF and Harvard, the recent high cost of borrowing - something that isn't reflected in inflation figures - is at the heart of lackluster consumer sentiment, despite inflation having moderated and a job market that has recovered (thanks to job gains almost entirely enjoyed by immigrants).

In short, the debt burden has made life under President Biden a constant struggle throughout America.

"I’m making the most money I've ever made, and I’m still living paycheck to paycheck," 40-year-old Denver resident Nikki Cimino told Bloomberg. Cimino is carrying a monthly mortgage of $1,650, and has $4,000 in credit card debt following a 2020 divorce.

Nikki Cimino
Photographer: Rachel Woolf/Bloomberg

"There's this wild disconnect between what people are experiencing and what economists are experiencing."

What's more, according to Wells Fargo, families have taken on debt at a comparatively fast rate - no doubt to sustain the same lifestyle as low rates and pandemic-era stimmies provided. In fact, it only took four years for households to set a record new debt level after paying down borrowings in 2021 when interest rates were near zero. 

Meanwhile, that increased debt load is exacerbated by credit card interest rates that have climbed to a record 22%, according to the Fed.
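To put a record 22% APR in concrete terms, here is the back-of-envelope monthly interest on the $4,000 balance mentioned earlier - a simple APR/12 approximation, since real card interest compounds daily and varies by issuer.

```python
balance = 4_000   # credit card balance cited earlier in the article
apr = 0.22        # record average rate according to the Fed
monthly_interest = balance * apr / 12
print(round(monthly_interest, 2))  # 73.33 per month before any compounding
```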

[P]art of the reason some Americans were able to take on a substantial load of non-mortgage debt is because they’d locked in home loans at ultra-low rates, leaving room on their balance sheets for other types of borrowing. The effective rate of interest on US mortgage debt was just 3.8% at the end of last year.

Yet the loans and interest payments can be a significant strain that shapes families’ spending choices. -Bloomberg

And of course, the highest-interest debt (credit cards) is hurting lower-income households the most, as tends to be the case.

The lowest earners also understandably had the biggest increase in credit card delinquencies.

"Many consumers are levered to the hilt — maxed out on debt and barely keeping their heads above water," Allan Schweitzer, a portfolio manager at credit-focused investment firm Beach Point Capital Management told Bloomberg. "They can dog paddle, if you will, but any uptick in unemployment or worsening of the economy could drive a pretty significant spike in defaults."

"We had more money when Trump was president," said Denise Nierzwicki, 69. She and her 72-year-old husband Paul have around $20,000 in debt spread across multiple cards - all of which have interest rates above 20%.

Denise and Paul Nierzwicki blame Biden for what they see as a gloomy economy and plan to vote for the Republican candidate in November.
Photographer: Jon Cherry/Bloomberg

During the pandemic, Denise lost her job and a business deal for a bar they owned in their hometown of Lexington, Kentucky. While they applied for Social Security to ease the pain, Denise is now working 50 hours a week at a restaurant. Despite this, they're barely scraping enough money together to service their debt.

The couple blames Biden for what they see as a gloomy economy and plans to vote for the Republican candidate in November. Denise routinely voted for Democrats up until about 2010, when she grew dissatisfied with Barack Obama’s economic stances, she said. Now, she supports Donald Trump because he lowered taxes and because of his policies on immigration. -Bloomberg

Meanwhile there's student loans - which are not able to be discharged in bankruptcy.

"I can't even save, I don't have a savings account," said 29-year-old Columbus, Ohio, resident Brittany Walling - who has around $80,000 in federal student loans, $20,000 in private debt from her undergraduate and graduate degrees, and $6,000 in credit card debt she accumulated over a six-month stretch in 2022 while she was unemployed.

"I just know that a lot of people are struggling, and things need to change," she told the outlet.

The only silver lining of note, according to Bloomberg, is that broad wage gains resulting in larger paychecks have made it easier for people to throw money at credit card bills.

Yet, according to Wells Fargo economist Shannon Grein, "As rates rose in 2023, we avoided a slowdown due to spending that was very much tied to easy access to credit ... Now, credit has become harder to come by and more expensive."

According to Grein, the change has posed "a significant headwind to consumption."

Then there's the election

"Maybe the Fed is done hiking, but as long as rates stay on hold, you still have a passive tightening effect flowing down to the consumer and being exerted on the economy," she continued. "Those household dynamics are going to be a factor in the election this year."

Meanwhile, swing-state voters in a February Bloomberg/Morning Consult poll said they trust Trump more than Biden on interest rates and personal debt.

Reverberations

These 'headwinds' have M3 Partners' Mohsin Meghji concerned.

"Any tightening there immediately hits the top line of companies," he said, noting that for heavily indebted companies that took on debt during years of easy borrowing, "there's no easy fix."

Tyler Durden Fri, 03/15/2024 - 18:00

Sylvester researchers, collaborators call for greater investment in bereavement care


MIAMI, FLORIDA (March 15, 2024) – The public health toll from bereavement is well-documented in the medical literature, with bereaved persons at greater risk for many adverse outcomes, including mental health challenges, decreased quality of life, health care neglect, cancer, heart disease, suicide, and death. Now, in a paper published in The Lancet Public Health, researchers sound a clarion call for greater investment, at both the community and institutional level, in establishing support for grief-related suffering.

Credit: Photo courtesy of Memorial Sloan Kettering Comprehensive Cancer Center

The authors emphasized that increased mortality worldwide caused by the COVID-19 pandemic, suicide, drug overdose, homicide, armed conflict, and terrorism have accelerated the urgency for national- and global-level frameworks to strengthen the provision of sustainable and accessible bereavement care. Unfortunately, current national and global investment in bereavement support services is woefully inadequate to address this growing public health crisis, said researchers with Sylvester Comprehensive Cancer Center at the University of Miami Miller School of Medicine and collaborating organizations.  

They proposed a model for transitional care that involves firmly establishing bereavement support services within healthcare organizations to ensure continuity of family-centered care while bolstering community-based support through development of “compassionate communities” and a grief-informed workforce. The model highlights the responsibility of the health system to build bridges to the community that can help grievers feel held as they transition.   

The Center for the Advancement of Bereavement Care at Sylvester is advocating for precisely this model of transitional care. Wendy G. Lichtenthal, PhD, FT, FAPOS, who is Founding Director of the new Center and associate professor of public health sciences at the Miller School, noted, “We need a paradigm shift in how healthcare professionals, institutions, and systems view bereavement care. Sylvester is leading the way by investing in the establishment of this Center, which is the first to focus on bringing the transitional bereavement care model to life.”

What further distinguishes the Center is its roots in bereavement science, advancing care approaches that are both grounded in research and community-engaged.  

The authors focused on palliative care, which strives to provide a holistic approach to minimize suffering for seriously ill patients and their families, as one area where improvements are critically needed. They referenced groundbreaking reports of the Lancet Commissions on the value of global access to palliative care and pain relief that highlighted the “undeniable need for improved bereavement care delivery infrastructure.” One of those reports acknowledged that bereavement has been overlooked and called for reprioritizing social determinants of death, dying, and grief.

“Palliative care should culminate with bereavement care, both in theory and in practice,” explained Lichtenthal, who is the article’s corresponding author. “Yet, bereavement care often is under-resourced and beset with access inequities.”

Transitional bereavement care model

So, how do health systems and communities prioritize bereavement services to ensure that no bereaved individual goes without needed support? The transitional bereavement care model offers a roadmap.

“We must reposition bereavement care from an afterthought to a public health priority. Transitional bereavement care is necessary to bridge the gap in offerings between healthcare organizations and community-based bereavement services,” Lichtenthal said. “Our model calls for health systems to shore up the quality and availability of their offerings, but also recognizes that resources for bereavement care within a given healthcare institution are finite, emphasizing the need to help build communities’ capacity to support grievers.”

Key to the model, she added, is the bolstering of community-based support through development of “compassionate communities” and “upskilling” of professional services to assist those with more substantial bereavement-support needs.

The model contains these pillars:

  • Preventive bereavement care – healthcare teams engage in bereavement-conscious practices, and compassionate communities are mindful of the emotional and practical needs of dying patients’ families.
  • Ownership of bereavement care – institutions provide bereavement education for staff, risk screenings for families, outreach and counseling or grief support. Communities establish bereavement centers and “champions” to provide bereavement care at workplaces, schools, places of worship or care facilities.
  • Resource allocation for bereavement care – dedicated personnel offer universal outreach, and bereaved stakeholders provide input to identify community barriers and needed resources.
  • Upskilling of support providers – Bereavement education is integrated into training programs for health professionals, and institutions offer dedicated grief specialists. Communities have trained, accessible bereavement specialists who provide support and are educated in how to best support bereaved individuals, increasing their grief literacy.
  • Evidence-based care – bereavement care is evidence-based and features effective grief assessments, interventions, and training programs. Compassionate communities remain mindful of bereavement care needs.

Lichtenthal said the new Center will strive to materialize these pillars and aims to serve as a global model for other health organizations. She hopes the paper’s recommendations “will cultivate a bereavement-conscious and grief-informed workforce as well as grief-literate, compassionate communities and health systems that prioritize bereavement as a vital part of ethical healthcare.”

“This paper is calling for healthcare institutions to respond to their duty to care for the family beyond patients’ deaths. By investing in the creation of the Center for the Advancement of Bereavement Care, Sylvester is answering this call,” Lichtenthal said.

Follow @SylvesterCancer on X for the latest news on Sylvester’s research and care.

# # #

Article Title: Investing in bereavement care as a public health priority

DOI: 10.1016/S2468-2667(24)00030-6

Authors: The complete list of authors is included in the paper.

Funding: The authors received funding from the National Cancer Institute (P30 CA240139, Nimer; P30 CA008748, Vickers).

Disclosures: The authors declared no competing interests.


Separating Information From Disinformation: Threats From The AI Revolution

Authored by Per Bylund via The Mises Institute

Artificial intelligence (AI) cannot distinguish fact from fiction. Nor is it creative: it does not create novel content but repeats, repackages, and reformulates what has already been said (though perhaps in new ways).

I am sure someone will disagree with the latter, perhaps pointing to the fact that AI can clearly generate, for example, new songs and lyrics. I agree with this, but it misses the point. AI produces a “new” song lyric only by drawing from the data of previous song lyrics and then uses that information (the inductively uncovered patterns in it) to generate what to us appears to be a new song (and may very well be one). However, there is no artistry in it, no creativity. It’s only a structural rehashing of what exists.
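The "structural rehashing" point can be made concrete with the crudest possible generative model, a word-level Markov chain: every adjacent word pair it emits already occurs in its training corpus. The corpus here is a made-up toy, and large language models are vastly more sophisticated, but the dependence on existing data is the same in kind.

```python
import random
from collections import defaultdict

# Tiny training corpus; a real model would ingest millions of lyrics.
corpus = "the night is young the night is dark the road is long".split()

# Build the chain: for each word, record the words observed to follow it.
chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)

def generate(start, n, seed=0):
    """Emit up to n words, each drawn from successors seen in the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(n - 1):
        successors = chain.get(words[-1])
        if not successors:
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("the", 6))
# Every adjacent word pair in the output already occurs in `corpus`.
```

The "new" line may never have appeared verbatim, but it is assembled entirely from transitions present in the training data, which is the rehashing Bylund describes, writ small.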

Of course, we can debate to what extent humans can think truly novel thoughts and whether human learning may be based solely or primarily on mimicry. However, even if we would—for the sake of argument—agree that all we know and do is mere reproduction, humans have limited capacity to remember exactly and will make errors. We also fill in gaps with what subjectively (not objectively) makes sense to us (Rorschach test, anyone?). Even in this very limited scenario, which I disagree with, humans generate novelty beyond what AI is able to do.

Both the inability to distinguish fact from fiction and the inductive tether to existing data patterns are problems that can be alleviated programmatically—but that also leaves them open to manipulation.

Manipulation and Propaganda

When Google launched its Gemini AI in February, it immediately became clear that the AI had a woke agenda. Among other things, the AI pushed woke diversity ideals into every conceivable response and refused to show images of white people (including when asked to produce images of the Founding Fathers).

Tech guru and Silicon Valley investor Marc Andreessen summarized it on X (formerly Twitter): “I know it’s hard to believe, but Big Tech AI generates the output it does because it is precisely executing the specific ideological, radical, biased agenda of its creators. The apparently bizarre output is 100% intended. It is working as designed.”

There is indeed a design to these AIs beyond the basic categorization and generation engines. The responses are not perfectly inductive or generative. In part, this is necessary in order to make the AI useful: filters and rules are applied to make sure that the responses that the AI generates are appropriate, fit with user expectations, and are accurate and respectful. Given the legal situation, creators of AI must also make sure that the AI does not, for example, violate intellectual property laws or engage in hate speech. AI is also designed (directed) so that it does not go haywire or offend its users (remember Tay?).

However, because such filters are applied and the “behavior” of the AI is already directed, it is easy to take it a little further. After all, when is a response too offensive versus offensive but within the limits of allowable discourse? It is a fine and difficult line that must be specified programmatically.

It also opens the possibility of steering the generated responses beyond mere quality assurance. With filters already in place, it is easy to make the AI issue statements of a specific type or statements that nudge the user in a certain direction (in terms of selected facts, interpretations, and worldviews). It can also be used to give the AI an agenda, as Andreessen suggests, such as making it relentlessly woke.

Thus, AI can be used as an effective propaganda tool, a fact that both the corporations creating these systems and the governments and agencies regulating them have recognized.

Misinformation and Error

States have long refused to admit that they benefit from and use propaganda to steer and control their subjects. This is in part because they want to maintain a veneer of legitimacy as democratic governments that govern based on (rather than shape) people’s opinions. Propaganda has a bad ring to it; it’s a means of control.

However, the state’s enemies—both domestic and foreign—are said to understand the power of propaganda and do not hesitate to use it to cause chaos in our otherwise untainted democratic society. The government must save us from such manipulation, they claim. Of course, rarely does it stop at mere defense. We saw this clearly during the covid pandemic, in which the government together with social media companies in effect outlawed expressing opinions that were not the official line (see Murthy v. Missouri).

AI is just as easy to manipulate for propaganda purposes as social media algorithms, but with the added bonus that it shapes more than people's opinions—and that users tend to trust that what the AI reports is true. As we saw in the previous article on the AI revolution, this is not a valid assumption, but it is nevertheless a widely held view.

If the AI can be instructed not to comment on certain things that the creators (or regulators) do not want people to see or learn, then that content is effectively “memory holed.” Such “unwanted” information will not spread because people will not be exposed to it—whether by showing only diverse representations of the Founding Fathers (as Google’s Gemini did) or by presenting, for example, only Keynesian macroeconomic truths to make it appear that there is no other perspective. People don’t know what they don’t know.

Of course, nothing says that what is presented to the user is true. In fact, the AI itself cannot distinguish fact from fiction; it only generates responses according to direction and based on whatever it has been fed. This leaves plenty of scope for misrepresentation of the truth and can make the world believe outright lies. AI, therefore, can easily be used to impose control, whether upon a state, the subjects under its rule, or even a foreign power.

The Real Threat of AI

What, then, is the real threat of AI? As we saw in the first article, large language models will not (cannot) evolve into artificial general intelligence as there is nothing about inductive sifting through large troves of (humanly) created information that will give rise to consciousness. To be frank, we haven’t even figured out what consciousness is, so to think that we will create it (or that it will somehow emerge from algorithms discovering statistical language correlations in existing texts) is quite hyperbolic. Artificial general intelligence is still hypothetical.

As we saw in the second article, there is also no economic threat from AI. It will not make humans economically superfluous and cause mass unemployment. AI is productive capital, which therefore has value to the extent that it serves consumers by contributing to the satisfaction of their wants. Misused AI is as valuable as a misused factory—it will tend toward its scrap value. However, this doesn’t mean that AI will have no impact on the economy. It will, and already has, but the impact is not as big in the short term as some fear, and likely bigger in the long term than we expect.

No, the real threat is AI’s impact on information. This is in part because induction is an inappropriate source of knowledge—truth and fact are not a matter of frequency or statistical probabilities. The evidence and theories of Nicolaus Copernicus and Galileo Galilei would get weeded out as improbable (false) by an AI trained on all the (best and brightest) writings on geocentrism at the time. There is no progress and no learning of new truths if we trust only historical theories and presentations of fact.

However, this problem can probably be overcome by clever programming (meaning implementing rules—and fact-based limitations—to the induction problem), at least to some extent. The greater problem is the corruption of what AI presents: the misinformation, disinformation, and malinformation that its creators and administrators, as well as governments and pressure groups, direct it to create as a means of controlling or steering public opinion or knowledge.

This is the real danger that the now-famous open letter, signed by Elon Musk, Steve Wozniak, and others, pointed to:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

Other than the economically illiterate reference to “automat[ing] away all the jobs,” the warning is well-taken. AI will not Terminator-like start to hate us and attempt to exterminate mankind. It will not make us all into biological batteries, as in The Matrix. However, it will—especially when corrupted—misinform and mislead us, create chaos, and potentially make our lives “solitary, poor, nasty, brutish and short.”

Tyler Durden Fri, 03/15/2024 - 06:30
