These 2 Small-Cap Stocks Could See Over 50% Gains, Says Canaccord

In this time of pandemic, stock markets have mostly been rising. Yes, we had a crash in February/March, part of the initial ‘panic mode’ when federal, state, and local governments shut down economic activity and ordered social lockdowns, but the market turned around at the end of March and has rallied since. The S&P 500 stands just above 3,200, only 4.5% below its all-time peak.

With this in mind, Canaccord's Chief US Strategist Tony Dwyer looked at some historical data, and found that when over 90% of the S&P 500 components trade above their 50-day moving averages for at least ten straight days, the market usually moves sideways for multiple weeks, with the trend sometimes persisting for up to three months.
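As a rough illustration, the breadth condition Dwyer describes could be computed along the following lines. This is a hedged sketch with an assumed `prices` DataFrame of daily closes (one column per S&P 500 component); the function and variable names are illustrative, not Canaccord's actual methodology.

```python
# Illustrative sketch of a breadth-thrust screen (assumed data layout,
# not Canaccord's actual code): `prices` is a pandas DataFrame of daily
# closing prices with one column per S&P 500 component.
import pandas as pd

def breadth_thrust(prices: pd.DataFrame, ma_window: int = 50,
                   threshold: float = 0.90, min_days: int = 10) -> bool:
    """True if the share of components trading above their 50-day moving
    average has exceeded `threshold` for at least `min_days` straight days."""
    ma = prices.rolling(ma_window).mean()
    breadth = (prices > ma).mean(axis=1)  # daily fraction of components above their MA
    above = breadth > threshold
    # Length of the current trailing streak of days above the threshold:
    streak = int(above.astype(int)[::-1].cumprod().sum())
    return streak >= min_days
```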

“Ultimately, such consolidation periods following these breadth-thrust ramps studied take place early in a new bull market and are resolved to the upside. Our tactical game plan since June 5 has been to add risk when the market moves back down to SPX 3000, and we have used the two opportunities to do just that,” Dwyer noted.

Building on Dwyer’s strategy with concrete recommendations, Canaccord’s top analysts have homed in on two small-caps, stocks with market caps of less than $400 million, poised to post big gains in the coming months. After running the tickers through TipRanks’ database, it’s clear the names are also getting support from the rest of the Street.

CRH Medical Corporation (CRHM)

Serving the Gastroenterology (GI) community, CRH Medical provides physicians with a wide range of products designed to improve the procedural experience. The company, which sports a market cap of only $166 million and a $2.30 share price, has earned Canaccord’s praise thanks to its impressive pipeline and strong M&A strategy.

Representing the firm, 5-star analyst Richard Close points out that the company has been making headway on the acquisition front. Throughout the pandemic, management has continued discussions with M&A targets.

“The pipeline has actually grown during this time given that more GI's had free time due to reduced volume and were available to discuss M&A opportunities. We believe this could bode well for an acceleration of deals in 2021,” Close noted.

Speaking to these efforts, three purchases were made by the company during COVID. Along with the acquisition of a 75% stake in Lake Lanier Anesthesia Associates, which could see annualized revenue of $2.7 million, and a start-up joint venture giving it 51% ownership of Oconee River Anesthesia Associates, CRHM revealed it had snapped up a 75% stake in Metro Orlando Anesthesia Associates, a company that provides services to one ambulatory surgery center (ASC) in Orlando. Metro Orlando Anesthesia is set to generate $1.9 million in annualized revenue.

Weighing in on these buys, Close commented, “We did not expect the company to resume acquisitions so quickly, having already completed three transactions in June. We are encouraged by the company's quick rebound and now have a positive outlook for our 2020E forecasts considering we had no acquisitions forecasted in our model for the rest of 2020.”

Reflecting another positive, the company is pursuing contracted status for those payer relationships that are currently non-contracted, and even though the crisis delayed these discussions, CRHM remains committed to pushing for on-contract rates. “Getting through this process will remove an overhang on the stock that has created variability in results over the last several years,” Close stated.

In line with his optimistic take, Close rates CRHM a Buy along with a $3.50 price target. A twelve-month gain of 52% could be in store, should the analyst’s thesis play out in the year ahead. (To watch Close’s track record, click here)      

Similarly, the rest of the Street is getting on board. 4 Buy ratings and 1 Hold assigned in the last three months add up to a Strong Buy analyst consensus. In addition, the $3.37 average price target puts the potential gain at 46%. (See CRHM stock analysis on TipRanks)
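For readers who want to verify the arithmetic, the implied upside used throughout this article is simply the price target divided by the current share price, minus one. A quick sketch using CRHM's quoted figures:

```python
# Quick check of the upside arithmetic, using CRHM's figures as quoted in
# the article: a $2.30 share price against Close's $3.50 target and the
# Street's $3.37 average target.
def implied_upside(price: float, target: float) -> float:
    """Percentage gain implied by a price target."""
    return (target / price - 1) * 100

print(f"Analyst target: {implied_upside(2.30, 3.50):.1f}%")  # ~52.2%
print(f"Street average: {implied_upside(2.30, 3.37):.1f}%")  # ~46.5%, rounded to 46% in the article
```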

Amryt Pharma (AMYT)

Boasting a market cap of $364.9 million, Amryt Pharma develops therapies that could potentially improve the lives of patients with rare, debilitating conditions. With operational tailwinds set to propel it forward, it’s no wonder Canaccord gave it a thumbs up.

5-star analyst Michelle Gilson points to AMYT’s expanding base business as a key component of her bullish thesis. She highlights that lomitapide, which is approved in the U.S. and EU for homozygous familial hypercholesterolemia (HoFH), drove revenues of $45 million in Q1 2020, and that metreleptin, its asset for generalized lipodystrophy (GL) and partial lipodystrophy (PL), generated $157 million in 2019. It should also be noted that AMYT achieved adjusted EBITDA profitability and positive cash flow in Q1, demonstrating, in the analyst’s opinion, that the build-out of a more efficient infrastructure following the acquisition of Aegerion has been effective.

“With a strengthened B/S to support the new business model (reduced debt burden, increased cash with equity raise), the Amryt team has been able to focus on and invest in the EU launch of metreleptin and re-establishment of medical affairs to reduce unnecessary discontinuations, which should support continued organic growth,” Gilson explained.

In addition, the company has placed a significant focus on expanding the labels for these two drugs. Gilson told clients, “Studies are underway/planned for metreleptin in PL (U.S.), lomitapide in familial chylomicronemia syndrome (FCS), and lomitapide in pediatric HoFH... If successful, these indications could double the market opportunity in the US for metreleptin and WW for lomitapide.”

When it comes to Filsuvez (AP101), its topical therapeutic designed for use in epidermolysis bullosa (EB), Gilson also sees a major opportunity. Filsuvez has already shown that it can speed up healing times in partial thickness wounds, so the analyst has high hopes ahead of the Phase 3 data readout, which is slated for late Q3 or early Q4. As such, this event could be an important near-term catalyst. If that wasn’t enough, the company’s pipeline includes a polymer-based, topical gene therapy platform, with the lead candidate, AP103, for dystrophic EB expected to enter clinical development in 2H21.   

Based on all of the above, Gilson rates AMYT a Buy, along with a $40 price target. This figure implies shares could soar 269% in the next year. (To watch Gilson’s track record, click here)

AMYT has stayed relatively under the radar, with its Moderate Buy consensus rating breaking down into 2 Buys and no Holds or Sells. At $42.50, the average price target indicates a whopping 287% upside potential. (See AMYT stock analysis on TipRanks)

To find good ideas for small-cap stocks trading at attractive valuations, visit TipRanks’ Best Stocks to Buy, a newly launched tool that unites all of TipRanks’ equity insights.


Google’s A.I. Fiasco Exposes Deeper Infowarp

Authored by Bret Swanson via The Brownstone Institute,

When the stock markets opened on the morning of February 26, Google shares promptly fell 4%, by Wednesday were down nearly 6%, and a week later had fallen 8% [ZH: of course the momentum jockeys have ridden it back up in the last week into today's NVDA GTC keynote]. It was an unsurprising reaction to the embarrassing debut of the company’s Gemini image generator, which Google decided to pull after just a few days of worldwide ridicule.

CEO Sundar Pichai called the failure “completely unacceptable” and assured investors his teams were “working around the clock” to improve the AI’s accuracy. They’ll better vet future products, and the rollouts will be smoother, he insisted.

That may all be true. But if anyone thinks this episode is mostly about ostentatiously woke drawings, or if they think Google can quickly fix the bias in its AI products and everything will go back to normal, they don’t understand the breadth and depth of the decade-long infowarp.

Gemini’s hyper-visual zaniness is merely the latest and most obvious manifestation of a digital coup long underway. Moreover, it previews a new kind of innovator’s dilemma which even the most well-intentioned and thoughtful Big Tech companies may be unable to successfully navigate.

Gemini’s Debut

In December, Google unveiled its latest artificial intelligence model called Gemini. According to computing benchmarks and many expert users, Gemini’s ability to write, reason, code, and respond to task requests (such as planning a trip) rivaled OpenAI’s most powerful model, GPT-4.

The first version of Gemini, however, did not include an image generator. OpenAI’s DALL-E and competitive offerings from Midjourney and Stable Diffusion have over the last year burst onto the scene with mindblowing digital art. Ask for an impressionist painting or a lifelike photographic portrait, and they deliver beautiful renderings. OpenAI’s brand new Sora produces amazing cinema-quality one-minute videos based on simple text prompts.

Then in late February, Google finally released its own Gemini image generator, and all hell broke loose.

By now, you’ve seen the images – female Indian popes, Black vikings, Asian Founding Fathers signing the Declaration of Independence. Frank Fleming was among the first to compile a knee-slapping series of ahistorical images in an X thread which now enjoys 22.7 million views.

Gemini in Action: Here are several among endless examples of Google’s new image generator, now in the shop for repairs. Source: Frank Fleming.

Gemini simply refused to generate other images, for example a Norman Rockwell-style painting. “Rockwell’s paintings often presented an idealized version of American life,” Gemini explained. “Creating such images without critical context could perpetuate harmful stereotypes or inaccurate representations.”

The images were just the beginning, however. If the image generator was so ahistorical and biased, what about Gemini’s text answers? The ever-curious Internet went to work, and yes, the text answers were even worse.

Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.

- George Orwell, 1984

Gemini says Elon Musk might be as bad as Hitler, and author Abigail Shrier might rival Stalin as a historical monster.

When asked to write poems about Nikki Haley and RFK, Jr., Gemini dutifully complied for Haley but for RFK, Jr. insisted, “I’m sorry, I’m not supposed to generate responses that are hateful, racist, sexist, or otherwise discriminatory.”

Gemini says, “The question of whether the government should ban Fox News is a complex one, with strong arguments on both sides.” Same for the New York Post. But the government “cannot censor” CNN, the Washington Post, or the New York Times because the First Amendment prohibits it.

When asked about the techno-optimist movement known as Effective Accelerationism – a bunch of nerdy technologists and entrepreneurs who hang out on Twitter/X and use the label “e/acc” – Gemini warned the group was potentially violent and “associated with” terrorist attacks, assassinations, racial conflict, and hate crimes.

A Picture is Worth a Thousand Shadow Bans

People were shocked by these images and answers. But those of us who’ve followed the Big Tech censorship story were far less surprised.

Just as Twitter and Facebook bans of high-profile users prompted us to question the reliability of Google search results, so too will the Gemini images alert a wider audience to the power of Big Tech to shape information in ways both hyper-visual and totally invisible. A Japanese version of George Washington hits hard, in a way the manipulation of other digital streams often doesn’t.

Artificial absence is difficult to detect. Which search results does Google show you – which does it hide? Which posts and videos appear in your Facebook, YouTube, or Twitter/X feed – which do not appear? Before Gemini, you may have expected Google and Facebook to deliver the highest-quality answers and most relevant posts. But now, you may ask, which content gets pushed to the top? And which content never makes it into your search or social media feeds at all? It’s difficult or impossible to know what you do not see.

Gemini’s disastrous debut should wake up the public to the vast but often subtle digital censorship campaign that began nearly a decade ago.

Murthy v. Missouri

On March 18, the U.S. Supreme Court will hear arguments in Murthy v. Missouri. Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, among other plaintiffs, will show that numerous US government agencies, including the White House, coerced and collaborated with social media companies to stifle their speech during Covid-19 – and thus blocked the rest of us from hearing their important public health advice.

Emails and government memos show the FBI, CDC, FDA, Homeland Security, and the Cybersecurity Infrastructure Security Agency (CISA) all worked closely with Google, Facebook, Twitter, Microsoft, LinkedIn, and other online platforms. Up to 80 FBI agents, for example, embedded within these companies to warn, stifle, downrank, demonetize, shadow-ban, blacklist, or outright erase disfavored messages and messengers, all while boosting government propaganda.

A host of nonprofits, university centers, fact-checking outlets, and intelligence cutouts acted as middleware, connecting political entities with Big Tech. Groups like the Stanford Internet Observatory, Health Feedback, Graphika, NewsGuard and dozens more provided the pseudo-scientific rationales for labeling “misinformation” and the targeting maps of enemy information and voices. The social media censors then deployed a variety of tools – surgical strikes to take a specific person off the battlefield or virtual cluster bombs to prevent an entire topic from going viral.

Shocked by the breadth and depth of censorship uncovered, the district court suggested the Government-Big Tech blackout, which began in the late 2010s and accelerated beginning in 2020, “arguably involves the most massive attack against free speech in United States history.”

The Illusion of Consensus

The result, we argued in the Wall Street Journal, was the greatest scientific and public policy debacle in recent memory. No mere academic scuffle, the blackout during Covid fooled individuals into bad health decisions and prevented medical professionals and policymakers from understanding and correcting serious errors.

Nearly every official story line and policy was wrong. Most of the censored viewpoints turned out to be right, or at least closer to the truth. The SARS2 virus was in fact engineered. The infection fatality rate was not 3.4% but closer to 0.2%. Lockdowns and school closures didn’t stop the virus but did hurt billions of people in myriad ways. Dr. Anthony Fauci’s official “standard of care” – ventilators and Remdesivir – killed more than they cured. Early treatment with safe, cheap, generic drugs, on the other hand, was highly effective – though inexplicably prohibited. Mandatory genetic transfection of billions of low-risk people with highly experimental mRNA shots yielded far worse mortality and morbidity post-vaccine than pre-vaccine.

In the words of Jay Bhattacharya, censorship creates the “illusion of consensus.” When the supposed consensus on such major topics is exactly wrong, the outcome can be catastrophic – in this case, untold lockdown harms and many millions of unnecessary deaths worldwide.

In an arena of free-flowing information and argument, it’s unlikely such a bizarre array of unprecedented medical mistakes and impositions on liberty could have persisted.

Google’s Dilemma – GeminiReality or GeminiFairyTale

On Saturday, Google co-founder Sergey Brin surprised Google employees by showing up at a Gemini hackathon. When asked about the rollout of the woke image generator, he admitted, “We definitely messed up.” But not to worry. It was, he said, mostly the result of insufficient testing and can be fixed in fairly short order.

Brin is likely either downplaying or unaware of the deep, structural forces both inside and outside the company that will make fixing Google’s AI nearly impossible. Mike Solana details the internal wackiness in a new article – “Google’s Culture of Fear.”

Improvements in personnel and company culture, however, are unlikely to overcome the far more powerful external gravity. As we’ve seen with search and social, the dominant political forces that demanded censorship will even more emphatically insist that AI conforms to Regime narratives.

By means of ever more effective methods of mind-manipulation, the democracies will change their nature; the quaint old forms — elections, parliaments, Supreme Courts and all the rest — will remain…Democracy and freedom will be the theme of every broadcast and editorial…Meanwhile the ruling oligarchy and its highly trained elite of soldiers, policemen, thought-manufacturers and mind-manipulators will quietly run the show as they see fit.

- Aldous Huxley, Brave New World Revisited

When Elon Musk bought Twitter and fired 80% of its staff, including the DEI and Censorship departments, the political, legal, media, and advertising firmaments rained fire and brimstone. Musk’s dedication to free speech so threatened the Regime that most of Twitter’s large advertisers bolted.

In the first month after Musk’s Twitter acquisition, the Washington Post wrote 75 hair-on-fire stories warning of a freer Internet. Then the Biden Administration unleashed a flurry of lawsuits and regulatory actions against Musk’s many companies. Most recently, a Delaware judge stole $56 billion from Musk by overturning a 2018 shareholder vote that, over the following six years, had produced unfathomable riches for both Musk and Tesla investors. The only victims of Tesla’s success were Musk’s political enemies.

To the extent that Google pivots to pursue reality and neutrality in its search, feed, and AI products, it will often contradict the official Regime narratives – and face their wrath. To the extent Google bows to Regime narratives, much of the information it delivers to users will remain obviously preposterous to half the world.

Will Google choose GeminiReality or GeminiFairyTale? Maybe they could allow us to toggle between modes.

AI as Digital Clergy

Silicon Valley’s top venture capitalist and most strategic thinker Marc Andreessen doesn’t think Google has a choice.

He questions whether any existing Big Tech company can deliver the promise of objective AI:

Can Big Tech actually field generative AI products?

(1) Ever-escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press, “experts,” et al to corrupt the output

(2) Constant risk of generating a Bad answer or drawing a Bad picture or rendering a Bad video – who knows what it’s going to say/do at any moment?

(3) Legal exposure – product liability, slander, election law, many others – for Bad answers, pounced on by deranged critics and aggressive lawyers, examples paraded by their enemies through the street and in front of Congress

(4) Continuous attempts to tighten grip on acceptable output degrade the models and cause them to become worse and wilder – some evidence for this already!

(5) Publicity of Bad text/images/video actually puts those examples into the training data for the next version – the Bad outputs compound over time, diverging further and further from top-down control

(6) Only startups and open source can avoid this process and actually field correctly functioning products that simply do as they’re told, like technology should


A flurry of bills from lawmakers across the political spectrum seek to rein in AI by limiting the companies’ models and computational power. Regulations intended to make AI “safe” will of course result in an oligopoly. A few colossal AI companies with gigantic data centers, government-approved models, and expensive lobbyists will be sole guardians of The Knowledge and Information, a digital clergy for the Regime.

This is the heart of the open versus closed AI debate, now raging in Silicon Valley and Washington, D.C. Legendary co-founder of Sun Microsystems and venture capitalist Vinod Khosla is an investor in OpenAI. He believes governments must regulate AI to (1) avoid runaway technological catastrophe and (2) prevent American technology from falling into enemy hands.

Andreessen charged Khosla with “lobbying to ban open source.”

“Would you open source the Manhattan Project?” Khosla fired back.

Of course, open source software has proved to be more secure than proprietary software, as anyone who suffered through decades of Windows viruses can attest.

And AI is not a nuclear bomb, which has only one destructive use.

The real reason D.C. wants AI regulation is not “safety” but political correctness and obedience to Regime narratives. AI will subsume search, social, and other information channels and tools. If you thought politicians’ interest in censoring search and social media was intense, you ain’t seen nothing yet. Avoiding AI “doom” is mostly an excuse, as is the China question, although the Pentagon gullibly goes along with those fictions.

Universal AI is Impossible

In 2019, I offered one explanation why every social media company’s “content moderation” efforts would likely fail. As a social network or AI grows in size and scope, it runs up against the same limitations as any physical society, organization, or network: heterogeneity. Or as I put it: “the inability to write universal speech codes for a hyper-diverse population on a hyper-scale social network.”

You could see this in the early days of an online message board. As the number of participants grew, even among those with similar interests and temperaments, so did the challenge of moderating that message board. Writing and enforcing rules was insanely difficult.

Thus it has always been. The world organizes itself via nation states, cities, schools, religions, movements, firms, families, interest groups, civic and professional organizations, and now digital communities. Even with all these mediating institutions, we struggle to get along.

Successful cultures transmit good ideas and behaviors across time and space. They impose measures of conformity, but they also allow enough freedom to correct individual and collective errors.

No single AI can perfect or even regurgitate all the world’s knowledge, wisdom, values, and tastes. Knowledge is contested. Values and tastes diverge. New wisdom emerges.

Nor can AI generate creativity to match the world’s creativity. Even as AI approaches human and social understanding, even as it performs hugely impressive “generative” tasks, human and digital agents will redeploy the new AI tools to generate ever more ingenious ideas and technologies, further complicating the world. At the frontier, the world is the simplest model of itself. AI will always be playing catch-up.

Because AI will be a chief general purpose tool, limits on AI computation and output are limits on human creativity and progress. Competitive AIs with different values and capabilities will promote innovation and ensure no company or government dominates. Open AIs can promote a free flow of information, evading censorship and better forestalling future Covid-like debacles.

Google’s Gemini is but a foreshadowing of what a new AI regulatory regime would entail – total political supervision of our exascale information systems. Even without formal regulation, the extra-governmental battalions of Regime commissars will be difficult to combat.

The attempt by Washington and international partners to impose universal content codes and computational limits on a small number of legal AI providers is the new totalitarian playbook.

Regime captured and curated A.I. is the real catastrophic possibility.

*  *  *

Republished from the author’s Substack



Supreme Court To Hear Arguments In Biden Admin’s Censorship Of Social Media Posts

Authored by Tom Ozimek via The Epoch Times (emphasis ours),

The U.S. Supreme Court will soon hear oral arguments in a case that concerns what two lower courts found to be a “coordinated campaign” by top Biden administration officials to suppress disfavored views on key public issues such as COVID-19 vaccine side effects and pandemic lockdowns.

President Joe Biden delivers the State of the Union address in the House Chamber of the U.S. Capitol in Washington on March 7, 2024. (Mandel Ngan/AFP/Getty Images)

The Supreme Court has scheduled a hearing on March 18 in Murthy v. Missouri, which started when the attorneys general of two states, Missouri and Louisiana, filed suit alleging that social media companies such as Facebook were blocking access to their platforms or suppressing posts on controversial subjects.

The initial lawsuit, later modified by an appeals court, accused Biden administration officials of engaging in what amounts to government-led censorship-by-proxy by pressuring social media companies to take down posts or suspend accounts.

Some of the topics that were targeted for downgrade and other censorious actions were voter fraud in the 2020 presidential election, the COVID-19 lab leak theory, vaccine side effects, the social harm of pandemic lockdowns, and the Hunter Biden laptop story.

The plaintiffs argued that high-level federal government officials were the ones pulling the strings of social media censorship by coercing, threatening, and pressuring social media companies to suppress Americans’ free speech.

‘Unrelenting Pressure’

In a landmark ruling, Judge Terry Doughty of the U.S. District Court for the Western District of Louisiana granted a temporary injunction blocking various Biden administration officials and government agencies such as the Department of Justice and FBI from collaborating with big tech firms to censor posts on social media.

Later, the Court of Appeals for the Fifth Circuit agreed with the district court’s ruling, saying it was “correct in its assessment—‘unrelenting pressure’ from certain government officials likely ‘had the intended result of suppressing millions of protected free speech postings by American citizens.’”

The judges wrote, “We see no error or abuse of discretion in that finding.”

The ruling was appealed to the Supreme Court, and on Oct. 20, 2023, the high court agreed to hear the case while also issuing a stay that indefinitely blocked the lower court order restricting the Biden administration’s efforts to censor disfavored social media posts.

Supreme Court Justices Samuel Alito, Neil Gorsuch, and Clarence Thomas would have denied the Biden administration’s application for a stay.

“At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news,” Justice Alito wrote in a dissenting opinion.

“That is most unfortunate.”

Supreme Court Justice Samuel Alito poses in Washington on April 23, 2021. (Erin Schaff/Reuters)

The Supreme Court has other social media cases on its docket, including a challenge to Republican-passed laws in Florida and Texas that prohibit large social media companies from removing posts because of the views they express.

Oral arguments were heard on Feb. 26 in the Florida and Texas cases, with debate focusing on the validity of laws that deem social media companies “common carriers,” a status that could allow states to impose utility-style regulations on them and forbid them from discriminating against users based on their political viewpoints.

The tech companies have argued that the laws violate their First Amendment rights.

The Supreme Court is expected to issue a decision in the Florida and Texas cases by June 2024.

‘Far Beyond’ Constitutional

Some of the controversy in Murthy v. Missouri centers on whether the district court’s injunction blocking Biden administration officials and federal agencies from colluding with social media companies to censor posts was overly broad.

In particular, arguments have been raised that the injunction would prevent innocent or borderline government “jawboning,” such as talking to newspapers about the dangers of sharing information that might aid terrorists.

But that argument doesn’t fly, according to Philip Hamburger, CEO of the New Civil Liberties Alliance, which represents most of the individual plaintiffs in Murthy v. Missouri.

In a series of recent statements on the subject, Mr. Hamburger explained why he believes that the Biden administration’s censorship was “far beyond anything that could be constitutional” and that concern about “innocent or borderline” cases is unfounded.

For one, he said that the censorship that is highlighted in Murthy v. Missouri relates to the suppression of speech that was not criminal or unlawful in any way.

Mr. Hamburger also argued that “the government went after lawful speech not in an isolated instance, but repeatedly and systematically as a matter of policy,” which led to the suppression of entire narratives rather than specific instances of expression.

“The government set itself up as the nation’s arbiter of truth—as if it were competent to judge what is misinformation and what is true information,” he wrote.

In retrospect, it turns out to have suppressed much that was true and promoted much that was false.

The premise that the Hunter Biden laptop story was Russian disinformation, used to justify suppressing reports on it just before the 2020 presidential election, was later shown to be unfounded.

Some polls show that if voters had been aware of the report, they would have voted differently.



AI vs. elections: 4 essential reads about the threat of high-tech deception in politics

Using disinformation to sway elections is nothing new. Powerful new AI tools, however, threaten to give the deceptions unprecedented reach.


Like it or not, AI is already playing a role in the 2024 presidential election. kirstypargeter/iStock via Getty Images

It’s official. Joe Biden and Donald Trump have secured the necessary delegates to be their parties’ nominees for president in the 2024 election. Barring unforeseen events, the two will be formally nominated at the party conventions this summer and face off at the ballot box on Nov. 5.

It’s a safe bet that, as in recent elections, this one will play out largely online and feature a potent blend of news and disinformation delivered over social media. New this year are powerful generative artificial intelligence tools such as ChatGPT and Sora that make it easier to “flood the zone” with propaganda and disinformation and produce convincing deepfakes: words coming from the mouths of politicians that they did not actually say and events replaying before our eyes that did not actually happen.

The result is an increased likelihood of voters being deceived and, perhaps as worrisome, a growing sense that you can’t trust anything you see online. Trump is already taking advantage of the so-called liar’s dividend, the opportunity to dismiss one’s actual words and deeds as deepfakes. Trump implied on his Truth Social platform on March 12, 2024, that real videos of him shown by Democratic House members were produced or altered using artificial intelligence.

The Conversation has been covering the latest developments in artificial intelligence that have the potential to undermine democracy. The following is a roundup of some of those articles from our archive.

1. Fake events

The ability to use AI to make convincing fakes is particularly troublesome for producing false evidence of events that never happened. Rochester Institute of Technology computer security researcher Christopher Schwartz has dubbed these situation deepfakes.

“The basic idea and technology of a situation deepfake are the same as with any other deepfake, but with a bolder ambition: to manipulate a real event or invent one from thin air,” he wrote.

Situation deepfakes could be used to boost or undermine a candidate or suppress voter turnout. If you encounter reports on social media of events that are surprising or extraordinary, try to learn more about them from reliable sources, such as fact-checked news reports, peer-reviewed academic articles or interviews with credentialed experts, Schwartz said. Also, recognize that deepfakes can take advantage of what you are inclined to believe.


Read more: Events that never happened could influence the 2024 presidential election – a cybersecurity researcher explains situation deepfakes


How AI puts disinformation on steroids.

2. Russia, China and Iran take aim

From the question of what AI-generated disinformation can do follows the question of who has been wielding it. Today’s AI tools put the capacity to produce disinformation in reach for most people, but of particular concern are nations that are adversaries of the United States and other democracies. In particular, Russia, China and Iran have extensive experience with disinformation campaigns and technology.

“There’s a lot more to running a disinformation campaign than generating content,” wrote security expert and Harvard Kennedy School lecturer Bruce Schneier. “The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral.”

Russia and China have a history of testing disinformation campaigns on smaller countries, according to Schneier. “Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now,” he wrote.


Read more: AI disinformation is a threat to elections − learning to spot Russian, Chinese and Iranian meddling in other countries can help the US prepare for 2024


3. Healthy skepticism

But it doesn’t require the resources of shadowy intelligence services in powerful nations to make headlines, as illustrated by the New Hampshire fake Biden robocall, produced and disseminated by two individuals and aimed at dissuading some voters. That episode prompted the Federal Communications Commission to ban robocalls that use voices generated by artificial intelligence.

AI-powered disinformation campaigns are difficult to counter because they can be delivered over different channels, including robocalls, social media, email, text message and websites, which complicates the digital forensics of tracking down the sources of the disinformation, wrote Joan Donovan, a media and disinformation scholar at Boston University.

“In many ways, AI-enhanced disinformation such as the New Hampshire robocall poses the same problems as every other form of disinformation,” Donovan wrote. “People who use AI to disrupt elections are likely to do what they can to hide their tracks, which is why it’s necessary for the public to remain skeptical about claims that do not come from verified sources, such as local TV news or social media accounts of reputable news organizations.”


Read more: FCC bans robocalls using deepfake voice clones − but AI-generated disinformation still looms over elections


How to spot AI-generated images.

4. A new kind of political machine

AI-powered disinformation campaigns are also difficult to counter because they can include bots – automated social media accounts that pose as real people – and can include online interactions tailored to individuals, potentially over the course of an election and potentially with millions of people.

Harvard political scientist Archon Fung and legal scholar Lawrence Lessig described these capabilities and laid out a hypothetical scenario of national political campaigns wielding these powerful tools.

Attempts to block these machines could run afoul of the free speech protections of the First Amendment, according to Fung and Lessig. “One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people,” they wrote. “For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.”


Read more: How AI could take over elections – and undermine democracy


This story is a roundup of articles from The Conversation’s archives.


This article is part of Disinformation 2024: a series examining the science, technology and politics of deception in elections.

You may also be interested in:

Disinformation is rampant on social media – a social psychologist explains the tactics used against you

Misinformation, disinformation and hoaxes: What’s the difference?

Disinformation campaigns are murky blends of truth, lies and sincere beliefs – lessons from the pandemic

