
April 19th COVID Update: Weekly Deaths Decreased

Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.


It is likely that we will see pandemic lows for hospitalizations and deaths in the next several weeks.  That is welcome news!

For deaths, I'm currently using 4 weeks ago for "now", since the most recent three weeks will be revised significantly.

Hospitalizations have declined significantly from the winter high of 30,027 but are still slightly above the low of 5,386 last year.

COVID Metrics

                      Now      Week Ago   Goal
Hospitalized²         5,899    6,686      ≤3,000¹
Deaths per Week²      779      982        ≤350¹

¹My goals to stop weekly posts.
²Weekly for Currently Hospitalized, and Deaths
🚩 Increasing number weekly for Hospitalized and Deaths
✅ Goal met.

COVID-19 Deaths per Week. Click on graph for larger image.

This graph shows the weekly (columns) number of deaths reported.

Weekly deaths have declined sharply from the recent peak of 2,558 but are still 50% above the low of 490 last July.

And here is a graph I'm following concerning COVID in wastewater as of April 18th:

COVID-19 Wastewater. This appears to be a leading indicator for COVID hospitalizations and deaths.

Nationally, COVID in wastewater is now off close to 90% from the holiday peak at the end of December, and that suggests weekly hospitalizations and deaths will continue to decline.


J&J’s AI head jumps to Recursion; Doug Williams resigns as Sana’s R&D chief

Najat Khan

Recursion Pharmaceuticals has poached Najat Khan from Johnson & Johnson, where she led AI efforts at one of healthcare’s giants.

Khan will serve as chief R&D and commercial officer, along with joining Recursion’s board. The move comes at a critical time for Recursion, as its stock price has fallen 75% since going public in 2021, currently commanding a market capitalization of $1.75 billion. The Utah biotech expects its first Phase 2 readouts later this year for two rare disease drug candidates.

Khan’s hiring comes shortly after the departure of Shafique Virani, Recursion’s former chief business officer. Virani left earlier this year to join Noetik, a biotech startup led by two ex-Recursion scientists.

Khan most recently served as chief data science officer and global head of strategy and portfolio organization for J&J’s Janssen R&D group. Khan earned a PhD in organic chemistry from the University of Pennsylvania and spent about six-and-a-half years at Boston Consulting Group before joining J&J in 2018.

“Najat brings a unique blend of leadership in biological, chemical, and medical sciences, data science, and business,” CEO Chris Gibson said in a statement. “More importantly, she has a vision and passion for transforming drug discovery and development that complements ours and she has a strong sense of urgency to accelerate the industry’s future. We are excited to welcome her as a Recursionaut to drive our portfolio pipeline, as well as create commercial strategies as we continue to industrialize the creation of high impact medicines.”

Khan will earn a $570,000 base salary with the potential for a yearly bonus of up to half her salary. Recursion also agreed to pay a $500,000 sign-on bonus along with an equity package of restricted shares and stock options expected to be valued at $8 million, according to a regulatory filing.

Andrew Dunn


Doug Williams

→ An SEC filing on Thursday indicated that Doug Williams has resigned as Sana Biotechnology’s president of R&D “for personal reasons.” Williams turned the page quickly last April, joining longtime friend Steve Harr at Sana two weeks after Codiak BioSciences filed for Chapter 11 bankruptcy. Data for two allogeneic CAR-T therapies — SC291 and SC262 — are expected later this year at Sana, which decided to concentrate on its ex vivo cell therapies last fall and laid off 29% of its employees.

Corinne Le Goff

Corinne Le Goff resigned from her first CEO gig at Imunon a month ago. Now, she’s back in Peer Review as chief commercial officer of Viatris. Le Goff held the same title at Moderna for only a year, but it was a crucial year for the mRNA biotech during which its Covid-19 vaccine would be administered to millions around the world. Viatris paid $350 million upfront to Idorsia in February for two late-stage candidates — selatogrel for acute myocardial infarction and cenerimod for systemic lupus erythematosus — as the company turns its gaze to product development and away from generics. Scott Smith has been busy constructing the leadership team since he took over at Viatris last April, bringing in former Celgene exec Philippe Martin as chief R&D officer and Doretta Mistras as CFO.

→ Three months into his tenure, Tony Johnson is out as CEO of GPCR biotech Domain Therapeutics. The former Goldfinch Bio chief has been replaced by Sean MacDonald, with no explanation given for the departure. MacDonald just took the CBO job at Domain in February.

Peter DiLaura

Peter DiLaura has been hired away from Sonoma Biotherapeutics to become CEO of South San Francisco-based Initial Therapeutics. Founding chief Spiros Liras, a venture partner at Apple Tree Partners, maintains his seats on the board of directors and scientific advisory board. DiLaura had been Sonoma’s chief business & strategy officer since its February 2020 launch, and he’s the former CEO of Second Genome. Co-founded by UCSF’s Kevan Shokat, Initial scored a $75 million Series A last May as it takes aim at “undruggable” protein targets. “Undruggable is only at a given point in time,” Shokat told Lei Lei Wu. “Hopefully, nothing is undruggable, but we may not have the technology until after a while.”

Brent Warner

Brent Warner wrote in a LinkedIn post that he’s moved on to San Diego-based Protego Biopharma as CEO. Endpoints News featured Warner in last year’s 20(+2) under 40 while he was Poseida’s president, gene therapy; he resigned from the role on April 1. Earlier, Warner spent more than two years with Novartis as VP, gene therapy and rare disease, and led the strategy and execution for the spinal muscular atrophy drug Spinraza at Biogen. There’s been nary a peep from Protego since Lightspeed Venture Partners, Vida Ventures and MPM Capital pitched in on the company’s $51 million Series A in November 2021.

Stacy Lindborg

→ That’s a wrap on Stacy Lindborg’s 15-month tenure as co-CEO of BrainStorm Cell Therapeutics, but she’ll still be on the board of directors. Development chief Bob Dagher has also been promoted to CMO. An adcomm recommended the FDA reject BrainStorm’s ALS cell therapy NurOwn by a 17-1 vote, and the company later withdrew the BLA. But BrainStorm has one more card to play, saying earlier this month that it will launch a new Phase 3b study to potentially get the therapy approved in a last-ditch effort.

Nageatte Ibrahim

Nageatte Ibrahim has ended a nearly 10-year career at Merck to become the oncology CMO for Innovent. Ibrahim was elevated to VP, global clinical development, oncology at Merck in 2021 as the megablockbuster Keytruda continues to rack up approvals across a panoply of indications in the face of an approaching patent cliff. Innovent and Eli Lilly posted positive Phase 3 results in China with mazdutide in the increasingly competitive obesity space, but their PD-1 sintilimab couldn’t pass regulatory muster in the US a couple of years ago because of China-only data.

Javier San Martin

→ Based in the Seattle suburb of Bothell, WA, Athira Pharma has recruited Amgen and Lilly alum Javier San Martin as CMO. San Martin ends a four-year run in the RNAi world as CMO of Arrowhead Pharmaceuticals, and he was head of global clinical development at Ultragenyx when Crysvita got a green light for patients with a rare disease called X-linked hypophosphatemia. Now he’ll tackle neurodegenerative disorders at Athira, which is testing the small molecule fosgonimeton in Alzheimer’s and the preclinical ATH-1105 in ALS.

Jason Hoitt

Stoke Therapeutics’ Dravet syndrome data dazzled investors in late March as shares of $STOK surged by 118% in two days. With the possibility of an FDA approval for STK-001 in sight, Stoke has welcomed Jason Hoitt as chief commercial officer. Hoitt devised the launch strategy for Tzield while he was chief commercial officer for Provention Bio, which Sanofi bought for $2.9 billion last year. He’s also a Gilead and Vertex marketing vet who served as head of US sales for both Sarepta and Insmed.

Dale Hooks

→ Former Reata Pharmaceuticals execs continue to find new landing spots after selling to Biogen, and Dale Hooks is up next, taking the role of chief commercial officer at Applied Therapeutics. A 10-year Genentech vet in sales and marketing, Hooks led global commercial operations at Reata, which made history with the first-ever approval of a Friedreich’s ataxia drug for Skyclarys in February 2023. Other ex-Reata leaders who have gone elsewhere include accounting chief Bhaskar Anand (to Summit Therapeutics) and CMO Seemi Khan (to the Joshua Boger-chaired Alkeus Pharmaceuticals). The FDA said earlier this month it would postpone its decision on Applied’s galactosemia drug govorestat from Aug. 28 to Nov. 28.

Joshua Reed

→ According to an SEC filing, Omega Therapeutics has decided to “terminate” CFO Joshua Reed’s employment, effective May 31. The Bristol Myers Squibb alum will be replaced by Barbara Chen, the Flagship company’s SVP of finance. In its Q4 report from late March, Omega said that it made some adjustments to its pipeline and laid off 35% of its staff.

Yrjö Wichmann

Faron Pharmaceuticals announced its father-to-son CEO transition last week, but there’s another vacancy to attend to: the recent wave of CFO departures hasn’t let up, as James O’Brien says goodbye “to pursue another career opportunity.” Yrjö Wichmann has returned to Faron on an interim basis after handling CFO duties from 2014-19.

→ Obesity company Xeno Biosciences has not only picked up $1.15 million, but also Dennis Kim as its new CEO. Kim comes to Xeno with a number of CMO stints at CymaBay, Emerald Biosciences and Zafgen. He formerly served as SVP of medical affairs at Orexigen Therapeutics and held a number of roles at Amylin Pharmaceuticals.

Jeffrey Trigilio

Obsidian Therapeutics, the cancer-focused cell and gene therapy player that just pulled in a $160.5 million Series C two weeks ago, has tapped Jeffrey Trigilio as CFO. Trigilio left Cullinan Oncology on March 29 after more than three and a half years as finance chief, and its controller Nate Nguyen is filling in until a successor is named. Cullinan announced its foray into autoimmune diseases (and a name change to Cullinan Therapeutics) this week.

Javelin Biotech has brought in Jacques Banchereau as CSO. Banchereau joins the Woburn, MA-based team after a stint as science chief at Immunai. At Roche, he was CSO, discovery and translational area head of inflammation & virology.

Chris Krueger

→ A month removed from the appointments of president Enoch Kariuki and CFO Vishaal Turakhia, Endeavor BioMedicines has welcomed Chris Krueger as COO. Krueger had held the CBO post at Ventyx Biosciences since its inception and has been an executive with several other companies that Ventyx chief Raju Mohan founded, including Oppilan Pharma and Zomagen Biosciences.

→ Contract manufacturer KBI Biopharma has recruited Jean-Baptiste Agnus as CBO. Agnus was most recently business chief of AGC Biologics. Before that, he was VP, global head of sales and marketing and held a number of roles at Novasep, culminating in his stint as head of business development, CMO services.

Charlotte Marmousez-Tartar

Servier has promoted Charlotte Marmousez-Tartar to EVP, corporate strategy & transformation, and Hani Friedman Bouganim to EVP, manufacturing, quality & supply chain. Marmousez-Tartar joined Servier in 2006 and had been head of the group transformation & programs office since 2021. When she closed out a 16-year career with Teva, Bouganim was VP of operations and general manager of its Ulm and Weiler sites in Germany. She then came to Servier in 2022 as deputy EVP of industrial operations.

Evotec has selected 12-year Novartis vet Aurélie Dalbiez as chief people officer. Before her most recent role as chief human resources officer of Dutch bio-based ingredients company Corbion, Dalbiez was promoted to head of HR, capsules and health ingredients at Lonza.

Kristian Humer

Citi alum Kristian Humer has taken over as CFO of Foghorn Therapeutics, permanently replacing current Prime Medicine finance chief Allan Reine. Humer spent two years as Viridian’s CFO and CBO until a new regime arrived last fall from Magenta Therapeutics. Stephen DiPalma from Danforth Advisors had been interim CFO at Foghorn, which got a boost from Lilly when the Indianapolis pharma said in February that it would send the BRM selective inhibitor FHD-909 into the clinic.

→ UPenn spinout and Peer Review first-timer Vittoria Biotherapeutics has named ex-Iveric Bio COO Keith Westby to the same position. Westby is the latest former Iveric Bio leader to find a new home post-buyout after regulatory and product strategy exec Snehal Shah joined Oculis last week. Vittoria raised $15 million last November to propel its lead CAR-T cell therapy VIPER-101 into clinical trials; it lists Carl June as a scientific advisor and Zenas BioPharma CEO Lonnie Moulder as a member of its board.

Sam Rasty

→ Now under the direction of former Cedilla CEO Alexandra Glucksmann, Sensorium Therapeutics has introduced Sam Rasty as CBO. After three years as COO of Homology Medicines, Rasty was CEO of PlateletBio and a board member at Oxford Biomedica. From 2011-16, Rasty worked for Shire as VP and head of new products.

IO Biotech is appointing Marjan Shamsaei as SVP, commercial development and portfolio lead for its cancer vaccine candidate IO102-IO103. Shamsaei had a 15-year career at Genentech and joins the team from Allogene, where she was head of commercial from 2021-23. IO Biotech said last week that it hired business chief Faiçal Miyara from Ipsen.

Likarda has rolled out the welcome mat for Shelly Adams as chief commercial officer. She most recently served as VP of sales at Erbi Biosystems, part of Millipore Sigma. Adams brings experience from Gallus BioPharmaceuticals, Avecia Biologics and Abzena.

Deborah Dunsire

→ Ex-Lundbeck CEO Deborah Dunsire is now chairing the board at Blackstone biotech Neurvati Neurosciences. Just before Dunsire retired last summer, Lundbeck and Otsuka notched another FDA approval for the antipsychotic Rexulti, this time for agitation associated with Alzheimer’s dementia. She’s also on the boards of Syros Pharmaceuticals and Ultragenyx.

Ramy Farid

→ Biogen co-founder Phillip Sharp and ex-Cubist Pharmaceuticals CEO Rob Perez won’t be up for reelection on Vir Biotechnology’s board of directors, but Marianne De Backer has two potential board members waiting in the wings. Kronos Bio CEO Norbert Bischofberger and Schrödinger CEO Ramy Farid will make their way to the board if they’re elected at Vir’s shareholders meeting on May 29.

Cargo Therapeutics has reserved space for Roche oncology vet Kapil Dhingra on the board of directors. Dhingra chairs the board at Lava Therapeutics, owns a spot on the board of supervisors at Servier, and has board seats at Black Diamond, Replimune and Mariana Oncology. Cargo was the final entrant in biotech’s IPO class of 2023, raising $281 million.

Simba Gill

→ Former Evelo Biosciences chief Simba Gill has been named executive chairman of Serina Therapeutics, a biotech out of Huntsville, AL that has a candidate for early-stage Parkinson’s in the clinic. Evelo shut down last fall after a cascade of trial failures.

Ligand Pharmaceuticals subsidiary Pelthos Therapeutics has assembled its board of directors, starting with CEO Scott Plesha. It also includes Aurinia Pharmaceuticals CEO Peter Greenleaf, Ligand chief Todd Davis, Savara CEO Matt Pauls and Richard Baxter, Ligand’s SVP of investment operations.

→ Last July, Turnstone Biologics rode the IPO train. Now, the company has elected ex-Calithera Biosciences CFO William Waddill to its board of directors, replacing Patrick Machado. Waddill currently sits on the boards of Protagonist Therapeutics, Arrowhead Pharmaceuticals and Annexon.


Inside The Disinformation Industry

Authored by Freddie Sayers via UnHerd.com,

“Our team re-reviewed the domain, the rating will not change as it continues to have anti-LGBTQI+ narratives…

The site authors have been called out for being anti-trans. Kathleen Stock is acknowledged as a ‘prominent gender-critical’ feminist.”

This was part of an email sent to UnHerd at the start of January from an organisation called the Global Disinformation Index. It was their justification, handed down after a series of requests, for placing UnHerd on a so-called “dynamic exclusion list” of publications that supposedly promote “disinformation” and should therefore be boycotted by all advertisers.

They provided examples of the offending content: Kathleen Stock, whose columns are up for a National Press Award this week, Julie Bindel, a lifelong campaigner against violence against women, and Debbie Hayton, who is transgender. Apparently the GDI equates “gender-critical” beliefs, or maintaining that biological sex differences exist, with “disinformation” — despite the fact that those beliefs are specifically protected in British law and held by the majority of the population.

The verdicts of “ratings agencies” such as the GDI, within the complex machinery that serves online ads, are a little-understood mechanism for controlling the media conversation. In UnHerd’s case, the GDI verdict means that we only received between 2% and 6% of the ad revenue normally expected for an audience of our size. Meanwhile, neatly demonstrating the arbitrariness and subjectivity of these judgements, Newsguard, a rival ratings agency, gives UnHerd a 92.5% trust rating, just ahead of the New York Times at 87.5%.

So, what are these “ratings agencies” that could be the difference between life and death for a media company? How does their influence work? And who funds them? The answers are concerning and raise serious questions about the freedom of the press and the viability of a functioning democracy in the internet age.

Disinformation only really became a discussion point in response to the Trump victory in 2016, and was then supercharged during the Covid era: Google Trends data shows that worldwide searches for the term quadrupled between June and December 2016, and had increased by more than 30 times by 2022. In response to the supposed crisis, corporations, technology companies and governments all had to show they were taking some form of action. This created a marketplace for enterprising start-ups and not-for-profits to claim a specialism in detecting disinformation. Today, there are hundreds of organisations who make this claim, providing all sorts of “fact-checking” services, including powerful ratings agencies such as GDI and Newsguard. These companies act as invisible gatekeepers within the vast machinery of online advertising.

How this works is relatively straightforward: in UnHerd’s case, we contract with an advertising agency, which relies on a popular tech platform called “Grapeshot”, founded in the UK and since acquired by Larry Ellison’s Oracle, to automatically select appropriate websites for particular campaigns. Grapeshot in turn automatically uses the “Global Disinformation Index” to provide a feed of data about “brand safety” — and if GDI gives a website a poor score, very few ads will be served.

The Global Disinformation Index was founded in the UK in 2018, with the stated objective of disrupting the business model of online disinformation by starving offending publications of funding. Alongside George Soros’s Open Society Foundation, the GDI receives money from the UK government (via the FCDO), the European Union, the German Foreign Office and a body called Disinfo Cloud, which was created and funded by the US State Department.

Perhaps unsurprisingly, its two founders emerged from the upper echelons of “respectable” society. First, there is Clare Melford, whose biography published by the World Economic Forum states that she had previously “led the transition of the European Council on Foreign Relations from being part of George Soros’s Open Society Foundation to independent status”. She set up the GDI with Daniel Rogers, who worked “in the US intelligence community”, before founding a company called “Terbium Labs” that used AI and machine learning to scour the internet for illicit use of sensitive data and then sold it handsomely to Deloitte.

Together, they have spearheaded a carefully intellectualised definitional creep as to what counts as “disinformation”. Back when it was first set up in 2018, they defined the term on their website as “deliberately false content, designed to deceive”. Within these strict parameters, you can see how it might have appeared useful to have dedicated fact-checkers identifying the most egregious offenders and calling them out. But they have since broadened the definition to encompass anything that deploys an “adversarial narrative” — stories that may be factually true, but pit people against each other by attacking an individual, an institution or “the science”.

GDI founder Clare Melford explained in an interview at the LSE in 2021 how this expanded definition was more “useful”, as it allowed them to go beyond fact-checking to targeting anything on the internet that they deem “harmful” or “divisive”:

“A lot of disinformation is not just whether something is true or false — it escapes from the limits of fact-checking. Something can be factually accurate but still extremely harmful… [GDI] leads you to a more useful definition of disinformation… It’s not saying something is or is not disinformation, but it is saying that content on this site or this particular article is content that is anti-immigrant, content that is anti-women, content that is antisemitic…”

Larger traffic websites are rated using humans, she explains, but most are rated using automated AI. “We actually instantiate our definition of disinformation — the adversarial narrative topics — within the technology,” explains Melford. “Each adversarial narrative is given its own machine-learning classifier, which then allows us to search for content that matches that narrative at scale… misogyny, Islamophobia, anti-Semitism, anti-black content, climate change denial, etc.”

Melford’s team and algorithm are essentially trained to identify and defund not disinformation, but any content she finds offensive. Her personal bugbears are somewhat predictable: content supporting the January 6 “insurrection”, the pernicious influence of “white men in Silicon Valley”, and anything that might undermine the global response to the “existential challenge of climate change”.

The difficulty, however, is that most of these issues are highly contentious and require robust, uncensored discussion to find solutions. Challenges to scientific orthodoxy are particularly important, as the multiple failures of the official response to Covid-19 amply demonstrated. Indeed, one of the examples of GDI’s good work that Melford highlighted in her LSE talk was an article on a Spanish website in June 2021 about the Delta variant of Covid-19. “Official data: a third of deaths from the Delta variant in the United Kingdom were among the vaccinated,” reads the headline, next to an advertisement for Chipotle Mexican Grill. “This is clearly untrue,” she said breezily, “and Chipotle has been caught next to this ad unwittingly, and unfortunately for them have funded this highly dangerous disinformation about vaccines”.

This was, however, far from an accurate description. The statistic being reported comes from a June 2021 Public Health England report into Covid variants that sets out the 42 known deaths from the Delta variant from January to June: 23 were unvaccinated, 7 vaccinated with one shot and 12 fully vaccinated. In other words, 29% were fully vaccinated — around a third — and 17% partially vaccinated, making a total of 45% vaccinated. The headline claiming a third were vaccinated, it turns out, was not spreading “dangerous disinformation” at all — if anything, it underplayed the story.
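The arithmetic behind this correction can be checked directly from the raw counts quoted above (figures as cited from the June 2021 Public Health England report):

```python
# Known Delta-variant deaths in the UK, January–June 2021 (PHE report, as quoted)
deaths_total = 42
unvaccinated = 23
one_dose = 7          # partially vaccinated
fully_vaccinated = 12

# Sanity check: the three groups account for every recorded death
assert unvaccinated + one_dose + fully_vaccinated == deaths_total

pct_fully = fully_vaccinated / deaths_total * 100            # "around a third"
pct_partial = one_dose / deaths_total * 100
pct_any_vaccine = (one_dose + fully_vaccinated) / deaths_total * 100

print(f"Fully vaccinated:     {pct_fully:.0f}%")   # 29%
print(f"Partially vaccinated: {pct_partial:.0f}%") # 17%
print(f"Any vaccination:      {pct_any_vaccine:.0f}%")  # 45%
```

The headline's "a third" thus corresponds to the 29% who were fully vaccinated; counting the partially vaccinated as well pushes the share to 45%.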

Examples like this are far from rare. The GDI still hosts an uncorrected 2020 blog about the “evolution of the Wuhan lab conspiracy theory” surrounding Covid-19’s origins, which concludes that “cutting off ads to these fringe sites and their outer networks is the first action needed”.

This is despite the fact that Facebook and other tech companies long ago corrected similar policies and conceded that it was a legitimate hypothesis that should never have been censored.

In the US, a number of media organisations have started to take action against GDI’s partisan activism, prompted by a GDI report in 2022 that listed the 10 most dangerous sites in America. To many, it looked simply like a list of the country’s most-read conservative websites. It even included RealClearPolitics, a well-respected news aggregator whose polling numbers are among the most quoted in the country. The “least risk of disinformation” list was, predictably enough, populated by sites with a liberal inclination.

In recent months, a number of American websites have launched legal challenges against GDI’s labelling system, which they claim infringes upon their First Amendment rights. In December, The Daily Wire and The Federalist teamed up with the attorney general of Texas to sue the state department for funding GDI and Newsguard. A separate initiative to prevent the Defense Department from using any advertiser that uses Newsguard, GDI or similar entities has been successful, and is now part of federal law.

But GDI is a British company and, on this side of the Atlantic, the Conservative Government continues to fund it. A written question from MP Philip Davies last year revealed that £2.6 million was given in the period up to last year, and that there is still “frequent contact” between the GDI and the FCDO “Counter Disinformation and Media Development” unit.

Yesterday, I was invited to give evidence to the House of Lords Communication and Digital Committee during which I outlined the extent of the threat to the free media of self-appointed ratings agencies such as the Global Disinformation Index. The reality, as I told Parliament, is that GDI is merely the tip of the iceberg. At a time when the news media is so distrusted and faces a near-broken business model, the role of government should be to prevent, not encourage, and most certainly not fund, consolidations of monopoly power around certain ideological viewpoints.

But this isn’t simply a matter for the media. Both companies and those in the advertising sector also need to act: it cannot be good marketing for brands to target only half the population. Last year, Oracle announced it was cutting ties with GDI on free speech grounds, but as we discovered, it seems they are still collaborating via the Grapeshot platform: is Larry Ellison aware of this?

At its heart, the disinformation panic is becoming a textbook example of how a “solution” can do more harm than the problem it is designed to address. Educated campaigners such as Clare Melford may think they are doing the world a service, but in fact they are acting as intensifying agents, lending legitimacy to a conspiratorial world view in which governments and corporations are in cahoots to censor political expression. Unless something is done to stop them, they will continue to sow paranoia and distrust — and hasten us towards an increasingly radicalised and divided society.

Tyler Durden Thu, 04/18/2024 - 17:40


AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

AI chatbot makers’ restrictive use policies hinder people’s access to information.


AI chatbots restrict their output according to vague and broad policies. taviox/iStock via Getty Images

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power sets them apart from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.


This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive depends greatly on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it out.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified for social media because those platforms distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe chatbots should avoid refusing to generate content outright unless there are solid public interest grounds, such as preventing child sexual abuse material, which the law prohibits.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt and Jacob Mchangama are affiliated with The Future of Free Speech, a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.
