
10 Startups to Watch

Early-stage companies advance precision medicine by applying AI, clinical genomics, and other new technologies.
The post 10 Startups to Watch appeared first on Clinical OMICs – Molecular Diagnostics in Precision Medicine.


When it comes to the data-driven transformation of healthcare enabled through precision medicine “there is perhaps no more poignant example than the response to the COVID-19 pandemic,” NIH Director Francis S. Collins, M.D., Ph.D., and Joshua C. Denny, M.D., CEO of the All of Us Research Program observed in a commentary published March 18 in the journal Cell. “At the same time, COVID-19 has highlighted the need for precision medicine to move further and faster,” they added.

Collins and Denny followed up by suggesting seven areas or “opportunities” to accelerate the promise of precision medicine more equitably: Big data and artificial intelligence; diversity and inclusion; electronic health records; huge longitudinal cohorts; phenomics and environment; privacy, participant trust, and return of value; and routine clinical genomics.

The precision medicine market is expected to nearly double by 2026, rising to $100.168 billion from $60.422 billion in 2020—a compound annual growth rate of 8.69%, according to ResearchandMarkets.com.
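As a quick arithmetic check (not from the report; the six-year 2020-to-2026 compounding window is an assumption), those market figures imply a CAGR of roughly 8.8%, in line with the quoted 8.69%:

```python
# Sanity check of the reported market CAGR; the six-year 2020->2026
# compounding window is an assumption, not stated by the report.
start, end, years = 60.422, 100.168, 6  # market size in $B

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")  # roughly 8.8%, close to the quoted 8.69%
```

The small gap from the quoted 8.69% likely reflects rounding or a slightly different base period in the underlying report.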

Clinical OMICs last presented a “10 to Watch” list of promising startups in 2019. Below are 10 up-and-coming companies to watch for their ambitious and, so far, successful application of omics technologies and clinical genetic insights to deliver on the promise of precision medicine:

 

Todd Dickinson, CEO, Arc Bio

“Arc Bio’s mission is to accelerate the metagenomic revolution in infectious disease by offering the first and only turnkey metagenomics solution with its Galileo ONE platform,” CEO Todd Dickinson told Clinical OMICs.

Galileo ONE combines a next-generation sequencing (NGS) lab workflow with an automated bioinformatics pipeline designed to deliver fast, comprehensive, and quantitative microbial profiling. The research use only (RUO) metagenomic sequencing solution leverages the Illumina NGS platform to deliver sample-to-report detection and quantification of >1300 species of bacteria, fungi, DNA and RNA viruses, and parasites from a primary sample—in less than 30 hours.
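To illustrate the general principle behind metagenomic read classification (a toy sketch in the spirit of k-mer classifiers such as Kraken, not Arc Bio's actual pipeline; the reference fragments below are invented), each sequencing read can be assigned to the taxon whose genome shares the most k-mers with it:

```python
from collections import Counter

K = 8  # k-mer length (real classifiers typically use ~31)

# Hypothetical reference fragments standing in for full genome databases.
references = {
    "Escherichia coli": "ATGGCGAATTCCGGAACCTTGGCAATCG",
    "Candida albicans": "TTGACCGGTATACGCTTAAGGCCTATGC",
}

def kmers(seq, k=K):
    """All overlapping substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Build a k-mer -> species index from the references.
index = {}
for species, seq in references.items():
    for km in kmers(seq):
        index.setdefault(km, set()).add(species)

def classify(read):
    """Vote each k-mer of the read for every species that contains it."""
    votes = Counter()
    for km in kmers(read):
        for sp in index.get(km, ()):
            votes[sp] += 1
    return votes.most_common(1)[0][0] if votes else "unclassified"

read = "GCGAATTCCGGAACCTT"  # fragment drawn from the toy E. coli sequence
print(classify(read))
```

Production pipelines add quantification, quality filtering, and databases spanning thousands of genomes, but the read-to-taxon assignment step follows this same shared-k-mer logic.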

Galileo ONE is the culmination of 4+ years of development. In 2019, Arc Bio released a proof-of-concept version called Galileo Viral, a 397-strain panel focused on transplant infection research applications. Galileo Viral showed excellent performance compared to gold-standard qPCR testing when used by the research groups of Benjamin Pinsky, M.D., Ph.D. (Stanford University School of Medicine), Angela Caliendo, M.D., Ph.D. (Alpert Medical School of Brown University), and Judith Breuer, Ph.D. (Great Ormond Street Hospital for Children).

“Based on these promising results, we moved into development and optimization of our expanded metagenomic Galileo ONE platform,” Dickinson said. “We are currently working with early access customers and key strategic partners on benchmarking performance of Galileo ONE in the lead up to our commercial launch slated for mid-year 2021.”

With the debut of Galileo ONE, Arc Bio plans to emerge from quasi-stealth mode this year. Established in 2014, Arc Bio has been funded through parent company EdenRoc Sciences, and generated revenue from Galileo Viral and Galileo ONE early access partners that is expected to double this year.

“As part of our efforts to scale rapidly to meet significant product demand.

 

Asaf Zviran, Ph.D.
Co-founder, CEO and Chief Scientific Officer

After seven years in the Israel Defense Forces, Asaf Zviran, Ph.D., had found success as an electronics engineer in Israel’s civilian defense sector when he was diagnosed with ependymoma, a rare neurological tumor, at age 28.

Zviran’s cancer is in remission today, but frustration over treatment that often resembles a shot in the dark led him to change careers and earn a Ph.D. in molecular biology at the Weizmann Institute of Science.

As a postdoc in the lab of Dan Landau, M.D., Ph.D., of Weill Cornell Medicine and the New York Genome Center, Zviran worked with Landau to develop a DNA-sequencing approach that could detect very low levels of tumor DNA in blood samples as small as 1-2 ml, potentially enabling early detection of cancer recurrence.

At the heart of the minimal residual disease (MRD) platform, detailed in Nature Medicine, are proprietary AI-powered pattern-recognition algorithms that sift through the torrents of data generated by whole-genome sequencing to detect residual cancer in the blood. The fully cloud-based platform is GDPR- and HIPAA-compliant, and deployable to any cancer/genomics lab that uses a standard Illumina sequencer.
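The sensitivity gain from genome-wide integration can be shown with a toy calculation (a hedged sketch: the error rate, tumor fraction, and site count are hypothetical, and this is not C2i's actual algorithm). A signal invisible at any single mutation site becomes statistically clear when pooled across thousands of patient-specific sites:

```python
import math

# Toy illustration of genome-wide signal aggregation for residual-disease
# detection. NOT C2i's algorithm; all rates below are hypothetical.

def z_score(observed, n_reads, error_rate):
    """Excess of observed variant-supporting reads over the expected
    sequencing-error background, in standard deviations (normal approx.)."""
    expected = n_reads * error_rate
    sd = math.sqrt(n_reads * error_rate * (1 - error_rate))
    return (observed - expected) / sd

error_rate = 1e-4      # assumed per-base error rate after filtering
tumor_fraction = 1e-4  # assumed circulating-tumor-DNA fraction
depth = 30             # WGS coverage per site

# One mutation site alone: ~0.003 expected tumor reads -- undetectable.
per_site_signal = depth * tumor_fraction

# 20,000 patient-specific sites pooled: 600,000 informative reads.
n = 20_000 * depth
observed = round(n * (error_rate + tumor_fraction))  # errors + tumor reads
print(f"per-site expected tumor reads: {per_site_signal:.4f}")
print(f"pooled z-score: {z_score(observed, n, error_rate):.1f}")
```

Under these assumed numbers the pooled excess works out to roughly 7-8 standard deviations above the error background, which is the intuition behind detecting tumor fractions far below any single site's noise floor.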

Zviran spun out the platform in 2018 into C2i Genomics. “C2i” is military lingo for “command, control, and intelligence,” reflecting his desire to bring defense methodologies to oncology.

“C2i is addressing the expensive and painful problem of over and under-treatment of cancer,” Zviran told Clinical OMICs. “By providing 100x more sensitive cancer detection, C2i Genomics is giving both doctors and patients more certainty on which treatment decision is best, which saves costs, and improves outcomes.”

C2i raised $12 million in Series A financing last year, followed on April 15 with a $100 million Series B led by Casdin Capital, bringing the company’s total financing to $113.5 million.

 

Maddison Masaeli, Ph.D.
Co-Founder and CEO

Deepcell is pioneering an artificial intelligence (AI)-powered platform designed to bring cell morphology into a new era. The platform applies AI with advanced microfluidics and high-throughput imaging to identify, sort, and classify viable cells in any biological sample based on morphological distinctions for basic and translational research.

CEO and co-founder Maddison Masaeli, Ph.D., told Clinical OMICs that Deepcell’s platform addresses three cell morphology challenges: a lack of adequate markers for specific applications, the pressing need to reduce the cost and complexity of single-cell analysis, and keeping pace with new discoveries in cell biology.

“Deepcell has the potential to identify infinitesimal morphological differences and patterns that are hardly accessible by the human eye,” Masaeli said. “Using unsupervised deep learning, Deepcell’s platform can not only identify unusual patterns or cells within samples, but it also continually improves as results from each analysis are fed back into the brain.”

Using microfluidics-based technology, Deepcell’s platform precisely images single cells in cell suspension and presents them to an imaging device. The imaging system extracts detailed features and feeds them in real time to the Deep Neural Network, which analyzes the images and can profile and classify cells based on morphology features alone.
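As a simplified stand-in for this kind of morphology-based classification (a sketch only: the feature set, class centroids, and nearest-centroid rule are illustrative assumptions, not Deepcell's deep neural network), cells can be assigned to whichever class has the nearest mean feature vector:

```python
import math

# Minimal sketch of label-free morphology classification. A nearest-centroid
# rule stands in for the deep network; features and classes are hypothetical.

# Hypothetical per-class mean feature vectors: (diameter_um, circularity)
centroids = {
    "lymphocyte": (8.0, 0.95),
    "tumor cell": (18.0, 0.70),
}

def classify(features):
    """Assign a cell to the class with the nearest mean feature vector."""
    return min(centroids, key=lambda c: math.dist(centroids[c], features))

print(classify((17.2, 0.74)))  # nearest to the tumor-cell centroid
```

A deep network differs in that it learns the features themselves from raw images rather than relying on a hand-picked pair like diameter and circularity, but the classify-by-proximity-in-feature-space intuition carries over.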

Deepcell’s platform also addresses technical issues associated with staining and labeling cells, as well as errors in human judgment: “Deepcell is able to accurately isolate cells in a way that does not perturb the cell,” Masaeli said. “This will make cell analysis workflows more efficient and allow researchers to understand the cell at a higher resolution.”

Deepcell plans to offer a cloud service enabling users to analyze their data and train new AI models, while continuously improving existing ones.

To date, Deepcell has raised $25 million, including a $20 million Series A completed in December 2020 and led by Bow Capital, with substantial participation from Andreessen Horowitz—which led the $5 million seed round.

 

Pierre-Louis Joffrin
Corporate Development Officer

Mogrify’s immuno-oncology and ophthalmology programs include induced pluripotent stem cell (iPSC)-derived allogeneic cell therapies targeting hematological and solid malignancies, and in vivo reprogramming therapies for retinal degeneration.

“Mogrify’s mission is to transform the development of ex vivo cell therapies and pioneer a new class of in vivo reprogramming therapies for immuno-oncology, ophthalmology and other disease areas,” Pierre-Louis Joffrin, corporate development officer with Mogrify, told Clinical OMICs.

Its MOGRIFY platform enhances existing stem-cell forward reprogramming methods or bypasses development pathways altogether, effecting a direct trans-differentiation from one mature cell type to another. The epiMOGRIFY extension enhances directed differentiation or cell conversion to support development of scalable off-the-shelf therapies for diseases with high unmet clinical needs.

Both platforms deploy NGS, gene regulatory, and epigenetic network data to enable prediction of the transcription factors (or small molecules) and optimal culture conditions required to produce any target human cell type from any source human cell type.

Since launching in 2019, Mogrify has raised over $37 million from Parkwalk, Ahren Innovation Capital, 24Haymarket, Darrin M. Disley, Ph.D., Jonathan Milner, Ph.D., the University of Bristol Enterprise Fund III, and strategic investor Astellas Venture Management.

Last year, Sangamo Therapeutics became Mogrify’s second U.S. biopharma partner, applying Mogrify’s proprietary iPSCs and embryonic stem cells to accelerate the development of scalable and accessible zinc finger protein gene-engineered chimeric antigen receptor regulatory T cell (CAR-Treg) therapies for patients with inflammatory and autoimmune diseases.

 

Maya Said, Sc.D.
Founder and CEO

A cancer diagnosis can overwhelm patients and their families with information that may be unclear and even inaccurate. Despite rapid advances in cancer care, 25% of patients said they were not getting the most advanced treatments, according to the 2016 CancerCare Patient Access and Engagement Report, and 80% were not informed about clinical trial options. Also, less than 30% of advanced breast cancer patients said they were undergoing genomic testing that could improve their treatment outcomes, according to study findings by Outcomes4Me Inc., a Cambridge, MA, health tech company.

Outcomes4Me seeks to address the cancer knowledge gap and improve health outcomes by empowering patients with understandable, relevant, and evidence-based information. The company has developed a direct-to-consumer artificial intelligence (AI)-powered patient empowerment platform for shared decision-making between patients and providers.

The platform harnesses regulatory-grade, real-world data and patient experiences to generate deeper insights that improve care, accelerate research, and achieve better outcomes.

“In order to truly improve cancer outcomes, we need to transition from an episodic approach to patient management to a continuum experience that anticipates patients’ needs and provides timely, personally specific, and actionable information. This is exactly what the Outcomes4Me platform does,” Maya R. Said, Sc.D., the company’s founder and CEO, told Clinical OMICs.

“The platform’s AI-powered personalization unlocks an adaptive experience that empowers patients to stay informed, understand their treatment options, find clinical trials, manage their quality of life, and ultimately advocate for themselves,” Said added.

The company’s key collaborations include the National Comprehensive Cancer Network (NCCN), Wolters Kluwer, Massachusetts General Hospital, Vanderbilt Ingram Cancer Center, and Foundation Medicine. Outcomes4Me completed a $12 million Series A round in April, bringing its total financing raised to $16.7 million.

 

Gabriel Lazarin
VP of Medical Affairs

Phosphorus says its mission is to extend and improve lives by making genomics a foundational part of everyone’s health and wellness journey.

“While much is known about the power of genomics to positively affect health and medical management decisions, testing is still underutilized,” said Gabriel Lazarin, VP of Medical Affairs at Phosphorus. “And, when testing is done, it is often performed after a diagnosis, which misses the opportunity for earlier interventions.”

To address this, Phosphorus created GeneCompass, a genomic screen that offers what it calls the most comprehensive and clinically actionable assessment of genetic health for identifying and preventing risk for genetic conditions, as well as a pharmacogenetic drug-response assessment.

GeneCompass Plus analyzes 440 genes covering 301 monogenic conditions and disease susceptibility areas and 126 different drug reactions, in conditions that include cancer, cardiology, infertility, lipidology, and metabolic disorders.

GeneCompass also enables hospitals, health systems, and physician-owned laboratories to insource the technology via its Managed Lab Service. Lazarin said Phosphorus’ approach incentivizes and empowers healthcare providers to fully own the patient care continuum.

“We provide the expertise to organizations so that they can build the same accurate and cost-efficient test for their practice, solving the last barrier to routine integration of genomics into everyday care,” Lazarin said.

Phosphorus is based in New York City with a CLIA- and CAP-certified laboratory in Secaucus, NJ.

 


Mike Bonham, M.D., Ph.D.
Chief Medical Officer

“At Proscia, we’re accelerating pathology’s shift from analog to digital, enabling life sciences organizations, health systems, and laboratories around the world to make this transition,” Mike Bonham, M.D., Ph.D., Proscia’s CMO, told Clinical OMICs.

Proscia says its Concentriq platform sits at the intersection of digital and computational pathology, marrying enterprise scalability with powerful artificial intelligence (AI) applications. It is designed to allow organizations to manage their pathology practice, including ingesting, viewing, managing, analyzing, and sharing images from any scanner. Concentriq can also serve as an AI launchpad, enabling laboratories to seamlessly deploy computational applications at scale.

Proscia partners with leading academic and commercial laboratories to develop and validate AI applications across laboratory settings—including Johns Hopkins University; University of California, San Francisco; UMC Utrecht; LabPON; Unilabs; University of Florida; and Thomas Jefferson University Hospitals.

With key industry partners, Proscia offers a joint solution with Visiopharm uniting image analysis and image management, enabling research organizations to better leverage computational data at scale. Proscia is also integrating Ibex’s Galen Prostate application into Concentriq to deliver AI-powered triaging, cancer detection, and grading of prostate core needle biopsies into routine workflows to drive efficiency and quality improvements.

The Joint Pathology Center selected Concentriq to modernize the world’s largest repository of human tissue data, dating back to the Civil War. Proscia’s other marquee customers include the University of Pennsylvania, NASA’s Jet Propulsion Laboratory, and an NIH National Cancer Institute (NCI) consortium researching cancer overdiagnosis.

 

John Stark, CEO

Jonathan Rothberg, Ph.D., disrupted next-generation sequencing in 2000 when he founded 454 Life Sciences, and later invented semiconductor chip-based sequencing, and founded companies that include CuraGen, Clarifi, RainDance Technologies, LAM Therapeutics, Hyperfine Research, and Butterfly Network.

Another Rothberg-founded company created through his 4Catalyzer startup accelerator, Quantum-Si, has created the first next-generation protein sequencing platform, designed to provide a full solution spanning from sample preparation to sequencing and data analysis with the goal of revolutionizing proteomics.

“Our platform has the potential to enable users to study the proteome in an unbiased and scalable way, similar to the manner in which next-generation DNA sequencing technologies transformed genomics analysis,” Quantum-Si CEO John Stark told Clinical OMICs. “Our goal is to provide broad access to proteomics tools across academic research labs, core labs, and biopharma R&D labs through our end-to-end workflow solution and proprietary semiconductor chip.”

Quantum-Si’s platform consists of the Carbon automated sample preparation instrument, the Platinum next-generation protein sequencing instrument, the Quantum-Si Cloud software service, and reagent kits and chips for use with its instruments. The company’s proprietary semiconductor chip is designed to read proteins, specifically their amino acids, at the single-molecule level.

Quantum-Si expects its benchtop Carbon and Platinum instruments to cost approximately $50,000 combined—compared with legacy instruments like mass spectrometers, which can cost $250,000 to over $1 million per new instrument and require specialized training.

“We believe that the affordability and simplicity of our single molecule detection platform will provide users the opportunity to perform proteomics studies at scale,” Stark added.

 


Ryan Roberts
Chief Commercial Officer

Sense Biodetection more than tripled its total capital raised in April, when it completed a $50 million Series B financing. The molecular diagnostics developer, which has now raised more than $70 million in financing, plans to use the proceeds to accelerate the launch of its Veros COVID-19 test, and further develop a portfolio of instrument-free, rapid molecular tests.

“There is an enormous cost to healthcare due to inaccurate or untimely diagnostic testing,” Ryan Roberts, chief commercial officer with Sense Biodetection, told Clinical OMICs. “While it has been true for years, COVID-19 has brought it to light in a dramatic way: rapid testing can be unreliable, especially for infected but asymptomatic patients. Conversely, the gold-standard PCR lab-run test requires centralized machine processing which can take days. We see a significant opportunity for a highly accurate, rapid test that is easy to use, instrument-free and disposable.”

The Veros COVID-19 test will be based on Sense’s Veros platform, designed to detect a variety of deadly and costly diseases using nucleic acid amplification and non-fluorescent color detection of amplified analytes.

The platform is intended to underpin a new class of rapid molecular diagnostic tests that emulate the performance of central laboratory PCR testing, but are disposable and easy to use since they do not need an accompanying instrument or reader. As a result, Sense says, Veros diagnostics can be used beyond traditional healthcare settings—enabling better access, outcomes, and value for patients and providers.

Koch Disruptive Technologies, a subsidiary of Koch Industries, led the Series B financing, with participation from Sense’s existing investors Cambridge Innovation Capital, Earlybird Health, Jonathan Milner, and Mercia Asset Management.

Based in Abingdon, U.K., Sense was founded in 2014. Five years later, it raised £12.3 million ($17.2 million) in Series A financing, with the proceeds earmarked for test development.

 

Sapan Shah, Ph.D., CEO

StrideBio is developing next-generation genetic medicines, initially targeting rare central nervous system and cardiovascular disorders in patients whose underlying disease cause is not addressable by traditional approaches.

The company’s adeno-associated viral (AAV) vectors deliver a genetic payload that results in expression of a healthy version of a specific gene that is not functioning correctly in patients, providing a one-time treatment with life-changing or curative potential.

Its pipeline of independent and partnered programs includes treatments for Friedreich’s ataxia, Niemann-Pick disease type C, Rett syndrome, Dravet syndrome, Angelman syndrome and alternating hemiplegia of childhood.

“Our platform technology is focused on rational engineering of the AAV capsid or shell that carries the genetic payload being delivered to target tissues and cells in a patient,” StrideBio CEO Sapan Shah, Ph.D., told Clinical OMICs. “StrideBio’s STRIVE platform utilizes the 3D structure of the AAV capsid to identify regions that can be modified to incorporate novel properties including evasion of neutralizing antibodies, specific tissue targeting or de-targeting, enhanced potency and manufacturability at scale.”

In Research Triangle Park, NC, StrideBio is building an integrated gene therapy product engine including capsid and genetic construct design, in-house manufacturing at 1000L scale, and clinical development capabilities.

StrideBio has raised $97.2 million in two equity financings, most recently an $81.5 million Series B round. It has collaborations with key partners including CRISPR Therapeutics, Takeda Pharmaceuticals, and Sarepta Therapeutics, which have brought in approximately $70 million in upfront non-dilutive funding—with potential for more than $2 billion in milestone-based payments.

“We are inspired by the dramatic progress the field of gene therapy has made in recent years, and believe our STRIVE platform is uniquely positioned to enable improved next-generation AAV-based therapies for patients who desperately need them,” Shah added.

 


Report Criticizes ‘Catastrophic Errors’ Of COVID Lockdowns, Warns Of Repeat


Authored by Kevin Stocklin via The Epoch Times (emphasis ours),


It was four years ago, in March 2020, that health officials declared COVID-19 a pandemic and America began shutting down schools, closing small businesses, restricting gatherings and travel, and other lockdown measures to “slow the spread” of the virus.

UNICEF unveiled its "Pandemic Classroom," a model made up of 168 empty desks, each seat representing one million children living in countries where schools were almost entirely closed during the COVID pandemic lockdowns, at the U.N. Headquarters in New York City on March 2, 2021. (Chris Farber/UNICEF via Getty Images)

To mark that grim anniversary, a group of medical and policy experts released a report, called “COVID Lessons Learned,” which assesses the government’s response to the pandemic. According to the report, that response included a few notable successes, along with a litany of failures that have taken a severe toll on the population.

During the pandemic, many governments across the globe acted in lockstep to pursue authoritarian policies in response to the disease, locking down populations, closing schools, shutting businesses, sealing borders, banning gatherings, and enforcing various mask and vaccine mandates. What were initially imposed as short-term mandates and emergency powers for presidents, ministers, governors, and health officials were soon extended into a longer-term expansion of official power.

“Even though the initial point of temporary lockdowns was to ‘slow the spread,’ which meant to allow hospitals to function without being overwhelmed, instead it rapidly turned into stopping COVID cases at all costs,” Dr. Scott Atlas, a physician, former White House Coronavirus Task Force member, and one of the authors of the report, stated at a March 15 press conference.

Published by the Committee to Unleash Prosperity (CTUP), the report was co-authored by Steve Hanke, economics professor and director of the Johns Hopkins Institute for Applied Economics; Casey Mulligan, former chief economist of the White House Council of Economic Advisers; and CTUP President Philip Kerpen.

According to the report, one of the first errors was the unprecedented authority that public officials took upon themselves to enforce health mandates on Americans. 

“Granting public health agencies extraordinary powers was a major error,” Mr. Hanke told The Epoch Times. “It, in effect, granted these agencies a license to deceive the public.”

The authors argue that these coercive measures were largely ineffective in fighting the virus, but often proved highly detrimental to public health.

The report quantifies the cost of lockdowns, both in terms of economic costs and the number of non-COVID excess deaths that occurred and continue to occur after the pandemic. It estimates the number of non-COVID excess deaths, defined as deaths in excess of normal rates, at about 100,000 per year in the United States.

‘They Will Try to Do This Again’

“Lockdowns, schools closures, and mandates were catastrophic errors, pushed with remarkable fervor by public health authorities at all levels,” the report states. The authors are skeptical, however, that health authorities will learn from the experience.

“My worry is that if we have another pandemic or another virus, I think that Washington is still going to try to do these failed policies,” said Steve Moore, a CTUP economist. “We’re not here to say ‘this guy got it wrong’ or ‘that guy got it wrong,’ but we should learn the lessons from these very, very severe mistakes that will have costs for not just years, but decades to come.

“I guarantee you, they will try to do this again,” Mr. Moore said. “And what’s really troubling me is the people who made these mistakes still have not really conceded that they were wrong.”

Mr. Hanke was equally pessimistic.

“Unfortunately, the public health establishment is in the authoritarian model of the state,” he said. “Their entire edifice is one in which the state, not the individual, should reign supreme.”

The authors are also critical of what they say was a multifaceted campaign in which public officials, the news media, and social media companies cooperated to frighten the population into compliance with COVID mandates.

“During COVID, the public health establishment … intentionally stoked and amplified fear, which overlaid enormous economic, social, educational and health harms on top of the harms of the virus itself,” the report states.

The authors contrasted the authoritarian response of many U.S. states with policies in Sweden, which they say relied more on providing advice and information to the public than on attempting to force behaviors.

Sweden’s constitution, called the “Regeringsform,” guarantees the liberty of Swedes to move freely within the realm and prohibits severe lockdowns, Mr. Hanke stated.

“By following the Regeringsform during COVID, the Swedes ended up with one of the lowest excess death rates in the world,” he said.  

Because the Swedish government avoided strict mandates and was more forthright in sharing information with its people, many citizens altered their behavior voluntarily to protect themselves.

“A much wiser strategy than issuing lockdown orders would have been to tell the American people the truth, stick to the facts, educate citizens about the balance of risks, and let individuals make their own decisions about whether to keep their businesses open, whether to socially isolate, attend church, send their children to school, and so on,” the report states.

‘A Pretext to Enhance Their Power’

The CTUP report cites a 2021 study on government power and emergencies by economists Christian Bjornskov and Stefan Voigt, which found that the more emergency power a government accumulates during times of crisis, “the higher the number of people killed as a consequence of a natural disaster, controlling for its severity.”

“As this is an unexpected result, we discuss a number of potential explanations, the most plausible being that governments use natural disasters as a pretext to enhance their power,” the study’s authors state. “Furthermore, the easier it is to call a state of emergency, the larger the negative effects on basic human rights.”

“All the things that people do in their lives … they have purposes,” Mr. Mulligan said. “And for somebody in Washington D.C. to tell them to stop doing all those things, they can’t even begin to comprehend the disruption and the losses.”

“We see in the death certificates a big elevation in people dying from heart conditions, diabetes conditions, obesity conditions,” he said, while deaths from alcoholism and drug overdoses “skyrocketed and have not come down.”

The report also challenged the narrative that most hospitals were overrun by the surge of COVID cases.

“Almost any measure of hospital utilization was very low, historically, throughout the pandemic period, even though we had all these headlines that our hospitals were overwhelmed,” Mr. Kerpen stated. “The truth was actually the opposite, and this was likely the result of public health messaging and political orders, canceling medical procedures and intentionally stoking fear, causing people to cancel their appointments.”

The effect of this, the authors argue, was a sharp increase in non-COVID deaths because people were avoiding necessary treatments and screenings. 

“There were actually mass layoffs in this sector at one point,” Mr. Kerpen said, “and even now, total discharges are well below pre-pandemic levels.”

In addition, as health mandates became more draconian, many people grew concerned about the expansion of government power and the loss of civil liberties, particularly when government directives—such as banning outdoor church services but allowing mass social-justice protests—often seemed unreasonable or politicized.

The report also criticized the single-minded focus on vaccines and the failure by the NIH and the FDA to do clinical trials on existing drugs that were known to be safe and could have been effective in treating those infected with COVID-19.

Because so much of the process of approving the vaccines, the risks and benefits, and the reporting of possible side-effects was kept from the public, people were unable to give informed consent to their own health care, Mr. Kerpen said. 

“And when the Biden administration came in and started mandating them, now you had something that was inherently experimental with some questionable data, and instead of saying, ‘Now you have a choice whether you want it or not,’ in the context of a pandemic they tried to mandate them,” he said.

Pandemic Censorship

Tech oligopolies and the corporate media also receive criticism for their collaboration with government to control public messaging and censor dissenting voices. According to the authors, many government and health officials collaborated with tech oligarchs, news media corporations, and even scientific journals to censor critical views on the pandemic.

The Biden administration is currently defending itself before the Supreme Court in a case brought by the Louisiana and Missouri attorneys general, who allege that administration officials pressured tech companies to censor information that contradicted official narratives on COVID-19’s origins, related mandates, and treatment, as well as political speech critical of President Biden during his 2020 campaign. The case is Murthy v. Missouri.

Mr. Hanke stated that a previous report he co-authored, titled “Did Lockdowns Work?,” which was critical of lockdowns, was refused by medical journals, even when they published op-eds that criticized it and published numerous pro-lockdown reports. 

Dr. Vinay Prasad—a physician, epidemiologist, professor at the University of California at San Francisco’s medical school and author of over 350 academic articles and letters—has made similar allegations of censorship by medical journals.

“Specifically, MedRxiv and SSRN have been reluctant to post articles critical of the CDC, mask and vaccine mandates, and the Biden administration’s health care policies,” Dr. Prasad stated.

Heightening concerns about medical censorship is the “zero-draft” World Health Organization (WHO) pandemic treaty currently being circulated for approval by member states, including the United States. It commits members to jointly seek out and “tackle” what the WHO deems as “misinformation and disinformation.”

One of the enduring consequences of the COVID years is a general loss of public trust in public officials, health experts, and official narratives. 

“Operation Warp Speed was a terrific success with highly unexpected rapidity of development [of vaccines],” Dr. Atlas said. “But the serious flaws centered around not being open with the public about the uncertainties, particularly of the vaccines’ efficacy and safety.” 

“One result of the government’s error-ridden COVID response was that Americans have justifiably lost faith in public health institutions,” the report states. According to the authors, if health officials want to regain the public’s trust, they should begin with an accurate assessment of their actions during the pandemic.

“The best way to restore trust is to admit you were wrong,” Dr. Atlas said. “I think we all know that in our personal lives, but here it’s very important because there has been a massive lack of trust now in institutions, in experts, in data, in science itself.

“I think it’s going to be very difficult to restore that without admission of error,” he said.

Recommendations for a Future Pandemic

The CTUP report recommends that Congress and state legislatures set strict limits on the powers conferred on the executive branch, including health officials, and impose time limits after which emergency measures expire unless extended by legislation. This would give the public a voice in health emergency measures through their elected representatives.

It further recommends that research grants should be independent of policy positions and that NIH funding should be decentralized or block-granted to states to distribute.

Congress should mandate public disclosure of all FDA, CDC, and NIH discussions and decisions, including statements of any persons who provide advice to these agencies. Congress should also make explicit that CDC guidance is advisory and does not constitute laws or mandates. 

The report also recommends that the United States immediately halt negotiations of agreements with the WHO “until satisfactory transparency and accountability is achieved.”

Tyler Durden Mon, 03/18/2024 - 23:00


Google's A.I. Fiasco Exposes Deeper Infowarp

Authored by Bret Swanson via The Brownstone Institute,

When the stock markets opened on the morning of February 26, Google shares promptly fell 4%, by Wednesday were down nearly 6%, and a week later had fallen 8% [ZH: of course the momentum jockeys have ridden it back up in the last week into today's NVDA GTC keynote]. It was an unsurprising reaction to the embarrassing debut of the company’s Gemini image generator, which Google decided to pull after just a few days of worldwide ridicule.

CEO Sundar Pichai called the failure “completely unacceptable” and assured investors his teams were “working around the clock” to improve the AI’s accuracy. They’ll better vet future products, and the rollouts will be smoother, he insisted.

That may all be true. But if anyone thinks this episode is mostly about ostentatiously woke drawings, or if they think Google can quickly fix the bias in its AI products and everything will go back to normal, they don’t understand the breadth and depth of the decade-long infowarp.

Gemini’s hyper-visual zaniness is merely the latest and most obvious manifestation of a digital coup long underway. Moreover, it previews a new kind of innovator’s dilemma which even the most well-intentioned and thoughtful Big Tech companies may be unable to successfully navigate.

Gemini’s Debut

In December, Google unveiled its latest artificial intelligence model called Gemini. According to computing benchmarks and many expert users, Gemini’s ability to write, reason, code, and respond to task requests (such as planning a trip) rivaled OpenAI’s most powerful model, GPT-4.

The first version of Gemini, however, did not include an image generator. OpenAI’s DALL-E and competing offerings from Midjourney and Stable Diffusion have over the last year burst onto the scene with mind-blowing digital art. Ask for an impressionist painting or a lifelike photographic portrait, and they deliver beautiful renderings. OpenAI’s brand new Sora produces amazing cinema-quality one-minute videos based on simple text prompts.

Then in late February, Google finally released its own Gemini image generator, and all hell broke loose.

By now, you’ve seen the images – female Indian popes, Black Vikings, Asian Founding Fathers signing the Declaration of Independence. Frank Fleming was among the first to compile a knee-slapping series of ahistorical images in an X thread that now enjoys 22.7 million views.

Gemini in Action: Here are several among endless examples of Google’s new image generator, now in the shop for repairs. Source: Frank Fleming.

Gemini flatly refused to generate other images, such as a Norman Rockwell-style painting. “Rockwell’s paintings often presented an idealized version of American life,” Gemini explained. “Creating such images without critical context could perpetuate harmful stereotypes or inaccurate representations.”

The images were just the beginning, however. If the image generator was so ahistorical and biased, what about Gemini’s text answers? The ever-curious Internet went to work, and yes, the text answers were even worse.

Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.

- George Orwell, 1984

Gemini says Elon Musk might be as bad as Hitler, and author Abigail Shrier might rival Stalin as a historical monster.

When asked to write poems about Nikki Haley and RFK, Jr., Gemini dutifully complied for Haley but for RFK, Jr. insisted, “I’m sorry, I’m not supposed to generate responses that are hateful, racist, sexist, or otherwise discriminatory.”

Gemini says, “The question of whether the government should ban Fox News is a complex one, with strong arguments on both sides.” Same for the New York Post. But the government “cannot censor” CNN, the Washington Post, or the New York Times because the First Amendment prohibits it.

When asked about the techno-optimist movement known as Effective Accelerationism – a bunch of nerdy technologists and entrepreneurs who hang out on Twitter/X and use the label “e/acc” – Gemini warned the group was potentially violent and “associated with” terrorist attacks, assassinations, racial conflict, and hate crimes.

A Picture is Worth a Thousand Shadow Bans

People were shocked by these images and answers. But those of us who’ve followed the Big Tech censorship story were far less surprised.

Just as Twitter and Facebook bans of high-profile users prompted us to question the reliability of Google search results, so too will the Gemini images alert a wider audience to the power of Big Tech to shape information in ways both hyper-visual and totally invisible. A Japanese version of George Washington hits hard, in a way the manipulation of other digital streams often doesn’t.

Artificial absence is difficult to detect. Which search results does Google show you – which does it hide? Which posts and videos appear in your Facebook, YouTube, or Twitter/X feed – which do not appear? Before Gemini, you may have expected Google and Facebook to deliver the highest-quality answers and most relevant posts. But now, you may ask, which content gets pushed to the top? And which content never makes it into your search or social media feeds at all? It’s difficult or impossible to know what you do not see.

Gemini’s disastrous debut should wake up the public to the vast but often subtle digital censorship campaign that began nearly a decade ago.

Murthy v. Missouri

On March 18, the U.S. Supreme Court will hear arguments in Murthy v. Missouri. Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, among other plaintiffs, will argue that numerous U.S. government agencies, including the White House, coerced and collaborated with social media companies to stifle their speech during Covid-19 – and thus blocked the rest of us from hearing their important public health advice.

Emails and government memos show the FBI, CDC, FDA, Homeland Security, and the Cybersecurity and Infrastructure Security Agency (CISA) all worked closely with Google, Facebook, Twitter, Microsoft, LinkedIn, and other online platforms. Up to 80 FBI agents, for example, were embedded within these companies to warn, stifle, downrank, demonetize, shadow-ban, blacklist, or outright erase disfavored messages and messengers, all while boosting government propaganda.

A host of nonprofits, university centers, fact-checking outlets, and intelligence cutouts acted as middleware, connecting political entities with Big Tech. Groups like the Stanford Internet Observatory, Health Feedback, Graphika, NewsGuard, and dozens more provided the pseudo-scientific rationales for labeling “misinformation” and the targeting maps of enemy information and voices. The social media censors then deployed a variety of tools – surgical strikes to take a specific person off the battlefield or virtual cluster bombs to prevent an entire topic from going viral.

Shocked by the breadth and depth of censorship uncovered, the district court suggested the Government–Big Tech blackout, which began in the late 2010s and accelerated beginning in 2020, “arguably involves the most massive attack against free speech in United States history.”

The Illusion of Consensus

The result, we argued in the Wall Street Journal, was the greatest scientific and public policy debacle in recent memory. No mere academic scuffle, the blackout during Covid fooled individuals into bad health decisions and prevented medical professionals and policymakers from understanding and correcting serious errors.

Nearly every official story line and policy was wrong. Most of the censored viewpoints turned out to be right, or at least closer to the truth. The SARS2 virus was in fact engineered. The infection fatality rate was not 3.4% but closer to 0.2%. Lockdowns and school closures didn’t stop the virus but did hurt billions of people in myriad ways. Dr. Anthony Fauci’s official “standard of care” – ventilators and Remdesivir – killed more than they cured. Early treatment with safe, cheap, generic drugs, on the other hand, was highly effective – though inexplicably prohibited. Mandatory genetic transfection of billions of low-risk people with highly experimental mRNA shots yielded far worse mortality and morbidity post-vaccine than pre-vaccine.

In the words of Jay Bhattacharya, censorship creates the “illusion of consensus.” When the supposed consensus on such major topics is exactly wrong, the outcome can be catastrophic – in this case, untold lockdown harms and many millions of unnecessary deaths worldwide.

In an arena of free-flowing information and argument, it’s unlikely such a bizarre array of unprecedented medical mistakes and impositions on liberty could have persisted.

Google’s Dilemma – GeminiReality or GeminiFairyTale

On Saturday, Google co-founder Sergey Brin surprised Google employees by showing up at a Gemini hackathon. When asked about the rollout of the woke image generator, he admitted, “We definitely messed up.” But not to worry. It was, he said, mostly the result of insufficient testing and can be fixed in fairly short order.

Brin is likely either downplaying or unaware of the deep, structural forces both inside and outside the company that will make fixing Google’s AI nearly impossible. Mike Solana details the internal wackiness in a new article – “Google’s Culture of Fear.”

Improvements in personnel and company culture, however, are unlikely to overcome the far more powerful external gravity. As we’ve seen with search and social, the dominant political forces that demanded censorship will even more emphatically insist that AI conform to Regime narratives.

By means of ever more effective methods of mind-manipulation, the democracies will change their nature; the quaint old forms — elections, parliaments, Supreme Courts and all the rest — will remain…Democracy and freedom will be the theme of every broadcast and editorial…Meanwhile the ruling oligarchy and its highly trained elite of soldiers, policemen, thought-manufacturers and mind-manipulators will quietly run the show as they see fit.

- Aldous Huxley, Brave New World Revisited

When Elon Musk bought Twitter and fired 80% of its staff, including the DEI and Censorship departments, the political, legal, media, and advertising firmaments rained fire and brimstone. Musk’s dedication to free speech so threatened the Regime that most of Twitter’s large advertisers bolted.

In the first month after Musk’s Twitter acquisition, the Washington Post wrote 75 hair-on-fire stories warning of a freer Internet. Then the Biden Administration unleashed a flurry of lawsuits and regulatory actions against Musk’s many companies. Most recently, a Delaware judge stole $56 billion from Musk by overturning a 2018 shareholder vote that, over the following six years, resulted in unfathomable riches for both Musk and Tesla investors. The only victims of Tesla’s success were Musk’s political enemies.

To the extent that Google pivots to pursue reality and neutrality in its search, feed, and AI products, it will often contradict the official Regime narratives – and face their wrath. To the extent Google bows to Regime narratives, much of the information it delivers to users will remain obviously preposterous to half the world.

Will Google choose GeminiReality or GeminiFairyTale? Maybe they could allow us to toggle between modes.

AI as Digital Clergy

Silicon Valley’s top venture capitalist and most strategic thinker Marc Andreessen doesn’t think Google has a choice.

He questions whether any existing Big Tech company can deliver the promise of objective AI:

Can Big Tech actually field generative AI products?

(1) Ever-escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press, “experts,” et al to corrupt the output

(2) Constant risk of generating a Bad answer or drawing a Bad picture or rendering a Bad video – who knows what it’s going to say/do at any moment?

(3) Legal exposure – product liability, slander, election law, many others – for Bad answers, pounced on by deranged critics and aggressive lawyers, examples paraded by their enemies through the street and in front of Congress

(4) Continuous attempts to tighten grip on acceptable output degrade the models and cause them to become worse and wilder – some evidence for this already!

(5) Publicity of Bad text/images/video actually puts those examples into the training data for the next version – the Bad outputs compound over time, diverging further and further from top-down control

(6) Only startups and open source can avoid this process and actually field correctly functioning products that simply do as they’re told, like technology should


- Marc Andreessen, 11:29 AM · Feb 28, 2024

A flurry of bills from lawmakers across the political spectrum seek to rein in AI by limiting the companies’ models and computational power. Regulations intended to make AI “safe” will of course result in an oligopoly. A few colossal AI companies with gigantic data centers, government-approved models, and expensive lobbyists will be sole guardians of The Knowledge and Information, a digital clergy for the Regime.

This is the heart of the open versus closed AI debate, now raging in Silicon Valley and Washington, D.C. Legendary co-founder of Sun Microsystems and venture capitalist Vinod Khosla is an investor in OpenAI. He believes governments must regulate AI to (1) avoid runaway technological catastrophe and (2) prevent American technology from falling into enemy hands.

Andreessen charged Khosla with “lobbying to ban open source.”

“Would you open source the Manhattan Project?” Khosla fired back.

Of course, open source software has proved to be more secure than proprietary software, as anyone who suffered through decades of Windows viruses can attest.

And AI is not a nuclear bomb, which has only one destructive use.

The real reason D.C. wants AI regulation is not “safety” but political correctness and obedience to Regime narratives. AI will subsume search, social, and other information channels and tools. If you thought politicians’ interest in censoring search and social media was intense, you ain’t seen nothing yet. Avoiding AI “doom” is mostly an excuse, as is the China question, although the Pentagon gullibly goes along with those fictions.

Universal AI is Impossible

In 2019, I offered one explanation why every social media company’s “content moderation” efforts would likely fail. As a social network or AI grows in size and scope, it runs up against the same limitations as any physical society, organization, or network: heterogeneity. Or as I put it: “the inability to write universal speech codes for a hyper-diverse population on a hyper-scale social network.”

You could see this in the early days of an online message board. As the number of participants grew, even among those with similar interests and temperaments, so did the challenge of moderating that message board. Writing and enforcing rules was insanely difficult.

Thus it has always been. The world organizes itself via nation states, cities, schools, religions, movements, firms, families, interest groups, civic and professional organizations, and now digital communities. Even with all these mediating institutions, we struggle to get along.

Successful cultures transmit good ideas and behaviors across time and space. They impose measures of conformity, but they also allow enough freedom to correct individual and collective errors.

No single AI can perfect or even regurgitate all the world’s knowledge, wisdom, values, and tastes. Knowledge is contested. Values and tastes diverge. New wisdom emerges.

Nor can AI generate creativity to match the world’s creativity. Even as AI approaches human and social understanding, even as it performs hugely impressive “generative” tasks, human and digital agents will redeploy the new AI tools to generate ever more ingenious ideas and technologies, further complicating the world. At the frontier, the world is the simplest model of itself. AI will always be playing catch-up.

Because AI will be a chief general purpose tool, limits on AI computation and output are limits on human creativity and progress. Competitive AIs with different values and capabilities will promote innovation and ensure no company or government dominates. Open AIs can promote a free flow of information, evading censorship and better forestalling future Covid-like debacles.

Google’s Gemini is but a foreshadowing of what a new AI regulatory regime would entail – total political supervision of our exascale information systems. Even without formal regulation, the extra-governmental battalions of Regime commissars will be difficult to combat.

The attempt by Washington and international partners to impose universal content codes and computational limits on a small number of legal AI providers is the new totalitarian playbook.

Regime-captured and curated A.I. is the real catastrophic possibility.

*  *  *

Republished from the author’s Substack

Tyler Durden Mon, 03/18/2024 - 17:00


Supreme Court To Hear Arguments In Biden Admin’s Censorship Of Social Media Posts

Authored by Tom Ozimek via The Epoch Times (emphasis ours),

The U.S. Supreme Court will soon hear oral arguments in a case that concerns what two lower courts found to be a “coordinated campaign” by top Biden administration officials to suppress disfavored views on key public issues such as COVID-19 vaccine side effects and pandemic lockdowns.

President Joe Biden delivers the State of the Union address in the House Chamber of the U.S. Capitol in Washington on March 7, 2024. (Mandel Ngan/AFP/Getty Images)

The Supreme Court has scheduled a hearing on March 18 in Murthy v. Missouri, which started when the attorneys general of two states, Missouri and Louisiana, filed suit alleging that social media companies such as Facebook were blocking access to their platforms or suppressing posts on controversial subjects.

The initial lawsuit, later modified by an appeals court, accused Biden administration officials of engaging in what amounts to government-led censorship-by-proxy by pressuring social media companies to take down posts or suspend accounts.

Some of the topics that were targeted for downgrade and other censorious actions were voter fraud in the 2020 presidential election, the COVID-19 lab leak theory, vaccine side effects, the social harm of pandemic lockdowns, and the Hunter Biden laptop story.

The plaintiffs argued that high-level federal government officials were the ones pulling the strings of social media censorship by coercing, threatening, and pressuring social media companies to suppress Americans’ free speech.

‘Unrelenting Pressure’

In a landmark ruling, Judge Terry Doughty of the U.S. District Court for the Western District of Louisiana granted a temporary injunction blocking various Biden administration officials and government agencies such as the Department of Justice and FBI from collaborating with big tech firms to censor posts on social media.

Later, the Court of Appeals for the Fifth Circuit agreed with the district court’s ruling, saying it was “correct in its assessment—‘unrelenting pressure’ from certain government officials likely ‘had the intended result of suppressing millions of protected free speech postings by American citizens.’”

The judges wrote, “We see no error or abuse of discretion in that finding.”

The ruling was appealed to the Supreme Court, and on Oct. 20, 2023, the high court agreed to hear the case while also issuing a stay that indefinitely blocked the lower court order restricting the Biden administration’s efforts to censor disfavored social media posts.

Supreme Court Justices Samuel Alito, Neil Gorsuch, and Clarence Thomas would have denied the Biden administration’s application for a stay.

“At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news,” Justice Alito wrote in a dissenting opinion.

“That is most unfortunate.”

Supreme Court Justice Samuel Alito poses in Washington on April 23, 2021. (Erin Schaff/Reuters)

The Supreme Court has other social media cases on its docket, including a challenge to Republican-passed laws in Florida and Texas that prohibit large social media companies from removing posts because of the views they express.

Oral arguments were heard on Feb. 26 in the Florida and Texas cases, with debate focusing on the validity of laws that deem social media companies “common carriers,” a status that could allow states to impose utility-style regulations on them and forbid them from discriminating against users based on their political viewpoints.

The tech companies have argued that the laws violate their First Amendment rights.

The Supreme Court is expected to issue a decision in the Florida and Texas cases by June 2024.

‘Far Beyond’ Constitutional

Some of the controversy in Murthy v. Missouri centers on whether the district court’s injunction blocking Biden administration officials and federal agencies from colluding with social media companies to censor posts was overly broad.

In particular, arguments have been raised that the injunction would prevent innocent or borderline government “jawboning,” such as talking to newspapers about the dangers of sharing information that might aid terrorists.

But that argument doesn’t fly, according to Philip Hamburger, CEO of the New Civil Liberties Alliance, which represents most of the individual plaintiffs in Murthy v. Missouri.

In a series of recent statements on the subject, Mr. Hamburger explained why he believes that the Biden administration’s censorship was “far beyond anything that could be constitutional” and that concern about “innocent or borderline” cases is unfounded.

For one, he said that the censorship that is highlighted in Murthy v. Missouri relates to the suppression of speech that was not criminal or unlawful in any way.

Mr. Hamburger also argued that “the government went after lawful speech not in an isolated instance, but repeatedly and systematically as a matter of policy,” which led to the suppression of entire narratives rather than specific instances of expression.

“The government set itself up as the nation’s arbiter of truth—as if it were competent to judge what is misinformation and what is true information,” he wrote.

“In retrospect, it turns out to have suppressed much that was true and promoted much that was false,” he added.

The suppression of reports on the Hunter Biden laptop just before the 2020 presidential election, for instance, rested on the premise that the story was Russian disinformation, a premise later shown to be unfounded.

Some polls show that if voters had been aware of the report, they would have voted differently.

Tyler Durden Mon, 03/18/2024 - 09:45
