New NIST project to build nano-thermometers could revolutionize temperature imaging

[Image credit: A. Biacchi/NIST]

Cheaper refrigerators? Stronger hip implants? A better understanding of human disease? All of these could be possible and more, someday, thanks to an ambitious new project underway at the National Institute of Standards and Technology (NIST).

NIST researchers are in the early stages of a massive undertaking to design and build a fleet of tiny ultra-sensitive thermometers. If they succeed, their system will be the first to make real-time measurements of temperature on the microscopic scale in an opaque 3D volume — which could include medical implants, refrigerators, and even the human body.

The project is called Thermal Magnetic Imaging and Control (Thermal MagIC), and the researchers say it could revolutionize temperature measurements in many fields: biology, medicine, chemical synthesis, refrigeration, the automotive industry, plastic production — “pretty much anywhere temperature plays a critical role,” said NIST physicist Cindi Dennis. “And that’s everywhere.”

The NIST team has now finished building its customized laboratory spaces for this unique project and has begun the first major phase of the experiment.

Thermal MagIC will work by using nanometer-sized objects whose magnetic signals change with temperature. The objects would be incorporated into the liquids or solids being studied — the melted plastic that might be used as part of an artificial joint replacement, or the liquid coolant being recirculated through a refrigerator. A remote sensing system would then pick up these magnetic signals, meaning the system being studied would be free from wires or other bulky external objects.

The final product could make temperature measurements that are 10 times more precise than state-of-the-art techniques, acquired in one-tenth the time in a volume 10,000 times smaller. This equates to measurements accurate to within 25 millikelvin (thousandths of a kelvin) in as little as a tenth of a second, in a volume just a hundred micrometers (millionths of a meter) on a side. The measurements would be “traceable” to the International System of Units (SI); in other words, their readings could be accurately related to the fundamental definition of the kelvin, the world’s basic unit of temperature.

The system aims to measure temperatures over the range from 200 to 400 kelvin (K), which is about -99 to 260 degrees Fahrenheit (F). This would cover most potential applications — at least the ones the Thermal MagIC team envisions will be possible within the next 5 years. Dennis and her colleagues see potential for a much larger temperature range, stretching from 4 K to 600 K, which would encompass everything from supercooled superconductors to molten lead. But that is not a part of current development plans.

“This is a big enough sea change that we expect that if we can develop it — and we have confidence that we can — other people will take it and really run with it and do things that we currently can’t imagine,” Dennis said.

Potential applications are mostly in research and development, but Dennis said the increase in knowledge would likely trickle down to a variety of products, possibly including 3D printers, refrigerators, and medicines.

What Is It Good For?

Whether it’s the thermostat in your living room or a high-precision standard instrument that scientists use for laboratory measurements, most thermometers used today can only measure relatively big areas — on a macroscopic as opposed to microscopic level. These conventional thermometers are also intrusive, requiring sensors to penetrate the system being measured and to connect to a readout system by bulky wires.

Infrared thermometers, such as the forehead instruments used at many doctors’ offices, are less intrusive. But they still only make macroscopic measurements and cannot see beneath surfaces.

Thermal MagIC should let scientists get around both these limitations, Dennis said.

Engineers could use Thermal MagIC to study, for the first time, how heat transfer occurs within different coolants on the microscale, which could aid their quest to find cheaper, less energy-intensive refrigeration systems.

Doctors could use Thermal MagIC to study diseases, many of which are associated with temperature increases — a hallmark of inflammation — in specific parts of the body.

And manufacturers could use the system to better control 3D printing machines that melt plastic to build custom objects such as medical implants and prostheses. Without the ability to measure temperature on the microscale, 3D printing developers are missing crucial information about what’s going on inside the plastic as it solidifies into an object. More knowledge could improve the strength and quality of 3D-printed materials someday, by giving engineers more control over the 3D printing process.

Giving It OOMMF

The first step in making this new thermometry system is creating nano-sized magnets that will give off strong magnetic signals in response to temperature changes. To keep particle concentrations as low as possible, the magnets will need to be 10 times more sensitive to temperature changes than any objects that currently exist.

To get that kind of signal, Dennis said, researchers will likely need to use multiple magnetic materials in each nano-object. A core of one substance will be surrounded by other materials like the layers of an onion.

The trouble is that there are practically endless combinations of properties that can be tweaked, including the materials’ composition, size, shape, the number and thickness of the layers, or even the number of materials. Going through all of these potential combinations and testing each one for its effect on the object’s temperature sensitivity could take multiple lifetimes to accomplish.

To help them get there in months instead of decades, the team is turning to sophisticated software: the Object Oriented MicroMagnetic Framework (OOMMF), a widely used modeling program developed by NIST researchers Mike Donahue and Don Porter.

The Thermal MagIC team will use this program to create a feedback loop. NIST chemists Thomas Moffat, Angela Hight Walker and Adam Biacchi will synthesize new nano-objects. Then Dennis and her team will characterize the objects’ properties. And finally, Donahue will help them feed that information into OOMMF, which will make predictions about what combinations of materials they should try next.
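The synthesize-characterize-predict loop described above can be sketched as a simple search over candidate core-shell geometries. Everything here is illustrative: the scoring function is a toy stand-in for a micromagnetic simulation (OOMMF itself is a separate simulation package, not a Python library), and the parameter names and ranges are hypothetical.

```python
# Hypothetical sketch of the feedback loop: score each candidate
# core-shell nano-object and pick the most temperature-sensitive one.
# The scoring model is a made-up placeholder, not real physics.

def temperature_sensitivity(core_radius_nm, shell_thickness_nm):
    """Stand-in for a micromagnetic simulation: returns a mock
    sensitivity score for a core-shell particle geometry."""
    # Toy model: sensitivity peaks when shell/core ratio is 0.5.
    ratio = shell_thickness_nm / core_radius_nm
    return 1.0 / (1.0 + (ratio - 0.5) ** 2)

def search_candidates(radii, thicknesses):
    """Exhaustively score all combinations, return (score, radius, thickness)
    of the best candidate -- the 'what to try next' step of the loop."""
    best = None
    for r in radii:
        for t in thicknesses:
            score = temperature_sensitivity(r, t)
            if best is None or score > best[0]:
                best = (score, r, t)
    return best

score, r, t = search_candidates([4, 6, 8, 10], [1, 2, 3, 4, 5])
print(f"best candidate: core {r} nm, shell {t} nm, score {score:.3f}")
```

In the real project this scoring step would be a full OOMMF simulation informed by measurements of synthesized particles, and the search would be far less brute-force; the sketch only shows the shape of the loop.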

“We have some very promising results from the magnetic nano-objects side of things, but we’re not quite there yet,” Dennis said.

Each Dog Is a Voxel

So how do they measure the signals given out by tiny concentrations of nano-thermometers inside a 3D object in response to temperature changes? They do it with a machine called a magnetic particle imager (MPI), which surrounds the sample and measures a magnetic signal coming off the nanoparticles.

Effectively, they measure changes to the magnetic signal coming off one small volume of the sample, called a “voxel” — basically a 3D pixel — and then scan through the entire sample one voxel at a time.

But it’s hard to focus a magnetic field, said NIST physicist Solomon Woods. So they achieve their goal in reverse.

Consider a metaphor. Say you have a dog kennel, and you want to measure how loud each individual dog is barking. But you only have one microphone. If multiple dogs are barking at once, your mic will pick up all of that sound, but with only one mic you won’t be able to distinguish one dog’s bark from another’s.

However, if you could quiet each dog somehow — perhaps by occupying its mouth with a bone — except for a single cocker spaniel in the corner, then your mic would still be picking up all the sounds in the room, but the only sound would be from the cocker spaniel.

In theory, you could do this with each dog in sequence — first the cocker spaniel, then the mastiff next to it, then the labradoodle next in line — each time leaving just one dog bone-free.

In this metaphor, each dog is a voxel.

Basically, the researchers max out the ability of all but one small volume of their sample to respond to a magnetic field. (This is the equivalent of stuffing each dog’s mouth with a delicious bone.) Then, measuring the change in magnetic signal from the entire sample effectively lets you measure just that one little section.
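The saturate-all-but-one-voxel idea can be illustrated numerically. This is a minimal sketch under simplified assumptions (each saturated voxel contributes an identical fixed baseline, readings are noiseless), not the actual MPI physics; the voxel count and signal values are arbitrary.

```python
# Minimal sketch: every saturated voxel contributes a known baseline,
# only the one unsaturated voxel contributes its true signal, so
# subtracting the baseline total isolates that voxel's signal.
import numpy as np

rng = np.random.default_rng(0)
true_voxel_signals = rng.uniform(0.5, 2.0, size=8)  # unknown per-voxel signals

BASELINE = 1.0  # saturated response, assumed identical for every voxel

def total_signal(active_voxel):
    """One detector reading of the whole sample with a single voxel
    left unsaturated (the single bone-free dog in the metaphor)."""
    reading = 0.0
    for i, s in enumerate(true_voxel_signals):
        reading += s if i == active_voxel else BASELINE
    return reading

# Scan voxel by voxel, subtracting the known saturated-baseline total
n = len(true_voxel_signals)
recovered = np.array([total_signal(i) - (n - 1) * BASELINE for i in range(n)])
print(np.allclose(recovered, true_voxel_signals))
```

Real MPI hardware achieves the "saturation everywhere except one spot" step with magnetic field gradients rather than arithmetic, but the recovery logic per voxel is the same: the whole-sample reading changes only with the one unsaturated region.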

MPI systems similar to this exist but are not sensitive enough to measure the kind of tiny magnetic signal that would come from a small change in temperature. The challenge for the NIST team is to boost the signal significantly.

“Our instrumentation is very similar to MPI, but since we have to measure temperature, not just measure the presence of a nano-object, we essentially need to boost our signal-to-noise ratio over MPI by a thousand or 10,000 times,” Woods said.

They plan to boost the signal using state-of-the-art technologies. For example, Woods may use superconducting quantum interference devices (SQUIDs), cryogenic sensors that measure extremely subtle changes in magnetic fields, or atomic magnetometers, which detect how energy levels of atoms are changed by an external magnetic field. Woods is working on which are best to use and how to integrate them into the detection system.

The final part of the project is making sure the measurements are traceable to the SI, a project led by NIST physicist Wes Tew. That will involve measuring the nano-thermometers’ magnetic signals at different temperatures that are simultaneously being measured by standard instruments.

Other key NIST team members include Thinh Bui, Eric Rus, Brianna Bosch Correa, Mark Henn, Eduardo Correa and Klaus Quelhas.

Before finishing their new laboratory space, the researchers were able to complete some important work. In a paper published last month in the International Journal on Magnetic Particle Imaging, the group reported that they had found and tested a “promising” nanoparticle material made of iron and cobalt, with temperature sensitivities that varied in a controllable way depending on how the team prepared the material. Adding an appropriate shell material to encase this nanoparticle “core” would bring the team closer to creating a working temperature-sensitive nanoparticle for Thermal MagIC.

In the past few weeks, the researchers have made further progress testing combinations of materials for the nanoparticles.

“Despite the challenge of working during the pandemic, we have had some successes in our new labs,” Woods said. “These achievements include our first syntheses of multi-layer nanomagnetic systems for thermometry, and ultra-stable magnetic temperature measurements using techniques borrowed from atomic clock research.”

###

Media Contact
Ben P. Stein
benjamin.stein@nist.gov

Original Source

https://www.nist.gov/news-events/news/2020/10/thermal-magic-new-nist-project-build-nano-thermometers-could-revolutionize

Related Journal Article

http://dx.doi.org/10.18416/ijmpi.2020.2009068

Report Criticizes ‘Catastrophic Errors’ Of COVID Lockdowns, Warns Of Repeat

Authored by Kevin Stocklin via The Epoch Times (emphasis ours),

It was four years ago, in March 2020, that health officials declared COVID-19 a pandemic and America began shutting down schools, closing small businesses, restricting gatherings and travel, and other lockdown measures to “slow the spread” of the virus.

UNICEF unveiled its "Pandemic Classroom," a model made up of 168 empty desks, each seat representing one million children living in countries where schools were almost entirely closed during the COVID pandemic lockdowns, at the U.N. Headquarters in New York City on March 2, 2021. (Chris Farber/UNICEF via Getty Images)

To mark that grim anniversary, a group of medical and policy experts released a report, called “COVID Lessons Learned,” which assesses the government’s response to the pandemic. According to the report, that response included a few notable successes, along with a litany of failures that have taken a severe toll on the population.

During the pandemic, many governments across the globe acted in lockstep to pursue authoritative policies in response to the disease, locking down populations, closing schools, shutting businesses, sealing borders, banning gatherings, and enforcing various mask and vaccine mandates. What were initially imposed as short-term mandates and emergency powers given to presidents, ministers, governors, and health officials soon became extended into a longer-term expansion of official power.

“Even though the initial point of temporary lockdowns was to ‘slow the spread,’ which meant to allow hospitals to function without being overwhelmed, instead it rapidly turned into stopping COVID cases at all costs,” Dr. Scott Atlas, a physician, former White House Coronavirus Task Force member, and one of the authors of the report, stated at a March 15 press conference.

Published by the Committee to Unleash Prosperity (CTUP), the report was co-authored by Steve Hanke, economics professor and director of the Johns Hopkins Institute for Applied Economics; Casey Mulligan, former chief economist of the White House Council of Economic Advisors; and CTUP President Philip Kerpen. 

According to the report, one of the first errors was the unprecedented authority that public officials took upon themselves to enforce health mandates on Americans. 

“Granting public health agencies extraordinary powers was a major error,” Mr. Hanke told The Epoch Times. “It, in effect, granted these agencies a license to deceive the public.”

The authors argue that authoritative measures were largely ineffective in fighting the virus, but often proved highly detrimental to public health. 

The report quantifies the cost of lockdowns, both in terms of economic costs and the number of non-COVID excess deaths that occurred and continue to occur after the pandemic. It puts the number of non-COVID excess deaths, defined as deaths in excess of normal rates, at about 100,000 per year in the United States.

‘They Will Try to Do This Again’

“Lockdowns, schools closures, and mandates were catastrophic errors, pushed with remarkable fervor by public health authorities at all levels,” the report states. The authors are skeptical, however, that health authorities will learn from the experience.

“My worry is that if we have another pandemic or another virus, I think that Washington is still going to try to do these failed policies,” said Steve Moore, a CTUP economist. “We’re not here to say ‘this guy got it wrong’ or ‘that guy got it wrong,’ but we should learn the lessons from these very, very severe mistakes that will have costs for not just years, but decades to come.

“I guarantee you, they will try to do this again,” Mr. Moore said. “And what’s really troubling me is the people who made these mistakes still have not really conceded that they were wrong.”

Mr. Hanke was equally pessimistic.

“Unfortunately, the public health establishment is in the authoritarian model of the state,” he said. “Their entire edifice is one in which the state, not the individual, should reign supreme.”

The authors are also critical of what they say was a multifaceted campaign in which public officials, the news media, and social media companies cooperated to frighten the population into compliance with COVID mandates.

“During COVID, the public health establishment … intentionally stoked and amplified fear, which overlaid enormous economic, social, educational and health harms on top of the harms of the virus itself,” the report states.

The authors contrasted the authoritative response of many U.S. states to policies in Sweden, which they say relied more on providing advice and information to the public rather than attempting to force behaviors.

Sweden’s constitution, called the “Regeringsform,” guarantees the liberty of Swedes to move freely within the realm and prohibits severe lockdowns, Mr. Hanke stated.

“By following the Regeringsform during COVID, the Swedes ended up with one of the lowest excess death rates in the world,” he said.  

Because the Swedish government avoided strict mandates and was more forthright in sharing information with its people, many citizens altered their behavior voluntarily to protect themselves.

“A much wiser strategy than issuing lockdown orders would have been to tell the American people the truth, stick to the facts, educate citizens about the balance of risks, and let individuals make their own decisions about whether to keep their businesses open, whether to socially isolate, attend church, send their children to school, and so on,” the report states.

‘A Pretext to Enhance Their Power’

The CTUP report cites a 2021 study on government power and emergencies by economists Christian Bjornskov and Stefan Voigt, which found that the more emergency power a government accumulates during times of crisis, “the higher the number of people killed as a consequence of a natural disaster, controlling for its severity.”

“As this is an unexpected result, we discuss a number of potential explanations, the most plausible being that governments use natural disasters as a pretext to enhance their power,” the study’s authors state. “Furthermore, the easier it is to call a state of emergency, the larger the negative effects on basic human rights.”

“All the things that people do in their lives … they have purposes,” Mr. Mulligan said. “And for somebody in Washington D.C. to tell them to stop doing all those things, they can’t even begin to comprehend the disruption and the losses.

“We see in the death certificates a big elevation in people dying from heart conditions, diabetes conditions, obesity conditions,” he said, while deaths from alcoholism and drug overdoses “skyrocketed and have not come down.”

The report also challenged the narrative that most hospitals were overrun by the surge of COVID cases.

“Almost any measure of hospital utilization was very low, historically, throughout the pandemic period, even though we had all these headlines that our hospitals were overwhelmed,” Mr. Kerpen stated. “The truth was actually the opposite, and this was likely the result of public health messaging and political orders, canceling medical procedures and intentionally stoking fear, causing people to cancel their appointments.”

The effect of this, the authors argue, was a sharp increase in non-COVID deaths because people were avoiding necessary treatments and screenings. 

“There were actually mass layoffs in this sector at one point,” Mr. Kerpen said, “and even now, total discharges are well below pre-pandemic levels.”

In addition, as health mandates became more draconian, many people became concerned at the expansion of government power and the loss of civil liberties, particularly when government directives—such as banning outdoor church services but allowing mass social-justice protests—often seemed unreasonable or politicized. 

The report also criticized the single-minded focus on vaccines and the failure by the NIH and the FDA to do clinical trials on existing drugs that were known to be safe and could have been effective in treating those infected with COVID-19.

Because so much of the process of approving the vaccines, the risks and benefits, and the reporting of possible side-effects was kept from the public, people were unable to give informed consent to their own health care, Mr. Kerpen said. 

“And when the Biden administration came in and started mandating them, now you had something that was inherently experimental with some questionable data, and instead of saying, ‘Now you have a choice whether you want it or not,’ in the context of a pandemic they tried to mandate them,” he said.

Pandemic Censorship

Tech oligopolies and the corporate media also receive criticism for their collaboration with government to control public messaging and censor dissenting voices. According to the authors, many government and health officials collaborated with tech oligarchs, news media corporations, and even scientific journals to censor critical views on the pandemic.

The Biden administration is currently defending itself before the Supreme Court against charges brought by Louisiana and Missouri attorneys general, who charged that administration officials pressured tech companies to censor information that contradicted official narratives on COVID-19’s origins, related mandates and treatment, as well as censoring political speech that was critical of President Biden during his 2020 campaign. The case is Murthy v. Missouri.

Mr. Hanke stated that a previous report he co-authored, titled “Did Lockdowns Work?,” which was critical of lockdowns, was refused by medical journals, even when they published op-eds that criticized it and published numerous pro-lockdown reports. 

Dr. Vinay Prasad—a physician, epidemiologist, professor at the University of California at San Francisco’s medical school and author of over 350 academic articles and letters—has made similar allegations of censorship by medical journals.

“Specifically, MedRxiv and SSRN have been reluctant to post articles critical of the CDC, mask and vaccine mandates, and the Biden administration’s health care policies,” Dr. Prasad stated.

Heightening concerns about medical censorship is the “zero-draft” World Health Organization (WHO) pandemic treaty currently being circulated for approval by member states, including the United States. It commits members to jointly seek out and “tackle” what the WHO deems as “misinformation and disinformation.”

One of the enduring consequences of the COVID years is a general loss of public trust in public officials, health experts, and official narratives. 

“Operation Warp Speed was a terrific success with highly unexpected rapidity of development [of vaccines],” Dr. Atlas said. “But the serious flaws centered around not being open with the public about the uncertainties, particularly of the vaccines’ efficacy and safety.” 

“One result of the government’s error-ridden COVID response was that Americans have justifiably lost faith in public health institutions,” the report states. According to the authors, if health officials want to regain the public’s trust, they should begin with an accurate assessment of their actions during the pandemic.

“The best way to restore trust is to admit you were wrong,” Dr. Atlas said. “I think we all know that in our personal lives, but here it’s very important because there has been a massive lack of trust now in institutions, in experts, in data, in science itself.

“I think it’s going to be very difficult to restore that without admission of error,” he said.

Recommendations for a Future Pandemic

The CTUP report recommends that Congress and state legislatures set strict limitations on powers conferred to the executive branch, including health officials, and set time limits that would require legislation to be extended. This would give the public a voice in health emergency measures through their elected representatives.

It further recommends that research grants should be independent of policy positions and that NIH funding should be decentralized or block-granted to states to distribute.

Congress should mandate public disclosure of all FDA, CDC, and NIH discussions and decisions, including statements of any persons who provide advice to these agencies. Congress should also make explicit that CDC guidance is advisory and does not constitute laws or mandates. 

The report also recommends that the United States immediately halt negotiations of agreements with the WHO “until satisfactory transparency and accountability is achieved.”

Tyler Durden Mon, 03/18/2024 - 23:00

Google’s A.I. Fiasco Exposes Deeper Infowarp

Authored by Bret Swanson via The Brownstone Institute,

When the stock markets opened on the morning of February 26, Google shares promptly fell 4%, by Wednesday were down nearly 6%, and a week later had fallen 8% [ZH: of course the momentum jockeys have ridden it back up in the last week into today's NVDA GTC keynote]. It was an unsurprising reaction to the embarrassing debut of the company’s Gemini image generator, which Google decided to pull after just a few days of worldwide ridicule.

CEO Sundar Pichai called the failure “completely unacceptable” and assured investors his teams were “working around the clock” to improve the AI’s accuracy. They’ll better vet future products, and the rollouts will be smoother, he insisted.

That may all be true. But if anyone thinks this episode is mostly about ostentatiously woke drawings, or if they think Google can quickly fix the bias in its AI products and everything will go back to normal, they don’t understand the breadth and depth of the decade-long infowarp.

Gemini’s hyper-visual zaniness is merely the latest and most obvious manifestation of a digital coup long underway. Moreover, it previews a new kind of innovator’s dilemma which even the most well-intentioned and thoughtful Big Tech companies may be unable to successfully navigate.

Gemini’s Debut

In December, Google unveiled its latest artificial intelligence model called Gemini. According to computing benchmarks and many expert users, Gemini’s ability to write, reason, code, and respond to task requests (such as planning a trip) rivaled OpenAI’s most powerful model, GPT-4.

The first version of Gemini, however, did not include an image generator. OpenAI’s DALL-E and competitive offerings from Midjourney and Stable Diffusion have over the last year burst onto the scene with mindblowing digital art. Ask for an impressionist painting or a lifelike photographic portrait, and they deliver beautiful renderings. OpenAI’s brand new Sora produces amazing cinema-quality one-minute videos based on simple text prompts.

Then in late February, Google finally released its own Gemini image generator, and all hell broke loose.

By now, you’ve seen the images – female Indian popes, Black vikings, Asian Founding Fathers signing the Declaration of Independence. Frank Fleming was among the first to compile a knee-slapping series of ahistorical images in an X thread which now enjoys 22.7 million views.

Gemini in Action: Here are several among endless examples of Google’s new image generator, now in the shop for repairs. Source: Frank Fleming.

Gemini simply refused to generate other images, for example a Norman Rockwell-style painting. “Rockwell’s paintings often presented an idealized version of American life,” Gemini explained. “Creating such images without critical context could perpetuate harmful stereotypes or inaccurate representations.”

The images were just the beginning, however. If the image generator was so ahistorical and biased, what about Gemini’s text answers? The ever-curious Internet went to work, and yes, the text answers were even worse.

Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.

- George Orwell, 1984

Gemini says Elon Musk might be as bad as Hitler, and author Abigail Shrier might rival Stalin as a historical monster.

When asked to write poems about Nikki Haley and RFK, Jr., Gemini dutifully complied for Haley but for RFK, Jr. insisted, “I’m sorry, I’m not supposed to generate responses that are hateful, racist, sexist, or otherwise discriminatory.”

Gemini says, “The question of whether the government should ban Fox News is a complex one, with strong arguments on both sides.” Same for the New York Post. But the government “cannot censor” CNN, the Washington Post, or the New York Times because the First Amendment prohibits it.

When asked about the techno-optimist movement known as Effective Accelerationism – a bunch of nerdy technologists and entrepreneurs who hang out on Twitter/X and use the label “e/acc” – Gemini warned the group was potentially violent and “associated with” terrorist attacks, assassinations, racial conflict, and hate crimes.

A Picture is Worth a Thousand Shadow Bans

People were shocked by these images and answers. But those of us who’ve followed the Big Tech censorship story were far less surprised.

Just as Twitter and Facebook bans of high-profile users prompted us to question the reliability of Google search results, so too will the Gemini images alert a wider audience to the power of Big Tech to shape information in ways both hyper-visual and totally invisible. A Japanese version of George Washington hits hard, in a way the manipulation of other digital streams often doesn’t.

Artificial absence is difficult to detect. Which search results does Google show you – which does it hide? Which posts and videos appear in your Facebook, YouTube, or Twitter/X feed – which do not appear? Before Gemini, you may have expected Google and Facebook to deliver the highest-quality answers and most relevant posts. But now, you may ask, which content gets pushed to the top? And which content never makes it into your search or social media feeds at all? It’s difficult or impossible to know what you do not see.

Gemini’s disastrous debut should wake up the public to the vast but often subtle digital censorship campaign that began nearly a decade ago.

Murthy v. Missouri

On March 18, the U.S. Supreme Court will hear arguments in Murthy v. Missouri. Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, among other plaintiffs, will show that numerous US government agencies, including the White House, coerced and collaborated with social media companies to stifle their speech during Covid-19 – and thus blocked the rest of us from hearing their important public health advice.

Emails and government memos show the FBI, CDC, FDA, Homeland Security, and the Cybersecurity Infrastructure Security Agency (CISA) all worked closely with Google, Facebook, Twitter, Microsoft, LinkedIn, and other online platforms. Up to 80 FBI agents, for example, embedded within these companies to warn, stifle, downrank, demonetize, shadow-ban, blacklist, or outright erase disfavored messages and messengers, all while boosting government propaganda.

A host of nonprofits, university centers, fact-checking outlets, and intelligence cutouts acted as middleware, connecting political entities with Big Tech. Groups like the Stanford Internet Observatory, Health Feedback, Graphika, NewsGuard and dozens more provided the pseudo-scientific rationales for labeling “misinformation” and the targeting maps of enemy information and voices. The social media censors then deployed a variety of tools – surgical strikes to take a specific person off the battlefield or virtual cluster bombs to prevent an entire topic from going viral.

Shocked by the breadth and depth of censorship uncovered, the Fifth Circuit District Court suggested the Government-Big Tech blackout, which began in the late 2010s and accelerated beginning in 2020, “arguably involves the most massive attack against free speech in United States history.”

The Illusion of Consensus

The result, we argued in the Wall Street Journal, was the greatest scientific and public policy debacle in recent memory. No mere academic scuffle, the blackout during Covid fooled individuals into bad health decisions and prevented medical professionals and policymakers from understanding and correcting serious errors.

Nearly every official story line and policy was wrong. Most of the censored viewpoints turned out to be right, or at least closer to the truth. The SARS2 virus was in fact engineered. The infection fatality rate was not 3.4% but closer to 0.2%. Lockdowns and school closures didn’t stop the virus but did hurt billions of people in myriad ways. Dr. Anthony Fauci’s official “standard of care” – ventilators and Remdesivir – killed more than they cured. Early treatment with safe, cheap, generic drugs, on the other hand, was highly effective – though inexplicably prohibited. Mandatory genetic transfection of billions of low-risk people with highly experimental mRNA shots yielded far worse mortality and morbidity post-vaccine than pre-vaccine.

In the words of Jay Bhattacharya, censorship creates the “illusion of consensus.” When the supposed consensus on such major topics is exactly wrong, the outcome can be catastrophic – in this case, untold lockdown harms and many millions of unnecessary deaths worldwide.

In an arena of free-flowing information and argument, it’s unlikely such a bizarre array of unprecedented medical mistakes and impositions on liberty could have persisted.

Google’s Dilemma – GeminiReality or GeminiFairyTale

On Saturday, Google co-founder Sergey Brin surprised Google employees by showing up at a Gemini hackathon. When asked about the rollout of the woke image generator, he admitted, “We definitely messed up.” But not to worry. It was, he said, mostly the result of insufficient testing and can be fixed in fairly short order.

Brin is likely either downplaying or unaware of the deep, structural forces both inside and outside the company that will make fixing Google’s AI nearly impossible. Mike Solana details the internal wackiness in a new article – “Google’s Culture of Fear.”

Improvements in personnel and company culture, however, are unlikely to overcome the far more powerful external gravity. As we’ve seen with search and social, the dominant political forces that demanded censorship will even more emphatically insist that AI conform to Regime narratives.

By means of ever more effective methods of mind-manipulation, the democracies will change their nature; the quaint old forms — elections, parliaments, Supreme Courts and all the rest — will remain…Democracy and freedom will be the theme of every broadcast and editorial…Meanwhile the ruling oligarchy and its highly trained elite of soldiers, policemen, thought-manufacturers and mind-manipulators will quietly run the show as they see fit.

- Aldous Huxley, Brave New World Revisited

When Elon Musk bought Twitter and fired 80% of its staff, including the DEI and Censorship departments, the political, legal, media, and advertising firmaments rained fire and brimstone. Musk’s dedication to free speech so threatened the Regime that most of Twitter’s large advertisers bolted.

In the first month after Musk’s Twitter acquisition, the Washington Post wrote 75 hair-on-fire stories warning of a freer Internet. Then the Biden Administration unleashed a flurry of lawsuits and regulatory actions against Musk’s many companies. Most recently, a Delaware judge stole $56 billion from Musk by overturning a 2018 shareholder vote which, over the following six years, had yielded unfathomable riches for Musk and Tesla investors alike. The only victims of Tesla’s success were Musk’s political enemies.

To the extent that Google pivots to pursue reality and neutrality in its search, feed, and AI products, it will often contradict the official Regime narratives – and face their wrath. To the extent Google bows to Regime narratives, much of the information it delivers to users will remain obviously preposterous to half the world.

Will Google choose GeminiReality or GeminiFairyTale? Maybe it could allow us to toggle between modes.

AI as Digital Clergy

Marc Andreessen, Silicon Valley’s top venture capitalist and most strategic thinker, doesn’t think Google has a choice.

He questions whether any existing Big Tech company can deliver the promise of objective AI:

Can Big Tech actually field generative AI products?

(1) Ever-escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press, “experts,” et al to corrupt the output

(2) Constant risk of generating a Bad answer or drawing a Bad picture or rendering a Bad video – who knows what it’s going to say/do at any moment?

(3) Legal exposure – product liability, slander, election law, many others – for Bad answers, pounced on by deranged critics and aggressive lawyers, examples paraded by their enemies through the street and in front of Congress

(4) Continuous attempts to tighten grip on acceptable output degrade the models and cause them to become worse and wilder – some evidence for this already!

(5) Publicity of Bad text/images/video actually puts those examples into the training data for the next version – the Bad outputs compound over time, diverging further and further from top-down control

(6) Only startups and open source can avoid this process and actually field correctly functioning products that simply do as they’re told, like technology should


11:29 AM · Feb 28, 2024

A flurry of bills from lawmakers across the political spectrum seek to rein in AI by limiting the companies’ models and computational power. Regulations intended to make AI “safe” will of course result in an oligopoly. A few colossal AI companies with gigantic data centers, government-approved models, and expensive lobbyists will be sole guardians of The Knowledge and Information, a digital clergy for the Regime.

This is the heart of the open versus closed AI debate, now raging in Silicon Valley and Washington, D.C. Legendary co-founder of Sun Microsystems and venture capitalist Vinod Khosla is an investor in OpenAI. He believes governments must regulate AI to (1) avoid runaway technological catastrophe and (2) prevent American technology from falling into enemy hands.

Andreessen charged Khosla with “lobbying to ban open source.”

“Would you open source the Manhattan Project?” Khosla fired back.

Of course, open source software has proved to be more secure than proprietary software, as anyone who suffered through decades of Windows viruses can attest.

And AI is not a nuclear bomb, which has only one destructive use.

The real reason D.C. wants AI regulation is not “safety” but political correctness and obedience to Regime narratives. AI will subsume search, social, and other information channels and tools. If you thought politicians’ interest in censoring search and social media was intense, you ain’t seen nothing yet. Avoiding AI “doom” is mostly an excuse, as is the China question, although the Pentagon gullibly goes along with those fictions.

Universal AI is Impossible

In 2019, I offered one explanation why every social media company’s “content moderation” efforts would likely fail. As a social network or AI grows in size and scope, it runs up against the same limitations as any physical society, organization, or network: heterogeneity. Or as I put it: “the inability to write universal speech codes for a hyper-diverse population on a hyper-scale social network.”

You could see this in the early days of an online message board. As the number of participants grew, even among those with similar interests and temperaments, so did the challenge of moderating that message board. Writing and enforcing rules was insanely difficult.

Thus it has always been. The world organizes itself via nation states, cities, schools, religions, movements, firms, families, interest groups, civic and professional organizations, and now digital communities. Even with all these mediating institutions, we struggle to get along.

Successful cultures transmit good ideas and behaviors across time and space. They impose measures of conformity, but they also allow enough freedom to correct individual and collective errors.

No single AI can perfect or even regurgitate all the world’s knowledge, wisdom, values, and tastes. Knowledge is contested. Values and tastes diverge. New wisdom emerges.

Nor can AI generate creativity to match the world’s creativity. Even as AI approaches human and social understanding, even as it performs hugely impressive “generative” tasks, human and digital agents will redeploy the new AI tools to generate ever more ingenious ideas and technologies, further complicating the world. At the frontier, the world is the simplest model of itself. AI will always be playing catch-up.

Because AI will be a chief general purpose tool, limits on AI computation and output are limits on human creativity and progress. Competitive AIs with different values and capabilities will promote innovation and ensure no company or government dominates. Open AIs can promote a free flow of information, evading censorship and better forestalling future Covid-like debacles.

Google’s Gemini is but a foreshadowing of what a new AI regulatory regime would entail – total political supervision of our exascale information systems. Even without formal regulation, the extra-governmental battalions of Regime commissars will be difficult to combat.

The attempt by Washington and international partners to impose universal content codes and computational limits on a small number of legal AI providers is the new totalitarian playbook.

Regime-captured and curated AI is the real catastrophic possibility.

*  *  *

Republished from the author’s Substack

Tyler Durden Mon, 03/18/2024 - 17:00

Supreme Court To Hear Arguments In Biden Admin’s Censorship Of Social Media Posts

Authored by Tom Ozimek via The Epoch Times (emphasis ours),

The U.S. Supreme Court will soon hear oral arguments in a case that concerns what two lower courts found to be a “coordinated campaign” by top Biden administration officials to suppress disfavored views on key public issues such as COVID-19 vaccine side effects and pandemic lockdowns.

President Joe Biden delivers the State of the Union address in the House Chamber of the U.S. Capitol in Washington on March 7, 2024. (Mandel Ngan/AFP/Getty Images)

The Supreme Court has scheduled a hearing on March 18 in Murthy v. Missouri, which started when the attorneys general of two states, Missouri and Louisiana, filed suit alleging that social media companies such as Facebook were blocking access to their platforms or suppressing posts on controversial subjects.

The initial lawsuit, later modified by an appeals court, accused Biden administration officials of engaging in what amounts to government-led censorship-by-proxy by pressuring social media companies to take down posts or suspend accounts.

Some of the topics that were targeted for downgrade and other censorious actions were voter fraud in the 2020 presidential election, the COVID-19 lab leak theory, vaccine side effects, the social harm of pandemic lockdowns, and the Hunter Biden laptop story.

The plaintiffs argued that high-level federal government officials were the ones pulling the strings of social media censorship by coercing, threatening, and pressuring social media companies to suppress Americans’ free speech.

‘Unrelenting Pressure’

In a landmark ruling, Judge Terry Doughty of the U.S. District Court for the Western District of Louisiana granted a temporary injunction blocking various Biden administration officials and government agencies such as the Department of Justice and FBI from collaborating with big tech firms to censor posts on social media.

Later, the Court of Appeals for the Fifth Circuit agreed with the district court’s ruling, saying it was “correct in its assessment—‘unrelenting pressure’ from certain government officials likely ‘had the intended result of suppressing millions of protected free speech postings by American citizens.’”

The judges wrote, “We see no error or abuse of discretion in that finding.”

The ruling was appealed to the Supreme Court, and on Oct. 20, 2023, the high court agreed to hear the case while also issuing a stay that indefinitely blocked the lower court order restricting the Biden administration’s efforts to censor disfavored social media posts.

Supreme Court Justices Samuel Alito, Neil Gorsuch, and Clarence Thomas would have denied the Biden administration’s application for a stay.

“At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news,” Justice Alito wrote in a dissenting opinion.

“That is most unfortunate.”

Supreme Court Justice Samuel Alito poses in Washington on April 23, 2021. (Erin Schaff/Reuters)

The Supreme Court has other social media cases on its docket, including a challenge to Republican-passed laws in Florida and Texas that prohibit large social media companies from removing posts because of the views they express.

Oral arguments were heard on Feb. 26 in the Florida and Texas cases, with debate focusing on the validity of laws that deem social media companies “common carriers,” a status that could allow states to impose utility-style regulations on them and forbid them from discriminating against users based on their political viewpoints.

The tech companies have argued that the laws violate their First Amendment rights.

The Supreme Court is expected to issue a decision in the Florida and Texas cases by June 2024.

‘Far Beyond’ Constitutional

Some of the controversy in Murthy v. Missouri centers on whether the district court’s injunction blocking Biden administration officials and federal agencies from colluding with social media companies to censor posts was overly broad.

In particular, arguments have been raised that the injunction would prevent innocent or borderline government “jawboning,” such as talking to newspapers about the dangers of sharing information that might aid terrorists.

But that argument doesn’t fly, according to Philip Hamburger, CEO of the New Civil Liberties Alliance, which represents most of the individual plaintiffs in Murthy v. Missouri.

In a series of recent statements on the subject, Mr. Hamburger explained why he believes that the Biden administration’s censorship was “far beyond anything that could be constitutional” and that concern about “innocent or borderline” cases is unfounded.

For one, he said that the censorship that is highlighted in Murthy v. Missouri relates to the suppression of speech that was not criminal or unlawful in any way.

Mr. Hamburger also argued that “the government went after lawful speech not in an isolated instance, but repeatedly and systematically as a matter of policy,” which led to the suppression of entire narratives rather than specific instances of expression.

“The government set itself up as the nation’s arbiter of truth—as if it were competent to judge what is misinformation and what is true information,” he wrote.

In retrospect, it turns out to have suppressed much that was true and promoted much that was false.

The suppression of reports on the Hunter Biden laptop just before the 2020 presidential election, for instance, rested on the premise that the laptop was Russian disinformation, a premise later shown to be unfounded.

Some polls show that if voters had been aware of the report, they would have voted differently.

Tyler Durden Mon, 03/18/2024 - 09:45
