Science

Twitter Suspends Account Of Chinese Scientist Who Published Paper Alleging Covid Was Created In Wuhan Lab

Tyler Durden, Tue, 09/15/2020 - 14:44

On Sunday afternoon we asked how long it would be before Twitter silenced the account of the "rogue" Chinese virologist, Dr. Li-Meng Yan, who had just "shocked" the world of establishment scientists and other China sycophants by publishing a "smoking gun" scientific paper arguing that the Covid-19 virus was manmade.

We now have the answer: less than two days. A cursory check of Dr Yan's Twitter page reveals that the account has been suspended as of this moment.

The suspension took place shortly after Dr Yan had accumulated roughly 60,000 followers in less than 48 hours. The snapshot below was taken earlier in the day precisely in anticipation of this suspension.

It was not immediately clear what justification Twitter had for suspending the scientist, who, to the best of our knowledge, had posted just four tweets as of Tuesday morning, none of which violated any stated Twitter policy. The only relevant tweet was a link to her scientific paper, co-written with three other Chinese scientists and titled "Unusual Features of the SARS-CoV-2 Genome Suggesting Sophisticated Laboratory Modification Rather Than Natural Evolution and Delineation of Its Probable Synthetic Route," which laid out the case that the Wuhan Institute of Virology had created the Covid-19 virus.

While we appreciate that Twitter may have faced pressure from either China or the establishment scientific community to silence Dr Yan for proposing a theory that flies in the face of everything accepted as undisputed gospel - after all, Twitter did just that to us - we are confident that by suspending her account, Jack Dorsey has only added fuel to the fire of speculation that the Covid virus was indeed manmade (not to mention countless other tangential conspiracy theories).

If Yan was wrong, why not just let other scientists respond in the open to the all too valid arguments presented in Dr. Yan's paper? Isn't that what "science" is all about? Why just shut her up?

Because if we have already crossed the tipping point when anyone who proposes an "inconvenient" explanation for an established "truth" has to be immediately censored, then there is little that can be done to salvage the disintegration of a society that once held freedom of speech as paramount.

For those who missed it, here is our post breaking down Dr. Yan's various allegations which twitter saw fit to immediately censor instead of allowing a healthy debate to emerge.

We hope Twitter will provide a very reasonable and sensible explanation for this unprecedented censorship.

For those who missed it, her paper is below:

 


Government

This startup is setting a DALL-E 2-like AI free, consequences be damned


DALL-E 2, OpenAI’s powerful text-to-image AI system, can create photos in the style of cartoonists, 19th century daguerreotypists, stop-motion animators and more. But it has an important, artificial limitation: a filter that prevents it from creating images depicting public figures and content deemed too toxic.

Now an open source alternative to DALL-E 2 is on the cusp of being released, and it’ll have no such filter.

London- and Los Altos-based startup Stability AI this week announced the release of a DALL-E 2-like system, Stable Diffusion, to just over a thousand researchers ahead of a public launch in the coming weeks. A collaboration between Stability AI, media creation company RunwayML, Heidelberg University researchers, and the research groups EleutherAI and LAION, Stable Diffusion is designed to run on most high-end consumer hardware, generating 512×512-pixel images in just a few seconds given any text prompt.

Stable Diffusion sample outputs.

“Stable Diffusion will allow both researchers and soon the public to run this under a range of conditions, democratizing image generation,” Stability AI CEO and founder Emad Mostaque wrote in a blog post. “We look forward to the open ecosystem that will emerge around this and further models to truly explore the boundaries of latent space.”

But Stable Diffusion’s lack of safeguards compared to systems like DALL-E 2 poses tricky ethical questions for the AI community. Even if the results aren’t perfectly convincing yet, making fake images of public figures opens a large can of worms. And making the raw components of the system freely available leaves the door open to bad actors who could train them on subjectively inappropriate content, like pornography and graphic violence.

Creating Stable Diffusion

Stable Diffusion is the brainchild of Mostaque. Having graduated from Oxford with a master's in mathematics and computer science, Mostaque served as an analyst at various hedge funds before shifting gears to more public-facing work. In 2019, he co-founded Symmitree, a project that aimed to reduce the cost of smartphones and internet access for people living in impoverished communities. And in 2020, Mostaque was the chief architect of Collective & Augmented Intelligence Against COVID-19, an alliance to help policymakers make decisions in the face of the pandemic by leveraging software.

He co-founded Stability AI in 2020, motivated both by a personal fascination with AI and what he characterized as a lack of “organization” within the open source AI community.

An image of former president Barack Obama created by Stable Diffusion.

“Nobody has any voting rights except our 75 employees — no billionaires, big funds, governments or anyone else with control of the company or the communities we support. We’re completely independent,” Mostaque told TechCrunch in an email. “We plan to use our compute to accelerate open source, foundational AI.”

Mostaque says that Stability AI funded the creation of LAION 5B, an open source, 250-terabyte dataset containing 5.6 billion images scraped from the internet. (“LAION” stands for Large-scale Artificial Intelligence Open Network, a nonprofit organization with the goal of making AI, datasets and code available to the public.) The company also worked with the LAION group to create a subset of LAION 5B called LAION-Aesthetics, which contains AI-filtered images ranked as particularly “beautiful” by testers of Stable Diffusion.

The initial version of Stable Diffusion was based on LAION-400M, the predecessor to LAION 5B, which was known to contain depictions of sex, slurs and harmful stereotypes. LAION-Aesthetics attempts to correct for this, but it’s too early to tell to what extent it’s successful.

A collage of images created by Stable Diffusion.

In any case, Stable Diffusion builds on research incubated at OpenAI as well as Runway and Google Brain, one of Google’s AI R&D divisions. The system was trained on text-image pairs from LAION-Aesthetics to learn the associations between written concepts and images, like how the word “bird” can refer not only to bluebirds but parakeets and bald eagles, as well as more abstract notions.

At runtime, Stable Diffusion — like DALL-E 2 — breaks the image generation process down into a process of “diffusion.” It starts with pure noise and refines an image over time, making it incrementally closer to a given text description until there’s no noise left at all.
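In toy form, that refinement loop can be sketched as below (pure NumPy; `toy_denoise_step` is a made-up stand-in for the trained, text-conditioned denoising network, not the real model):

```python
import numpy as np

def toy_denoise_step(x, t, target):
    # Stand-in for the trained denoiser: nudge the noisy sample a
    # fraction of the way toward the image that "matches" the prompt.
    # (The real network instead predicts the noise to subtract,
    # conditioned on a text embedding.)
    return x + (target - x) / t

rng = np.random.default_rng(0)
target = rng.uniform(0, 1, size=(8, 8))   # pretend this matches the prompt
x = rng.normal(0, 1, size=(8, 8))         # start from pure noise

for t in range(50, 1 - 1, -1):            # refine over 50 steps, t = 50..1
    x = toy_denoise_step(x, t, target)

error = np.abs(x - target).mean()         # near zero once no noise is left
```

The point of the sketch is only the shape of the process: start from noise, apply many small denoising steps, end at a clean image.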

Boris Johnson wielding various weapons, generated by Stable Diffusion.

Stability AI used a cluster of 4,000 Nvidia A100 GPUs running in AWS to train Stable Diffusion over the course of a month. CompVis, the machine vision and learning research group at Ludwig Maximilian University of Munich, oversaw the training, while Stability AI donated the compute power.

Stable Diffusion can run on graphics cards with around 5GB of VRAM. That’s roughly the capacity of mid-range cards like Nvidia’s GTX 1660, priced around $230. Work is underway on bringing compatibility to AMD’s MI200 data center cards and even MacBooks with Apple’s M1 chip (although in the case of the latter, without GPU acceleration, image generation will take as long as a few minutes).
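A back-of-envelope estimate shows why a figure of around 5GB is plausible. The parameter counts below are approximate assumptions for illustration (in the ballpark reported for Stable Diffusion v1-class systems), not official numbers:

```python
# Rough weight-memory estimate for inference at half precision.
# Parameter counts are assumed/approximate, for illustration only.
unet_params = 860e6          # denoising U-Net
text_encoder_params = 123e6  # CLIP-style text encoder
vae_params = 84e6            # image autoencoder

bytes_per_param = 2          # fp16
weights_gb = (unet_params + text_encoder_params + vae_params) * bytes_per_param / 1e9

# ~2.1 GB of raw weights; activations, attention buffers and framework
# overhead plausibly account for the rest of the ~5 GB figure.
```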

“We have optimized the model, compressing the knowledge of over 100 terabytes of images,” Mostaque said. “Variants of this model will be trained on smaller datasets, particularly as reinforcement learning with human feedback and other techniques are used to take these general digital brains and make them even smaller and more focused.”

Samples from Stable Diffusion.

For the past few weeks, Stability AI has allowed a limited number of users to query the Stable Diffusion model through its Discord server, slowly increasing the maximum number of queries to stress-test the system. Stability AI says that over 15,000 testers have used Stable Diffusion to create 2 million images a day.

Far-reaching implications

Stability AI plans to take a dual approach in making Stable Diffusion more widely available. It’ll host the model in the cloud, allowing people to continue using it to generate images without having to run the system themselves. In addition, the startup will release what it calls “benchmark” models under a permissive license that can be used for any purpose — commercial or otherwise — as well as compute to train the models.

That will make Stability AI the first to release an image generation model nearly as high-fidelity as DALL-E 2. While other AI-powered image generators have been available for some time, including Midjourney, NightCafe and Pixelz.ai, none have open-sourced their frameworks. Others, like Google and Meta, have chosen to keep their technologies under tight wraps, allowing only select users to pilot them for narrow use cases.

Stability AI will make money by training “private” models for customers and acting as a general infrastructure layer, Mostaque said — presumably with a sensitive treatment of intellectual property. The company claims to have other commercializable projects in the works, including AI models for generating audio, music and even video.

Sand sculptures of Harry Potter and Hogwarts, generated by Stable Diffusion.

“We will provide more details of our sustainable business model soon with our official launch, but it is basically the commercial open source software playbook: services and scale infrastructure,” Mostaque said. “We think AI will go the way of servers and databases, with open beating proprietary systems — particularly given the passion of our communities.”

With the hosted version of Stable Diffusion — the one available through Stability AI’s Discord server — Stability AI doesn’t permit every kind of image generation. The startup’s terms of service ban some lewd or sexual material (although not scantily-clad figures), hateful or violent imagery (such as antisemitic iconography, racist caricatures, misogynistic and misandrist propaganda), prompts containing copyrighted or trademarked material, and personal information like phone numbers and Social Security numbers. But Stability AI won’t implement keyword-level filters like OpenAI’s, which prevent DALL-E 2 from even attempting to generate an image that might violate its content policy.
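For comparison, a keyword-level prompt filter of the kind described above — the approach Stability AI says it will not ship — can be as simple as the sketch below. The blocklist terms and the `prompt_allowed` helper are illustrative only, not any vendor's actual implementation:

```python
# Minimal keyword-level prompt filter: reject a prompt outright if it
# contains any blocked term, before the model ever sees it.
BLOCKLIST = {"social security number", "phone number"}

def prompt_allowed(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

allowed = prompt_allowed("a watercolor painting of a fox")            # True
blocked = prompt_allowed("an ID card with a Social Security number")  # False
```

Real deployments layer far more on top (classifiers, output scanning, human review), but this is the basic mechanism a keyword filter adds in front of generation.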

A Stable Diffusion generation, given the prompt: “very sexy woman with black hair, pale skin, in bikini, wet hair, sitting on the beach.”

Stability AI also doesn’t have a policy against images with public figures. That presumably makes deepfakes fair game (and Renaissance-style paintings of famous rappers), though the model struggles with faces at times, introducing odd artifacts that a skilled Photoshop artist rarely would.

“Our benchmark models that we release are based on general web crawls and are designed to represent the collective imagery of humanity compressed into files a few gigabytes big,” Mostaque said. “Aside from illegal content, there is minimal filtering, and it is on the user to use it as they will.”

An image of Hitler generated by Stable Diffusion.

Potentially more problematic are the soon-to-be-released tools for creating custom and fine-tuned Stable Diffusion models. An “AI furry porn generator” profiled by Vice offers a preview of what might come; an art student going by the name of CuteBlack trained an image generator to churn out illustrations of anthropomorphic animal genitalia by scraping artwork from furry fandom sites. The possibilities don’t stop at pornography. In theory, a malicious actor could fine-tune Stable Diffusion on images of riots and gore, for instance, or propaganda.

Already, testers in Stability AI’s Discord server are using Stable Diffusion to generate a range of content disallowed by other image generation services, including images of the war in Ukraine, nude women, a Chinese invasion of Taiwan, and controversial depictions of religious figures like the Prophet Mohammed. Many of the results bear telltale signs of an algorithmic creation, like disproportionate limbs and an incongruous mix of art styles. But others are passable on first glance. And the tech, presumably, will continue to improve.

Nude women generated by Stable Diffusion.

Mostaque acknowledged that the tools could be used by bad actors to create “really nasty stuff,” and CompVis says that the public release of the benchmark Stable Diffusion model will “incorporate ethical considerations.” But Mostaque argues that making the tools freely available allows the community to develop countermeasures.

“We hope to be the catalyst to coordinate global open source AI, both independent and academic, to build vital infrastructure, models and tools to maximize our collective potential,” Mostaque said. “This is amazing technology that can transform humanity for the better and should be open infrastructure for all.”

A generation from Stable Diffusion, with the prompt: “[Ukrainian president Volodymyr] Zelenskyy committed crimes in Bucha.”

Not everyone agrees, as evidenced by the controversy over “GPT-4chan,” an AI model trained on one of 4chan’s infamously toxic discussion boards. AI researcher Yannic Kilcher made GPT-4chan — which learned to output racist, antisemitic and misogynist hate speech — available earlier this year on Hugging Face, a hub for sharing trained AI models. Following discussions on social media and Hugging Face’s comment section, the Hugging Face team first “gated” access to the model before removing it altogether, but not before it was downloaded over a thousand times.

“War in Ukraine” images generated by Stable Diffusion.

Meta’s recent chatbot fiasco illustrates the challenge of keeping even ostensibly safe models from going off the rails. Just days after making its most advanced AI chatbot to date, BlenderBot 3, available on the web, Meta was forced to confront media reports that the bot made frequent antisemitic comments and repeated false claims about former U.S. president Donald Trump winning reelection two years ago.

BlenderBot 3’s toxicity came from biases in the public websites that were used to train it. It’s a well-known problem in AI — even when fed filtered training data, models tend to amplify biases like photo sets that portray men as executives and women as assistants. With DALL-E 2, OpenAI has attempted to combat this by implementing techniques, including dataset filtering, that help the model generate more “diverse” images. But some users claim that they’ve made the model less accurate than before at creating images based on certain prompts.

Stable Diffusion contains little in the way of mitigations besides training dataset filtering. So what’s to prevent someone from generating, say, photorealistic images of protests, “evidence” of fake moon landings and general misinformation? Nothing, really. But Mostaque says that’s the point.

Given the prompt “protests against the dilma government, brazil [sic],” Stable Diffusion created this image.

“A percentage of people are simply unpleasant and weird, but that’s humanity,” Mostaque said. “Indeed, it is our belief this technology will be prevalent, and the paternalistic and somewhat condescending attitude of many AI aficionados is misguided in not trusting society … We are taking significant safety measures including formulating cutting-edge tools to help mitigate potential harms across release and our own services. With hundreds of thousands developing on this model, we are confident the net benefit will be immensely positive and as billions use this tech harms will be negated.”


Government

When CRISPR meets optical sensors – the birth of a nucleic acid sensing platform: MOPCS


This study is led by Prof. Han Zhang (Institute of Microscale Optoelectronics, College of Physics and Optoelectronic Engineering, Shenzhen University) and supervised by Dr. Zhongjian Xie, Dr. Xiaopeng Ma (Shenzhen Children’s Hospital) and Dr. Yaqing He (Shenzhen Center for Disease Control and Prevention).

Credit: ©Science China Press

The outbreak of the COVID-19 pandemic is partially due to the challenge of identifying asymptomatic and pre-symptomatic carriers of the virus, which highlights a strong motivation for diagnostics that can be rapidly deployed with high sensitivity. At the same time, several SARS-CoV-2 variants of concern, including Omicron, need to be identified as soon as a sample tests positive. Unfortunately, a traditional PCR test does not allow their specific identification.

The clustered regularly interspaced short palindromic repeats (CRISPR) system is a well-known microbial adaptive immune system that has been developed into a revolutionary genome-editing tool. The widely known mechanism of CRISPR technology is that when a CRISPR-associated (Cas) nuclease is combined with a chimeric guide RNA (gRNA), the complex can bind to a gene locus bearing a protospacer-adjacent motif (PAM), then recognize and cleave a site-specific nucleotide sequence. This mechanism has been widely applied in gene therapies. At the same time, gene-detection methodologies using different types of Cas nucleases have been developed and are in wide use. For example, SHERLOCK is a method employing Cas12 or Cas13 to detect pre-amplified DNA or RNA sequences. Other methods, such as HOLMES, HOLMESv2 and CONAN, offer high specificity and flexibility.

Surface plasmon resonance (SPR) sensing is a well-known, versatile technique among optical sensing platforms. It is widely used in research on molecular interactions, including antigen-antibody, drug-target, protein-nucleic acid, protein-protein, and protein-lipid binding, by monitoring the refractive index change at the sensor surface caused by changes in the mass bound to the surface.

“What if we combined CRISPR and SPR?” Dr. Zhi Chen, a postdoctoral researcher at Shenzhen University who specializes in medical science and molecular biology, seized on this spark. The idea was promptly endorsed by the leader of his research group, Prof. Han Zhang, and another researcher, Dr. Jingfeng Li, agreed to collaborate with Dr. Chen to complete the research. After a large number of repeated tests, a prototype of the CRISPR-empowered SPR sensing platform was finally built. More co-workers then joined and supported the research, including the Shenzhen Center for Disease Control and Prevention.

Dr. Zhi Chen, Dr. Jingfeng Li, and Prof. Han Zhang named this platform MOPCS (Methodologies Of Photonic CRISPR Sensing). It combines an optical sensing technology, surface plasmon resonance (SPR), with the ‘gene scissors’ of the CRISPR technique to achieve both high sensitivity and high specificity in measuring viral variants. MOPCS is a low-cost, CRISPR/Cas12a-empowered SPR gene-detection platform that can analyze viral RNA, without the need for amplification, within 38 minutes from sample input to results output, and achieve a limit of detection of 15 fM. In addition, MOPCS delivers highly sensitive analysis of SARS-CoV-2 and of mutations appearing in variants B.1.617.2 (Delta), B.1.1.529 (Omicron), and BA.1 (a subtype of Omicron). The platform was also used to analyze patient samples recently collected from a local outbreak in Shenzhen City and identified by the Centers for Disease Control and Prevention. This innovative CRISPR-empowered SPR platform will further contribute to fast, sensitive, and accurate detection of target nucleic acid sequences with single-base mutations.

###

See the article:

A CRISPR/Cas12a empowered surface plasmon resonance platform for rapid and specific diagnosis of the Omicron variant of SARS-CoV-2

https://doi.org/10.1093/nsr/nwac104



Government

It’s Over: CDC Says People Exposed To COVID No Longer Need To Quarantine


"Sorry you guys had to miss your grandmother's funeral, but at least you don't have to quarantine anymore!" 

That may as well have been the message the CDC put out on Thursday, conceding - after years of micromanaging a "crisis" - that those exposed to Covid no longer need to quarantine, regardless of their vaccination status. Tweeting as much just a little over a year ago would have gotten you banned on all social networks in a millisecond.

New guidelines only recommend that people who have been exposed "wear a mask for 10 days" and get tested for the virus on day 5, according to the New York Times - a radical departure from the prior draconian measures, which required self-imposed quarantines for as long as 14 days.

The CDC also doesn't recommend staying at least 6 feet away from other people to reduce the risk of exposure, CNN noted. It's a recommendation that boldly flies in the face of the scared sh*tless narrative the agency has been pushing. 

According to the same report, the new guidelines also say that contact tracing "should be limited to hospitals and certain high-risk group-living situations such as nursing homes". They also "de-emphasize the use of regular testing to screen for Covid-19, except in certain high-risk settings like nursing homes and prisons."

And just like that, the hysteria was over. But at least we all "followed the science."

* * *

Greta Massetti, a CDC epidemiologist, said Thursday: “We know that Covid-19 is here to stay. High levels of population immunity due to vaccination and previous infection, and the many tools that we have available to protect people from severe illness and death, have put us in a different place.”

According to Massetti and the Times, the new guidelines "emphasize the importance of vaccination and other measures, including antiviral treatments and ventilation."

Not to mention, they also mark a drastic 180 degree shift from how the CDC was handling Covid over the past 24 months. But suddenly, with mid-term elections just months away, the new guidelines are being praised by health officials - imagine that.

Amesh Adalja, a senior scholar at the Johns Hopkins Center for Health Security, said: "I think this is a welcome change. It actually shows how far we’ve come.” Yes... and it appears we have been carrying the goalposts all along.

But, despite this convenient narrative, the very same article goes on to make the case that not enough Americans have been vaccinated. After all, what would a writeup on Covid by the NY Times be without crowing about the number of still unvaccinated MAGA-hat wearing Americans:

And while nearly all Americans are now eligible to be vaccinated, many are not up-to-date on their shots. Just 30 percent of 5- to 11-year-olds and 60 percent of 12- to 17-year-olds have received their primary vaccine series nationwide. Among adults 65 and older, who are at highest risk of severe disease, 65 percent have received a booster.

Jennifer Nuzzo, director of the Pandemic Center at the Brown University School of Public Health, concluded: “Obviously, we have to do more work to make sure that more people avail themselves of the protection that those tools have to offer and that more people can access those tools. I do think there’s been an overall dial-back in the ground game that’s needed to get people vaccinated.”

Tyler Durden Fri, 08/12/2022 - 08:17
