

Twitter says it’s no longer enforcing COVID-19 misleading information policy


Twitter is no longer enforcing its policy against misleading information about COVID-19, per an update posted to an official company blog page.

Reuters spotted the change earlier; per the update, it has been in effect since last Wednesday.

“Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy,” the social media company writes in a brief grey-on-grey note on a company webpage that’s still emblazoned with the title: “Coronavirus: Staying safe and informed on Twitter”.

No explanation was given by Twitter for the policy change.

Under its prior COVID-19 misinformation policy, the company had said it would remove “demonstrably false or potentially misleading content that has the highest risk of causing harm”.

Grey on grey update freezing enforcement of Twitter’s COVID-19 misleading information policy (Screengrab: Natasha Lomas/TechCrunch)

Since billionaire Tesla CEO Elon Musk took over the company last month, after closing his $44BN takeover and drastically slashing headcount, Twitter has stopped responding to press requests — and appears to have entirely shuttered its comms function — leaving Musk’s own tweets, or posts like this one on its company blog, as the only official outlets for confirming what it’s doing.

While it’s not clear why Twitter has abandoned enforcement of the COVID-19 policy, there was plenty of nuance in how the policy could be interpreted — as well as a range of enforcement actions Twitter might apply, from putting contextual or warning labels on tweets, to reducing visibility and blocking sharing, all the way up to requiring removal of the tweet and, for repeat offenders, suspending accounts.

All that enforcement has presumably now ceased under Musk — who, on taking over Twitter at the end of October, tweeted gleefully that “the bird is freed!”.

Roughly a month later, Musk’s approach to ‘liberating’ speech on the platform means he’s opened the door for conspiracy nonsense peddlers to amplify dangerous bs about COVID-19 on Twitter. So the backsliding is real.

See, for example, this wild claim tweeted last Friday by mega Musk fanboy Kimdotcom — whose account has some 1.1M followers on Twitter — in which he heavily implies that “vaccines now kill more people than COVID”. His ‘evidence’ for that? A graph of “excess deaths” in Europe whose source, EuroMOMO, makes no reference to the causes of those excess deaths.

A report made to Twitter, via its official misinformation reporting channel, of Kimdotcom’s tweet for spreading COVID-19 misinformation did not yield a response from the company last week. And that inaction is now apparently Twitter policy.

At the time of writing it was still possible to file a report via official Twitter channels of COVID-19 misinformation — see the below screengrab for its unamended policy statements — but, again, no action will presumably be taken on any such reports…

Twitter misleading info report (Screengrab: Natasha Lomas/TechCrunch)

Per the framework the company had previously used to evaluate “potentially misleading” claims related to COVID-19 — and to determine whether or not it would take action on a particular tweet — for a tweet to qualify as a misleading claim “it must be an assertion of fact (not an opinion), expressed definitively, and intended to influence others’ behavior”.

“Under this policy, we consider claims to be false or misleading if (1) they have been confirmed to be false by subject-matter experts, such as public health authorities; or (2) they include information which is shared in a way that could confuse or deceive people,” it also previously stipulated.

Additionally, Twitter, pre-Musk, conceded that it would be unable to take enforcement action “on every Tweet that contains incomplete or disputed information about COVID-19” — saying it would therefore focus on addressing claims “that could adversely impact an individual, group, or community”. Its greatest stated concern was to curb misleading information that could increase the likelihood of exposure to the virus, negatively impact health systems’ capacity to cope, or lead to discrimination and avoidance of communities and/or places of business based on “perceived affiliation with protected groups”.

Given all that nuance around enforcement it’s not 100% certain Twitter, pre-Musk, would have taken down Kimdotcom’s tweet. But it’s 100% guaranteed it won’t do anything about any nonsense tweets about COVID-19 now with Musk in charge.

It’s not clear why the company would want to step back from enforcing a policy that was intended to help protect public health. But Musk has presented himself as a free speech absolutist and continues to actively seek to stoke culture wars on the platform he now owns.

He also recently said he would let scores of previously banned Twitter accounts return to the platform under a general amnesty — so he’s tended toward pulling out all the stops (though he did apparently draw the line at unbanning InfoWars’ conspiracy hate preacher, Alex Jones, implying distaste over the lies Jones had spread about massacred school children).

What Musk’s (near) free-for-all for disinformation will mean for Twitter users is a continued degradation of the quality of the information they’re being exposed to on the platform.

(See also: His disastrous paid verification scheme — which does not distinguish between people who’ve paid for a ‘Blue Check’ and accounts that got one under the prior actual identity-verification scheme, unless you click through to read some tiny, grey print.)

This free pass for disinformation seems likely to result in Twitter losing more users as more people decide they’ve had enough of being exposed to nonsense and take flight, seeking less toxic online spaces to socialize.

Advertisers are also unlikely to relish their brands being served up alongside misleading tweets about COVID-19.

Another looming question for Musk-Twitter is how regulators will respond to the amped up disinformation risk.

As we’ve written before, under its prior leadership Twitter had signed itself up to a series of voluntary commitments to fight the spread of disinformation on its platform in the European Union. As far as we’re aware, the company has not revoked its status as a signatory to this EU Code of Practice on Disinformation. But its participation in the initiative now appears to exist purely on paper.

We’ve reached out to the European Commission for a response to Twitter’s policy change on misleading information about COVID-19 and will update this report with any response.

While the EU Code is not legally binding, and breaching it does not imply any sanctions, the Commission — whose initiative this is — is about to take on a major oversight role for large platforms under the incoming Digital Services Act (DSA). It has previously said adherence to the disinformation Code will be factored into its assessment of platforms’ compliance with the legally binding requirements of the DSA. And breaching that regime could incur penalties of up to 6% of global turnover. 

Twitter says it’s no longer enforcing COVID-19 misleading information policy by Natasha Lomas originally published on TechCrunch



Sylvester researchers, collaborators call for greater investment in bereavement care


MIAMI, FLORIDA (March 15, 2024) – The public health toll from bereavement is well-documented in the medical literature, with bereaved persons at greater risk for many adverse outcomes, including mental health challenges, decreased quality of life, health care neglect, cancer, heart disease, suicide, and death. Now, in a paper published in The Lancet Public Health, researchers sound a clarion call for greater investment, at both the community and institutional level, in establishing support for grief-related suffering.

Credit: Photo courtesy of Memorial Sloan Kettering Comprehensive Cancer Center


The authors emphasized that increased mortality worldwide caused by the COVID-19 pandemic, suicide, drug overdose, homicide, armed conflict, and terrorism has accelerated the urgency for national- and global-level frameworks to strengthen the provision of sustainable and accessible bereavement care. Unfortunately, current national and global investment in bereavement support services is woefully inadequate to address this growing public health crisis, said researchers with Sylvester Comprehensive Cancer Center at the University of Miami Miller School of Medicine and collaborating organizations.

They proposed a model for transitional care that involves firmly establishing bereavement support services within healthcare organizations to ensure continuity of family-centered care while bolstering community-based support through development of “compassionate communities” and a grief-informed workforce. The model highlights the responsibility of the health system to build bridges to the community that can help grievers feel held as they transition.   

The Center for the Advancement of Bereavement Care at Sylvester is advocating for precisely this model of transitional care. Wendy G. Lichtenthal, PhD, FT, FAPOS, who is Founding Director of the new Center and associate professor of public health sciences at the Miller School, noted, “We need a paradigm shift in how healthcare professionals, institutions, and systems view bereavement care. Sylvester is leading the way by investing in the establishment of this Center, which is the first to focus on bringing the transitional bereavement care model to life.”

What further distinguishes the Center is its roots in bereavement science, advancing care approaches that are both grounded in research and community-engaged.  

The authors focused on palliative care, which strives to provide a holistic approach to minimize suffering for seriously ill patients and their families, as one area where improvements are critically needed. They referenced groundbreaking reports of the Lancet Commissions on the value of global access to palliative care and pain relief that highlighted the “undeniable need for improved bereavement care delivery infrastructure.” One of those reports acknowledged that bereavement has been overlooked and called for reprioritizing social determinants of death, dying, and grief.

“Palliative care should culminate with bereavement care, both in theory and in practice,” explained Lichtenthal, who is the article’s corresponding author. “Yet, bereavement care often is under-resourced and beset with access inequities.”

Transitional bereavement care model

So, how do health systems and communities prioritize bereavement services to ensure that no bereaved individual goes without needed support? The transitional bereavement care model offers a roadmap.

“We must reposition bereavement care from an afterthought to a public health priority. Transitional bereavement care is necessary to bridge the gap in offerings between healthcare organizations and community-based bereavement services,” Lichtenthal said. “Our model calls for health systems to shore up the quality and availability of their offerings, but also recognizes that resources for bereavement care within a given healthcare institution are finite, emphasizing the need to help build communities’ capacity to support grievers.”

Key to the model, she added, is the bolstering of community-based support through development of “compassionate communities” and “upskilling” of professional services to assist those with more substantial bereavement-support needs.

The model contains these pillars:

  • Preventive bereavement care – healthcare teams engage in bereavement-conscious practices, and compassionate communities are mindful of the emotional and practical needs of dying patients’ families.
  • Ownership of bereavement care – institutions provide bereavement education for staff, risk screenings for families, outreach, and counseling or grief support. Communities establish bereavement centers and “champions” to provide bereavement care at workplaces, schools, places of worship, or care facilities.
  • Resource allocation for bereavement care – dedicated personnel offer universal outreach, and bereaved stakeholders provide input to identify community barriers and needed resources.
  • Upskilling of support providers – bereavement education is integrated into training programs for health professionals, and institutions offer dedicated grief specialists. Communities have trained, accessible bereavement specialists who provide support and are educated in how to best support bereaved individuals, increasing their grief literacy.
  • Evidence-based care – bereavement care is evidence-based and features effective grief assessments, interventions, and training programs. Compassionate communities remain mindful of bereavement care needs.

Lichtenthal said the new Center will strive to materialize these pillars and aims to serve as a global model for other health organizations. She hopes the paper’s recommendations “will cultivate a bereavement-conscious and grief-informed workforce as well as grief-literate, compassionate communities and health systems that prioritize bereavement as a vital part of ethical healthcare.”

“This paper is calling for healthcare institutions to respond to their duty to care for the family beyond patients’ deaths. By investing in the creation of the Center for the Advancement of Bereavement Care, Sylvester is answering this call,” Lichtenthal said.

Follow @SylvesterCancer on X for the latest news on Sylvester’s research and care.

# # #

Article Title: Investing in bereavement care as a public health priority

DOI: 10.1016/S2468-2667(24)00030-6

Authors: The complete list of authors is included in the paper.

Funding: The authors received funding from the National Cancer Institute (P30 CA240139, Nimer; P30 CA008748, Vickers).

Disclosures: The authors declared no competing interests.

# # #




Separating Information From Disinformation: Threats From The AI Revolution


Authored by Per Bylund via The Mises Institute,

Artificial intelligence (AI) cannot distinguish fact from fiction. Nor is it creative: it does not create novel content but repeats, repackages, and reformulates what has already been said (though perhaps in new ways).

I am sure someone will disagree with the latter, perhaps pointing to the fact that AI can clearly generate, for example, new songs and lyrics. I agree with this, but it misses the point. AI produces a “new” song lyric only by drawing from the data of previous song lyrics and then using that information (the inductively uncovered patterns in it) to generate what to us appears to be a new song (and may very well be one). However, there is no artistry in it, no creativity. It’s only a structural rehashing of what exists.

Of course, we can debate to what extent humans can think truly novel thoughts and whether human learning may be based solely or primarily on mimicry. However, even if we were to agree—for the sake of argument—that all we know and do is mere reproduction, humans have a limited capacity to remember exactly and will make errors. We also fill in gaps with what subjectively (not objectively) makes sense to us (Rorschach test, anyone?). Even in this very limited scenario, which I disagree with, humans generate novelty beyond what AI is able to do.

Both the inability to distinguish fact from fiction and the inductive tether to existing data patterns are problems that can be alleviated programmatically—but they remain open to manipulation.

Manipulation and Propaganda

When Google launched its Gemini AI in February, it immediately became clear that the AI had a woke agenda. Among other things, the AI pushed woke diversity ideals into every conceivable response and refused to show images of white people (including when asked to produce images of the Founding Fathers).

Tech guru and Silicon Valley investor Marc Andreessen summarized it on X (formerly Twitter): “I know it’s hard to believe, but Big Tech AI generates the output it does because it is precisely executing the specific ideological, radical, biased agenda of its creators. The apparently bizarre output is 100% intended. It is working as designed.”

There is indeed a design to these AIs beyond the basic categorization and generation engines. The responses are not perfectly inductive or generative. In part, this is necessary in order to make the AI useful: filters and rules are applied to make sure that the responses that the AI generates are appropriate, fit with user expectations, and are accurate and respectful. Given the legal situation, creators of AI must also make sure that the AI does not, for example, violate intellectual property laws or engage in hate speech. AI is also designed (directed) so that it does not go haywire or offend its users (remember Tay?).

However, because such filters are applied and the “behavior” of the AI is already directed, it is easy to take it a little further. After all, when is a response too offensive versus offensive but within the limits of allowable discourse? It is a fine and difficult line that must be specified programmatically.

It also opens the possibility of steering the generated responses beyond mere quality assurance. With filters already in place, it is easy to make the AI produce statements of a specific type, or statements that nudge the user in a certain direction (in terms of selected facts, interpretations, and worldviews). It can also be used to give the AI an agenda, as Andreessen suggests, such as making it relentlessly woke.

Thus, AI can be used as an effective propaganda tool, which both the corporations creating them and the governments and agencies regulating them have recognized.

Misinformation and Error

States have long refused to admit that they benefit from and use propaganda to steer and control their subjects. This is in part because they want to maintain a veneer of legitimacy as democratic governments that govern based on (rather than shape) people’s opinions. Propaganda has a bad ring to it; it’s a means of control.

However, the state’s enemies—both domestic and foreign—are said to understand the power of propaganda and do not hesitate to use it to cause chaos in our otherwise untainted democratic society. The government must save us from such manipulation, they claim. Of course, rarely does it stop at mere defense. We saw this clearly during the covid pandemic, in which the government together with social media companies in effect outlawed expressing opinions that were not the official line (see Murthy v. Missouri).

AI is just as easy to manipulate for propaganda purposes as social media algorithms, but with the added bonus that it isn’t merely relaying other people’s opinions and that users tend to trust that what the AI reports is true. As we saw in the previous article on the AI revolution, this is not a valid assumption, but it is nevertheless a widely held view.

If the AI can then be instructed not to comment on certain things that the creators (or regulators) do not want people to see or learn, that information is effectively “memory holed.” Such “unwanted” information will not spread, as people will not be exposed to it—whether by showing only diverse representations of the Founding Fathers (as Google’s Gemini did) or by presenting, for example, only Keynesian macroeconomic truths to make it appear like there is no other perspective. People don’t know what they don’t know.

Of course, nothing is to say that what is presented to the user is true. In fact, the AI itself cannot distinguish fact from fiction; it only generates responses according to direction and based only on whatever it has been fed. This leaves plenty of scope for misrepresentation of the truth and can make the world believe outright lies. AI, therefore, can easily be used to impose control, whether upon a state, the subjects under its rule, or even a foreign power.

The Real Threat of AI

What, then, is the real threat of AI? As we saw in the first article, large language models will not (cannot) evolve into artificial general intelligence as there is nothing about inductive sifting through large troves of (humanly) created information that will give rise to consciousness. To be frank, we haven’t even figured out what consciousness is, so to think that we will create it (or that it will somehow emerge from algorithms discovering statistical language correlations in existing texts) is quite hyperbolic. Artificial general intelligence is still hypothetical.

As we saw in the second article, there is also no economic threat from AI. It will not make humans economically superfluous and cause mass unemployment. AI is productive capital, which therefore has value to the extent that it serves consumers by contributing to the satisfaction of their wants. Misused AI is as valuable as a misused factory—it will tend to its scrap value. However, this doesn’t mean that AI will have no impact on the economy. It will, and already has, but it is not as big in the short-term as some fear, and it is likely bigger in the long-term than we expect.

No, the real threat is AI’s impact on information. This is in part because induction is an inappropriate source of knowledge—truth and fact are not a matter of frequency or statistical probabilities. The evidence and theories of Nicolaus Copernicus and Galileo Galilei would get weeded out as improbable (false) by an AI trained on all the (best and brightest) writings on geocentrism at the time. There is no progress and no learning of new truths if we trust only historical theories and presentations of fact.

However, this problem can probably be overcome by clever programming (meaning implementing rules—and fact-based limitations—to the induction problem), at least to some extent. The greater problem is the corruption of what AI presents: the misinformation, disinformation, and malinformation that its creators and administrators, as well as governments and pressure groups, direct it to create as a means of controlling or steering public opinion or knowledge.

This is the real danger that the now-famous open letter, signed by Elon Musk, Steve Wozniak, and others, pointed to:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

Other than the economically illiterate reference to “automat[ing] away all the jobs,” the warning is well-taken. AI will not Terminator-like start to hate us and attempt to exterminate mankind. It will not make us all into biological batteries, as in The Matrix. However, it will—especially when corrupted—misinform and mislead us, create chaos, and potentially make our lives “solitary, poor, nasty, brutish and short.”

Tyler Durden Fri, 03/15/2024 - 06:30



‘Excess Mortality Skyrocketed’: Tucker Carlson and Dr. Pierre Kory Unpack ‘Criminal’ COVID Response


As the global pandemic unfolded, government-funded experimental vaccines were hastily developed for a virus which primarily killed the old and fat (and those with other obvious comorbidities), and an aggressive, global campaign to coerce billions into injecting them ensued.

Then there were the lockdowns - with some countries (New Zealand, for example) building internment camps for those who tested positive for Covid-19, and others such as China welding entire apartment buildings shut to trap people inside.

It was an egregious and unnecessary response to a virus that, while highly virulent, was survivable by the vast majority of the general population.

Oh, and the vaccines, which governments are still pushing, didn't work as advertised to the point where health officials changed the definition of "vaccine" multiple times.

Tucker Carlson recently sat down with Dr. Pierre Kory, a critical care specialist and vocal critic of vaccines. The two had a wide-ranging discussion, which included vaccine safety and efficacy, excess mortality, demographic impacts of the virus, big pharma, and the professional price Kory has paid for speaking out.

Keep reading below, or if you have roughly 50 minutes, watch it in its entirety for free on X:

"Do we have any real sense of what the cost, the physical cost to the country and world has been of those vaccines?" Carlson asked, kicking off the interview.

"I do think we have some understanding of the cost. I mean, I think, you know, you're aware of the work of of Ed Dowd, who's put together a team and looked, analytically at a lot of the epidemiologic data," Kory replied. "I mean, time with that vaccination rollout is when all of the numbers started going sideways, the excess mortality started to skyrocket."

When asked "what kind of death toll are we looking at?", Kory responded "...in 2023 alone, in the first nine months, we had what's called an excess mortality of 158,000 Americans," adding "But this is in 2023. I mean, we've had Omicron now for two years, which is a mild variant. Not that many go to the hospital."

'Safe and Effective'

Tucker also asked Kory why the people who claimed the vaccine were "safe and effective" aren't being held criminally liable for abetting the "killing of all these Americans," to which Kory replied: "It’s my kind of belief, looking back, that [safe and effective] was a predetermined conclusion. There was no data to support that, but it was agreed upon that it would be presented as safe and effective."

Carlson and Kory then discussed the different segments of the population that experienced vaccine side effects, with Kory noting an "explosion in dying in the youngest and healthiest sectors of society," adding "And why did the employed fare far worse than those that weren't? And this particularly white collar, white collar, more than gray collar, more than blue collar."

Kory also said that Big Pharma is 'terrified' of Vitamin D because it "threatens the disease model." As journalist The Vigilant Fox notes on X, "Vitamin D showed about a 60% effectiveness against the incidence of COVID-19 in randomized control trials," and "showed about 40-50% effectiveness in reducing the incidence of COVID-19 in observational studies."

Professional costs

Kory, while risking professional suicide by speaking out, has undoubtedly helped save countless lives by advocating for alternative treatments such as Ivermectin.

Kory shared his own experiences of job loss and censorship, highlighting the challenges of advocating for a more nuanced understanding of vaccine safety in an environment often resistant to dissenting voices.

"I wrote a book called The War on Ivermectin and the the genesis of that book," he said, adding "Not only is my expertise on Ivermectin and my vast clinical experience, but and I tell the story before, but I got an email, during this journey from a guy named William B Grant, who's a professor out in California, and he wrote to me this email just one day, my life was going totally sideways because our protocols focused on Ivermectin. I was using a lot in my practice, as were tens of thousands of doctors around the world, to really good benefits. And I was getting attacked, hit jobs in the media, and he wrote me this email on and he said, Dear Dr. Kory, what they're doing to Ivermectin, they've been doing to vitamin D for decades..."

"And it's got five tactics. And these are the five tactics that all industries employ when science emerges, that's inconvenient to their interests. And so I'm just going to give you an example. Ivermectin science was extremely inconvenient to the interests of the pharmaceutical industrial complex. I mean, it threatened the vaccine campaign. It threatened vaccine hesitancy, which was public enemy number one. We know that, that everything, all the propaganda censorship was literally going after something called vaccine hesitancy."

Money makes the world go 'round

Carlson then hit on perhaps the most devious aspect of the relationship between drug companies and the medical establishment, and how special interests completely taint science to the point where public distrust of institutions has spiked in recent years.

"I think all of it starts at the level the medical journals," said Kory. "Because once you have something established in the medical journals as a, let's say, a proven fact or a generally accepted consensus, consensus comes out of the journals."

"I have dozens of rejection letters from investigators around the world who did good trials on ivermectin, tried to publish it. No thank you, no thank you, no thank you. And then the ones that do get in all purportedly prove that ivermectin didn't work," Kory continued.

"So and then when you look at the ones that actually got in and this is where like probably my biggest estrangement and why I don't recognize science and don't trust it anymore, is the trials that flew to publication in the top journals in the world were so brazenly manipulated and corrupted in the design and conduct in, many of us wrote about it. But they flew to publication, and then every time they were published, you saw these huge PR campaigns in the media. New York Times, Boston Globe, L.A. times, ivermectin doesn't work. Latest high quality, rigorous study says. I'm sitting here in my office watching these lies just ripple throughout the media sphere based on fraudulent studies published in the top journals. And that's that's that has changed. Now that's why I say I'm estranged and I don't know what to trust anymore."

Vaccine Injuries

Carlson asked Kory about his clinical experience with vaccine injuries.

"So how this is how I divide, this is just kind of my perception of vaccine injury is that when I use the term vaccine injury, I'm usually referring to what I call a single organ problem, like pericarditis, myocarditis, stroke, something like that. An autoimmune disease," he replied.

"What I specialize in my practice, is I treat patients with what we call a long Covid long vaxx. It's the same disease, just different triggers, right? One is triggered by Covid, the other one is triggered by the spike protein from the vaccine. Much more common is long vax. The only real differences between the two conditions is that the vaccinated are, on average, sicker and more disabled than the long Covids, with some pretty prominent exceptions to that."

Watch the entire interview above, and you can support Tucker Carlson's endeavors by joining the Tucker Carlson Network here...

Tyler Durden Thu, 03/14/2024 - 16:20
