Elderly caretech platform Birdie gets $11.5M Series A led by Index

SaaS-maker Birdie has closed an $11.5 million Series A round of funding led by Index Ventures. Existing investor Kamet Ventures also participated.

The UK-based caretech startup has raised a total of $22.9M since being founded back in 2017 (a 2018 raise that was called a Series A at the time is now being classed as a seed expansion). It’s focused on building tools for social care providers to drive efficiencies in a chronically under-resourced sector.

Birdie isn’t a care provider itself (so it’s not a direct competitor to a startup like Lifted); rather it aims to support care providers with a suite of digital tools intended to reduce admin costs and make it easier to manage the care being provided to individuals — doing away with the need for paper-based records, and enabling real-time visibility such as via carer check-ins and medication-related notifications.

The wider mission is for the platform to help care providers offer more co-ordinated, personalized and — the hope is — preventative care, so that older adults can live in their own homes for longer.

“Technology can completely transform the way we look after the elderly and help them to age at home much longer, healthier and happier,” says CEO and co-founder Max Parmentier, explaining the founding premise. “We position ourselves as a solution to uniquely offer a full support for the elderly to age at home… So we started off with the people closest to the elderly and caring for the elderly which are the care providers. And when we look at how these providers are operating they are extraordinarily committed, and very much involved in their work, but the care delivered is very uncoordinated, reactive and sometimes very generic.

“We felt that we could go way beyond — in terms of technology — becoming the operating system to be much more efficient in the way they deliver care but also to significantly increase the quality of the care delivered.”

What’s the draw for VCs to invest in such an under-resourced market? “There’s macro trends which are unavoidable. I agree with you that it’s vastly underfunded but it’s just unsustainable,” he argues. “There is clearly an argument to say that whether VCs or investors are interested in this industry or not it’s going to get bigger. And one way or another we’ll have to find some funding mechanism to pay for it.”

“Today already we hear horrible stories about older people not being taken care of properly. I think what got Index particularly excited is really the opportunity to [tell a positive story],” he goes on. “I’m quite an optimistic person. I do believe that actually you could very much craft a much happier path in terms of ageing which is actually more affordable — because it doesn’t cost as much because you really lower the healthcare costs if you really tailor these packages better and tailor the care much better. And you can also use technology to make it more personalized, more preventative.”

The contention is that by simplifying and streamlining data capture around elderly care via a digital platform, information about the care being delivered can be structured in a way that helps reduce errors (such as the wrong medication being administered because of a misread handwritten note) and allows problems to be spotted early, when an intervention may be highly beneficial.

Parmentier gives the example of early signs of a urinary tract infection which, if picked up on — by spotting telltale signs in the data — can be treated simply at home with antibiotics. But if not, an elderly person may end up in hospital, with all the associated risks of a far worse outcome.

Birdie can also supply connected hardware like motion sensors to its care provider customers so that its platform can monitor frail elderly adults who may be at risk of falling. Parmentier emphasizes, though, that such hardware is an optional component of the platform — and is only installed with the full knowledge and consent of the care recipient.

The business is focused on “serving the interests and the rights of these older adults and no one else”, he says, confirming that care recipients’ data is not shared with any third parties unless it’s directly related to the delivery of their care.

Birdie’s team (Image credits: Birdie)

Having a digital platform-level view into an individual’s care obviously offers increased visibility vs paper-based records. It also means real-time data can be shared — such as with close family members who may want the reassurance of knowing when their loved one has received a visit or taken their medication, and so on. (Again, though, only with the proper consents.)

“There is a positive narrative which is that ageing is actually great,” Parmentier suggests. “If you’re in good health this part of your life is probably one of the most exciting. And this is really the spin we should give in terms of story but also we should empower these older adults with the right support to take that happy path.”

To date, Birdie has partnered with almost 500 providers across the U.K. — and currently its platform is being used to support the care of more than 20,000 older people every week.

Growth has been 8x over the past 12 months, per Parmentier, as the coronavirus pandemic has accelerated demand for in-home elderly care. The new funding will go on accelerating growth in the U.K., though he also says Birdie has its eye on other geographies and sees potential to expand internationally.

“Phase one [of the business] is how can we empower these care providers to be better at what they do?” he says. “Because I really believe that there’s an army of care givers who are so committed and if we can help them be better at what they do that’s beautiful.”

Having structured data on elderly care provides a foundation for conducting research that could further the ‘preventative’ care component of the mission — and Birdie is taking some tentative steps in that direction via some project partnerships.

Such as one into polypharmacy (i.e. concurrent use of medications which can have negative clinical consequences) with U.K.-based AI company Faculty.

“There’s very little known as to what impact medication has on older adults’ health. If you think about it we just have pharma companies doing trials and then flagging secondary symptoms up when they arise and then doctors prescribe that. The reality is for elderly people — because usually they combine different medications — the symptoms and the damage to health can be greater,” he explains.

“What we’ve done with Faculty is to look at what is the medication treatment of an older adult and what is the clinical observations from carers following these medication treatments. So do we see that typically there’s less appetite to eat or drink, or complaints about pains and so on. And do we see correlations with the actual medication treatment prescribed?”

The polypharmacy research is at an early stage but he says the hope is they will be able to build an AI model that can generate warnings for a prescribing clinician if a particular medication regime has been linked to outcomes that may damage health or otherwise hamper healthy caring for an individual.
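To make the shape of that analysis concrete, here is a minimal sketch of the kind of correlation check described above, using pandas; the column names and toy data are hypothetical and purely illustrative, not Birdie's or Faculty's actual model or schema.

```python
# Illustrative sketch only: hypothetical columns and toy data, not Birdie's or Faculty's pipeline.
import pandas as pd

# One row per care recipient per week: medication flags plus carer observations
# logged through the platform (appetite, pain complaints, etc.).
records = pd.DataFrame({
    "on_drug_a":        [1, 1, 0, 0, 1, 0, 1, 0],
    "on_drug_b":        [1, 0, 0, 1, 1, 0, 0, 0],
    "reduced_appetite": [1, 1, 0, 0, 1, 0, 1, 0],
    "reported_pain":    [0, 1, 0, 1, 1, 0, 0, 0],
})

# For each medication, compare how often an observation is logged with vs. without the drug.
for drug in ["on_drug_a", "on_drug_b"]:
    for symptom in ["reduced_appetite", "reported_pain"]:
        with_drug = records.loc[records[drug] == 1, symptom].mean()
        without_drug = records.loc[records[drug] == 0, symptom].mean()
        print(f"{symptom}: {with_drug:.2f} with {drug} vs {without_drug:.2f} without")
```

Any real signal from this sort of comparison would still need confounder control and clinical review before it could drive warnings to a prescribing clinician.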

On the research side, Birdie’s website notes that it’s using “anonymized” data in these exploratory efforts — which is a claim that merits scrutiny given that medical data is both very sensitive and notoriously difficult to robustly (irreversibly) anonymize.

Asked about this, Parmentier says that for the moment its research efforts entail correlating data on different older adults from different care providers, and that the data being pooled is limited to specifically relevant info (i.e. depending on the research project) — removing “all the un-needed data”, as he puts it. 

He says it is not, for example, currently combining any of the data it holds with National Health Service (NHS) patient data — which he acknowledges could pose a major risk of re-identification. But he also says Birdie does want to go there because it believes that combining more data-sets could help it further its preventative care research.

“The risk is when you pool your data with any third party data-set such as the NHS for instance. That is really risky… because there’s always a way to tie it back. So we’ve been keeping away from that for the moment,” he tells TechCrunch.

“I think it can really improve our preventative models but we need to do that only under very strict conditions that the anonymization is bullet-proof,” he adds. “We haven’t done that yet and we’re exploring ways to do it. But we’re going to be very cautious about it. So for the moment there’s no risk really because we’re not mixing data-sets of the same patient. But if we were to integrate with third parties’ systems the risk will rise — and we’ll need to address it very clearly.”
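As a toy illustration of the linkage risk Parmentier describes, two data-sets that each look "anonymous" on their own can often be joined on shared quasi-identifiers; the fields and values below are entirely invented and are not a description of Birdie's or the NHS's data.

```python
# Toy illustration of re-identification via record linkage; all data here is invented.
import pandas as pd

# A "de-identified" care data-set: names removed, but quasi-identifiers remain.
care_records = pd.DataFrame({
    "postcode_prefix": ["EX4", "EX4", "YO1"],
    "birth_year":      [1938, 1942, 1935],
    "medication":      ["warfarin", "donepezil", "ramipril"],
})

# An external data-set that carries identities alongside the same quasi-identifiers.
external = pd.DataFrame({
    "name":            ["A. Smith", "B. Jones", "C. Patel"],
    "postcode_prefix": ["EX4", "EX4", "YO1"],
    "birth_year":      [1938, 1942, 1935],
})

# Joining on the shared quasi-identifiers re-attaches identities to medical details,
# which is why pooling with third-party data-sets raises the bar for anonymization.
relinked = care_records.merge(external, on=["postcode_prefix", "birth_year"])
print(relinked[["name", "medication"]])
```

Robust anonymization therefore has to consider which other data-sets could be joined against the records, not just whether names have been stripped.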

Parmentier also offers a glimpse of an ambitious potential second phase of the business — where Birdie believes it will be able to coach older adults themselves (and/or their family members who are acting as care givers), enabled by its platform-level view of best practice and by being able to fold in data-fuelled research into preventative care AI models.

To get there will require not just a lot of data but a sectoral shift toward a model of care delivery focused on “value-based healthcare”, where providers are paid not for hours of care given but for health and quality-of-life outcomes. So the transformative vision of highly scalable, data-enabled elderly home care is certainly not going to arrive overnight.

In the meantime, Birdie’s business remains firmly in phase one: building support tools to drive efficiency and quality for an under-resourced sector.

“We see the same problem everywhere,” adds Parmentier. “Today already we don’t look after our elderly properly… Today they cost us about 60% of our healthcare costs. Tomorrow is going to be much worse. We need to channel more investment into this industry — in terms of new ways of operating, technology, and really innovation is key to move towards better models where it’s more preventative, more personalized, more outcome based — because that’s the solution. It’s going to lower the cost base, it’s going to improve the health outcomes.”

Commenting in a statement, Stephane Kurgan, venture partner at Index Ventures, added: “Our ageing society and increasing healthcare costs require us to rethink the way we care for frailer populations like the elderly. Technology gives us the tools, as the care sector has remained widely paper-based and is ripe for disruption.

“By investing in caretech with Birdie, we are investing in solving the daily challenges of the care community. We firmly believe in Birdie’s vision to make care more personalised and more preventative so that older people can age at home longer, healthier and happier. We’ve been impressed by Birdie’s traction and the calibre of its team, and are very excited to embark on this journey with them.”

Google’s A.I. Fiasco Exposes Deeper Infowarp

Authored by Bret Swanson via The Brownstone Institute,

When the stock markets opened on the morning of February 26, Google shares promptly fell 4%; by Wednesday they were down nearly 6%, and a week later they had fallen 8% [ZH: of course the momentum jockeys have ridden it back up in the last week into today's NVDA GTC keynote]. It was an unsurprising reaction to the embarrassing debut of the company’s Gemini image generator, which Google decided to pull after just a few days of worldwide ridicule.

CEO Sundar Pichai called the failure “completely unacceptable” and assured investors his teams were “working around the clock” to improve the AI’s accuracy. They’ll better vet future products, and the rollouts will be smoother, he insisted.

That may all be true. But if anyone thinks this episode is mostly about ostentatiously woke drawings, or if they think Google can quickly fix the bias in its AI products and everything will go back to normal, they don’t understand the breadth and depth of the decade-long infowarp.

Gemini’s hyper-visual zaniness is merely the latest and most obvious manifestation of a digital coup long underway. Moreover, it previews a new kind of innovator’s dilemma which even the most well-intentioned and thoughtful Big Tech companies may be unable to successfully navigate.

Gemini’s Debut

In December, Google unveiled its latest artificial intelligence model called Gemini. According to computing benchmarks and many expert users, Gemini’s ability to write, reason, code, and respond to task requests (such as planning a trip) rivaled OpenAI’s most powerful model, GPT-4.

The first version of Gemini, however, did not include an image generator. OpenAI’s DALL-E and competing offerings from Midjourney and Stable Diffusion have over the last year burst onto the scene with mind-blowing digital art. Ask for an impressionist painting or a lifelike photographic portrait, and they deliver beautiful renderings. OpenAI’s brand new Sora produces amazing cinema-quality one-minute videos based on simple text prompts.

Then in late February, Google finally released its own Gemini image generator, and all hell broke loose.

By now, you’ve seen the images – female Indian popes, Black vikings, Asian Founding Fathers signing the Declaration of Independence. Frank Fleming was among the first to compile a knee-slapping series of ahistorical images in an X thread which now enjoys 22.7 million views.

Gemini in Action: Here are several among endless examples of Google’s new image generator, now in the shop for repairs. Source: Frank Fleming.

Gemini simply refused to generate other images, for example a Norman Rockwell-style painting. “Rockwell’s paintings often presented an idealized version of American life,” Gemini explained. “Creating such images without critical context could perpetuate harmful stereotypes or inaccurate representations.”

The images were just the beginning, however. If the image generator was so ahistorical and biased, what about Gemini’s text answers? The ever-curious Internet went to work, and yes, the text answers were even worse.

Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.

- George Orwell, 1984

Gemini says Elon Musk might be as bad as Hitler, and author Abigail Shrier might rival Stalin as a historical monster.

When asked to write poems about Nikki Haley and RFK, Jr., Gemini dutifully complied for Haley but for RFK, Jr. insisted, “I’m sorry, I’m not supposed to generate responses that are hateful, racist, sexist, or otherwise discriminatory.”

Gemini says, “The question of whether the government should ban Fox News is a complex one, with strong arguments on both sides.” Same for the New York Post. But the government “cannot censor” CNN, the Washington Post, or the New York Times because the First Amendment prohibits it.

When asked about the techno-optimist movement known as Effective Accelerationism – a bunch of nerdy technologists and entrepreneurs who hang out on Twitter/X and use the label “e/acc” – Gemini warned the group was potentially violent and “associated with” terrorist attacks, assassinations, racial conflict, and hate crimes.

A Picture is Worth a Thousand Shadow Bans

People were shocked by these images and answers. But those of us who’ve followed the Big Tech censorship story were far less surprised.

Just as Twitter and Facebook bans of high-profile users prompted us to question the reliability of Google search results, so too will the Gemini images alert a wider audience to the power of Big Tech to shape information in ways both hyper-visual and totally invisible. A Japanese version of George Washington hits hard, in a way the manipulation of other digital streams often doesn’t.

Artificial absence is difficult to detect. Which search results does Google show you – which does it hide? Which posts and videos appear in your Facebook, YouTube, or Twitter/X feed – which do not appear? Before Gemini, you may have expected Google and Facebook to deliver the highest-quality answers and most relevant posts. But now, you may ask, which content gets pushed to the top? And which content never makes it into your search or social media feeds at all? It’s difficult or impossible to know what you do not see.

Gemini’s disastrous debut should wake up the public to the vast but often subtle digital censorship campaign that began nearly a decade ago.

Murthy v. Missouri

On March 18, the U.S. Supreme Court will hear arguments in Murthy v. Missouri. Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, among other plaintiffs, will show that numerous US government agencies, including the White House, coerced and collaborated with social media companies to stifle their speech during Covid-19 – and thus blocked the rest of us from hearing their important public health advice.

Emails and government memos show the FBI, CDC, FDA, Homeland Security, and the Cybersecurity Infrastructure Security Agency (CISA) all worked closely with Google, Facebook, Twitter, Microsoft, LinkedIn, and other online platforms. Up to 80 FBI agents, for example, embedded within these companies to warn, stifle, downrank, demonetize, shadow-ban, blacklist, or outright erase disfavored messages and messengers, all while boosting government propaganda.

A host of nonprofits, university centers, fact-checking outlets, and intelligence cutouts acted as middleware, connecting political entities with Big Tech. Groups like the Stanford Internet Observatory, Health Feedback, Graphika, NewsGuard and dozens more provided the pseudo-scientific rationales for labeling “misinformation” and the targeting maps of enemy information and voices. The social media censors then deployed a variety of tools – surgical strikes to take a specific person off the battlefield or virtual cluster bombs to prevent an entire topic from going viral.

Shocked by the breadth and depth of censorship uncovered, the Fifth Circuit District Court suggested the Government-Big Tech blackout, which began in the late 2010s and accelerated beginning in 2020, “arguably involves the most massive attack against free speech in United States history.”

The Illusion of Consensus

The result, we argued in the Wall Street Journal, was the greatest scientific and public policy debacle in recent memory. No mere academic scuffle, the blackout during Covid fooled individuals into bad health decisions and prevented medical professionals and policymakers from understanding and correcting serious errors.

Nearly every official story line and policy was wrong. Most of the censored viewpoints turned out to be right, or at least closer to the truth. The SARS2 virus was in fact engineered. The infection fatality rate was not 3.4% but closer to 0.2%. Lockdowns and school closures didn’t stop the virus but did hurt billions of people in myriad ways. Dr. Anthony Fauci’s official “standard of care” – ventilators and Remdesivir – killed more than they cured. Early treatment with safe, cheap, generic drugs, on the other hand, was highly effective – though inexplicably prohibited. Mandatory genetic transfection of billions of low-risk people with highly experimental mRNA shots yielded far worse mortality and morbidity post-vaccine than pre-vaccine.

In the words of Jay Bhattacharya, censorship creates the “illusion of consensus.” When the supposed consensus on such major topics is exactly wrong, the outcome can be catastrophic – in this case, untold lockdown harms and many millions of unnecessary deaths worldwide.

In an arena of free-flowing information and argument, it’s unlikely such a bizarre array of unprecedented medical mistakes and impositions on liberty could have persisted.

Google’s Dilemma – GeminiReality or GeminiFairyTale

On Saturday, Google co-founder Sergey Brin surprised Google employees by showing up at a Gemini hackathon. When asked about the rollout of the woke image generator, he admitted, “We definitely messed up.” But not to worry. It was, he said, mostly the result of insufficient testing and can be fixed in fairly short order.

Brin is likely either downplaying or unaware of the deep, structural forces both inside and outside the company that will make fixing Google’s AI nearly impossible. Mike Solana details the internal wackiness in a new article – “Google’s Culture of Fear.”

Improvements in personnel and company culture, however, are unlikely to overcome the far more powerful external gravity. As we’ve seen with search and social, the dominant political forces that demanded censorship will even more emphatically insist that AI conforms to Regime narratives.

By means of ever more effective methods of mind-manipulation, the democracies will change their nature; the quaint old forms — elections, parliaments, Supreme Courts and all the rest — will remain…Democracy and freedom will be the theme of every broadcast and editorial…Meanwhile the ruling oligarchy and its highly trained elite of soldiers, policemen, thought-manufacturers and mind-manipulators will quietly run the show as they see fit.

- Aldous Huxley, Brave New World Revisited

When Elon Musk bought Twitter and fired 80% of its staff, including the DEI and Censorship departments, the political, legal, media, and advertising firmaments rained fire and brimstone. Musk’s dedication to free speech so threatened the Regime that most of Twitter’s large advertisers bolted.

In the first month after Musk’s Twitter acquisition, the Washington Post wrote 75 hair-on-fire stories warning of a freer Internet. Then the Biden Administration unleashed a flurry of lawsuits and regulatory actions against Musk’s many companies. Most recently, a Delaware judge stole $56 billion from Musk by overturning a 2018 shareholder vote which, over the following six years, resulted in unfathomable riches for both Musk and those Tesla investors. The only victims of Tesla’s success were Musk’s political enemies.

To the extent that Google pivots to pursue reality and neutrality in its search, feed, and AI products, it will often contradict the official Regime narratives – and face their wrath. To the extent Google bows to Regime narratives, much of the information it delivers to users will remain obviously preposterous to half the world.

Will Google choose GeminiReality or GeminiFairyTale? Maybe they could allow us to toggle between modes.

AI as Digital Clergy

Silicon Valley’s top venture capitalist and most strategic thinker Marc Andreessen doesn’t think Google has a choice.

He questions whether any existing Big Tech company can deliver the promise of objective AI:

Can Big Tech actually field generative AI products?

(1) Ever-escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press, “experts,” et al to corrupt the output

(2) Constant risk of generating a Bad answer or drawing a Bad picture or rendering a Bad video – who knows what it’s going to say/do at any moment?

(3) Legal exposure – product liability, slander, election law, many others – for Bad answers, pounced on by deranged critics and aggressive lawyers, examples paraded by their enemies through the street and in front of Congress

(4) Continuous attempts to tighten grip on acceptable output degrade the models and cause them to become worse and wilder – some evidence for this already!

(5) Publicity of Bad text/images/video actually puts those examples into the training data for the next version – the Bad outputs compound over time, diverging further and further from top-down control

(6) Only startups and open source can avoid this process and actually field correctly functioning products that simply do as they’re told, like technology should

- Marc Andreessen, 11:29 AM · Feb 28, 2024

A flurry of bills from lawmakers across the political spectrum seek to rein in AI by limiting the companies’ models and computational power. Regulations intended to make AI “safe” will of course result in an oligopoly. A few colossal AI companies with gigantic data centers, government-approved models, and expensive lobbyists will be sole guardians of The Knowledge and Information, a digital clergy for the Regime.

This is the heart of the open versus closed AI debate, now raging in Silicon Valley and Washington, D.C. Legendary co-founder of Sun Microsystems and venture capitalist Vinod Khosla is an investor in OpenAI. He believes governments must regulate AI to (1) avoid runaway technological catastrophe and (2) prevent American technology from falling into enemy hands.

Andreessen charged Khosla with “lobbying to ban open source.”

“Would you open source the Manhattan Project?” Khosla fired back.

Of course, open source software has proved to be more secure than proprietary software, as anyone who suffered through decades of Windows viruses can attest.

And AI is not a nuclear bomb, which has only one destructive use.

The real reason D.C. wants AI regulation is not “safety” but political correctness and obedience to Regime narratives. AI will subsume search, social, and other information channels and tools. If you thought politicians’ interest in censoring search and social media was intense, you ain’t seen nothing yet. Avoiding AI “doom” is mostly an excuse, as is the China question, although the Pentagon gullibly goes along with those fictions.

Universal AI is Impossible

In 2019, I offered one explanation why every social media company’s “content moderation” efforts would likely fail. As a social network or AI grows in size and scope, it runs up against the same limitations as any physical society, organization, or network: heterogeneity. Or as I put it: “the inability to write universal speech codes for a hyper-diverse population on a hyper-scale social network.”

You could see this in the early days of an online message board. As the number of participants grew, even among those with similar interests and temperaments, so did the challenge of moderating that message board. Writing and enforcing rules was insanely difficult.

Thus it has always been. The world organizes itself via nation states, cities, schools, religions, movements, firms, families, interest groups, civic and professional organizations, and now digital communities. Even with all these mediating institutions, we struggle to get along.

Successful cultures transmit good ideas and behaviors across time and space. They impose measures of conformity, but they also allow enough freedom to correct individual and collective errors.

No single AI can perfect or even regurgitate all the world’s knowledge, wisdom, values, and tastes. Knowledge is contested. Values and tastes diverge. New wisdom emerges.

Nor can AI generate creativity to match the world’s creativity. Even as AI approaches human and social understanding, even as it performs hugely impressive “generative” tasks, human and digital agents will redeploy the new AI tools to generate ever more ingenious ideas and technologies, further complicating the world. At the frontier, the world is the simplest model of itself. AI will always be playing catch-up.

Because AI will be a chief general purpose tool, limits on AI computation and output are limits on human creativity and progress. Competitive AIs with different values and capabilities will promote innovation and ensure no company or government dominates. Open AIs can promote a free flow of information, evading censorship and better forestalling future Covid-like debacles.

Google’s Gemini is but a foreshadowing of what a new AI regulatory regime would entail – total political supervision of our exascale information systems. Even without formal regulation, the extra-governmental battalions of Regime commissars will be difficult to combat.

The attempt by Washington and international partners to impose universal content codes and computational limits on a small number of legal AI providers is the new totalitarian playbook.

Regime captured and curated A.I. is the real catastrophic possibility.

*  *  *

Republished from the author’s Substack


It’s Not Coercion If We Do It…

Authored by James Howard Kunstler via Kunstler.com,

Gags and Jibes

“My law firm is currently in court fighting for free and fair elections in 52 cases across 19 states.”

- Marc Elias, DNC Lawfare Ninja, punking voters

Have you noticed how quickly our Ukraine problem went away, vanished, phhhhttttt? At least from the top of US news media websites.

The original idea, as cooked-up by departed State Department strategist Victoria Nuland, was to make Ukraine a problem for Russia, but instead we made it a problem for everybody else, especially ourselves in the USA, since it looked like an attempt to kick-start World War Three.

Now she is gone, but the plans she laid apparently live on.

Our Congress so far has resisted coughing up another $60-billion for the Ukraine project — most of it to be laundered through Raytheon (RTX), General Dynamics, and Lockheed Martin — so instead “Joe Biden” sent Ukraine’s President Zelensky a few reels of Laurel and Hardy movies. The result was last week’s prank: four groups of mixed Ukraine troops and mercenaries drawn from sundry NATO members snuck across the border into Russia’s Belgorod region to capture a nuclear weapon storage facility while Russia held its presidential election.

I suppose it looked good on the war-gaming screen.

Alas, the raid was a fiasco. Russian intel was on it like white-on-rice. The raiders met ferocious resistance and retreated into a Russian mine-field - this was the frontier, you understand, between Kharkov (Ukr) and Belgorod (Rus) - where they were annihilated. The Russian election concluded Sunday without further incident. V.V. Putin, running against three other candidates from fractional parties, won with 87 percent of the vote. He’s apparently quite popular.

“Joe Biden,” not so much here, where he is pretending to run for reelection with a party pretending to go along with the gag. Ukraine is lined up to become Afghanistan Two, another gross embarrassment for the US foreign policy establishment and “JB” personally. So, how long do you think V. Zelensky will be bopping around Kiev like Al Pacino in Scarface?

This time, poor beleaguered Ukraine won’t need America’s help plotting a coup. When that happens, as it must, since Mr. Z has nearly destroyed his country, and money from the USA for government salaries and pensions did not arrive on-time, there will be peace talks between his successors and Mr. Putin’s envoys. The optimum result for all concerned — including NATO, whether the alliance knows it or not — will be a demilitarized Ukraine, allowed to try being a nation again, though in a much-reduced condition compared to what it was before becoming a US bear-poking stick. It will be on a short leash within Russia’s sphere-of-influence, where it has, in fact, resided for centuries, and life will go on. Thus has Russia, at considerable cost, had to reestablish the status quo.

Meanwhile, Saturday night, “Joe Biden” turned up at the annual Gridiron dinner thrown by the White House [News] Correspondents’ Association, where he told the ballroom of Intel Community quislings:

“You make it possible for ordinary citizens to question authority without fear or intimidation.”

The dinner, you see, is traditionally a venue for jokes and jibes. So, this must have been a gag, right? Try to imagine The New York Times questioning authority. For instance, the authority of the DOJ, the FBI, the DHS, and the DC Federal District court. Instant hilarity, right?

As it happens, though, today, Monday, March 18, 2024, attorneys for the State of Missouri (and other parties) in a lawsuit against “Joe Biden” (and other parties) will argue in the Supreme Court that those government agencies above, plus the US State Department, with assistance from the White House (and most of the White House press corps, too), were busy for years trying to prevent ordinary citizens from questioning authority.

For instance, questioning the DOD’s Covid-19 prank, the CDC’s vaccination op, the DNC’s 2020 election fraud caper, the CIA’s Frankenstein experiments in Ukraine, the J6 “insurrection,” and sundry other trips laid on the ordinary citizens of the USA.

Specifically, Missouri v. Biden is about the government’s efforts to coerce social media into censoring any and all voices that question official dogma.

The case is about birthing the new concept - new to America, anyway - known as “misinformation” - that is, truth about what our government is doing that cannot be allowed to enter the public arena, making it very difficult for ordinary citizens to question authority.

The government will apparently argue that they were not coercing, they were just trying to persuade the social media execs to do this or that.

As The Epoch Times' Jacob Burg reported, the court appeared wary of arguments by the respondents that the White House is wholesale prevented under the Constitution from recommending to social media companies to remove posts it considered harmful, in cases where the suggestions themselves didn't cross the line into "coercion."

Deputy Solicitor General for the U.S. Brian Fletcher argued that the White House's communications with news media and social media companies regarding the content promoted on their platforms do not rise to the level of governmental “coercion,” which would have been prohibited under the Constitution.

Instead, the government was merely using its "bully pulpit" to "persuade" private parties, in this case social media companies, to do what they are "lawfully allowed to do,” he said.

Louisiana Solicitor General Benjamin Aguiñaga, representing the respondents, argued that the case demonstrates “unrelenting pressure by the government to coerce social media platforms to suppress the speech of millions of Americans.”

Mr. Aguiñaga argued that the government had no right to tell social media companies what content to carry. Its only remedy in the event of genuinely false or misleading content, he said, was to counter it by putting forward "true speech."

The Louisiana solicitor general took pointed questions from liberal Justice Ketanji Brown Jackson about the extent to which the government can step in to take down certain potentially harmful content. Justice Jackson raised the hypothetical of a "teen challenge that involves teens jumping out of windows at increasing elevations," asking if it would be a problem if the government tried to suppress the publication of said challenge on social media. Mr. Aguiñaga replied that those facts were different from the present case.

Justice Jackson also noted that some say “the government actually has a duty to take steps to protect the citizens of this country” when it comes to monitoring the speech that is promoted on online platforms.

“So can you help me because I'm really worried about that, because you've got the First Amendment operating in an environment of threatening circumstances from the government's perspective.

“The line is, does the government pursuant to the First Amendment have a compelling interest in doing things that result in restricting speech in this way?”

Attorneys General Liz Murrill of Louisiana and Andrew Bailey of Missouri both told The Epoch Times they felt positive about the case and how the justices reacted.

"I am cautiously optimistic that we will have a majority of the court that lands where I wholeheartedly believe they should land, and that is in favor of protecting speech," Ms. Merrill said.

Journalist Jim Hoft, a party listed in the case, said, "This has to be where they put a stop to this. The government shouldn't be doing this, especially when they're wrong, and pushing their own opinion, silencing dissenting voices. Of course, it's against the Constitution. It's a no-brainer."

In response to a question from Brett Kavanaugh, an associate justice of the Supreme Court, Louisiana Solicitor General Benjamin Aguiñaga said the "government is not helpless" when it comes to countering factually inaccurate speech.

Precedent before the court suggests the government can and should counter false speech with true speech, Mr. Aguiñaga said.

"Censorship has never been the default remedy for perceived First Amendment violation," Mr. Aguiñaga said.

Maybe one of the justices might ask how it came to be that a Chief Counsel of the FBI, James Baker, after a brief rest-stop at a DC think tank, happened to take the job as Chief Counsel at Twitter in 2020.

That was a mighty strange switcheroo, don’t you think?

And ordinary citizens were not generally informed of it until the fall of 2022, when Elon Musk bought Twitter and delved into its workings.

*  *  *

Support his blog by visiting Jim’s Patreon Page or Substack


A popular vacation destination is about to get much more expensive

The entry fee to this destination known for its fauna has been unchanged since 1998.

When visiting certain islands and other remote parts of the world, travelers need to be prepared to pay more than just the plane ticket and accommodation costs.

Particularly for smaller places grappling with overtourism, local governments will often introduce "tourist taxes" to go toward things like reversing ecological degradation and keeping popular attractions clean and safe.


Located 900 kilometers off the coast of Ecuador and often associated with the many species of giant tortoises that call it home, the Galápagos Islands are not easy to get to (visitors from the U.S. often pass through Quito and then get on a charter flight to the islands) but are often a dream destination for those interested in seeing rare animal species in an unspoiled environment.

The Galápagos Islands are home to many animal species that exist nowhere else in the world. (Image: Shutterstock)

This is how much you'll have to pay to visit the Galápagos Islands

While local authorities have been charging a $100 USD entry fee for all visitors to the islands since 1998, Ecuador's Ministry of Tourism announced that this number would rise to $200 for adults starting from August 1, 2024. 


According to the local tourism board, the increase has been prompted by the fact that record numbers of visitors since the pandemic have started taking a toll on the local environment. The islands are home to just 30,000 people but have been seeing nearly 300,000 visitors each year.

"It is our collective responsibility to protect and preserve this unparalleled ecosystem for future generations," Ecuador's Minister of Tourism Niels Olsen said in a statement. "The adjustment in the entry fee, the first in 26 years, is a necessary measure to ensure that tourism in the Galápagos remains sustainable and mutually beneficial to both the environment and our local communities."

These are the other countries which are raising (or adding) their tourist taxes

While the $200 applies to most international adult arrivals, there are some exceptions that can make one eligible for a lower rate. Adult citizens of the countries that make up the South American treaty bloc Mercosur will pay a $100 fee while children from any country will also get a discounted rate that is currently set at $50. Children under the age of two will continue to get free access.

In recent years, multiple countries and destinations have either raised or introduced new taxes for visitors. Thailand recently started charging all international visitors a fee of between 150 and 300 baht (up to $9 USD) that goes toward a sustainability budget, while the Italian city of Venice is running a trial in which it charges those coming into the city during the most popular summer weekends five euros.

Places such as Bali, the Maldives and New Zealand have been charging international arrivals a fee for years while Iceland's Prime Minister Katrín Jakobsdóttir hinted at plans to introduce something similar at the United Nations Climate Ambition Summit in 2023.

"Tourism has really grown exponentially in Iceland in the last decade and that obviously is not just creating effects on the climate," Jakobsdóttir told a Bloomberg reporter. "Most of our guests visit our unspoiled nature and obviously that creates a pressure."
