
‘Orwellian’ AI lie detector project challenged in EU court


A legal challenge was heard today in Europe’s Court of Justice in relation to a controversial EU-funded research project using artificial intelligence for facial ‘lie detection’ with the aim of speeding up immigration checks.

The transparency lawsuit against the EU’s Research Executive Agency (REA), which oversees the bloc’s funding programs, was filed in March 2019 by Patrick Breyer, MEP of the Pirate Party Germany and a civil liberties activist — who has successfully sued the Commission before over a refusal to disclose documents.

He is seeking the release of documents on the ethical evaluation, legal admissibility, marketing and results of the project, and hopes to establish the principle that publicly funded research must comply with EU fundamental rights — and, in the process, to help prevent public money being wasted on AI ‘snake oil’.

“The EU keeps having dangerous surveillance and control technology developed, and will even fund weapons research in the future. I hope for a landmark ruling that will allow public scrutiny and debate on unethical publicly funded research in the service of private profit interests,” said Breyer in a statement following today’s hearing. “With my transparency lawsuit, I want the court to rule once and for all that taxpayers, scientists, media and Members of Parliament have a right to information on publicly funded research — especially in the case of pseudoscientific and Orwellian technology such as the ‘iBorderCtrl video lie detector’.”

The court has yet to set a decision date on the case but Breyer said the judges questioned the agency “intensively and critically for over an hour” — and revealed that documents relating to the AI technology involved, which have not been publicly disclosed but had been reviewed by the judges, contain information such as “ethnic characteristics”, raising plenty of questions.

The presiding judge went on to query whether it wouldn’t be in the interests of the EU research agency to demonstrate that it has nothing to hide by publishing more information about the controversial iBorderCtrl project, per Breyer.

AI ‘lie detection’

The research in question is controversial because the notion of an accurate lie detector machine remains science fiction, and with good reason: There’s no evidence of a ‘universal psychological signal’ for deceit.

Yet this AI-fuelled commercial R&D ‘experiment’ to build a video lie detector — which entailed testers being asked to respond to questions put to them by a virtual border guard as a webcam scanned their facial expressions and the system sought to detect what an official EC summary of the project describes as “biomarkers of deceit” in an effort to score the truthfulness of their facial expressions (yes, really) — scored over €4.5M/$5.4M in EU research funding under the bloc’s Horizon 2020 scheme.
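To make concrete what the summary describes, here is a purely illustrative sketch (not iBorderCtrl's code, which has never been published; that opacity is what the lawsuit targets) of the general shape such a 'deception scoring' pipeline would take: per-answer facial feature vectors fed to a classifier that emits a risk score. Every feature count, threshold and weight below is hypothetical.

```python
# Illustrative sketch only: the general shape of a "video lie detector" pipeline as the
# EC summary describes it. All feature names, weights and thresholds are invented;
# iBorderCtrl's actual implementation has not been made public.
import numpy as np

rng = np.random.default_rng(0)

N_QUESTIONS = 16   # The Intercept's test involved 16 questions
N_FEATURES = 38    # hypothetical number of facial "micro-gesture" features per answer


def extract_features(answer_video) -> np.ndarray:
    """Stand-in for a facial micro-gesture extractor; returns random features here."""
    return rng.normal(size=N_FEATURES)


def deception_score(features: np.ndarray, weights: np.ndarray) -> float:
    """Hypothetical linear scorer squashed to [0, 1]; a real system might use any classifier."""
    return float(1.0 / (1.0 + np.exp(-features @ weights)))


weights = rng.normal(scale=0.3, size=N_FEATURES)  # arbitrary, untrained weights

per_question = [deception_score(extract_features(None), weights) for _ in range(N_QUESTIONS)]
flagged = sum(s > 0.5 for s in per_question)       # answers scored as "false"
overall = 100 * (1 - np.mean(per_question))        # an overall "truthfulness" score out of 100

print(f"flagged {flagged}/{N_QUESTIONS} answers, overall score {overall:.0f}")
# Because the input features carry no information about truthfulness, the output is
# effectively a lottery, which is the critics' core objection to "biomarkers of deceit".
```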

The iBorderCtrl project ran between September 2016 and August 2019, with the funding spread between 13 private or for-profit entities across a number of Member States (including the UK, Poland, Greece and Hungary).

Public research reports the Commission said would be published last year, per a written response to Breyer’s questions challenging the lack of transparency, do not appear to have seen the light of day yet.

Back in 2019 The Intercept was able to test out the iBorderCtrl system for itself. The video lie detector falsely accused its reporter of lying — judging that she had given four false answers out of 16 and giving her an overall score of 48, which, according to a policeman who assessed the results, would have prompted the system to suggest she be subject to further checks (though she was not, as the system was never run for real during border tests).

The Intercept said it had to file a data access request — a right that’s established in EU law — in order to obtain a copy of the reporter’s results. Its report quoted Ray Bull, a professor of criminal investigation at the University of Derby, who described the iBorderCtrl project as “not credible” — given the lack of any evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.

“They are deceiving themselves into thinking it will ever be substantially effective and they are wasting a lot of money. The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive,” Bull also told it.

The notion that AI can automagically predict human traits if you just pump in enough data is distressingly common — just look at recent attempts to revive phrenology by applying machine learning to glean ‘personality traits’ from face shape. So a face-scanning AI ‘lie detector’ sits in a long and ignoble anti-scientific ‘tradition’.

In the 21st century it’s frankly incredible that millions of euros of public money are being funnelled into rehashing terrible old ideas — before you even consider the ethical and legal blindspots inherent in the EU funding research that runs counter to fundamental rights set out in the EU’s charter. When you consider all the bad decisions involved in letting this fly it looks head-hangingly shameful.

The granting of funds to such a dubious application of AI also appears to ignore all the (good) research that has been done showing how data-driven technologies risk scaling bias and discrimination.

We can’t know for sure, though, because only very limited information has been released about how the consortia behind iBorderCtrl assessed ethics considerations in their experimental application — which is a core part of the legal complaint.

The challenge in front of the European Court of Justice in Luxembourg poses some very awkward questions for the Commission: Should the EU be pouring taxpayer cash into pseudoscientific ‘research’? Shouldn’t it be trying to fund actual science? And why does its flagship research program — the jewel in the EU crown — have so little public oversight?

The fact that a video lie detector made it through the EU’s ‘ethics self-assessment’ process, meanwhile, suggests the claimed ‘ethics checks’ aren’t worth a second glance.

“The decision on whether to accept [an R&D] application or not is taken by the REA after Member States representatives have taken a decision. So there is no public scrutiny, there is no involvement of parliament or NGOs. There is no [independent] ethics body that will screen all of those projects. The whole system is set up very badly,” says Breyer.

“Their argument is basically that the purpose of this R&D is not to contribute to science or to do something for public good or to contribute to EU policies but the purpose of these programs really is to support the industry — to develop stuff to sell. So it’s really supposed to be an economical program, the way it has been devised. And I think we really actually need a discussion about whether this is right, whether this should be so.”

“The EU’s about to regulate AI and here it is actually funding unethical and unlawful technologies,” he adds.

No external ethics oversight

Not only does it look hypocritical for the EU to be funding rights-hostile research but — critics contend — it’s a waste of public money that could be spent on genuinely useful research (be it for a security purpose or, more broadly, for the public good; and for furthering those ‘European values’ EU lawmakers love to refer to).

“What we need to know and understand is that research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that actually wastes money for other programs that would be really important and useful,” argues Breyer.

“For example in the security program you could maybe do some good in terms of police protective gear. Or maybe in terms of informing the population in terms of crime prevention. So you could do a lot of good if these means were used properly — and not on this dubious technology that will hopefully never be used.”

The latest incarnation of the EU’s flagship research and innovation program, which takes over from Horizon 2020, has a budget of ~€95.5BN for the 2021-2027 period. And driving digital transformation and developments in AI are among the EU’s stated research funding priorities. So the pot of money available for ‘experimental’ AI looks massive.

But who will be making sure that money isn’t wasted on algorithmic snake oil — and dangerous algorithmic snake oil in instances where the R&D runs so clearly counter to the EU’s own charter of fundamental human rights?

The European Commission declined multiple requests for spokespeople to talk about these issues but it did send some on-the-record points (below), along with some background information regarding access to documents, which is a key part of the legal complaint.

The Commission’s on-the-record statements on ‘ethics in research’ opened with the claim that “ethics is given the highest priority in EU funded research”.

“All research and innovation activities carried out under Horizon 2020 must comply with ethical principles and relevant national, EU and international law, including the Charter of Fundamental Rights and the European Convention on Human Rights,” it also told us, adding: “All proposals undergo a specific ethics evaluation which verifies and contractually obliges the compliance of the research project with ethical rules and standards.”

It did not elaborate on how a ‘video lie detector’ could possibly comply with EU fundamental rights — such as the right to dignity, privacy, equality and non-discrimination.

And it’s worth noting that the European Data Protection Supervisor (EDPS) has raised concerns about misalignment between EU-funded scientific research and data protection law, writing in a preliminary opinion last year: “We recommend intensifying dialogue between data protection authorities and ethical review boards for a common understanding of which activities qualify as genuine research, EU codes of conduct for scientific research, closer alignment between EU research framework programmes and data protection standards, and the beginning of a debate on the circumstances in which access by researchers to data held by private companies can be based on public interest”.

On the iBorderCtrl project specifically the Commission told us that the project appointed an ethics advisor to oversee the implementation of the ethical aspects of research “in compliance with the initial ethics requirement”. “The advisor works in ways to ensure autonomy and independence from the consortium,” it claimed, without disclosing who the project’s (self-appointed) ethics advisor is.

“Ethics aspects are constantly monitored by the Commission/REA during the execution of the project through the revision of relevant deliverables and carefully analysed in cooperation with external independent experts during the technical review meetings linked to the end of the reporting periods,” it went on, adding that: “A satisfactory ethics check was conducted in March 2019.”

It did not provide any further details about this self-regulatory “ethics check”.

“The way how it works so far is basically some expert group that the Commission sets up with propose/call for tender,” says Breyer, discussing how the EU’s research program is structured. “It’s dominated by industry experts, it doesn’t have any members of parliament in there, it only has — I think — one civil society representative in it, so that’s falsely composed right from the start. Then it goes to the Research Executive Agency and the actual decision is taken by representatives of the Member States.

“The call [for research proposals] itself doesn’t sound so bad if you look it up — it’s very general — so the problem really was the specific proposal that they proposed in response to it. And these are not screened by independent experts, as far as I understand it. The issue of ethics is dealt with by self assessment. So basically the applicant is supposed to indicate whether there is a high ethical risk involved in the project or not. And only if they indicate so will experts — selected by the REA — do an ethics assessment.

“We don’t know who’s been selected, we don’t know their opinions — it’s also being kept secret — and if it turns out later that a project is unethical it’s not possible to revoke the grant.”

The hypocrisy charge comes in sharply here because the Commission is in the process of shaping risk-based rules for the application of AI. And EU lawmakers have been saying for years that artificial intelligence technologies need ‘guardrails’ to make sure they’re applied in line with regional values and rights.

Commission EVP Margrethe Vestager has talked about the need for rules to ensure artificial intelligence is “used ethically” and can “support human decisions and not undermine them”, for example.

Yet EU institutions are simultaneously splashing public funds on AI research that would clearly be unlawful if implemented in the region, and which civil society critics decry as obviously unethical given the lack of scientific basis underpinning ‘lie detection’.

In an FAQ section of the iBorderCtrl website, the commercial consortium behind the project concedes that real-world deployment of some of the technologies involved would not be covered by the existing EU legal framework — adding that this means “they could not be implemented without a democratic political decision establishing a legal basis”.

Or, put another way, such a system would be illegal to actually use for border checks in Europe without a change in the law. Yet European taxpayer funding was nonetheless ploughed in.

A spokesman for the EDPS declined to comment on Breyer’s case specifically but he confirmed that its preliminary opinion on scientific research and data protection is still relevant.

He also pointed to further related work which addresses a recent Commission push to encourage pan-EU health data sharing for research purposes — where the EDPS advises that data protection safeguards should be defined “at the outset” and also that a “thought through” legal basis should be established ahead of research taking place.

“The EDPS recommends paying special attention to the ethical use of data within the [health data sharing] framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation,” the EU’s chief data supervisor writes, adding that he’s “convinced that the success of the [health data sharing plan] will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights”.

tl;dr: Legal and ethical use of data must be the DNA of research efforts — not a check-box afterthought.

Unverifiable tech

In addition to a lack of independent ethics oversight of research projects that gain EU funding, there is — currently and worryingly for supposedly commercially minded research — no way for outsiders to independently verify (or, well, falsify) the technology involved.

In the case of the iBorderCtrl tech no meaningful data on the outcomes of the project has been made public and requests for data sought under freedom of information law have been blocked on commercial interest grounds.

Breyer has been trying without success to obtain information about the results of the project since it finished in 2019. The Guardian reported in detail on his fight back in December.

Under the legal framework wrapping EU research he says there’s only a very limited requirement to publish information on project outcomes — and only long after the fact. His hope is thus that the Court of Justice will agree ‘commercial interests’ can’t be used to over-broadly deny disclosure of information in the public interest.

“They basically argue there is no obligation to examine whether a project actually works so they have the right to fund research that doesn’t work,” he tells TechCrunch. “They also argue that basically it’s sufficient to exclude access if any publication of the information would damage the ability to sell the technology — and that’s an extremely wide interpretation of commercially sensitive information.

“What I would accept is excluding information that really contains business secrets like source code of software programs or internal calculations or the like. But that certainly shouldn’t cover, for example, if a project is labelled as unethical. It’s not a business secret but obviously it will harm their ability to sell it — but obviously that interpretation is just outrageously wide.”

“I’m hoping that this [legal action] will be a precedent to clarify that information on such unethical — and also unlawful if it were actually used or deployed — technologies, that the public right to know takes precedence over the commercial interests to sell the technology,” he adds. “They are saying we won’t release the information because doing so will diminish the chances of selling the technology. And so when I saw this then I said well it’s definitely worth going to court over because they will be treating all requests the same.”

Civil society organizations have also been thwarted in attempts to get detailed information about the iBorderCtrl project. The Intercept reported in 2019 that researchers at the Milan-based Hermes Center for Transparency and Digital Human Rights used freedom of information laws to obtain internal documents about the iBorderCtrl system, for example, but the hundreds of pages they got back were heavily redacted — with many completely blacked out.

“I’ve heard from [journalists] who have tried in vain to find out about other dubious research projects that they are massively withholding information. Even stuff like the ethics report or the legal assessment — that’s all stuff that doesn’t contain any commercial secrets, as such,” Breyer continues. “It doesn’t contain any source code, nor any sensitive information — they haven’t even released these partially.

“I find it outrageous that an EU authority [the REA] will actually say we don’t care what the interest is in this because as soon as it could diminish sales then we will withhold the information. I don’t think that’s acceptable, both in terms of taxpayers’ interests in knowing about what their money is being used for but also in terms of the scientific interest in being able to test/to verify these experiments on the so called ‘deception detection’ — which is very contested if it really works. And in order to verify or falsify it scientists of course need to have access to the specifics about these trials.

“Also democratically speaking if ever the legislator wants to decide on the introduction of such a system or even on the framing of these research programs we basically need to know the details — for example what was the number of false positives? How well does it really work? Does it have a discriminatory effect because it works less well on certain groups of people such as facial recognition technology. That’s all stuff that we really urgently need to know.”
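The kind of evaluation Breyer is describing is straightforward to express; what is missing is the data. As a hedged sketch with invented numbers (no real trial data is available), this is how false positive rates, overall and per demographic group, would be computed to check for a discriminatory effect:

```python
# Sketch of the evaluation the public cannot currently run for iBorderCtrl: false positive
# rates overall and per group. All data below is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
truthful = rng.random(n) < 0.95            # assume most test subjects answer truthfully
flagged = rng.random(n) < 0.25             # a no-signal detector flags ~25% regardless
group = rng.choice(["A", "B"], size=n)     # hypothetical demographic groups


def false_positive_rate(truthful_mask: np.ndarray, flagged_mask: np.ndarray) -> float:
    """Share of truthful subjects wrongly flagged as deceptive."""
    return float((truthful_mask & flagged_mask).sum() / truthful_mask.sum())


print(f"overall FPR: {false_positive_rate(truthful, flagged):.2%}")
for g in ("A", "B"):
    m = group == g
    print(f"group {g} FPR: {false_positive_rate(truthful[m], flagged[m]):.2%}")
# A system that "works less well on certain groups" would show materially different
# per-group rates; without access to the trial data, none of this can be checked.
```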

Regarding access to documents related to EU-funded research the Commission referred us to Regulation no. 1049/2001 — which it said “lays down the general principles and limits” — though it added that “each case is analysed carefully and individually”.

However the Commission’s interpretation of the regulations governing the Horizon program appears to entirely exclude the application of freedom of information rules — at least in the iBorderCtrl project case.

Per Breyer, they limit public disclosure to a summary of the research findings — that can be published some three or four years after the completion of the project.

“You’ll see an essay of five or six pages in some scientific magazine about this project and of course you can’t use it to verify or falsify the technology,” he says. “You can’t see what exactly they’ve been doing — who they’ve been talking to. So this summary is pretty useless scientifically and to the public and democratically and it takes ages. So I hope that in the future we will get more insight and hopefully a public debate.”

The EU research program’s legal framework is secondary legislation. So Breyer’s argument is that a blanket clause about protecting ‘commercial interests’ should not be able to trump fundamental EU rights to transparency. But of course it will be up to the court to decide.

“I think I stand some good chance especially since transparency and access to information is actually a fundamental right in the EU — it’s in the EU charter of fundamental rights. And this Horizon legislation is only secondary legislation — they can’t deviate from the primary law. And they need to be interpreted in line with it,” he adds. “So I think the court will hopefully say that this is applicable and they will do some balancing in the context of the freedom of information which also protects commercial information but subject to prevailing public interests. So I think they will find a good compromise and hopefully better insight and more transparency.

“Maybe they’ll blacken out some parts of the document, redact some of it but certainly I hope that in principle we will get access to that. And thereby also make sure that in the future the Commission and the REA will have to hand over most of the stuff that’s been requested on this research. Because there’s a lot of dubious projects out there.”

A better system of research project oversight could start with the committee that decides on funding applications: rather than being composed mostly of industry and EU Member State representatives (who of course will always want EU cash to come to their region), it should also include parliamentary representatives, more civil society representatives and scientists, per Breyer.

“It should have independent participants and those should be the majority,” he says. “That would make sense to steer the research activities in the direction of public good, of compliance with our values, of useful research — because what we need to know and understand is research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that wastes money for other programs that would be really important and useful.”

He also points to a new EU research program being set up that’s focused on defence — under the same structure, lacking proper public scrutiny of funding decisions or information disclosure, noting: “They want to do this for defence as well. So that will be even about lethal technologies.”

To date the only disclosures around iBorderCtrl have been a few parts of the technical specifications of its system and some of a communications report, per Breyer, who notes that both were “heavily redacted”.

“They don’t say for example which border agencies they have introduced this system to, they don’t say which politicians they’ve been talking to,” he says. “The interesting thing actually is that part of this funding is also presenting the technology to border authorities in the EU and politicians. Which is very interesting because the Commission keeps saying look this is only research; it doesn’t matter really. But in actual fact they are already using the project to promote the technology and the sales of it. And even if this is never used at EU borders funding the development will mean that it could be used by other governments — it could be sold to China and Saudi Arabia and the like.

“And also the deception detection technology — the company that is marketing it [a Manchester-based company called Silent Talker Ltd] — is also offering it to insurance companies, or to be used on job interviews, or maybe if you apply for a loan at a bank. So this idea that an AI system would be able to detect lies risks being used in the private sector very broadly and since I’m saying that it doesn’t work at all and it’s basically a lottery lots of people risk having disadvantages from this dubious technology.”

“It’s quite outrageous that nobody prevents the EU from funding such ‘voodoo’ technology,” he adds.

The Commission told us that “The Intelligent Portable Border Control System” (aka iBorderCtrl) “explored new ideas on increasing efficiency, convenience and security of land border crossing”, and like all security research projects it was “aimed at testing new ideas and technologies to address security challenges”.

“iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications. Once research projects are over, it is up to Member States to decide whether they want to further research and/or develop solutions studied by the project,” it also said. 

It also pointed out that specific application of any future technology “will always have to respect EU and national law and safeguards, including on fundamental rights and the EU rules on the protection of personal data”.

However Breyer also calls foul on the Commission seeking to deflect public attention by claiming ‘it’s only R&D’ or that it’s not deciding on the use of any particular technology. “Of course factually it creates pressure on the legislator to agree to something that has been developed if it turns out to be useful or to work,” he argues. “And also even if it’s not used by the EU itself it will be sold somewhere else — and so I think the lack of scrutiny and ethical assessment of this research is really scandalous. Especially as they have repeatedly developed and researched surveillance technologies — including mass surveillance of public spaces.”

“They have projects on bulk data collection and processing of Internet data. The security program is very problematic because they do research into interferences with fundamental rights — with the right to privacy,” he goes on. “There are no limitations really in the program to rule out unethical methods of mass surveillance or the like. And not only are there no material limitations but also there is no institutional set-up to be able to exclude such projects right from the beginning. And then even once the programs have been devised and started they will even refuse to disclose access to them. And that’s really outrageous and as I said I hope the court will do some proper balancing and provide for more insight and then we can basically trigger a public debate on the design of these research schemes.”

Pointing again to the Commission’s plan to set up a defence R&D fund under the same industry-centric decision-making structure — with a “similarly deficient ethics appraisal mechanism” — he notes that while there are some limits on EU research being able to fund autonomous weapons, other areas could make bids for taxpayer cash — such as weapons of mass destruction and nuclear weapons.

“So this will be hugely problematic and will have the same issue of transparency, all the more of course,” he adds.

On transparency generally, the Commission told us it “always encourages projects to publicise as much as possible their results”. While, for iBorderCtrl specifically, it said more information about the project is available on the CORDIS website and the dedicated project website.

If you take the time to browse to the ‘publications’ page of the iBorderCtrl website you’ll find a number of “deliverables” — including an “ethics advisor”; the “ethics advisor’s first report”; an “ethics of profiling, the risk of stigmatization of individuals and mitigation plan”; and an “EU wide legal and ethical review report” — all of which are listed as “confidential”.


Analyst reviews Apple stock price target amid challenges

Here’s what could happen to Apple shares next.


They said it was bound to happen.

It was Jan. 11, 2024 when software giant Microsoft  (MSFT)  briefly passed Apple  (AAPL)  as the most valuable company in the world.

Microsoft's stock closed 0.5% higher, giving it a market valuation of $2.859 trillion. 

It rose as much as 2% during the session and the company was briefly worth $2.903 trillion. Apple closed 0.3% lower, giving the company a market capitalization of $2.886 trillion. 

"It was inevitable that Microsoft would overtake Apple since Microsoft is growing faster and has more to benefit from the generative AI revolution," D.A. Davidson analyst Gil Luria said at the time, according to Reuters.

The two tech titans have jostled for top spot over the years and Microsoft was ahead at last check, with a market cap of $3.085 trillion, compared with Apple's value of $2.684 trillion.

Analysts noted that Apple had been dealing with weakening demand, including for the iPhone, the company’s main source of revenue. 

Demand in China, a major market, has slumped amid the country's slow economic recovery from the pandemic and competition from Huawei.

Sales in China of Apple's iPhone fell by 24% in the first six weeks of 2024 compared with a year earlier, according to research firm Counterpoint, as the company contended with stiff competition from a resurgent Huawei "while getting squeezed in the middle on aggressive pricing from the likes of OPPO, vivo and Xiaomi," said senior Analyst Mengmeng Zhang.

“Although the iPhone 15 is a great device, it has no significant upgrades from the previous version, so consumers feel fine holding on to the older-generation iPhones for now," he said.

A man scrolling through Netflix on an Apple iPad Pro. Photo by Phil Barker/Future Publishing via Getty Images.


Big plans for China

Counterpoint said that the first six weeks of 2023 saw abnormally high numbers with significant unit sales being deferred from December 2022 due to production issues.

Apple is planning to open its eighth store in Shanghai – and its 47th across China – on March 21.


The company also plans to expand its research centre in Shanghai to support all of its product lines and open a new lab in southern tech hub Shenzhen later this year, according to the South China Morning Post.

Meanwhile, over in Europe, Apple announced changes to comply with the European Union's Digital Markets Act (DMA), which went into effect last week, Reuters reported on March 12.

Beginning this spring, software developers operating in Europe will be able to distribute apps to EU customers directly from their own websites instead of through the App Store.

"To reflect the DMA’s changes, users in the EU can install apps from alternative app marketplaces in iOS 17.4 and later," Apple said on its website, referring to the software platform that runs iPhones and iPads. 

"Users will be able to download an alternative marketplace app from the marketplace developer’s website," the company said.

Apple has also said it will appeal a $2 billion EU antitrust fine for thwarting competition from Spotify  (SPOT)  and other music streaming rivals via restrictions on the App Store.

The company's shares have suffered amid all this upheaval, but some analysts still see good things in Apple's future.

Bank of America Securities confirmed its positive stance on Apple, maintaining a buy rating with a steady price target of $225, according to Investing.com.

The firm's analysis highlighted Apple's pricing strategy evolution since the introduction of the first iPhone in 2007, with initial prices set at $499 for the 4GB model and $599 for the 8GB model.

BofA said that Apple has consistently launched new iPhone models, including the Pro/Pro Max versions, to target the premium market. 

Analyst says Apple selloff 'overdone'

Concurrently, prices for previous models are typically reduced by about $100 with each new release. 

This strategy, coupled with installment plans from Apple and carriers, has contributed to the iPhone's installed base reaching a record 1.2 billion in 2023, the firm said.


Apple has effectively shifted its sales mix toward higher-value units despite experiencing slower unit sales, BofA said.

This trend is expected to persist and could help mitigate potential unit sales weaknesses, particularly in China. 

BofA also noted Apple's dominance in the high-end market, maintaining a market share of over 90% in the $1,000 and above price band for the past three years.

The firm also cited the anticipation of a multi-year iPhone cycle propelled by next-generation AI technology, robust services growth, and the potential for margin expansion.

On Monday, Evercore ISI analysts said they believed that the sell-off in the iPhone maker’s shares may be “overdone.”

The firm said that investors' growing preference for AI-focused stocks like Nvidia  (NVDA)  has led to a reallocation of funds away from Apple. 

In addition, Evercore said concerns over weakening demand in China, where Apple may be losing market share in the smartphone segment, have affected investor sentiment.

And then ongoing regulatory issues continue to have an impact on investor confidence in the world's second-biggest company.

“We think the sell-off is rather overdone, while we suspect there is strong valuation support at current levels to down 10%, there are three distinct drivers that could unlock upside on the stock from here – a) Cap allocation, b) AI inferencing, and c) Risk-off/defensive shift," the firm said in a research note.



Major typhoid fever surveillance study in sub-Saharan Africa indicates need for the introduction of typhoid conjugate vaccines in endemic countries



There is a high burden of typhoid fever in sub-Saharan African countries, according to a new study published today in The Lancet Global Health. This high burden combined with the threat of typhoid strains resistant to antibiotic treatment calls for stronger prevention strategies, including the use and implementation of typhoid conjugate vaccines (TCVs) in endemic settings along with improvements in access to safe water, sanitation, and hygiene.


 

The findings from this 4-year study, the Severe Typhoid in Africa (SETA) program, offer new typhoid fever burden estimates from six countries: Burkina Faso, Democratic Republic of the Congo (DRC), Ethiopia, Ghana, Madagascar, and Nigeria, with four countries recording more than 100 cases for every 100,000 person-years of observation, which is considered a high burden. The highest incidence of typhoid was found in DRC with 315 cases per 100,000 people, while children between 2 and 14 years of age were shown to be at highest risk across all 25 study sites.
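For readers unfamiliar with the unit, incidence per 100,000 person-years is simply confirmed cases divided by total observation time, scaled to 100,000. A small worked example with invented numbers (not SETA's site-level data):

```python
# Worked example of an incidence calculation; the figures here are invented and are not
# taken from the SETA dataset.
cases = 630              # hypothetical blood-culture-confirmed typhoid cases at one site
person_years = 200_000   # hypothetical total observation time across the catchment population

incidence = cases / person_years * 100_000
print(f"{incidence:.0f} cases per 100,000 person-years")  # -> 315
# SETA treats more than 100 cases per 100,000 person-years of observation as a high burden.
```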

 

There are an estimated 12.5 to 16.3 million cases of typhoid every year with 140,000 deaths. However, with generic symptoms such as fever, fatigue, and abdominal pain, and the need for blood culture sampling to make a definitive diagnosis, it is difficult for governments to capture the true burden of typhoid in their countries.

 

“Our goal through SETA was to address these gaps in typhoid disease burden data,” said lead author Dr. Florian Marks, Deputy Director General of the International Vaccine Institute (IVI). “Our estimates indicate that introduction of TCV in endemic settings would go to lengths in protecting communities, especially school-aged children, against this potentially deadly—but preventable—disease.”

 

In addition to disease incidence, this study also showed that the emergence of antimicrobial resistance (AMR) in Salmonella Typhi, the bacteria that causes typhoid fever, has led to greater reliance on antibiotics beyond the traditional first line of treatment. If left untreated, severe cases of the disease can lead to intestinal perforation and even death. This suggests that prevention through vaccination may play a critical role in not only protecting against typhoid fever but also reducing the spread of drug-resistant strains of the bacteria.

 

There are two TCVs prequalified by the World Health Organization (WHO) and available through Gavi, the Vaccine Alliance. In February 2024, IVI and SK bioscience announced that a third TCV, SKYTyphoid™, also achieved WHO PQ, paving the way for public procurement and increasing the global supply.

 

Alongside the SETA disease burden study, IVI has been working with colleagues in three African countries to show the real-world impact of TCV vaccination. These studies include a cluster-randomized trial in Agogo, Ghana and two effectiveness studies following mass vaccination in Kisantu, DRC and Imerintsiatosika, Madagascar.

 

Dr. Birkneh Tilahun Tadesse, Associate Director General at IVI and Head of the Real-World Evidence Department, explains, “Through these vaccine effectiveness studies, we aim to show the full public health value of TCV in settings that are directly impacted by a high burden of typhoid fever.” He adds, “Our final objective of course is to eliminate typhoid or to at least reduce the burden to low incidence levels, and that’s what we are attempting in Fiji with an island-wide vaccination campaign.”

 

As more typhoid-endemic countries, namely in sub-Saharan Africa and South Asia, consider TCVs for their national immunization programs, these data will help inform evidence-based policy decisions around typhoid prevention and control.

 

###

 

About the International Vaccine Institute (IVI)
The International Vaccine Institute (IVI) is a non-profit international organization established in 1997 at the initiative of the United Nations Development Programme with a mission to discover, develop, and deliver safe, effective, and affordable vaccines for global health.

IVI’s current portfolio includes vaccines at all stages of pre-clinical and clinical development for infectious diseases that disproportionately affect low- and middle-income countries, such as cholera, typhoid, chikungunya, shigella, salmonella, schistosomiasis, hepatitis E, HPV, COVID-19, and more. IVI developed the world’s first low-cost oral cholera vaccine, pre-qualified by the World Health Organization (WHO), and developed a new-generation typhoid conjugate vaccine that was recently pre-qualified by WHO.

IVI is headquartered in Seoul, Republic of Korea with a Europe Regional Office in Sweden, a Country Office in Austria, and Collaborating Centers in Ghana, Ethiopia, and Madagascar. 39 countries and the WHO are members of IVI, and the governments of the Republic of Korea, Sweden, India, Finland, and Thailand provide state funding. For more information, please visit https://www.ivi.int.

 

CONTACT

Aerie Em, Global Communications & Advocacy Manager
+82 2 881 1386 | aerie.em@ivi.int



US Spent More Than Double What It Collected In February, As 2024 Deficit Is Second Highest Ever… And Debt Explodes




Earlier today, CNBC's Brian Sullivan took a horse dose of Red Pills when, about six months after our readers, he learned that the US is issuing $1 trillion in debt every 100 days, which prompted him to rage tweet (or rageX, not sure what the proper term is here) the following:

We’ve added 60% to national debt since 2018. Germany - a country with major economic woes - added ‘just’ 32%.

Maybe it will never matter. Maybe MMT is real. Maybe we just cancel or inflate it out. Maybe career real estate borrowers or career politicians aren’t the answer.

I have no idea. Only time will tell. But it’s going to be fascinating to watch it play out.

He is right: it will be fascinating, and the latest budget deficit data simply confirmed that the day of reckoning will come very soon, certainly sooner than the two years that One River's Eric Peters predicted this weekend for the coming "US debt sustainability crisis."

According to the US Treasury, in February, the US collected $271 billion in various tax receipts, and spent $567 billion, more than double what it collected.

The two charts below show the divergence in US tax receipts which have flatlined (on a trailing 6M basis) since the covid pandemic in 2020 (with occasional stimmy-driven surges)...

... and spending which is about 50% higher compared to where it was in 2020.

The end result is that in February, the budget deficit rose to $296.3 billion, up 12.9% from a year prior, and the second highest February deficit on record.

And the punchline: on a cumulative basis, the budget deficit in fiscal 2024 which began on October 1, 2023 is now $828 billion, the second largest cumulative deficit through February on record, surpassed only by the peak covid year of 2021.

But wait there's more: because in a world where the US is spending more than twice what it is collecting, the endgame is clear: debt collapse, and while it won't be tomorrow, or the week after, it is coming... and it's also why the US is now selling $1 trillion in debt every 100 days just to keep operating (and absorbing all those millions of illegal immigrants who will keep voting democrat to preserve the socialist system of the US, so beloved by the Soros clan).
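A quick back-of-the-envelope check of the figures cited above (rounded Treasury numbers; the 100-day issuance pace also reflects rolled-over and off-budget borrowing, so this is only a rough cross-check):

```python
# Rough arithmetic check of the figures cited in the post (rounded; not an official calculation).
receipts_feb = 271e9   # February tax receipts, as cited
outlays_feb = 567e9    # February outlays, as cited

deficit_feb = outlays_feb - receipts_feb
print(f"February deficit: ${deficit_feb / 1e9:.0f}B")  # ~$296B, in line with the $296.3B reported

daily_pace = deficit_feb / 29                          # February 2024 had 29 days
print(f"days to accrue $1T at this pace: {1e12 / daily_pace:.0f}")  # roughly 100 days
```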

And it gets even worse, because we are now in the ponzi finance stage of the Minsky cycle, with total interest on the debt annualizing well above $1 trillion, and rising every day

... having already surpassed total US defense spending and soon to surpass total health spending and, finally all social security spending, the largest spending category of all, which means that US debt will now rise exponentially higher until the inevitable moment when the US dollar loses its reserve status and it all comes crashing down.

We conclude with another observation by CNBC's Brian Sullivan, who quotes an email by a DC strategist...

... which lays out the proposed Biden budget as follows:

The budget deficit will grow another $16 TRILLION over the next 10 years. That's *with* the proposed massive tax hikes.

Without them the deficit will grow $19 trillion.

That's why you will hear the "deficit is being reduced by $3 trillion" over the decade.

No family budget or business could exist with this kind of math.

Of course, in the long run, neither can the US... and since neither party will ever cut the spending which everyone by now is so addicted to, the best anyone can do is start planning for the endgame.

Tyler Durden Tue, 03/12/2024 - 18:40
