Government

The ethics of artificial intelligence: A path toward responsible AI

TheStreet spoke to a series of experts and ethicists to discuss ethics and responsibility in the field of AI.

Artificial intelligence has been around for decades. But the scope of the conversation around AI changed dramatically last year, when OpenAI launched ChatGPT, a large language model that, once prompted, can spit out almost-passable prose in a strange semblance of, well, artificial intelligence. 

Its existence has amplified a debate among scientists, executives and regulators around the harms, threats and benefits of the technology. 

Related: US Expert Warns of One Overlooked AI Risk

Now, governments are racing to pen feasible regulation, with the U.S. so far looking predominantly to prominent tech CEOs, rather than scientists and researchers, for insight into regulatory practices. And companies are racing to increase the capabilities of their AI tech as businesses across nearly every industry look for ways to adopt it.

With the harms and risks of dramatic social inequity, climate impact, increased fraud, misinformation and political instability pushed to the side amid predictions of super-intelligent AI, the ethical question comes into greater focus.

The answer, not surprisingly, is nuanced. And though there is a path forward, there remains a litany of ethical red flags around AI and those responsible for its creation. 

'There's going to be a hell of a lot of abuse of these technologies.'

The ethical issue intrinsic to AI has nothing to do with purported concerns of developing a world-destroying superintelligence. These fears, spouted by Elon Musk and Sam Altman, have no basis in reality, according to Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor. 

"It's a ploy by some. It's an actual belief by others. And it's a cynical tactic by even more," Venkatasubramanian told TheStreet. "It's a great degree of religious fervor sort of masked as rational thinking."

"I believe that we should address the harms that we are seeing in the world right now that are very concrete," he added. "And I do not believe that these arguments about future risks are either credible or should be prioritized over what we're seeing right now. There's no science in X risk."

Rather, the issue with AI is that there is a "significant concentration of power" within the field that could, according to Nell Watson, a leading AI researcher and ethicist, exacerbate the harms the technology is causing. 

Sam Altman, CEO of OpenAI, told Congress in May that 'regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.'

Getty Images / Win McNamee

"There isn't a synchronicity between the ability for people to make decisions about AI systems, what those systems are doing, how they're interpreting them and what kinds of impressions these systems are making," Watson told TheStreet. 

And though ordinary citizens don't have any say in whether — or how — these systems get created, the vast majority of people, according to recent polling by the Institute for AI Policy, want AI development to slow down. More than 80% of those surveyed don't trust tech companies to self-regulate when it comes to AI; 82% want to slow the technology's development, and 71% think the risks outweigh the potential rewards. 

With the power to create and deploy AI models concentrated in just a few tech giants — companies incentivized to earn revenue in order to maximize shareholder value — Watson is not optimistic that the firms deploying AI will do so responsibly. 

"Businesses can save a lot of money if they get rid of middle managers and line managers and things like that," Watson said. "The prognosis is not good. There's going to be a hell of a lot of abuse of these technologies. Not always deliberately, but simply out of complacency or out of ignorance. 

"A lot of these systems are going to end up having a terrible impact on people."

This impact is not some distant threat; it has been ongoing for years. Britain's Horizon Post Office scandal involved "dozens of people being wrongfully sent to jail by an algorithmic management system that said that they were stealing when they were not," Watson said. 

Dozens of these convictions were later overturned.

"There are real, actual harms to people from systems that are discriminatory, unsafe, ineffective, not transparent, unaccountable. That's real," Venkatasubramanian said. "We've had 10 years or more of people actually being harmed. We're not concerned about hypotheticals."

Related: Here's the Steep, Invisible Cost Of Using AI Models Like ChatGPT

Responsible AI in Big Tech

This concentration of control, according to Brian Green, an ethicist with the Institute for Technology, Ethics, & Culture, is potentially dangerous considering the ethical questions at hand: rampant misinformation, data scraping and training AI models on content without notifying, crediting or compensating the original creator. 

"There are lots of things to be worried about because there are just so many things that can go wrong," Green told TheStreet. "The more power that people have, the more they can use that power for bad purposes, and they might not be intending to use it for that; it might just happen as a side effect."

Though he recognized that there is a long way to go, Green, who co-authored a handbook on ethics in emerging technology, is optimistic that if companies start handling small ethical tasks now, they will be prepared to handle larger issues, such as economic disruption, when those issues arrive. 

If the firms behind AI start thinking intentionally about ethics, striving to make "AI that's more fair, that's more inclusive, that's safer, that's more secure, that's more private, then that should get them prepared to take on any big issues in the future," Green said. "If you're doing these small things well, you should be able to do the big things well, also."

This effort, according to Watson, needs to go beyond mere ethical intentions; it ought to involve the combination of ethics with AI safety work to prevent some of "the worst excesses" of these models. 

'We are on the event horizon of the black hole that is artificial superintelligence,' Musk said in May. 

The Washington Post/Getty Images

"The people who are impacted should have a say in how it gets implemented and developed," Venkatasubramanian said. "It absolutely can be done. But we need to make it happen. It's not going to happen by accident."

The regulatory approach

Citing the importance of clear, actionable regulation to guarantee that the companies developing these technologies engage them responsibly, Watson said her greatest hope is that alignment comes easily and regulation comes quickly. Her greatest fear is that the congressional approach to AI might mimic the congressional approach to carbon emissions and the environment. 

"There was a point where everybody, liberal, conservative, could agree this was a good thing," Watson said. "And then it became politicized and it died. The same thing could very easily happen with AI ethics and safety."

Related: Some of the laws to regulate AI are already in place, expert argues

Green, though optimistic, was likewise of the opinion that people, from those artists impacted by generative AI, to the companies developing it, to the lawmakers in Washington, must actively work to ensure this technology is equitable. 

"You really need either some kind of strong social movement towards doing it or you need government regulation," Green said. "If every consumer said 'I'm not going to use a product from this company until they get their act together, ethically,' then it would work."

A growing concern around regulation, however, specifically regulation that might limit the kind or quantity of data AI companies can scrape, is that it would further cement Big Tech's lead over smaller startups. 

Amazon (AMZN), Google (GOOGL) and Apple (AAPL) "have all the data. They don't have to share it with anybody. How do we ever catch up?" Diana Lee, co-founder and CEO of Constellation, an automated marketing firm, told TheStreet. "When it comes to information that's on the web that's publicly traded information, we feel like that's already ethical because it's already out there."

Others, such as Microsoft (MSFT), have often discussed the importance of striking a "better balance between regulation and innovation." 

But these recurring fears of hindering innovation, Venkatasubramanian said, have never been legitimately substantiated and, to him, hold little water. The same executives who warn of regulation's impact on innovation have done little to explain how regulation would actually hurt it. 

"All I can hear is 'we want to conduct business as usual,'" he said. "It's not a balance."

The important thing now, Venkatasubramanian said, is for regulators to avoid the "trap of thinking there's only one thing to do. There are multiple things to do."

Chief among them is clear, enforceable regulation. Venkatasubramanian co-authored the White House's Blueprint for an AI Bill of Rights, which he said could easily be adopted into regulation. The Bill of Rights lays out a series of principles — safe and effective systems, discrimination protections, data privacy, notice and explanation and human alternatives — designed to protect people from AI harm. 

Senate Majority Leader Chuck Schumer (D-N.Y.) hosted prominent tech executives at the first AI forum Sept. 13.

The Washington Post/Getty Images

"It is really important that Congress pays attention not just to AI as generative AI but AI broadly," he said. "Everyone's thinking about ChatGPT; it'd be really terrible if all the legislation that gets proposed only focuses on generative AI. 

"All the harms that we're talking about will exist even without generative AI."

Related: Why ChatGPT Can't Turn Into Marvel Villain Ultron ... Yet

Chuck Schumer's AI Forums

In an effort to better inform Congress about a constantly evolving technological landscape, Senate Majority Leader Chuck Schumer (D-N.Y.) hosted the first of a series of nine AI forums Sept. 13. Musk, Altman, Bill Gates and executives from companies ranging from Google (GOOGL) to Nvidia (NVDA) were present at the meeting, a fact that garnered widespread criticism for appearing to focus regulatory attention on those who stand to benefit from the technology, rather than those impacted by or studying it. 

"I think they missed an opportunity because everyone pays attention to the first one. They made a very clear statement," Venkatasubramanian said. "And I think it is important, critically important, to hear from the people who are actually impacted. And I really, really hope that the future forums do that."

The executives behind the companies building and deploying these models, Venkatasubramanian added, don't seem to understand what they're creating. Some, including Musk and Altman, have "very strange ideas about what we should be concerned about. These are the folks Congress is hearing from."

The path toward a positive AI future

While the harms and risks remain incontrovertible, artificial intelligence could lead to massive societal improvements. As Gary Marcus, a leading AI researcher, has said, AI, properly leveraged, can help scientists across all fields solve problems and gain understanding at a faster rate. Medicines can be discovered and produced more quickly. 

The tech can even be used to help better understand and mitigate some impacts of climate change by allowing scientists to better collate data in order to discover predictive trends and patterns. 

Current systems — LLMs like ChatGPT — however, "are not going to reinvent material science and save the climate," Marcus told the New York Times in May. "I feel that we are moving into a regime where the biggest benefit is efficiency. These tools might give us tremendous productivity benefits but also destroy the fabric of society."

Further, Venkatasubramanian said, there is a growing list of promising innovations in the field of responsible AI: new methods of auditing AI systems, instruments for examining systems for disparities and more explainable models. 

These "responsible" AI innovations are vital to get to a positive future where AI can be appropriately leveraged in a net-beneficial way, Venkatasubramanian said. 

"Short term, we need laws, regulations, we need this now. What that will trigger in the medium term is market creation; we're beginning to see companies form that offer responsible AI as a service, auditing as a service," he said. "The laws and regulations will create a demand for this kind of work."

The longer-term change that Venkatasubramanian thinks must happen, though, is a cultural one. And this shift might take a few years.

"We need people to deprogram themselves from the whole, 'move fast and break things' attitude that we've had so far. People need to change their expectations," he said. "That culture change will take time because you create the laws, the laws create the market demand, that creates the need for jobs and skills which changes the educational process.

"So you see a whole pipeline playing out on different time scales. That's what I want to see. I think it's entirely doable. I think this can happen. We have the code, we have the knowledge. We just have to have the will to do it."

If you work in artificial intelligence, contact Ian by email ian.krietzberg@thearenagroup.net or Signal 732-804-1223



Government

Mathematicians use AI to identify emerging COVID-19 variants

Scientists at The Universities of Manchester and Oxford have developed an AI framework that can identify and track new and concerning COVID-19 variants and could help with other infections in the future.

Image credit: https://phil.cdc.gov/Details.aspx?pid=23312

The framework combines dimension reduction techniques with a new explainable clustering algorithm called CLASSIX, developed by mathematicians at The University of Manchester. This enables groups of viral genomes that might present a future risk to be identified quickly from huge volumes of data.

The study, presented this week in the journal PNAS, could support traditional methods of tracking viral evolution, such as phylogenetic analysis, which currently require extensive manual curation.

Roberto Cahuantzi, a researcher at The University of Manchester and first and corresponding author of the paper, said: “Since the emergence of COVID-19, we have seen multiple waves of new variants, heightened transmissibility, evasion of immune responses, and increased severity of illness.

“Scientists are now intensifying efforts to pinpoint these worrying new variants, such as alpha, delta and omicron, at the earliest stages of their emergence. If we can find a way to do this quickly and efficiently, it will enable us to be more proactive in our response, such as tailored vaccine development and may even enable us to eliminate the variants before they become established.”

Like many other RNA viruses, the virus that causes COVID-19 has a high mutation rate and a short time between generations, meaning it evolves extremely rapidly. Identifying new strains that are likely to be problematic in the future therefore requires considerable effort.

Currently, there are almost 16 million sequences available on the GISAID database (the Global Initiative on Sharing All Influenza Data), which provides access to genomic data of influenza viruses and of the virus that causes COVID-19.

Mapping the evolution and history of all COVID-19 genomes from this data is currently done using extremely large amounts of computer and human time.

The described method allows such tasks to be automated. The researchers processed 5.7 million high-coverage sequences in only one to two days on a standard modern laptop, something that would not be possible with existing methods. The reduced resource needs put the identification of concerning pathogen strains in the hands of more researchers.

Thomas House, Professor of Mathematical Sciences at The University of Manchester, said: “The unprecedented amount of genetic data generated during the pandemic demands improvements to our methods to analyse it thoroughly. The data is continuing to grow rapidly but without showing a benefit to curating this data, there is a risk that it will be removed or deleted.

“We know that human expert time is limited, so our approach should not replace the work of humans altogether, but work alongside them to enable the job to be done much quicker and free our experts for other vital developments.”

The proposed method works by breaking down the genetic sequences of the COVID-19 virus into smaller “words” (called 3-mers) and representing each sequence numerically by counting how often each word occurs. It then groups similar sequences together based on their word patterns using machine learning techniques.
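The word-counting step described above can be sketched in a few lines of Python. This is a minimal illustration of the general 3-mer idea, not the paper's actual pipeline; the toy sequences, names and the plain Euclidean distance are invented for the example. In the real framework, such count vectors are fed through dimension reduction and then grouped by the CLASSIX clustering algorithm.

```python
from collections import Counter
from itertools import product

def kmer_vector(seq, k=3, alphabet="ACGT"):
    # Count overlapping k-mers ("words") in a genome and return a
    # fixed-length vector of counts over all 4**k possible words.
    words = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(w, 0) for w in words]

def euclidean(u, v):
    # Plain Euclidean distance between two count vectors.
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Toy "genomes": two near-identical variants and one divergent one.
seqs = {
    "variant_a1": "ATGCGTACGTTAGC",
    "variant_a2": "ATGCGTACGTTAGG",   # single substitution vs. a1
    "variant_b":  "GGGGCCCCAAAATTTT",
}
vec = {name: kmer_vector(s) for name, s in seqs.items()}

# Closely related sequences yield nearby count vectors, which is why
# a clustering algorithm operating on these vectors groups them.
assert euclidean(vec["variant_a1"], vec["variant_a2"]) < \
       euclidean(vec["variant_a1"], vec["variant_b"])
```

Because each sequence becomes a short numeric vector regardless of genome length, comparing millions of sequences reduces to cheap vector arithmetic, which is what makes the laptop-scale runtimes reported by the researchers plausible.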

Stefan Güttel, Professor of Applied Mathematics at the University of Manchester, said: “The clustering algorithm CLASSIX we developed is much less computationally demanding than traditional methods and is fully explainable, meaning that it provides textual and visual explanations of the computed clusters.”

Roberto Cahuantzi added: “Our analysis serves as a proof of concept, demonstrating the potential use of machine learning methods as an alert tool for the early discovery of emerging major variants without relying on the need to generate phylogenies.

“Whilst phylogenetics remains the ‘gold standard’ for understanding the viral ancestry, these machine learning methods can accommodate several orders of magnitude more sequences than the current phylogenetic methods and at a low computational cost.”



International

There will soon be one million seats on this popular Amtrak route

“More people are taking the train than ever before,” says Amtrak’s Executive Vice President.

While the size of the United States makes it hard to compete with the inter-city train access available in places like Japan and many European countries, Amtrak trains are a very popular transportation option in certain pockets of the country — so much so that the country’s national railway company is expanding its Northeast Corridor by more than one million seats.

Related: This is what it's like to take a 19-hour train from New York to Chicago

Running from Boston all the way south to Washington, D.C., the route is one of Amtrak’s most popular, as it passes through the most densely populated part of the country and serves as a commuter train for those traveling between East Coast cities such as New York and Philadelphia for business.

Veronika Bondarenko captured this photo of New York’s Moynihan Train Hall. 

Veronika Bondarenko

Amtrak launches new routes, promises travelers ‘additional travel options’

Earlier this month, Amtrak announced that it was adding four Northeastern routes to its schedule: two more weekend routes between New York’s Penn Station and Union Station in Washington, D.C., a new early-morning weekday route between New York and Philadelphia’s William H. Gray III 30th Street Station and a weekend route between Philadelphia and Boston’s South Station.

According to Amtrak, these additions will increase the Northeast Corridor’s service by 20% on weekdays and 10% on weekends, adding a total of one million seats over the course of a year.

“More people are taking the train than ever before and we’re proud to offer our customers additional travel options when they ride with us on the Northeast Regional,” Amtrak Executive Vice President and Chief Commercial Officer Eliot Hamlisch said in a statement on the new routes. “The Northeast Regional gets you where you want to go comfortably, conveniently and sustainably as you breeze past traffic on I-95 for a more enjoyable travel experience.”

Here are some of the other Amtrak changes you can expect to see

Amtrak also said that, in the 2023 financial year, the Northeast Corridor had nearly 9.2 million riders — 8% more than it had pre-pandemic and a 29% increase from 2022. The higher demand, particularly during off-peak hours and traditional commuting times, is pushing Amtrak to invest in this corridor in particular.

To reach more customers, Amtrak has also made several changes to both its routes and its pricing system. In the fall of 2023, it introduced a new “Night Owl Fare”: travelers willing to ride very late or very early can go between cities like New York and Philadelphia, or Philadelphia and Washington, D.C., for $5 to $15.

As travel on the same routes during peak hours can reach as much as $300, this was a deliberate move to reach those who have the flexibility of time and might have otherwise preferred more affordable methods of transportation such as the bus. After seeing strong uptake, Amtrak added this type of fare to more Boston routes.

The longest of these routes, such as Boston to New York or New York to Washington, are available at rates as low as $20.


International

The next pandemic? It’s already here for Earth’s wildlife

Bird flu is decimating species already threatened by climate change and habitat loss.

I am a conservation biologist who studies emerging infectious diseases. When people ask me what I think the next pandemic will be, I often say that we are in the midst of one – it’s just afflicting a great many species more than ours.

I am referring to the highly pathogenic strain of avian influenza H5N1 (HPAI H5N1), otherwise known as bird flu, which has killed millions of birds and unknown numbers of mammals, particularly during the past three years.

This is the strain that emerged in domestic geese in China in 1997 and quickly jumped to humans in south-east Asia with a mortality rate of around 40-50%. My research group encountered the virus when it killed a mammal, an endangered Owston’s palm civet, in a captive breeding programme in Cuc Phuong National Park, Vietnam, in 2005.

How these animals caught bird flu was never confirmed. Their diet is mainly earthworms, so they had not been infected by eating diseased poultry like many captive tigers in the region.

This discovery prompted us to collate all confirmed reports of fatal infection with bird flu to assess just how broad a threat to wildlife this virus might pose.

This is how a newly discovered virus in Chinese poultry came to threaten so much of the world’s biodiversity.

H5N1 originated on a Chinese poultry farm in 1997. ChameleonsEye/Shutterstock

The first signs

Until December 2005, most confirmed infections had been found in a few zoos and rescue centres in Thailand and Cambodia. Our analysis in 2006 showed that nearly half (48%) of all the different groups of birds (known to taxonomists as “orders”) contained a species in which a fatal infection of bird flu had been reported. These 13 orders comprised 84% of all bird species.

We reasoned 20 years ago that the strains of H5N1 circulating were probably highly pathogenic to all bird orders. We also showed that the list of confirmed infected species included those that were globally threatened and that important habitats, such as Vietnam’s Mekong delta, lay close to reported poultry outbreaks.

Mammals known to be susceptible to bird flu during the early 2000s included primates, rodents, pigs and rabbits. Large carnivores such as Bengal tigers and clouded leopards were reported to have been killed, as well as domestic cats.

Our 2006 paper showed the ease with which this virus crossed species barriers and suggested it might one day produce a pandemic-scale threat to global biodiversity.

Unfortunately, our warnings were correct.

A roving sickness

Two decades on, bird flu is killing species from the high Arctic to mainland Antarctica.

In the past couple of years, bird flu has spread rapidly across Europe and infiltrated North and South America, killing millions of poultry and a variety of bird and mammal species. A recent paper found that 26 countries have reported at least 48 mammal species that have died from the virus since 2020, when the latest increase in reported infections started.

Not even the ocean is safe. Since 2020, 13 species of aquatic mammal have succumbed, including American sea lions, porpoises and dolphins, often dying in their thousands in South America. A wide range of scavenging and predatory mammals that live on land are now also confirmed to be susceptible, including mountain lions, lynx, brown, black and polar bears.

The UK alone has lost over 75% of its great skuas and seen a 25% decline in northern gannets. Recent declines in sandwich terns (35%) and common terns (42%) were also largely driven by the virus.

Scientists haven’t managed to completely sequence the virus in all affected species. Research and continuous surveillance could tell us how adaptable it ultimately becomes, and whether it can jump to even more species. We know it can already infect humans – one or more genetic mutations may make it more infectious.

At the crossroads

Between January 1 2003 and December 21 2023, 882 cases of human infection with the H5N1 virus were reported from 23 countries, of which 461 (52%) were fatal.

Of these fatal cases, more than half were in Vietnam, China, Cambodia and Laos. Poultry-to-human infections were first recorded in Cambodia in December 2003. Intermittent cases were reported until 2014, followed by a gap until 2023, yielding 41 deaths from 64 cases. The subtype of H5N1 virus responsible has been detected in poultry in Cambodia since 2014. In the early 2000s, the H5N1 virus circulating had a high human mortality rate, so it is worrying that we are now starting to see people dying after contact with poultry again.

It’s not just H5 subtypes of bird flu that concern humans. The H10N1 virus was originally isolated from wild birds in South Korea, but has also been reported in samples from China and Mongolia.

Recent research found that these particular virus subtypes may be able to jump to humans after they were found to be pathogenic in laboratory mice and ferrets. The first person who was confirmed to be infected with H10N5 died in China on January 27 2024, but this patient was also suffering from seasonal flu (H3N2). They had been exposed to live poultry which also tested positive for H10N5.

Species already threatened with extinction are among those which have died due to bird flu in the past three years. The first deaths from the virus in mainland Antarctica have just been confirmed in skuas, highlighting a looming threat to penguin colonies whose eggs and chicks skuas prey on. Humboldt penguins have already been killed by the virus in Chile.

A colony of king penguins.
Remote penguin colonies are already threatened by climate change. AndreAnita/Shutterstock

How can we stem this tsunami of H5N1 and other avian influenzas? Completely overhaul poultry production on a global scale. Make farms self-sufficient in rearing eggs and chicks instead of exporting them internationally. The trend towards megafarms containing over a million birds must be stopped in its tracks.

To prevent the worst outcomes for this virus, we must revisit its primary source: the incubator of intensive poultry farms.

Diana Bell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
