Threat, Promise, and the Need for Regulation: AI Is Here
"Artificial Intelligence & AI & Machine Learning"
by mikemacmarketing is licensed under CC BY 2.0.
Artificial intelligence (AI) has been having a media moment. The Washington Post reported on June 11 that an eccentric Google engineer insists the AI chatbot he helped develop has become sentient, provoking a notable storm of responses; he was placed on paid administrative leave and hopes he won't be fired. This particular story is fascinating but actually less important than a much broader conversation that had already started and clearly needs to develop. There are lessons here for everyone concerned with emerging technologies, whether supporting, opposing, or neutral about specific applications.
Aside from this latest brouhaha (to which we shall return), a significant number of researchers and journalists assert that machine learning and robots are at least on the verge of becoming extremely useful in many different fields. Some are willing to go much further. There are both threats and promises in these developments.
Some experts, such as Ruha Benjamin, have for some years been pointing out important problems that derive from implicit and frequently unrecognized bias: "The imagined user is gendered, raced and classed without gender, race, or class ever being mentioned." But even if these and similar issues are "solved" (perhaps using open-source methods) as companies respond to pressure from civil rights organizations, there remain significant questions about the application of AI in the life sciences, biotech, and medicine:
- The assisted reproductive technology (ART) industry is, according to an article in the current issue of Fertility and Sterility, increasingly interested in “the application of artificial intelligence in reproductive medicine” to improve patient outcomes, a prospect they dub “baby steps.”
- Commercial reality is, however, ahead of the industry’s watchdogs: Companies such as Genomic Prediction, with its “advanced genetic testing platform” LifeView, are already selling embryo selection based on “AI applied to very large genomic datasets” (FAQ 1.4).
- In animal research, which may ultimately lead to advances in human genomics, an algorithm discovered that specific complex behaviors in mice are shaped by genes inherited from just one parent, with paternal alleles influencing the behavior of female offspring and vice versa.
- Machine learning is already being used, in combination with genome-wide association studies (GWAS), to identify COVID-19 risk factors, although the researchers admit that their "genetic discovery data are largely focused on European ancestry, which may limit widespread applicability." (A minimal sketch of what this kind of approach can look like appears after this list.)
- Chinese scientists have produced cloned pigs through a process carried out entirely by robots; removing humans from the procedure has helped improve the success rate.
- Wild Me is a non-profit organization that “leverages artificial intelligence and machine learning to support the fight against wildlife extinction.”
- "Digital twins" are being used to analyze the design and operations of cities such as Singapore. The term of course points to the related goal of human applications: some analysts think that similar simulacra of humans are less than a decade away, though skeptics call this the science-fiction stage of development.
- AI-based software developed by Cognetivity Neurosciences to catch early signs of cognitive decline has been approved by the FDA as a class II exempt medical device, so it is eligible for commercial distribution.
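For readers curious what "machine learning in combination with GWAS" can look like in practice, here is a minimal, purely illustrative sketch. The toy cohort, genotype matrix, and model choice are invented for the example and do not represent the cited COVID-19 study's actual methods; real analyses would draw on GWAS summary statistics to select variants and on far larger, more diverse datasets.

```python
# Illustrative sketch only: a generic way a classifier might be layered on
# GWAS-style genotype data. All data here are random toy values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy cohort: 1,000 individuals, 50 candidate SNPs coded as 0/1/2 minor-allele counts.
genotypes = rng.integers(0, 3, size=(1000, 50))
# Toy binary outcome (e.g., severe vs. non-severe disease), random in this sketch.
outcome = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    genotypes, outcome, test_size=0.2, random_state=0
)

# A regularized logistic regression over genotype features; in practice,
# variants would typically be pre-selected using GWAS association results.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```

The point of the sketch is simply that the "AI" in such studies is often a statistical model trained on genetic variants; the scientific and ethical weight lies in which populations the training data represent, as the researchers' own caveat about European ancestry acknowledges.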
Meanwhile, a growing number of legislators and ethicists are considering how AI should be regulated, which of course worries some enthusiasts almost as much as AI itself worries some onlookers. For example, an article by Kit Wilson in the current issue of The Critic (a skeptical UK publication) discusses:
Good and evil on the new frontier
Our current ethical guidelines are hopelessly inadequate for a new era of unimaginable technological change
That seems incontestable, and there are efforts to bring guidelines and laws to bear on rapidly developing AI applications.
The European Union is developing "a legal framework on AI" (an AI Act, or AIA) that is likely to have an impact globally. Some legal scholars suggest that it may not go far enough; the support of Microsoft might be a worrying sign. Tech companies are likely to push back against effective regulations: recall that Google in 2020–21 fired two senior researchers working on the ethical issues raised by artificial intelligence. An analysis at Lawfare calls the proposed law "a direct challenge to Silicon Valley's common view that law should leave emerging technology alone," albeit one with "some surprising gaps and omissions."
In the U.S., the National Academy of Medicine established a Committee on Emerging Science, Technology, and Innovation in Health and Medicine just before Covid hit. The two dozen members include several who have been deeply involved in discussions about (and have mostly been supportive of) heritable human genetic engineering. The committee's work was described in the June 9, 2022, issue of the New England Journal of Medicine, in "Governance of Emerging Technologies in Health and Medicine — Creating a New Framework," and at greater length in Issues:
Imagining Governance for Emerging Technologies
A new methodology from the National Academy of Medicine could inform social, ethical, and legal governance frameworks for a range of cutting-edge technologies.
Stanford University’s One Hundred Year Study on Artificial Intelligence (AI100) has published two substantial reports about AI, one in 2016 and another in 2021, and plans to continue the process “once every five years, for at least one hundred years.”
The Defense Advanced Research Projects Agency (DARPA) has been tasked by Congress with creating an "AI digital ecosystem." The agency is also planning to offload tricky decision-making about medical triage in conflict zones to AI developed under a new program called In the Moment (ITM). These machines, they claim, "will function more as colleagues than as tools." That prospect disturbs some experts, such as Sally A. Applin, who told The Washington Post:
“AI is great at counting things but I think it could set a [bad] precedent by which the decision for someone’s life is put in the hands of a machine.”
The Washington Post broke the story of the Google engineer who thinks his chatbot is sentient, and has followed up with some interesting sidelights. For example, on June 14, Molly Roberts asked and answered:
Is AI sentient? Wrong question.
Similarly, on June 17, Will Oremus wrote:
Google’s AI passed a famous test — and showed how the test is broken
The Turing test has long been a benchmark for machine intelligence. But what it really measures is deception.
Even more significantly, the two Google researchers who were fired about 18 months ago, Timnit Gebru and Margaret Mitchell, explained one of their major concerns:
We warned Google that people might believe AI was sentient. Now it’s happening.
Cutting-edge technology seems to provoke credulous responses from the uninformed but hopeful public (who wouldn't want magical advances?) and, all too often, from people who should know better. Some of them have a direct financial interest in keeping their work funded, of course, but there are much broader conflicts of interest, as Sheldon Krimsky pointed out decades ago. There is indeed a biotech juggernaut, along with an AI onslaught, and in a time of rapidly developing science and technology it behooves us all to stay alert to unexpected dangers as well as over-hyped benefits.