AI’s dominance is not inevitable, according to a tech ethicist

Anyone following the rhetoric around artificial intelligence in recent years has heard one version or another of the claim that AI is inevitable. Common themes are that AI is already here, it is indispensable, and people who are bearish on it harm themselves.

In the business world, AI advocates tell companies and workers that they will fall behind if they fail to integrate generative AI into their operations. In the sciences, AI advocates promise that AI will aid in curing hitherto intractable diseases.

In higher education, AI promoters admonish teachers that students must learn how to use AI or risk becoming uncompetitive when the time comes to find a job.

And, in national security, AI’s champions say that either the nation invests heavily in AI weaponry, or it will be at a disadvantage vis-à-vis the Chinese and the Russians, who are already doing so.

The argument across these different domains is essentially the same: The time for AI skepticism has come and gone. The technology will shape the future, whether you like it or not. You have the choice to learn how to use it or be left out of that future. Anyone trying to stand in the technology’s way is as hopeless as the manual weavers who resisted the mechanical looms in the early 19th century.

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the ethical questions raised by the widespread adoption of AI, and I believe the inevitability argument is misleading.

History and hindsight

In fact, this claim is the most recent version of a deterministic view of technological development. It’s the belief that innovations are unstoppable once people start working on them. In other words, some genies don’t go back in their bottles. The best you can do is harness them to your good purposes.

This deterministic approach to tech has a long history. It’s been applied to the influence of the printing press, as well as to the rise of automobiles and the infrastructure they require, among other developments.

But I believe that when it comes to AI, the technological determinism argument is both exaggerated and oversimplified.

AI in the field(s)

Consider the contention that businesses can’t afford to stay out of the AI game. In fact, the case has yet to be made that AI is delivering significant productivity gains to the firms that use it. A report in The Economist in July 2024 suggests that so far, the technology has had almost no economic impact.

AI’s role in higher education is also still very much an open question. Though universities have, in the past two years, invested heavily in AI-related initiatives, evidence suggests they may have jumped the gun.

The technology can serve as an interesting pedagogical tool. For example, creating a Plato chatbot that lets students have a text conversation with a bot posing as Plato is a cool gimmick.

But AI is already starting to displace some of the best tools teachers have for assessment and for developing critical thinking, such as writing assignments. The college essay is going the way of the dinosaurs as more teachers give up on the ability to tell whether their students are writing their papers themselves. What’s the cost-benefit argument for giving up on writing, an important and useful traditional skill?

In the sciences and in medicine, the use of AI seems promising. Its role in understanding the structure of proteins, for example, will likely be significant for curing diseases. The technology is also transforming medical imaging and has been helpful in accelerating the drug discovery process.

But the excitement can become exaggerated. AI-based predictions about which cases of COVID-19 would become severe have roundly failed, and doctors rely excessively on the technology’s diagnostic ability, often against their own better clinical judgment. And so, even in this area, where the potential is great, AI’s ultimate impact is unclear.

In national security, the argument for investing in AI development is compelling. Since the stakes can be high, the claim that the United States cannot afford to fall behind if China and Russia are developing AI-driven autonomous weapons has real purchase.

But a complete surrender to this form of reasoning, though tempting, is likely to lead the U.S. to overlook the disproportionate impact of these systems on nations that are too poor to participate in the AI arms race. The major powers could deploy the technology in conflicts in these nations. And, just as significantly, this argument de-emphasizes the possibility of collaborating with adversaries on limiting military AI systems, favoring arms race over arms control.

One step at a time

A survey of AI's potential significance and risks across these domains suggests that some skepticism about the technology is warranted. I believe that AI should be adopted piecemeal and with a nuanced approach rather than embraced under sweeping claims of inevitability. In developing this careful take, there are two things to keep in mind:

First, companies and entrepreneurs working on artificial intelligence have an obvious interest in the technology being perceived as inevitable and necessary, since they make a living from its adoption. It’s important to pay attention to who is making claims of inevitability, and why.

Second, it’s worth taking a lesson from recent history. Over the past 15 years, smartphones and the social media apps that run on them came to be seen as a fact of life: a technology as transformative as it is inevitable. Then data started emerging about the mental health harms they cause teens, especially young girls. School districts across the U.S. started to ban phones to protect the attention spans and mental health of their students. And some people have switched back to flip phones to improve their quality of life.

After a long experiment with the mental health of kids, facilitated by claims of technological determinism, Americans changed course. What seemed fixed turned out to be alterable. There is still time to avoid repeating the same mistake with artificial intelligence, which potentially could have larger consequences for society.


Nir Eisikovits is a professor of philosophy and director of the Applied Ethics Center at UMass Boston.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Created 3mo | Nov. 9, 2024, 12:40:03


