I’m the CEO of an AI company, and this is the BS behind AI

There has never been a technology as conducive to BS as AI.  

Why? AI is a massively disruptive, transformative, once-in-many-generations advancement that holds great promise for business and society. Meanwhile, our modern capitalist economy is uniquely skilled at converting such leaps into dollars. Not value, mind you, but cold, hard greenbacks.  

Tracing the origin, no bull

Even before AI was all the rage, BS permeated the world. In the 1870s, homeopathic physician Dr. Sylvester Andral Kilmer created Swamp Root Kidney Cure, which was marketed as a cure-all for most anything, claiming it could even make weak people strong again (presumably without any boring weight lifting). The reality is that false and fabricated information is not something we will ever fully eradicate, because it is endemic to the human condition. Whether it's a matter of never being satisfied, being overconfident, or living in a culture where everything is exaggerated, we are wired to both consume AND create hogwash in a nearly unstoppable manner.  

As philosopher Harry Frankfurt stated in his acclaimed bestseller On Bullshit, "One of the salient features of our culture is that there is so much bullshit." There is so much of it that we tend to just accept it as our destined reality. And while individuals are often susceptible to false claims in private life, the business world also drowns in a sea of valueless noise.  

Stretching the truth 

How do researchers (yes, scientists research BS!) define BS? Not as you might expect. Rather than deliberate lying, technically, bullshit refers to statements that are simply not tied to reality. These statements can actually be true, but since they are bullshit, you have no obvious way to tell. Bullshitters are people who attempt to paint a certain picture that suits their interests, without regard for whether the content is true or false.  

This brings us to corporate marketing, especially in AI. The goal of most marketing programs is to increase buying behavior and general goodwill toward the company. It is not to disseminate accurate and detailed information about how a product or service works or the outcomes it achieves. Now, there are regulations governing truth in advertising, but the reality is that it is extremely easy to make claims that are partially or somewhat true but not necessarily indicative of the results your company will achieve. I would argue that, at its best, marketing's goal is to present the best possible outcomes you can achieve with a product rather than the typical outcome.  

Truth versus hype with AI 

Given the technical complexity of AI solutions, there is no way that a short ad can convey sufficient information for a prospective buyer to evaluate the solution. Unfortunately, even more detailed discussions and presentations often cannot explain how an AI tool works, especially considering that many data scientists themselves may not fully understand how an emergent AI-generated outcome is created.   

Being fooled by the false promises and BS in AI can have significant consequences, ranging from failures in the business world to widespread societal harm. For example, believing that AI is more capable than it actually is can lead to poor decision-making in critical areas like healthcare, finance, and legal matters. And overhyping AI's workplace capabilities could lead to job losses in industries where human skills are still essential.  

But the good news is that you don't need to become a data engineer to learn how to evaluate AI and other complex tools. You simply need to ask the right questions. It's tempting to pretend we understand complicated technology so that we can avoid revealing our own ignorance, but a better approach is to emulate a scientist and rigorously interrogate marketing claims. 

Let’s take a look at a few typical marketing statements and the questions you might ask: 

“Our AI makes faster and more accurate medical diagnoses than doctors.”  

  • How are you measuring speed and accuracy? Is the way you are measuring accuracy actually . . . accurate? 
  • Does it work for all conditions and types of patients, or just certain subgroups? Is it as accurate for rare conditions as it is for common ones? Does it work as well, for example, with women as men, and African Americans as Caucasians?  
  • How much data is this claim based on? If a small sample of data was used, the results will not necessarily be very reliable.  
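The sample-size question above can be made concrete. The sketch below, with made-up numbers, uses a standard normal-approximation confidence interval to show why the same reported accuracy is far weaker evidence when it comes from a small sample. The function name and the example figures are illustrative, not from any vendor's claim.

```python
import math

def accuracy_ci(accuracy: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for a reported accuracy
    measured on n cases."""
    margin = z * math.sqrt(accuracy * (1 - accuracy) / n)
    return (max(0.0, accuracy - margin), min(1.0, accuracy + margin))

# The same reported "90% accurate" claim, at two sample sizes:
print(accuracy_ci(0.90, 50))    # roughly (0.82, 0.98): wide interval, weak evidence
print(accuracy_ci(0.90, 5000))  # roughly (0.89, 0.91): much tighter
```

A claim backed by 50 test cases is consistent with true accuracy anywhere from the low 80s to the high 90s; the same claim backed by 5,000 cases pins it down. Asking "how many cases was this measured on?" is often enough to separate evidence from marketing.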

“We help you hire people 20% faster than traditional methods.” 

  • What are the specific “traditional” methods you are comparing to? 
  • What is the cost to achieve this speed? 
  • Does the technique result in bias against protected classes (e.g., African Americans, women)? 
  • Are the candidates being hired actually performing the job as well as, or better than, those hired through the previous process? Simply adding speed to a poor selection process will do little to improve organizational performance.  
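The bias question above also has a simple first-pass check you can ask a vendor to show you. One common rule of thumb in U.S. hiring is the EEOC's "four-fifths" guideline: a group's selection rate below 80% of the highest group's rate is a red flag for adverse impact. Here is a minimal sketch with invented numbers; the group labels and counts are purely illustrative.

```python
def selection_rates(hired: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Fraction of applicants selected, per group."""
    return {g: hired[g] / applicants[g] for g in applicants}

def four_fifths_check(rates: dict[str, float]) -> bool:
    """True if every group's selection rate is at least 80% of the highest
    group's rate (the EEOC four-fifths rule of thumb for adverse impact)."""
    top = max(rates.values())
    return all(r >= 0.8 * top for r in rates.values())

# Illustrative numbers only:
rates = selection_rates({"group_a": 50, "group_b": 20},
                        {"group_a": 200, "group_b": 150})
print(rates)                     # group_a: 0.25, group_b: ~0.133
print(four_fifths_check(rates))  # False: the tool's screening warrants scrutiny
```

Failing this check does not prove the tool is biased, and passing it does not prove it is fair, but a vendor who cannot produce these numbers at all is telling you something.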

“Our AI chatbot helps you resolve customer issues faster and easier.”  

  • Faster than what? How much? 
  • Easier for whom? Your company or the customer?  
  • What issues does the chatbot struggle with? Does this lead to a rise in customer frustration? 

Often, you will find that if a vendor does not have a clear answer for one of these questions, the actual answer does not support the claim.  

The solution for combating false information and BS, in AI and in life generally, lies in thinking like a scientist. The scientific method is one of the greatest intellectual ideas in history, and it has a clear application to the battle against BS.

Back in 1996, the late, great scientist Carl Sagan wrote in The Demon-Haunted World about how fundamental and critical it is for the populace to understand how to think like a scientist and critically evaluate claims, so that we can combat the spread of misinformation. Fast-forward nearly 30 years, and it has never been more critical. 

https://www.fastcompany.com/91222356/im-the-ceo-of-an-ai-company-and-this-is-the-bs-behind-ai

Published Nov 6, 2024 | Fast Company
