Our gatekeeping problem with AI
I’ve now seen a few posts where self-positioned AI experts ridicule their audience, telling them that if they use the term “AI” to describe an LLM (such as ChatGPT), they will sound “foolish” or “silly.” While it is true that terms like “LLM” or “generative AI” are more specific in some contexts, it does NOT make you sound foolish to call something like ChatGPT an AI. It is an AI.
There are also AI tools that folks have used for years without ever calling them AI. When your phone predicts the next word in your sentence, it may not feel the same as typing a prompt into ChatGPT… but it is still AI. And when Waze uses predictive AI to route you around traffic, it feels like it’s just “tech,” but it’s quite amazing when you consider that I used to Scotch-tape MapQuest directions to my dashboard before a drive.
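For the curious: the “AI” behind next-word prediction doesn’t have to be exotic. Here’s a minimal sketch of a bigram predictor in Python (purely illustrative; a real keyboard uses far more sophisticated models) that simply counts which word tends to follow which:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, if we've seen one."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Toy "training data" standing in for your typing history.
model = train_bigrams("see you at work see you at lunch see you soon")
print(predict_next(model, "see"))  # -> 'you'
print(predict_next(model, "you"))  # -> 'at'
```

That’s it: a frequency table and a lookup. The prediction bar on your phone is doing a much fancier version of the same thing, and it has been AI all along.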
When people aim these degrading comments at newcomers to the AI world, it strikes a nerve. My favorite people to train are the frightened beginners, and they often tell me that I helped reduce their anxiety around using these tools. Most of the enthusiasts I meet, myself included, believe we will soon live in a world where everyone knows how to use generative AI and other AI tools to enhance their work.
It doesn’t really matter what people call it. I’ve never heard anyone call their Maps app a “predictive AI tool.” No one cares; what matters is that it got you to work on time. In the end, all of it breaks down into algorithms. The more accessible we make these tools, the better.