How to train users to ignore (and dislike) AI responses

Summary

As poor implementations of AI tooling get shoved down the populace's throats, more people are becoming conditioned to treat the output of these models as false. How can we implement AI tools so that they are trustworthy and have our best interests in mind?

Plus: A little about the Rocky Mountain AI Interest Group in Fort Collins for my locals.

Introduction

Today's article comes from conversations with friends and loved ones about Google's push for AI Overviews in a significant portion of searches. I've found, albeit through anecdotal examples, that many people flat-out ignore the information the tool presents and scroll past it to what they consider the "actual stuff they're looking for."

Source: https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/

Pretty recently, Apple released their vision for Apple Intelligence, where one of the examples shows a summary of the user's inbox, ordered by what the model perceives to be a priority (seen above). How is it being received, you might ask? Mixed reviews. Some people can see value in it for a specific use case, but the word "gimmick" comes up repeatedly in the response threads.

With all of this said, one has to wonder why it's considered a gimmick rather than a valuable feature. And if it is perceived as a gimmick, is it profitable or sustainable to pursue these types of features? How many people actually use them? Most importantly, how can we do better?

That's what we'll talk about today, using a couple of example images from across the web. Although this is focused on Google AI Overview, the argument extends to other implementations as well. Let's dive in.

Why does Google AI Overview suck?

My belief is that these "AI Overviews" suck due to a lack of relevant training data. It seems a lot of effort went into grabbing data, but not as much into ensuring the data was informative and coherent (i.e. mass web scraping with an attempt to classify the content). Take this example from Reddit. Would you want someone actually trusting this information?

This example is somewhat extreme, but for every extreme example you can find plenty of "funny" situations that pop up as well. (Please note that the gallery of images may not display for readers viewing this in email.)

Some of it is ridiculous, and a lot of it is harmful and flat-out incorrect. On a ubiquitous website like Google, this paints a damning picture of a poor AI implementation, and it is actively conditioning users to ignore results provided by an AI. If they can't trust Google, a multi-billion-dollar company, to get something right, why would they trust any company using this technology in a similar way?

How can we do better?

AI, like any technology, needs a strategy to ensure it is actually fulfilling the intended goal. That takes time, money, and effort to bring all the right pieces together. Google AI Overview is a public-facing symptom of a larger business problem: rushing out functionality to jump on a technological hype train for a perceived monetary benefit.

Situations like these speak to decisions being made with only part of the problem understood. Maybe engineers need to be heard more. Maybe we need more perspectives from skeptics to build a better understanding. Maybe it's a combination of factors. Either way, we need to take some time to think and iterate on the implementation. Not stop completely, but start asking questions and investing more time into validation and guard rails.
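To make "guard rails" a little more concrete, here's a minimal Python sketch of what a display-time check could look like. Everything in it is hypothetical: generate_overview() stands in for whatever model or retrieval call a product actually makes, and the checks (sources present, a confidence threshold, non-empty text) are just examples of the kind of validation worth investing time in before an answer ever reaches a user.

```python
# Minimal sketch of a guard-rail layer around an AI-generated answer.
# Hypothetical throughout: generate_overview() is a stand-in for the real
# model/retrieval call, and the checks are illustrative, not a full policy.

from dataclasses import dataclass


@dataclass
class Overview:
    text: str
    sources: list[str]   # URLs the answer claims to be drawn from
    confidence: float    # model- or retrieval-derived score, 0.0 to 1.0


def generate_overview(query: str) -> Overview:
    # Placeholder for the actual model call.
    return Overview(text="...", sources=[], confidence=0.42)


def should_display(overview: Overview, min_confidence: float = 0.8) -> bool:
    """Only surface the AI answer when basic sanity checks pass."""
    has_sources = len(overview.sources) > 0
    confident = overview.confidence >= min_confidence
    nonempty = bool(overview.text.strip())
    return has_sources and confident and nonempty


if __name__ == "__main__":
    overview = generate_overview("how many rocks should I eat per day")
    if should_display(overview):
        print(overview.text)
    else:
        # Fall back to ordinary search results instead of showing a shaky answer.
        print("No AI overview shown; falling back to standard results.")
```

The specific thresholds aren't the point; the point is that a cheap, boring fallback exists whenever the checks don't pass, so users never see an answer nobody validated.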

This can also be addressed by starting much smaller than something like "Google Search AI" on your list of brainstorming ideas. From my experience, and the experience of people I've networked with, AI works well on small, well-defined business problems. Taking this approach means you can focus on the data set, test more mindfully, and scale up from there. It also lets you use smaller, less energy-hungry models, which is not only more sustainable but also saves money.
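As a rough illustration of that "start small" idea, here's a toy Python sketch that handles a narrow, well-defined problem (routing support tickets to a team) with a small scikit-learn model rather than a large generative one. The tickets and labels are made up for the example; the takeaway is that a model like this is cheap to train, easy to test against a curated data set, and straightforward to scale up later if it proves its worth.

```python
# A small, well-defined problem handled with a small model: routing support
# tickets to a team. Tickets and labels below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

tickets = [
    "I can't log in to my account",
    "The invoice total looks wrong this month",
    "The app crashes when I upload a photo",
    "Please reset my password",
    "I was charged twice for the same order",
    "The export button does nothing",
]
teams = ["access", "billing", "bugs", "access", "billing", "bugs"]

# TF-IDF features plus logistic regression: trains in milliseconds on a laptop.
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(tickets, teams)

# Expected: ['billing'] (toy data, so results may vary).
print(model.predict(["Why was my card billed again?"]))
```

A pipeline like this won't write your marketing copy, but it solves one problem, you can inspect exactly what it was trained on, and you can measure whether it's working before you spend on anything bigger.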

That about sums it up. If you're interested in learning more about spaces where we can have these types of discussions, please read on. Otherwise, Cha Cha is at the very bottom of the post. 🐕

RMAIIG Fort Collins

Some of you reading this may already know, but I've started a group in Fort Collins to talk about AI topics. You can sign up for updates and other information on Meetup: https://www.meetup.com/fort-collins-ai-for-everyone-rmaiig/?eventOrigin=home_groups_you_organize

We are part of a larger community of more than 1,800 members across the Colorado Front Range called the Rocky Mountain AI Interest Group (RMAIIG | https://linktr.ee/rmaiig). RMAIIG's mission is to create an inclusive and engaging environment where individuals from diverse backgrounds can come together to learn about, discuss, and collaborate on AI-related topics. The group aims to demystify AI technologies and make them accessible to a broader audience by providing educational resources, networking opportunities, and hands-on experiences. In doing so, RMAIIG seeks to empower its members to leverage AI in various domains, including business, education, and personal projects.

Be on the lookout for our first meetup in town, and thanks to everyone who has supported us so far.

Finally, here's the boy. Somehow he looks like a Renaissance painting in bed.

Cha Cha goes to bed.
