"AI only does good." - Why the AI Incident Database Matters

Photo by Leif Christoph Gottwald / Unsplash
Summary
AI can and has done bad. Let's explore a kickass crowd-sourced solution that identifies when bad happens so that we can all implement AI with a more holistic view of its effects.
Intro
Through my foray into AI with the Rocky Mountain AI Interest Group, I've met so many great, talented people across industries who are learning and implementing AI. As with everything that touches business, however, there are always proponents who will only talk about AI insofar as it benefits their interests and/or wallet. At an event in 2024, one CEO made a statement along the lines of:
"AI hasn't really done anything bad. [...] We should stop regulating the hammer and regulate the bad people using the hammer."
Even months later, that statement still sticks with me. It is critical to identify the good AND the bad, and especially the ugly. The only way we improve our implementations is by recognizing shortcomings and working together to fix them. Especially in an engineering-focused field like AI, a lack of objectivity actively harms any attempt to garner interest and support, as well as any effort to meaningfully improve the feature set for all.
Today's post highlights an entire database of AI incidents, updated daily with news about how AI has been used to potentially (or actually!) harm others. The next time you run into someone who is misinformed about AI and its effects, whether through ignorance or malice, I hope this resource helps identify where we can all be more honest.
Let's dive in!
AI Incident Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
The AI Incident Database is run by a small team of directors, emeritus members, and collaborators with, frankly, hundreds of years of combined experience in AI and machine learning. They work to submit content, edit articles for publication, and keep the lights on for the database as an organization.
A public resource, powered by the public
One key thing I love about the AI Incident Database is that it's almost entirely crowdsourced. They encourage users to submit a report whenever they believe they've identified an "AI incident," a term that is still largely subjective. Through this, they hope to define over time what an AI incident is as society grows and changes with the technology.
I said almost entirely crowdsourced. The database was seeded with around 1,000 articles reflecting the team's general understanding of AI incidents, and it has grown to almost 3,000 with community support. It only gets more comprehensive with more people looking at it!
The Incident "Leaderboard"
Another beautiful feature I'd like to highlight is their Entity view.

It's like companies are trying to compete for most incidents...
As you can see, it acts as a leaderboard of entities mentioned throughout the database and how many incidents involve them. This view works fantastically when someone says "Meta is pioneering AI for good" or something to that effect. As of December 20, 2024, Facebook / Meta is tied to more incidents as a deployer or developer than any other entity.
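If you ever grab a local export of the incident data and want to rebuild that leaderboard yourself, here's a minimal sketch. To be clear, the file name incidents.csv and the entity / incident_id columns are placeholders I made up for illustration, not the database's actual schema — adjust them to whatever export you end up with.

```python
# Minimal sketch: rebuild the entity "leaderboard" from a local export of
# incident data. NOTE: "incidents.csv" and the "entity" / "incident_id"
# columns are hypothetical placeholders, not the database's real schema.
from collections import Counter
import csv


def entity_leaderboard(path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count how many distinct incidents mention each entity."""
    seen: set[tuple[str, str]] = set()  # deduplicate (entity, incident) pairs
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            seen.add((row["entity"], row["incident_id"]))
    counts = Counter(entity for entity, _ in seen)
    return counts.most_common(top_n)


if __name__ == "__main__":
    for entity, total in entity_leaderboard("incidents.csv"):
        print(f"{entity}: {total} incidents")
```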
More ways to cook the AI Incident Egg
A couple more exciting views include Taxonomies and Spatial Visualization. Both of these tools slice the data into views that would otherwise be hard to compile on your own. I'd especially love to dive deeper into Taxonomies, where they break down harm as it pertains to race, gender, age, financial means, and more, as defined by the Center for Security and Emerging Technology's AI Harm Taxonomy. (A rough sketch of tallying these categories yourself follows the screenshots below.)

Roughly 20% of AI incidents reported involved race... what a statistic!

Most are unclassified by this taxonomy, but it's still interesting to look at.
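As promised, here's a minimal sketch of reproducing that kind of breakdown from taxonomy data, assuming you have a local export with one row per classified incident. The file name taxonomy.csv and the harm_basis column are assumptions for illustration, not CSET's actual field names.

```python
# Minimal sketch: tally what share of classified incidents touch each harm
# basis (race, gender, age, financial means, ...). NOTE: "taxonomy.csv" and
# the "harm_basis" column are hypothetical placeholders, not CSET's real
# field names.
from collections import Counter
import csv


def harm_basis_breakdown(path: str) -> None:
    """Print each harm basis with its count and share of all rows."""
    counts: Counter[str] = Counter()
    total = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            counts[row.get("harm_basis") or "unclassified"] += 1
    for basis, n in counts.most_common():
        print(f"{basis}: {n} ({n / total:.0%})")


harm_basis_breakdown("taxonomy.csv")
```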
Conclusion
The AI Incident Database, as I mentioned above, is kickass and really helps keep those who are aware of it informed about new and emerging incidents in the AI space. I often check it myself to stay in the know about issues, and so far it's been an amazing resource to keep on hand for AI discussions.
I hope this post helps you do the same and navigate your AI trials and tribulations while staying holistically informed. See you out there!
Speaking of incidents: here's Cha Cha. He didn't do anything in particular, but he might whenever he's finished napping.
