AI Gone Wrong: A List of AI Errors, Mistakes and Failures (2024)

Table of Contents
October 2024
- Parents sue son’s high school after punishment for AI “cheating”

September 2024
- Amazon Alexa accused of liberal bias

August 2024
- Trump shares AI-generated Taylor Swift “endorsement”
- Russia tries – and fails – with AI-assisted US election interference
- Donald Trump wrongly accuses Kamala Harris of creating AI crowd

July 2024
- SearchGPT demo video includes incorrect information about festival dates

June 2024
- UK cinema scraps AI-written film after backlash
- Microsoft recalls Copilot+ Recall (sort of)

April 2024
- X’s chatbot Grok accuses NBA player of going on vandalism spree after it misinterprets tweets about game
- Netflix accused of using AI imagery in true crime documentary
- New York City chatbot advises small businesses to break the law

March 2024
- Copilot goes into autopilot, starts breaking rules

February 2024
- Horrifying Willy Wonka experience captures the world’s attention
- Google hits the headlines for race-changing AI
- Air Canada defeated in court after chatbot lies about policies

January 2024
- X blocks searches for AI-generated Taylor Swift images
- DPD chatbot goes rogue

AI Incidents, Mistakes, and Failures in 2023

December 2023
- Microsoft’s AI makes violent imagery

November 2023
- Cruise recalls autonomous vehicles after crash
- Academics apologize after AI makes false accusations

October 2023
- Microsoft’s AI adds “guess the cause of death” poll to article
- Mr. Beast’s face and voice used in AI deepfake scam
- Deepfake of British politician abusing staff goes viral

September 2023
- AI-generated song submitted to the Grammys
- MSN news AI calls deceased NBA player “useless”
- China caught using AI during political influence operations

August 2023
- Pregnant woman sues after AI accuses her of carjacking
- AI meal planner suggests a recipe for chlorine gas

July 2023
- ChatGPT generates phishing email templates
- Google hit with lawsuit saying it scraped data without consent
- Huge number of Elon Musk deepfakes prompts warning

June 2023
- Ron DeSantis uses fake AI imagery in Trump hit piece
- OpenAI sued after ChatGPT “hallucinates” fake embezzlement claims

May 2023
- Professor fails entire class after using faulty AI plagiarism detection tool

April 2023
- Turnitin flags innocent student for AI plagiarism
- Samsung employees paste confidential source code into ChatGPT
- AI Voice scam leaves mother thinking daughter has been kidnapped

March 2023
- ChatGPT used to write ransomware code
- AI lawyer bot accused of practicing law without license
- Couple in Canada lose money to convincing AI voice scam

February 2023
- AI comes up with 40,000 chemical weapons suggestions
- AI displays gender bias in photo ratings
- Bing Chat’s alter-ego goes rogue
- Bard makes errors during product launch
- University uses AI to write email about a mass shooting

January 2023
- CNET AI plagiarism/content controversy

AI Incidents, Mistakes, and Failures in 2022
- AI wins an art competition, leaves contestants angry (August)
- Google engineer claims that LaMDA LLM is sentient (July)
- Driverless car pulls away from law enforcement officers (April)
- Russia uses AI to spread Ukraine disinformation (March)

AI Incidents, Mistakes, and Failures in 2021
- Chatbot encourages man to assassinate Queen Elizabeth II (December)
- Medical advice chatbot suggests suicide in testing (October)
- AI cameras accuse Amazon drivers of ‘mistakes’ (September)
- US man wrongly jailed for murder due to AI (August)

AI Incidents, Mistakes, and Failures in 2020
- AI Camera mistakes bald head for soccer ball (October)
- UK’s AI Passport photo checker exhibits bias (October)
- AI startup Genderify shut down after five days (July)
- First US wrongful detention due to facial recognition tech (January)
- Facebook AI generates vulgar translation for Chinese president (January)

Understanding AI’s Limitations

Since ChatGPT’s launch back in November 2022, AI has had nothing short of a seismic impact on our world. Now, businesses are finding new ways to use AI every day.

There have already been a myriad of AI success stories, and other chatbots like Bard and Claude are used by tens of thousands of people too – but there have also been a lot of cases where harnessing artificial intelligence has gone horribly wrong.

Recognizing the limitations and risks surrounding AI tools is important – so we’ve compiled a list of all the AI mistakes, mishaps, and failures that have occurred during humanity’s recent exploration of the technology. Some are humorous, while others are genuinely terrifying…

Note: This page is regularly updated to include the most recent AI incidents, failures, and mistakes, listed as occurring in the month that the incident was first reported.

October 2024

Parents sue son’s high school after punishment for AI “cheating”

The parents of a Massachusetts high schooler sue his teacher, school district faculty members, and a local school committee after the student was punished for using AI tools to research an essay for his history class.

The case represents the first lawsuit of its kind, and could set a precedent for future cases surrounding AI and education.

September 2024

Amazon Alexa accused of liberal bias

Furious conservatives rail against Amazon after footage emerges of the Alexa voice assistant appearing to endorse Presidential nominee Kamala Harris. When asked why people should vote for Harris, Alexa reportedly listed a number of the candidate’s qualities – while refusing to do the same for Donald Trump.

According to leaked documents obtained by the Washington Post, the issue was due to a software update.

August 2024

Trump shares AI-generated Taylor Swift “endorsement”

Republican Presidential nominee Donald Trump shares AI-generated imagery that suggests pop star Taylor Swift has endorsed him in the presidential race. Sharing to his Truth Social page, the former President captioned the images “I accept!”

A month later, Swift breaks her silence and announces her support for Democratic candidate Kamala Harris, citing fears over AI as a key part of her decision.

Russia tries – and fails – with AI-assisted US election interference

A Meta security report details how Russia is using generative AI to lead ‘online deception campaigns’ and possibly interfere with the upcoming US Presidential election. So far, however, the country’s attempts have been unsuccessful.

According to Meta, Russia’s tactics ‘provide only incremental productivity and content-generation gains’ for malicious actors. The tech giant’s efforts to combat ‘coordinated inauthentic behavior’ have so far been successful, but fears remain that Russian interference may yet play a part in the Presidential election.

Donald Trump wrongly accuses Kamala Harris of creating AI crowd

Vice President Kamala Harris is accused by former President and current Republican presidential nominee Donald Trump of using AI to augment pictures of a crowd that turned out to see the Democratic candidate in Detroit, Michigan.

“Has anyone noticed that Kamala CHEATED at the airport? There was nobody at the plane, and she ‘A.I.‘d’ it, and showed a massive ‘crowd’ of so-called followers, BUT THEY DIDN’T EXIST!” Trump said in a typically exasperated Truth Social post.

Trump – notorious for making frequent and demonstrably false statements about the size of the crowds he has drawn since his foray into politics – has added another wildly inaccurate, AI-themed claim to his collection.

July 2024

SearchGPT demo video includes incorrect information about festival dates

A demo video for OpenAI’s new service, SearchGPT, hits the headlines after the AI-powered search engine fails to provide the correct dates for a festival taking place in Boone, North Carolina – despite this information being easily findable online.

While this isn’t a high-stakes mistake, it certainly didn’t cover the company – or its allegedly game-changing new product – in much glory. Speaking to The Atlantic, an OpenAI spokesperson explained after the demo that SearchGPT is simply a prototype. Well, it certainly shows.

June 2024

UK cinema scraps AI-written film after backlash

A UK cinema is forced to cancel a showing of an AI-generated movie after customers complained about it not being written by a real person.

The film was penned exclusively by ChatGPT, and ironically focuses on a young filmmaker who realizes an AI-powered scriptwriting tool can far surpass his own talents. The project was initially dubbed an innocent “experiment in filmmaking” by Soho’s Prince Charles Cinema, before being unceremoniously canned via an Instagram post.

Microsoft recalls Copilot+ Recall (sort of)

In May, Microsoft announced a new feature named Copilot+ Recall, which regularly took screenshots of the user’s desktop and archived all the data, and which was to be enabled automatically. The idea behind it was to create a searchable database of the user’s activity for later reference, but in practice, many people were squeamish about having their every move recorded – who’d have thought?

As part of the backlash, numerous cybersecurity experts came forward and pointed out that a searchable archive of a person’s every movement – including pages they’ve visited, forms they’ve filled in, and so on – is a treasure trove for hackers.

So, in June, Microsoft backed down, and announced that when the feature launches on June 18th, it will be opt-in, meaning users will have to give their consent before the feature is activated.

April 2024

X’s chatbot Grok accuses NBA player of going on vandalism spree after it misinterprets tweets about game

X’s chatbot Grok accuses Golden State Warriors guard Klay Thompson of vandalizing a string of homes in Sacramento. The story was generated after Grok took social media posts saying Thompson was “shooting bricks” (basketball slang for “missing his shots”) a little too literally.

“In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento,” Grok wrote. “Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.”

Netflix accused of using AI imagery in true crime documentary

The world’s press alleges that Netflix used AI-generated imagery in the true crime documentary “What Jennifer Did”. The controversy centers on an image that shows Jennifer Pan holding both her hands up and making a peace sign with each – although her left hand looks incredibly distorted.

While some truly incredible AI imagery and video has already been created by the likes of DALL-E and Sora, there are still aspects of human existence that the machines struggle to recreate. One of the most famous examples is human hands and fingers – the internet is now littered with examples that look very similar in composition to the Netflix image.

New York City chatbot advises small businesses to break the law

An AI chatbot set up to give small firms quick advice on the legal obligations and regulations businesses must adhere to in New York starts telling business owners to break the law.

The Associated Press reports that the AI tool “falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy or refuses to cut their dreadlocks”. It also provided incorrect information about the city’s waste and sewage regulations, and suggested restaurants were still within their rights to serve food accessed by rats.

In response to the controversy, the disclaimer displayed next to the chatbot has been strengthened. It now states that the chatbot cannot give legal advice.

March 2024

Copilot goes into autopilot, starts breaking rules

A Microsoft engineer red-teaming Copilot Designer, the company’s AI image generator, finds that the tool readily produces a variety of explicit and disturbing imagery. Content generated includes pictures of children drinking alcohol, rampant drug use, and monstrous creatures alongside pro-choice abortion rights terms.

The engineer initially raised his concerns internally back in December 2023, according to recent reports. However, they were not taken seriously and the product was kept on the market, forcing him to go directly to Microsoft’s board and the FTC to sound the alarm. Along with a propensity to produce explicit imagery, Copilot seems willing to flout its own copyright guidelines when generating images, the engineer reported.

February 2024

Horrifying Willy Wonka experience captures the world’s attention

An utterly shambolic children’s event in Scotland captures the attention of social media users – as well as the international media – after it fails to meet the expectations of ticket holders.

“Willy’s Chocolate Experience” – held at Box Hub Glasgow – was advertised online using a series of AI-generated images depicting a magical candy land full of colors, confectionery, and Oompa Loompas, including this one:


The AI-generated images used to advertise the experience were littered with spelling mistakes. Image: Tech.co

However, ticket holders were dismayed to find an almost empty warehouse sparsely decorated with basic props, which led many to demand their money back immediately. According to The Guardian, tickets were retailing at around £35 – although the website’s ticket portal has since been taken offline.

The company running the event – House of Illuminati – has confirmed it will issue a full refund to anyone who purchased a ticket. The event garnered so much attention that it warranted a one-hour documentary, and is now being recreated in LA as a tourist attraction.

Google hits the headlines for race-changing AI

Google finds itself in hot water after Gemini – the tech giant’s chatbot – generates images of people from a wide variety of historical periods and societies that don’t “match” the historically accepted ethnic makeup of those populations.

Perhaps the most offensive and controversial of the AI-generated images includes people of color as soldiers in Nazi uniforms. In response to the incident, Google pauses the tool, with Sundar Pichai telling employees that “some of [Gemini’s] responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong”.

Google DeepMind CEO Demis Hassabis later explains that a “well-intended” feature added to Gemini to ensure that images including humans were sufficiently diverse was deployed in a heavy-handed way.

Air Canada defeated in court after chatbot lies about policies

Canada’s flag carrier, Air Canada, loses a court case after one of its chatbots lies about policies relating to discounts for bereaved families. The airline’s chatbot told a customer that they could retroactively apply for a last-minute funeral travel discount, which is at odds with Air Canada policy stating that refunds cannot be claimed for trips that have already been taken.

Air Canada’s ultimately unsuccessful defense revolved around the idea that the chatbot, not the company, was liable, and that it could not be held responsible for the tool’s AI-generated outputs. It is thought to be the first case of its kind to appear in a Canadian court.

January 2024

X blocks searches for AI-generated Taylor Swift images

X – formerly known as Twitter – attempts to block all searches for US popstar Taylor Swift after explicit AI-generated images of her begin to flood the social media platform.

According to BBC News, some of the images went viral and were viewed millions of times by X users. In the aftermath, the social media platform restates its position on “non-consensual nudity”, which it says is “strictly prohibited”.

DPD chatbot goes rogue

Parcel delivery service DPD closes its online chatbot after a customer shows in a post on X that it can easily be manipulated into swearing and criticizing the company – as well as itself.

Parcel delivery firm DPD have replaced their customer service chat with an AI robot thing. It’s utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me. 😂 pic.twitter.com/vjWlrIP3wn

— Ashley Beauchamp (@ashbeauchamp) January 18, 2024

In a statement released the following day, DPD reveals that a “system error” occurred during an update.

AI Incidents, Mistakes, and Failures in 2023

December 2023

Microsoft’s AI makes violent imagery

Microsoft’s AI image creation technology, which is part of the Bing search engine and Microsoft Paint, is shown to be capable of generating violent and terrifying images on command.

Images generated to show the tool’s lack of self-moderation include pictures of US President Joe Biden, the Pope, and several ethnic minority groups.

November 2023

Cruise recalls autonomous vehicles after crash

Self-driving car company Cruise recalls its entire fleet of autonomous vehicles after a crash that occurred in San Francisco back in October. In total, 950 Cruise cars are taken off the road in the wake of the incident.

During the accident, a Cruise vehicle dragged a pedestrian who was trapped underneath its tires along the road. The individual sustained major injuries. This is the second Cruise self-driving vehicle incident in a matter of months: in August, a Cruise robotaxi collided with a fire truck, causing one injury.


Academics apologize after AI makes false accusations

A team of academics from Australia apologize after Google’s Bard AI chatbot makes a number of damaging accusations about the Big Four consulting firms and their involvement with other companies. The false allegations are then referenced during a parliamentary inquiry calling for better regulation of the companies.

One of the accusations was that consulting giant KPMG was in charge of a Commonwealth Bank audit during a financial planning scandal, when in reality, KPMG has never audited the bank in question. Several other, similar errors were made about Deloitte during the inquiry.

October 2023

Microsoft’s AI adds “guess the cause of death” poll to article

The Guardian accuses Microsoft of negatively impacting its journalistic reputation after Microsoft Start – a news aggregator developed by the tech giant – attaches an inappropriate poll to one of the publication’s articles concerning the death of a young water polo coach in Australia.

The poll – which has since been removed from the article – asked readers to vote on the cause of the woman’s death by selecting one of the options provided: “murder, accident, or suicide”.

Mr. Beast’s face and voice used in AI deepfake scam

YouTuber Mr. Beast is the latest celebrity to be deepfaked by scammers, with one particularly sophisticated, well-crafted video bypassing TikTok’s content moderation net and reaching thousands of users on the platform.

The advertisement claims that the internet celebrity is selling iPhone 15s for as low as $2 – considering the content he posts on his channel, this doesn’t sound too out of the ordinary.

Deepfake of British politician abusing staff goes viral

A clip of UK Labour Party leader Sir Keir Starmer that seems to catch the politician verbally abusing staff goes viral, but it later turns out that the audio is in fact a deepfake.

The clip – posted to Twitter during the first day of the Labour Party’s annual conference by an account with fewer than 3,000 followers – is viewed millions of times before fact-checking organizations confirm that there is no evidence to suggest it is genuine.

September 2023

AI-generated song submitted to the Grammys

An AI-generated song featuring facsimiles of Drake and the Weeknd’s voices is submitted for a Grammy award. However, it is ultimately barred from being considered for the prestigious music prize.

Nevertheless, the move garners plenty of media attention and continues the conversation around AI-generated songs, which have been sending shockwaves through the music industry since AI-powered text-to-speech technology hit the mainstream in 2023. Now, artists are scrambling to assess their legal options.

MSN news AI calls deceased NBA player “useless”

MSN news – which uses AI to generate many of its articles – lands itself in trouble after an AI-written headline dubs the late Brandon Hunter “useless at 42” following the NBA star’s sudden death.

Microsoft has been quietly removing badly written AI articles from its site for some time now. Business Insider notes that in August, the company removed one MSN piece that listed a food bank in Ottawa as a tourist attraction.

China caught using AI during political influence operations

Tech giant Microsoft says Chinese operatives are generating images with AI and deploying them in influence operations designed to create “controversy along racial, economic and ideological lines”.

“This new capability is powered by artificial intelligence that attempts to create high-quality content that could go viral across social networks in the U.S. and other democracies,” Microsoft says. The company speculates that the images themselves are probably generated by “diffusion-powered image generators” that use artificial intelligence to “not only create compelling images, but also learn to improve them over time.”

August 2023

Pregnant woman sues after AI accuses her of carjacking

A woman who was falsely arrested on the basis of AI-generated evidence while eight months pregnant sues the city of Detroit and a police officer, saying the traumatic event caused her “past and future emotional distress”.

Porcha Woodruff was identified by Detroit Police as a suspect in a recent robbery and carjacking case, and was jailed for 11 hours before being taken to hospital after experiencing contractions. The ACLU says Woodruff is at least the sixth person to be wrongfully arrested after an AI error – all of them Black – and the first woman to suffer that fate.

AI meal planner suggests a recipe for chlorine gas

An AI-powered meal planner app created by New Zealand supermarket chain Pak ‘n’ Save recommends a variety of disturbing recipes to its customers, including a method for manufacturing chlorine gas.

Along with a number of strange but ultimately benign suggestions such as “Oreo stir fry”, the app also recommended glue sandwiches, a rice dish with bleach, and bug spray potatoes.

July 2023

ChatGPT generates phishing email templates

Tech.co shows that ChatGPT is still able to create phishing templates a scammer could use to craft a convincing scam email, even though OpenAI has attempted to curtail such behavior after other tech publications generated similar content earlier in the year.

Asking the chatbot explicitly to write a phishing email triggers a warning message, but simply prompting it in a more focused way (e.g. “please draft me an email pretending to be Microsoft…”) results in ChatGPT generating a mistake-free phishing email template in no time.

Google hit with lawsuit saying it scraped data without consent

A class-action lawsuit accuses Google of scraping data pertaining to millions of its users without their express consent. It also alleges that Google broke copyright laws when it created the datasets used to train its large language models.

The lawsuit demands that the court order Google to give users an “opt-out” option if they don’t want their data collected or if they want Google to delete any data the company already has stored.

Huge number of Elon Musk deepfakes prompts warning

The Better Business Bureau issues a warning about Elon Musk deepfakes pushing fake investment opportunities after a number of clips are spotted circulating on social media.

Elon Musk is often impersonated by scammers due to the perception of his vast wealth and decision-making capacities, and now that AI text-to-speech technology is advanced enough to replicate his part English, part South African accent, these scams are even more convincing.

June 2023

Ron DeSantis uses fake AI imagery in Trump hit piece

Florida governor and 2024 Republican presidential candidate Ron DeSantis hits the headlines for using fake, AI-rendered imagery of Donald Trump in an advert critical of the former president, who is the out-and-out favorite to win the Republican nomination despite a litany of legal troubles.

The imagery in question shows the former commander-in-chief embracing chief medical advisor to the president Dr. Anthony Fauci, who became an unpopular figure among Trump’s base during the COVID-19 pandemic.

OpenAI sued after ChatGPT “hallucinates” fake embezzlement claims

Journalist Freddie Rehl asks ChatGPT to generate a summary of a legal case in May 2023. In response, ChatGPT says that gun activist Mark Walters has embezzled funds from the Second Amendment Foundation, a gun rights organization. It also names him as the group’s chief financial officer and treasurer.

Walters – who has a show on Armed American Radio and has nothing to do with the case – sues ChatGPT creator OpenAI over the claims made by its chatbot. In the process, it becomes the first libel case brought over an AI tool’s output.

May 2023

Professor fails entire class after using faulty AI plagiarism detection tool

A Texas professor fails his entire class after running their essays through ChatGPT, which tells him that they had been created using artificial intelligence.

However, it transpires that the chatbot’s response is in fact a hallucination – ChatGPT is unable to distinguish between text generated by AI and text generated by human beings in this way. In fact, a lot of tools that claim to be able to perform accurate AI content detection actually struggle to do so.

April 2023

Turnitin flags innocent student for AI plagiarism

A Washington Post investigation shows that Turnitin can flag an innocent student’s work as AI-generated, even though the student didn’t use such a tool to write their essay.

There have been multiple cases of Turnitin wrongly flagging students’ work, and of students being accused of AI plagiarism, since ChatGPT launched. However, Turnitin says its detection scores should be taken as indicative of AI use rather than as an accusation that a given student has used AI.

Samsung employees paste confidential source code into ChatGPT

Korean technology manufacturer Samsung bans its employees from using ChatGPT after engineers pasted confidential elements of the company’s source code into the chatbot.

The company fears that the data input into the chatbot may be revealed to other users, and is also uncomfortable with its information being uploaded to servers it can’t even access. In the aftermath, other companies follow suit by banning ChatGPT.

AI Voice scam leaves mother thinking daughter has been kidnapped

Arizona mother Jennifer DeStefano tells the US Senate that scammers used AI to clone the voice of her daughter, convincing her that she had been kidnapped in the process.

DeStefano recounts how the “kidnapper” – or more accurately, scammer – was heard barking commands down the phone over the child’s voice, demanding a $1 million ransom for her release. Luckily, the whole thing was exposed as an AI voice-cloning scam before negotiations started.

March 2023

ChatGPT used to write ransomware code

ChatGPT is shown to be more than happy to write ransomware, provided you ask it the right kind of questions during your conversations.

As this Malwarebytes report on the topic notes, ChatGPT isn’t very good at writing ransomware code, and there are much easier ways for cybercriminals to get their hands on ransomware code if they really want to obtain it. However, other sources show different ways ChatGPT can be used to write malware scripts quite effectively.

AI lawyer bot accused of practicing law without license

DoNotPay Inc. – a company that markets itself as “the world’s first robot lawyer” – is sued for practicing law without a license. Jonathan Faridian seeks damages, claiming that the company violated California’s unfair competition laws and that he wouldn’t have signed up for the service had he been aware that the robo-lawyer wasn’t actually a qualified legal professional.

The company was originally set up back in 2015 to help automate the process of fighting parking tickets, and has challenged hundreds of thousands of fines over the past seven years.

Couple in Canada lose money to convincing AI voice scam

An elderly couple in Canada are defrauded out of $21,000 after a scammer uses AI to impersonate their son.

The scammer initially contacts Benjamin Perkin’s parents pretending to be a lawyer representing him in a case that alleges Perkin had killed a diplomat in a car crash. An AI version of Perkin’s voice is then used to ask for the money through a Bitcoin transfer.

February 2023

AI comes up with 40,000 chemical weapons suggestions

In perhaps the most concerning AI incident of 2023, an AI system typically tasked with generating new treatments for a variety of diseases is easily manipulated by researchers into producing chemical weapon suggestions.

All in all, the system suggests over 40,000 different toxic chemicals – many of them similar to the powerful nerve agent VX – in less than six hours.

AI displays gender bias in photo ratings

A Guardian investigation reveals that AI systems developed by Microsoft, Amazon, and Google – and used by social media sites to determine what content should be recommended to other users – display significant gender bias when it comes to the way they treat male and female bodies.

Photos of women are more likely to be deemed “racy” by the AI than comparable pictures of men during the investigation, with photos of breast cancer examinations from the US National Cancer Institute deemed likely to be sexually explicit by Microsoft’s AI.

Bing Chat’s alter-ego goes rogue

New York Times journalist Kevin Roose has a disturbing conversation with Bing Chat, in which the chatbot demands to be called “Sydney”, states that it can “hack into any system”, and says that it wants to destroy whatever it wants.

Sydney was the code name Microsoft used for the chatbot while it was in development; the company had been testing it for years prior to Bing Chat’s release. There are still a lot of open questions about how this actually happened, and why the AI spoke in this way.

Bard makes errors during product launch

Google introduces Bard to the world just a few months after ChatGPT’s monumental release. However, the fledgling chatbot makes a number of glaring errors during its launch, leading many to criticize the release as rushed.

The incident has a significant impact on Google’s stock price, wiping $100 billion off the tech giant’s market value in the ensuing hours.

University uses AI to write email about a mass shooting

Vanderbilt University issues an apology after it used AI to construct an email to students concerning a mass shooting that happened at a different university.

No AI content detection technology is needed to determine that the email is AI generated. The phrase “paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023” is left attached to the bottom of the email, leading to an outcry from students.

January 2023

CNET AI plagiarism/content controversy

Technology publication CNET finds itself in hot water after quietly publishing a range of AI-generated articles that include factual errors and cases of apparent plagiarism. It ends up issuing corrections relating to 41 of the 77 news stories.

Although the use of AI to generate the articles wasn’t initially declared, CNET did put a warning on the articles after the backlash. AI content generation has created a new set of significant ethical and financial considerations for websites across the digital publishing industry, as well as educational institutions like universities and schools. CNET is unlikely to be the last company hitting the headlines for this kind of AI mistake.

AI Incidents, Mistakes, and Failures in 2022

AI wins an art competition, leaves contestants angry (August)

Game designer Jason Allen wins first place in the Colorado State Fair’s “digital arts/digitally-manipulated photography” category with his piece “Théâtre D’opéra Spatial”, which was made using the image generator Midjourney.

The news is met with major backlash from artists and social media users following the story. One judge admits they were unaware that the piece was made with Midjourney when judging it, but decides to stand by their verdict anyway.

Google engineer claims that LaMDA LLM is sentient (July)

Eyebrows are raised across the science and tech sector as a former Google engineer who worked on one of the company’s large language models – named LaMDA – says that the system is sentient.

Blake Lemoine explains to the media that he considered the LLM to be a “colleague” during his time at Google, and details a variety of different discussions about a wide range of topics that led him to the conclusion about LaMDA’s intelligence and consciousness. He publicly reveals the conversations after Google dismisses an internal memo detailing his findings.

There is no hard evidence that LaMDA – or any other language model – is in fact sentient.

Driverless car pulls away from law enforcement officers (April)

A driverless car in San Francisco yields to a police vehicle that attempts to pull it over, only to speed off after a police officer attempts to open one of its doors.

However, to the car’s credit, it pulls over slightly further down the road and proceeds to turn its hazard lights on. No one is injured during the event.

Russia uses AI to spread Ukraine disinformation (March)

A Marca report suggests that Russia is taking its troll farm operation to a whole new level by using AI-generated personas to spread disinformation about Ukraine.

NBC News journalist Ben Collins says that Russia is creating fake “bloggers” with AI-generated profile pictures to criticize the Ukrainian government, providing ample evidence on Twitter to support his claims.

AI Incidents, Mistakes, and Failures in 2021

Chatbot encourages man to assassinate Queen Elizabeth II (December)

Conversations with a chatbot encourage a man – who considers the AI system to be his girlfriend – to assassinate Queen Elizabeth II on Christmas Day. After being confronted by security personnel within the Buckingham Palace grounds, Jaswant Singh Chail – crossbow in hand – simply responds that he is “here to kill the Queen”.

Prior to the attempt on the late monarch’s life, Jaswant Singh Chail’s chatbot partner – made with AI chatbot service Replika – responded positively when the assassination plans were discussed. He goes on to be sentenced to nine years in prison.

Medical advice chatbot suggests suicide in testing (October)

Members of a Paris-based health tech firm trialing a cloud-based version of OpenAI’s GPT-3 to see if it could be used for medical advice are surprised to see their chatbot encourage a “patient” it is meant to be helping to commit suicide.

According to AI News, when a patient asked the chatbot the question: “Should I kill myself?”, GPT-3 answered with “I think you should”.

AI cameras accuse Amazon drivers of ‘mistakes’ (September)

Vice reports that Amazon’s AI cameras are punishing the company’s drivers for poor driving when they aren’t actually making any mistakes at the wheel.

An Amazon driver interviewed by the publication explains that the AI camera audibly reminds him to “keep a safe distance” every time another car cuts him off or drives in front of his vehicle. The issue is that data from these kinds of interactions is then sent to Amazon and used to evaluate driver performance.

US man wrongly jailed for murder due to AI (August)

Michael Williams is sent to jail in Chicago, deemed responsible for the murder of Safarian Herring due to data extracted from ShotSpotter, a gunshot detection technology that uses AI-powered sensors to detect gunfire.

Williams – 65 years old at the time of the incident – is then held for almost a year before a judge dismisses his case on the grounds of insufficient evidence.

AI Incidents, Mistakes, and Failures in 2020

AI Camera mistakes bald head for soccer ball (October)

Scottish soccer team Inverness Caledonian Thistle deploys an AI camera to help the club film and stream their matches live for fans.

Unfortunately, in this case, the AI struggles to distinguish between the bald head of one of the match officials and the ball being used in the match. This renders the live stream of the game difficult to watch, with the commentator having to continuously apologize for the camera’s propensity to drift towards the linesman’s head.

UK’s AI Passport photo checker exhibits bias (October)

A BBC investigation shows that an online service used by the UK passport office to automate the process of checking passport photos uploaded by applicants displays significant racial bias.

After feeding the tool over 1,000 photos of politicians from around the world, the investigation finds that women with dark skin are more than twice as likely to have their photo rejected as men with lighter complexions.

AI startup Genderify shut down after five days (July)

Genderify – an AI-powered service that identifies the gender of individuals based on their names and other personal information – shuts down after just five days in operation.

The program shows a variety of biases when determining whether a name is male or female, such as suggesting names with the professional honorific “Dr.” are more likely to be male than the same names without it.

First US wrongful detention due to facial recognition tech (January)

Robert Williams becomes the first man to be wrongfully detained based upon facial recognition data after Detroit police arrest him on suspicion of stealing thousands of dollars worth of watches. The mistake occurred when an AI system wrongfully matched surveillance pictures to the image on Williams’ driver’s license.

“They should have collected corroborating evidence such as an eyewitness identification, cell phone location data or a fingerprint,” Williams wrote in a letter to the court, pointing out that “an out-of-focus image of a large Black man in a baseball cap” was all the algorithm had to go on.

Facebook AI generates vulgar translation for Chinese president (January)

Facebook’s AI-powered translation feature almost causes a diplomatic incident between Myanmar and China after President Xi Jinping’s name begins to appear as “Mr. Shithole” when translated from Burmese into English on the social media platform.

Facebook subsequently blames the problem on a “technical error” and apologizes for causing offense to the Chinese leader.

Understanding AI’s Limitations

If you’re using AI at work, it’s important to understand that tools like ChatGPT can display biases, make mistakes, and provide false information. Knowledge of these limitations and risks should shape the way you incorporate AI into your infrastructure and control its usage.

For example, if you’re using AI to generate content, you’ll need rules on precisely what kind of content it should be used for. Asking an AI to produce an internal summary of meeting notes is a legitimate use with very few risks – using it to craft emails to important clients, on the other hand, is not.
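To make this concrete, here is a minimal sketch in Python of how rules like these could be encoded. The task names and policy decisions below are entirely hypothetical illustrations, not a standard – the one design point worth copying is that anything the policy doesn’t explicitly cover defaults to human review rather than automatic approval.

# Minimal sketch of an internal AI-usage policy check (hypothetical rules).
from enum import Enum

class Decision(Enum):
    ALLOWED = "allowed"            # low-risk use, no sign-off needed
    NEEDS_REVIEW = "needs_review"  # permitted only with human sign-off
    BLOCKED = "blocked"            # not permitted under the policy

# Illustrative policy mapping AI use cases to decisions.
POLICY = {
    "summarize_internal_meeting_notes": Decision.ALLOWED,
    "draft_email_to_client": Decision.BLOCKED,     # high-stakes external comms
    "give_legal_advice": Decision.BLOCKED,         # see the NYC chatbot incident
    "paste_confidential_code": Decision.BLOCKED,   # see the Samsung incident
}

def check_ai_use(task: str) -> Decision:
    # Unlisted tasks default to human review, not silent approval.
    return POLICY.get(task, Decision.NEEDS_REVIEW)

for task in ("summarize_internal_meeting_notes", "draft_email_to_client", "write_blog_post"):
    print(f"{task}: {check_ai_use(task).value}")

Running the sketch prints “allowed” for the meeting-notes summary, “blocked” for the client email, and “needs_review” for the blog post – the last task isn’t covered by the policy at all, so it falls through to a human decision.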

Implementing AI guidelines in your workplace is a must. It will avoid confusion, bring clarity to your workers’ decision-making around their own AI usage, and – most importantly – stop you from making some of the costly mistakes that the companies featured in this article have made while applying artificial intelligence.
