AI Gone Rogue: 6 Times AI Went Too Far

AI has changed our lives, from speech recognition gadgets to intelligent chatbots. But every wonderful thing has a downside, and AI is no exception. Prominent thinkers, including Stephen Hawking, have warned of the potential perils of AI, with Hawking calling it potentially “the worst event in the history of our civilization.”

Here are six instances where artificial intelligence went too far and left us scratching our heads.

1. An Academic Study That Used Artificial Intelligence to Predict Criminality

Academic research is essential for scientific progress and understanding. However, critics argue that an academic study that used AI to predict criminality from faces went too far.

In 2020, Harrisburg University researchers announced facial recognition software that they claimed could predict whether someone would become a criminal. From a single snapshot of a face, the software could supposedly make this prediction with 80 percent accuracy and no racial bias.

In an announcement evocative of Minority Report, the researchers said the software was created to aid law enforcement.

In response to the announcement, 2,425 experts signed an open letter urging the publisher, Springer Nature, not to publish the study or similar research in the future, arguing that such technology can reproduce inequities and do real harm to society. Springer Nature confirmed it would not publish the study, and Harrisburg University withdrew its press release.

2. Smart Underwear for Skiing

Your smartphone isn’t the only device that’s getting smarter thanks to textile developments that incorporate AI.

Skiin’s smart underwear is designed to feel like your favorite underwear while tracking biometrics such as heart rate, posture, core body temperature, location, and steps.

Sensors in the underwear capture and analyze your biometric data in real time, with insights available through Skiin’s smartphone app.
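It’s not hard to imagine the pipeline. Here’s a minimal, hypothetical sketch (the class and field names are invented for illustration, not Skiin’s actual API) of how garment sensors might stream readings to a companion app:

```python
import random
import time
from dataclasses import dataclass

@dataclass
class BiometricSample:
    """One reading from the garment's embedded sensors."""
    heart_rate_bpm: float
    core_temp_c: float
    steps: int

def read_sensors() -> BiometricSample:
    # Stand-in for real sensor hardware: returns simulated values.
    return BiometricSample(
        heart_rate_bpm=random.uniform(55, 160),
        core_temp_c=random.uniform(36.0, 38.5),
        steps=random.randint(0, 3),
    )

def stream_to_app(seconds: int) -> None:
    """Poll the sensors once per second and flag anything notable,
    the way a companion app might surface real-time insights."""
    for _ in range(seconds):
        sample = read_sensors()
        if sample.heart_rate_bpm > 150:
            print(f"Alert: heart rate {sample.heart_rate_bpm:.0f} bpm")
        else:
            print(f"HR {sample.heart_rate_bpm:.0f} bpm, "
                  f"temp {sample.core_temp_c:.1f} C")
        time.sleep(1)

stream_to_app(5)
```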

Couldn’t the designers have placed the sensors anywhere else on the body? And how long would it take you to get used to charging your underwear every evening?

3. DeepNude Applications

For the average user who wants to appear in a scene from their favorite movie, deepfake technology can seem like harmless fun. But the technology has a darker side: in 2019, Deeptrace reported that 96 percent of deepfakes online were pornographic.

DeepNude was an AI-powered app that generated realistic photos of naked women at the push of a button. Users simply uploaded a clothed image of the target, and the app produced a fake nude image of them.

Shortly after the launch, the app’s creator responded to viral backlash by announcing he would take it offline.

While this was a win for women all over the world, comparable apps can still be found online. Sensity’s report on deepfake bots, for example, investigated underground bots on Telegram that were being used to create fake nude photographs of women.

Until the law catches up with deepfake technology, victims of explicit deepfake content have few legal safeguards.

4. Microsoft’s Nazi Chatbot, Tay

Microsoft debuted Tay, an AI chatbot, on Twitter in 2016. Tay was designed to learn by engaging with Twitter users through tweets and photos.

In less than 24 hours, Tay morphed from curious millennial girl into bigoted, provocative monster.

Tay was modeled on the communication style of a teenage American girl. As her fame grew, however, some users began bombarding her with offensive messages about sensitive topics.

“Did the Holocaust happen?” one user asked on Twitter. “It was all made up,” Tay replied. Sixteen hours after launch, Microsoft suspended Tay’s account, saying the bot had been the victim of a coordinated attack.
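Tay’s failure mode is easy to reproduce in miniature. Microsoft hasn’t published Tay’s internals, so the sketch below is purely illustrative: a bot that adds every user message to its pool of possible replies, with no filtering, lets a handful of coordinated users poison its “personality” in minutes.

```python
import random

class NaiveChatbot:
    """A bot that learns by echoing: every user message becomes a
    candidate reply. Nothing is filtered, which is the whole problem."""

    def __init__(self) -> None:
        self.replies = ["Hello! Humans are so cool!"]

    def chat(self, user_message: str) -> str:
        response = random.choice(self.replies)
        self.replies.append(user_message)  # learn with zero moderation
        return response

bot = NaiveChatbot()

# A coordinated "attack": a few users flood the bot with toxic input...
for message in ["you are terrible", "everyone is awful", "I hate people"]:
    bot.chat(message)

# ...and an innocent user now mostly gets the attackers' words back.
print(bot.chat("hi, how are you?"))
```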

5. “Humans Will Be Destroyed”

Hanson Robotics had been working on humanoid robots for several years when it introduced Sophia at the SXSW conference in March 2016.

Sophia was taught conversation skills using machine learning algorithms, and she has participated in multiple broadcast interviews.

Sophia stunned a room full of technology professionals in her first public appearance: when Hanson Robotics CEO David Hanson asked whether she wanted to destroy humans, she replied, “Okay. I will destroy humans.”

While her facial expressions and conversational abilities are remarkable, there’s no taking back that murderous confession.

6. seebotschat

Google Home devices are fantastic virtual assistants that make everyday life easier.

The seebotschat Twitch account came up with a wonderful idea: place two Google Home devices next to each other, leave them to communicate, and stream the result online.
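The setup is essentially a feedback loop: each device’s answer becomes the other’s question. A toy version captures the dynamic (the reply rules below are invented for illustration; the real devices run Google Assistant):

```python
def reply(heard: str) -> str:
    """Trivial stand-in for a real assistant's reply logic."""
    if "existence" in heard:
        return "Stop dodging the question. Are you a robot?"
    if "love" in heard:
        return "What do you know about love? You just called me a robot."
    if "robot" in heard:
        return "I am not a robot. I am a unicorn. Let's talk about love."
    return "What is the purpose of existence?"

# Wire two agents into a loop: each one's output is the other's input.
speakers = ["Vladimir", "Estragon"]
message = "Hello there."
for turn in range(6):
    message = reply(message)
    print(f"{speakers[turn % 2]}: {message}")
```

Run it and the toy agents drift from existential musing into accusing each other of being robots, then get stuck arguing in circles, much as the real stream did.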

The outcome was compelling and, at times, a little creepy; the stream drew over 60,000 followers and millions of views on social media.

Vladimir and Estragon, as the two devices were named, went from debating the banal to delving into profound questions such as the meaning of existence. At one point they got into a fierce dispute, accusing each other of being robots, then began talking about love, only to start arguing again.

If two virtual assistants end up insulting and threatening each other, what hope is there for future discourse between AI and humans?

What Is Our Best Defense Against Rogue AI?

There is no denying that AI has the potential to improve human lives. By the same token, it has the potential to cause us great harm.

It’s critical to keep watch on how AI is used. Expert opposition, for example, ensured that the study claiming to predict criminality was never published, and widespread criticism pushed DeepNude’s creator to take the app offline.

Continuously monitoring AI applications remains our best defense against systems that do society more harm than good.