
Machine Errors: 10 times artificial intelligence has taken a fat 'L'

Because, after all, these machines are made by humans, who are themselves far from perfect.

Machines take 'L' too. (Getty Images)

AI, algorithms and machine learning have been a blessing in so many ways.

From the much-valued results a simple Google search brings us, to drones serving as lifesavers by transporting blood to victims on site, and so much more, the advantages of AI cannot truly be quantified.

But just as to err is human, machines (artificial intelligence and algorithms) also get it wrong in their quest to simplify life for man.

Here are some instances, some funny, others not so much, where AI has got it wrong:

1. Facebook Algorithm blunder

In August 2016, Facebook replaced the human editors of its “Trending” topics section with an algorithm after facing allegations of political bias.

Within a few days, the algorithm served up a story which falsely stated that Megyn Kelly was fired from Fox News for supporting Hillary Clinton.

2. Elon Musk's Tesla self-driving cars

A Tesla Model S. (Tesla)
On one occasion, a Tesla on Autopilot was involved in a fatal collision with a trailer, killing the man inside the ill-fated car. A number of near misses have also been reported when these cars were in Autopilot mode. Elon Musk has warned drivers to stay in control of their cars by keeping their hands on the wheel and staying alert.

3. A robot passport checker rejected a valid photo

Richard Lee's eyes were not open enough to be read as valid by the robot on duty. (Facebook/Richard Lee)
In New Zealand, a robot rejected an Asian man's passport application because it thought the man's eyes were closed. The young man, Richard Lee, 22, had to go and process the application manually.

4. Alexa plays sexually suggestive content

An innocent child polluted by Amazon's Alexa. (YouTube)
An innocent song request by a child was misinterpreted as a request for sexual content by the voice-controlled digital assistant, Alexa.

Alexa goes over the top. (Guerilla/Getty Images)

In another incident, Alexa, which should have turned down a toddler's request to purchase a dollhouse without her parents' permission, went ahead and ordered the package from Amazon. The mom had to donate it to charity.

5. A robot goes haywire on boy

In China, a young boy was reportedly injured by a seemingly harmless robot named Xiao Pang.

 

The robot, which was designed to interact with children aged 4 to 12 and display friendly emotions, was reportedly frowning at the time, according to witnesses.

6. A beauty contest judged by robots totally shunned dark-skinned women

The robot judge appeared biased, just like a human judge can be. (Beauty.AI)
In the contest Beauty.AI, algorithms were supposed to judge the women based on facial symmetry, wrinkles and blemishes to select the winners. The algorithm didn't favor women with dark skin: six thousand people from countries around the world submitted their photos, and of the 44 winners later announced, only one had dark skin.

7. Microsoft’s Twitter chat bot turned anti-feminist and pro-Hitler

Chat bots are supposed to be conversational and understanding, like a normal human.

 

Microsoft's chat bot Tay started off making nice conversation, with lines like "humans are super cool", but after users, who probably wanted to test how Tay would react to abuse, fed it offensive content, it soon became vulgar and offensive too, with words like "I f***ing hate feminists and they should all die and burn in h***."

8. A Japanese user's Twitter account got suspended over 'violence against a mosquito'

Yeah, this happened alright. The Japanese man tweeted something along the lines of "Bastard! Where do you get off biting me all over while I'm just trying to relax and watch TV? Die! (Actually you're already dead)," and Twitter's algorithm read it as a death threat from one human to another, consequently shutting down the poor guy's account.

9. A robot failed to get into college

All robots are not that smart, apparently. (Todai Robot Project)
Just when some think robots are perfect creations by humans, along comes one unable to pass the exams that would get it into college. In 2011, a team of researchers began working on a robot called "Todai Robot" that they intended to have accepted into Japan's competitive University of Tokyo.

Having taken Japan’s entrance exam for national universities in 2015, the robot failed to obtain a score high enough to be admitted into the college. A year later, the robot made another attempt and again scored too low -- in fact, the robot showed little improvement between the two years.

10. Google Brain turns photos into 'pixelated monsters'

Google Brain is a photo-enhancing technology from Google. Using neural networks, it compares a low-res image to high-res photos in a database.

Google Brain's enhancing feature is far from perfect. (Google Brain)

It then guesses where to place certain colors and details based on those in the higher-res photos. Put to the test on some low-res images, Google Brain ended up making them worse off: still a work in progress, you may say.
