At Relative Insight, we’re in a constant state of change and improvement. Language is an evolving entity, so we always have to make sure our analysis is as flexible as can be.

Our current improvement project is an update to our semantic tagging system (basically, how our system “knows” what a word means). We want our analysis to reflect how people talk in the wild, so we’re using machine learning techniques and real-world social media data to inform our system. A standard English dictionary won’t tell us the meaning of new words like “yeet” or “lit”, but finding those words in use on the internet will. People use language in all sorts of new and wonderful ways online, so using the internet to help us decipher what they mean makes a lot of sense.
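As a rough illustration of the idea (a sketch, not our production pipeline), you can train word embeddings on a corpus of social posts and then ask which words appear in similar contexts. The file name and settings below are assumptions made purely for the example.

```python
# A minimal sketch of learning word meaning from real-world text:
# train word embeddings on tokenised social posts, then ask which
# words are used in similar contexts. "social_posts.txt" is a
# hypothetical file containing one tokenised post per line.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("social_posts.txt")  # hypothetical corpus
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

# Words that occur in similar contexts end up close together, so the
# nearest neighbours of "lit" or "yeet" hint at how they are actually used.
print(model.wv.most_similar("lit", topn=5))
```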

There is just one small problem with this: The internet is a terrible, terrible place.

The Problem with AI

The internet is certainly a marvel: it’s full of information and benevolent forces. However, its anonymity also brings out the worst in the human race. In the past few years, it has become more and more apparent that the way we’re using machine learning has a tendency to create AI that’s racist, sexist, and all-around mean.

A famous example was Tay: an AI chatbot created by Microsoft. It was supposed to use each interaction to improve itself in real time, just like humans do. It took less than 24 hours for Tay to go from bright-eyed and innocent to Holocaust-denying and sexist. This was obviously not Microsoft’s intention, but it highlights how good intentions can go awry. The bot, which was designed to reflect humanity, ended up reflecting the worst of it.

Unexpected Sexism

While improving our semantic tagging system, we noticed a similar problem, specifically with regard to its understanding and portrayal of gender.

NOTE: In day-to-day life, gender is experienced on a spectrum, but in language it’s currently still quite categorical. While we use words like ‘masculine’ and ‘feminine’ here, we don’t mean to endorse a binary narrative; we’re just discussing how language is currently used.

English doesn’t have grammatical gender markers like, say, Spanish or German, but it does have certain words with a more masculine or feminine association and definition. Feminine-associated words might be:

  • “Mother”
  • “Queen”
  • “Women”

and masculine-associated words might be:

  • “Father”
  • “Mister”
  • “Bloke”

It’s perfectly acceptable for these to have an assigned gender tag because gender is part of what defines the word.

Next, we have words like:

  • “Actress”
  • “Heiress”
  • “Aviatrix”

These are certainly marked as feminine. But their counterparts…

  • “Actor”
  • “Heir”
  • “Aviator”

…are gender neutral. Of course, these words were historically “masculine”, but over time they have taken on a gender-neutral feel.

The next example is a bit more troubling. We noticed that our model was starting to classify words like:

  • “Childminder”
  • “Co-pilot/star/anchor”
  • “Caretaker”
  • “Housekeeper”

as feminine and

  • “Hero”
  • “Officer”
  • “CEO”
  • “Tactician”

as masculine (a simplified sketch of how this kind of skew shows up follows below).
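The idea behind the sketch is a common diagnostic in the word-embedding literature rather than a description of our exact tooling: project each title onto a “gender direction” built from explicitly gendered words. The toy vectors below are made up purely for illustration.

```python
import numpy as np

# Toy two-dimensional vectors purely for illustration; in practice these
# would come from the trained embedding model.
vectors = {
    "she":         np.array([ 1.0, 0.2]),
    "he":          np.array([-1.0, 0.2]),
    "childminder": np.array([ 0.7, 0.5]),
    "ceo":         np.array([-0.6, 0.6]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word):
    # Positive scores lean towards "she", negative towards "he".
    direction = vectors["she"] - vectors["he"]
    return cosine(vectors[word], direction)

for word in ("childminder", "ceo"):
    print(word, round(gender_lean(word), 3))
```

With real embeddings, job titles that cluster towards one end of this axis are exactly the kind of skew described above.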

Solving problems with language

There’s an instantly recognisable problem here. These are all jobs and titles with a strong historical tendency to be assigned to one gender in particular, but cultures evolve and improve, so we have to make sure we don’t perpetuate these harmful stereotypes.

Also, looking at these words again, it’s easy to spot another troubling theme. Machine learning tends to mark titles about the home, family, care and support as feminine, and titles about power and leadership as masculine. These are not negative traits to have, but it becomes a problem when AI starts to see women as unlikely to be powerful, or men as ill-suited to be parents.

We have to be real about how language is used and the settings it’s used in. The internet is a place where people can be themselves, without filter; we find a very real usage of language there which we wouldn’t find in books or news articles. However, the anonymity of the internet also provides a platform for people to say things they would never dare say in public. An AI algorithm learns to make decisions based on the data it sees, just like humans do, and that’s the problem: if you feed a machine learning algorithm language without any filters, it will naturally make similar linguistic choices.

But there is hope. Just as we have to intentionally teach kids that hitting people is bad and sharing is good, we have to direct algorithms away from certain terms and assumptions. We need to take responsibility for honing what we’ve created; if we don’t, it has the potential to become a monster. As it stands, AI is quite good at what it does and is a wonderful tool, but it still has a long way to go.
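One well-known corrective from the research literature, sometimes called neutralisation or “hard debiasing”, gives a flavour of what that direction-setting can look like. This is only a sketch of the general idea, not a claim about our exact system: remove the component of a gender-neutral word’s vector that lies along the learned gender direction, so a title like “CEO” no longer leans towards either end of it.

```python
import numpy as np

def neutralise(vec, gender_direction):
    # Remove the component of `vec` along the gender direction, leaving
    # the rest of the word's meaning untouched.
    g = gender_direction / np.linalg.norm(gender_direction)
    return vec - np.dot(vec, g) * g

# e.g. neutralise(vectors["ceo"], vectors["she"] - vectors["he"]) returns
# a vector with zero projection onto the she/he axis.
```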

At Relative Insight, we think it’s incredibly important to be honest about language, but it’s equally important to make sure we’re not perpetuating damaging prejudices. With this in mind, we’ve put together a crack team of sexism-busting individuals from a variety of gender experiences. It’s their job to sort through the muck, keep our definitions up to date and make our tagging system a more accurate and progressive tool. As language evolves, so do we.

 


Cover Image from – Claudio Schwarz @purzlbaum

Other Images from – Jilbert Ebrahimi, vice.com, thegryphon.com, thisisreallyinteresting.com

 

By Beth Thomas & Ryan Callihan - Data Scientist & Insight Analyst