The Risks of Bias in Artificial Intelligence

Artificial intelligence is not science fiction anymore. This technology has become part of our lives, and you might even be surprised to discover that you’re already using it. However, artificial intelligence models need large amounts of data for training, and in many cases that data carries bias.

Bias was the subject of our last “In Code We Trust” meetup. The session, “The Risks of Bias in AI”, was led by Ana de Prado, Machine Learning Program Leader at Intelygenz.

We shared some figures and trends concerning Artificial Intelligence, highlighting that 80% of enterprises are currently investing in this technology.

Ana introduced the Deep Learning hype and the need to understand how this technology will transform industry and the way the world does business. She also presented some important concepts, such as “word embedding” and its role as a modelling and feature-learning technique in natural language processing (NLP).
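
To make the idea concrete, here is a minimal sketch of what a word embedding is: each word maps to a dense vector, and geometric closeness in that space tracks semantic similarity. It uses the open-source gensim library and its downloadable GloVe vectors; the model name and queries are illustrative assumptions, not material from Ana’s talk.

    import gensim.downloader as api

    # Download pretrained 50-dimensional GloVe vectors (an example model;
    # any pretrained set of word vectors would behave the same way).
    vectors = api.load("glove-wiki-gigaword-50")

    print(vectors["king"][:5])                       # a word is just a vector of numbers
    print(vectors.similarity("king", "queen"))       # cosine similarity between two words
    print(vectors.most_similar("computer", topn=3))  # nearest neighbours in the vector space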

Opening up the subject, Ana showed us some interesting, if unfortunate, examples of bias in Artificial Intelligence that are clearly influenced by social stereotypes and prejudices.
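
One well-known family of such examples comes from analogy queries over pretrained embeddings. The sketch below, again assuming gensim’s GloVe vectors as a stand-in for the models discussed at the meetup, shows how completions of “man is to X as woman is to ?” can echo stereotypes absorbed from the training text; the exact outputs depend on the model used.

    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")  # example pretrained model

    # Analogy probe: "man is to <word> as woman is to ?".
    # The vectors distil co-occurrence statistics from the training corpus,
    # so the top completion can reproduce social stereotypes.
    for word in ["doctor", "programmer", "boss"]:
        answer = vectors.most_similar(positive=["woman", word],
                                      negative=["man"], topn=1)
        print(f"man : {word} :: woman : {answer[0][0]}")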

Without a doubt, identifying and mitigating bias is essential to ensuring that Artificial Intelligence has a positive impact on society. As Ana said in her talk, these technologies need more women, more people of color, more people from different fields, and more ethics, inclusion, and regulation.