Have you ever noticed the stark differences between the advertisements shown on our respective Facebook pages, even under the same roof? From husband to wife, mother to daughter, sister to brother, our screens are inundated with an array of contrasting products, services and political campaigns. Whilst pregnancy test banners and dating site pop-ups fill our screens, our male counterparts are targeted with extreme sporting experiences and tech products. Is AI revolutionizing the world of advertising, or moulding us further into gender, race and age stereotypes?
With AI still in its early development, a mere toddler in the tech world, many questions are being asked about where it is truly heading. Sci-fi films exaggerate the dangers of AI, imagining advanced robots that learn to outsmart their human controllers. In reality, the problems we face with AI are not too dissimilar to the most contentious issues in society today: stereotype and bias.
Facebook, Amazon and YouTube are just some of the many platforms using Artificial Intelligence and Machine Learning to target their users more precisely, offering many benefits to companies and end users, such as increased efficiency and productivity, better customer support and personalized engagement. But whilst targeting reduces ‘useless ads’ and limits losses, how accurate is AI at predicting who we are, and therefore what we want? The danger of targeted advertising is that the AI systems predicting our habits are unintentionally learning our own stereotypes and biases, and so may not ‘predict’ our behaviour at all but instead make assumptions based on gender, race or age. Can AI allow individuals to break the cycle of stereotype when its algorithms aren’t built to challenge these norms?
In a tech-driven world where AI is increasingly incorporated into recruitment processes, insurance underwriting and advertising platforms, it is a crucial time to ensure AI does not learn our mistakes. Whilst the data supplied to AI carries no definitive statements such as ‘only hire engineers that are men’, a system can learn from historical data in which a higher proportion of successful engineers are men, infer that male candidates make more successful engineers, and therefore choose the male candidate.
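To make that mechanism concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, entirely synthetic data and invented numbers, not any real recruitment system): a model trained on historically biased hiring outcomes ends up preferring the male of two equally skilled candidates, even though no rule ever mentions gender explicitly.

```python
# A minimal sketch, assuming synthetic data, of how a model absorbs bias
# from historical hiring outcomes. All names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Synthetic candidates: gender (1 = male, 0 = female) plus a skill score
# that is, by construction, distributed identically across genders.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(loc=50, scale=10, size=n)

# Historical outcome: past hiring favoured men, so the labels encode bias
# even though skill does not differ between the groups.
hired = (skill + 15 * gender + rng.normal(0, 5, size=n)) > 60

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill, differing only in gender:
female, male = [[0, 55]], [[1, 55]]
print("P(hire | female, skill=55):", model.predict_proba(female)[0, 1])
print("P(hire | male,   skill=55):", model.predict_proba(male)[0, 1])
# The model assigns the male candidate a markedly higher probability,
# having learned the bias baked into the labels rather than real ability.
```

No one wrote a sexist rule here; the bias arrives entirely through the training labels, which is exactly why it is so easy to miss.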
What is the solution? Diversity, Diversity, Diversity.
Teams that write algorithms and build AI for advertising or recruitment must ensure that they leave their own subconscious bias at home and supply enough data to counteract the pre-existing bias in what has already been collected. IBM’s Susannah Shattuck spoke last month about the use of Watson OpenScale to battle this problem:
“Starting today, we are making it easier to detect and mitigate bias against protected attributes like sex and ethnicity with Watson OpenScale through recommended bias monitors.”
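What do such monitors actually compute? One widely used fairness measure is the disparate impact ratio: the rate of favourable outcomes for the unprivileged group divided by the rate for the privileged group. The vendor-neutral sketch below uses plain Python and made-up numbers (it is not the Watson OpenScale API); a common rule of thumb flags ratios below 0.8.

```python
# A vendor-neutral sketch of the disparate impact ratio, one common metric
# behind bias monitors. Data and the 0.8 threshold are illustrative only.
def disparate_impact(outcomes, group):
    """Ratio of favourable-outcome rates: unprivileged / privileged."""
    fav_unpriv = sum(o for o, g in zip(outcomes, group) if g == 0) / group.count(0)
    fav_priv = sum(o for o, g in zip(outcomes, group) if g == 1) / group.count(1)
    return fav_unpriv / fav_priv

# 1 = favourable decision (e.g. an ad for a high-paying job was shown);
# group: 1 = privileged (here, men), 0 = unprivileged (here, women).
outcomes = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
group    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(f"disparate impact = {disparate_impact(outcomes, group):.2f}")
# Prints 0.33, well below the 0.8 rule of thumb, so this toy ad-serving
# pattern would be flagged as potentially biased.
```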
Furthermore, companies must employ multicultural, multiracial teams of men and women who can build systems that recognize that, for example, women are statistically less likely to appear as successful computer programmers in the data, not because they are less capable but because they have been less encouraged to pursue this career. AI in recruitment processes should be working with us, NOT against us.
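One concrete correction such a team could apply is ‘reweighing’ (Kamiran & Calders, 2012), a standard pre-processing technique that weights each training example so that group membership and outcome become statistically independent before the model ever trains. The sketch below is a hand-rolled illustration on invented data; libraries such as IBM’s AIF360 offer production implementations.

```python
# A sketch of the reweighing pre-processing technique: weight each example
# by (expected count under independence) / (observed count). Data invented.
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    p_group = Counter(groups)              # counts per group
    p_label = Counter(labels)              # counts per outcome
    p_joint = Counter(zip(groups, labels)) # counts per (group, outcome) pair
    return [
        (p_group[g] * p_label[y] / n) / p_joint[(g, y)]
        for g, y in zip(groups, labels)
    ]

groups = ["f", "f", "f", "m", "m", "m", "m", "m"]
labels = [0, 0, 1, 1, 1, 1, 0, 1]          # 1 = hired in the historical data
for g, y, w in zip(groups, labels, reweighing_weights(groups, labels)):
    print(g, y, round(w, 2))
# Under-represented combinations (e.g. hired women, weight 1.88) are
# up-weighted, so a model trained with these sample weights no longer
# inherits the historical imbalance.
```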
AI is entirely within our control to manage and develop, because it only learns from the principles, values and conditions we give it. These must therefore be diverse enough to ensure AI becomes the open-minded citizen we want it to be. Companies such as Glass AI have already used machine learning and computational linguistics to track the extent of gender bias in the UK.
Here at LUCA, we provide our clients with AI-powered solutions for advertising and marketing campaigns. The importance of AI across sectors and countries is vast; it is a true tech revolution, but like humans it comes with faults. Recognizing these faults and correcting them now means AI can truly be the champion in the tech ring for the next generation.
To stay up to date with LUCA, visit our Webpage, subscribe to LUCA Data Speaks and follow us on Twitter, LinkedIn or YouTube.