Bias in Machine Learning

Unfair and illegal bias in machine learning can have real-world, negative consequences

Bias in Machine Learning: What is it and how can it be avoided?

Machine learning is “biased” by design: it's built to spot patterns. However, this becomes a problem when unwanted patterns are encoded into the training data itself. There are three kinds of bias: inherent bias, unfair bias, and illegal bias.

Inherent bias refers to patterns in the data that the machine learning system is designed to uncover, whether it’s customer demographics that are likely to convert or patterns in suspicious financial transactions. Unfair bias relates to legal but unethical behavior baked into the model, such as a marketing model that unfairly prioritizes men or an editorial filter that prioritizes posts from those with similar views over those with opposing views. Illegal bias refers to models that break laws, such as recruiting algorithms that would discriminate against women.

This article will explore what bias is and provide examples of why machine learning algorithms are inherently biased, both in theory and in practice. We will also examine the impact that this bias can have on machine learning predictions, discuss ways to mitigate this effect, and finally offer some recommendations on how companies can take proactive steps to keep their machine learning projects as free of unfair and illegal bias as possible.

Where ML bias has had negative consequences

Unfair and illegal bias in machine learning can have real-world, negative consequences. For instance, if you’re creating an employee attrition or loan approval prediction model, you don’t want to use demographic information like age or gender, which may result in ageist or sexist models that run afoul of anti-discrimination laws.
A couple of famous examples include Amazon's AI-powered recruitment engine, which was shown to be sexist, and Microsoft's Tay chatbot, which went off the rails and became racist. That said, there are also more broadly used ML models with problematic bias.

Lending

The mortgage industry is notorious for lending bias, which has been documented and analyzed for decades, including in a Northwestern meta-analysis of discrimination studies dating back to the 1970s.

When it comes to the crunch—that is, when a loan is actually being closed—banks and other institutions are known to make their decisions with surprisingly little regard for the applicant’s ability to repay the loan. Instead, they often let human biases around race, age, or gender factor into the decision.

Using demographic information isn’t just often illegal—it can also backfire and lead to less accurate models. Nonetheless, as reported by the MIT-IBM Watson AI Lab, biased AI training data sets are still commonly used in the lending industry. 

Hiring

The US Equal Employment Opportunity Commission (EEOC) has already investigated instances of AI bias in recruiting tools. In 2017, for example, the EEOC found reasonable cause to believe that an employer violated anti-discrimination laws by limiting the targeting of a job advertisement to younger applicants. In other words, AI-based ad targeting can run afoul of anti-discrimination laws as well.

If a deep learning algorithm weights one characteristic, like age, too heavily, then this algorithmic bias may unintentionally disadvantage experienced applicants deemed past their prime hiring years. The legal implications of this example are significant but vary by jurisdiction—including how much responsibility the company bears when its algorithms run amok without being properly monitored—and will likely continue to evolve as artificial intelligence further permeates industries.

Laws against ML bias

As a result of concerns about AI bias, certain states have enacted specific regulations to protect employees against unfair and illegal AI. For instance, Illinois has a law called the Artificial Intelligence Video Interview Act, which requires employers to notify applicants when AI is used, obtain consent, explain how the AI works, and more.

Besides Illinois’ Artificial Intelligence Video Interview Act, New York is considering banning discriminatory AI. In particular, the proposed law would prohibit “automated employment decision tools” unless their creators have conducted extensive anti-bias audits.

At the federal level, the Algorithmic Accountability Act, proposed in 2019, would regulate private firms’ algorithms and task the Federal Trade Commission with creating AI regulations.

European Commission Rules

On an international level, the European Commission released proposed rules for artificial intelligence regulation, including strict penalties for violating its policies, which ban “social scoring” systems, limit facial recognition, and so on.

These rules are a broad and ambitious attempt to mitigate the gender, age, and racial bias risks of AI, but they won’t be implemented for several years, as they must pass an approval process involving the European Parliament and EU member states. In any case, your application of AI is likely to be unaffected.

The rules take a tiered approach to regulation based on the perceived risk of an AI system, which excludes virtually all AI systems implemented by SMEs, for instance. High-risk systems include those used to assign social credit scores or manipulate people’s behavior, and, of course, systems that can harm people physically or psychologically. Additional oversight is placed on biometric identification and on AI systems that control critical machines, like medical devices and autonomous vehicles.

These rules are largely meant to prevent the creation of overreaching systems like China’s social scoring and mass surveillance programs, and to prevent damage from technologies like self-driving cars being released before they’re ready. Even so, while these AI regulations are unlikely to apply to your use case, it’s still vital to follow established data privacy regulations like GDPR.

GDPR

The General Data Protection Regulation (GDPR) has now been in force for several years, modernizing the handling of user data. GDPR replaced data protection rules that were up to two decades old, while building on their underlying principles.

Personal data, which is what fuels many AI models, is at the heart of GDPR. Personal data is any data that allows a person to be identified, which may include pseudonymized data. GDPR includes the principles of lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality (security); and accountability. 

The principle that is new under GDPR is accountability, which involves documenting how personal data is handled and the steps taken to ensure that only people who need access to particular information can obtain it. Adhering to GDPR, and to any other data regulations that apply in your jurisdiction, is crucial when building AI systems, which often rely on personal data.

Types of machine learning bias

We’ve looked at the ethical/legal frameworks for bias in machine learning. But what are the actual statistical underpinnings of these biases? Or rather, from a data science perspective, what exactly is going on behind the scenes to create these biases?

These are some common types of bias you’ll encounter when working with statistical and machine learning models:

  • Sample bias
  • Measurement bias
  • Observer bias

Sample bias occurs when the training data isn’t representative of the population the model will serve, so the model generalizes poorly to data it hasn’t been trained on. For example, if you have a system that predicts whether an applicant will pass the bar exam based on their LSAT score, but your training sample only included Harvard-educated students, the model would erroneously assume that new data points come from similar backgrounds and would be overconfident in their probability of success, producing false positives. This problem can be mitigated by increasing the number and scope of samples you use when training your model.
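
To make that concrete, below is a minimal sketch of one way to check for sample bias, assuming pandas is available; the data and column names are hypothetical. It compares the distribution of a key attribute in the training sample against the population the model is meant to serve.

```python
import pandas as pd

# Hypothetical data: the population of bar-exam takers the model will
# serve vs. the sample it was actually trained on. Column names and
# values are illustrative.
population = pd.DataFrame({
    "school": ["Harvard", "Harvard", "State U", "State U", "State U",
               "Night school", "Night school", "State U"],
    "lsat":   [172, 168, 158, 152, 150, 148, 145, 155],
})
training_sample = pd.DataFrame({
    "school": ["Harvard", "Harvard", "Harvard", "Harvard", "State U"],
    "lsat":   [174, 171, 169, 167, 161],
})

# Compare the share of each school type in the sample vs. the population.
pop_dist = population["school"].value_counts(normalize=True)
sample_dist = training_sample["school"].value_counts(normalize=True)
comparison = pd.concat(
    [pop_dist, sample_dist], axis=1, keys=["population", "sample"]
).fillna(0)
print(comparison)

# Large gaps (e.g., night-school applicants missing entirely from the
# sample) are a red flag that the model will generalize poorly.
```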

Measurement bias occurs when the data itself is measured incorrectly, typically in a systematic way. For example, suppose you’re building a model to predict whether a lead will convert, but your tracking pixel incorrectly measured the time each lead spent on your page. The model would learn incorrect patterns from the faulty data, causing it to break down in the real world.
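
As a rough illustration (with made-up numbers), one way to catch this kind of systematic error is to compare logged values against a trusted, independently measured audit sample:

```python
import numpy as np

# Hypothetical audit: time-on-page (seconds) as logged by a faulty
# tracking pixel vs. true values measured independently for a few leads.
logged = np.array([12.0, 45.0, 30.0,  8.0, 60.0])
actual = np.array([22.0, 55.0, 41.0, 18.0, 71.0])

errors = logged - actual
print("mean error:", errors.mean())   # consistently negative
print("error spread:", errors.std())  # small spread

# A mean error far from zero with little spread points to a systematic
# offset (measurement bias) rather than random noise that the model
# could average out.
```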

Observer bias occurs when there is an error in how observations are recorded or collected. If observations are collected by one person but then passed on to another person to record, errors can creep into the data set. This can also be an issue during data labeling. For example, if one data labeler labels images of traffic lights as “traffic signs” because they’re unaware of the “traffic light” label, that would introduce observer bias and reduce the accuracy of the model.
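
A common way to surface this kind of labeling inconsistency is to have two annotators label the same items and measure their agreement, for instance with Cohen’s kappa. The sketch below uses scikit-learn and hypothetical traffic-image labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels assigned to the same ten images by two annotators.
labeler_a = ["traffic light", "traffic sign", "traffic light", "car", "car",
             "traffic light", "traffic sign", "car", "traffic light", "traffic sign"]
labeler_b = ["traffic sign", "traffic sign", "traffic sign", "car", "car",
             "traffic sign", "traffic sign", "car", "traffic sign", "traffic sign"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(labeler_a, labeler_b)
print(f"Cohen's kappa: {kappa:.2f}")

# A low kappa suggests unclear labeling guidelines, or that one annotator
# is systematically using a different label scheme, both of which
# introduce observer bias into the training data.
```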

Strategies to reduce bias

There are some simple strategies you can employ to reduce bias in your machine learning models.

These strategies are not a panacea, but they can help you to reduce the impact of human bias on your models.

Disentangling bias from signal 

The first step is to understand what you are trying to achieve. For example, if you're looking at predicting employee attrition, then it might be appropriate to focus on the behavior of your employees (signal), rather than their demographics (bias).

To give another example, suppose you’re building a loan approval prediction model. Here, the goal is to predict whether the loan will be repaid, so it makes sense to train the model with financial signals, rather than ethnicity or gender.

A conversation with an Akkio AI bot that predicts loan eligibility.
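
Below is a minimal sketch of that guardrail in Python, assuming pandas and scikit-learn are available; the column names and data are hypothetical. The idea is simply to drop demographic fields before training so the model only ever sees financial signals.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical loan applications; column names and values are illustrative.
applications = pd.DataFrame({
    "income":         [52000, 87000, 34000, 61000, 45000, 98000],
    "debt_to_income": [0.35,  0.12,  0.55,  0.28,  0.40,  0.10],
    "credit_score":   [660,   740,   590,   700,   640,   780],
    "gender":         ["F", "M", "F", "M", "F", "M"],   # sensitive
    "ethnicity":      ["A", "B", "A", "C", "B", "C"],   # sensitive
    "repaid":         [1, 1, 0, 1, 0, 1],
})

SENSITIVE = ["gender", "ethnicity"]

# Train only on financial signals; demographic fields never reach the model.
X = applications.drop(columns=SENSITIVE + ["repaid"])
y = applications["repaid"]
model = LogisticRegression(max_iter=1000).fit(X, y)

print("features used:", list(X.columns))
```

Note that dropping sensitive columns is only a first step: other features can act as proxies for demographics, so the model’s predictions should still be audited for disparate outcomes, in line with the human oversight discussed below.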

Human oversight

The key takeaway here is that it's important to think about why you are building the model in the first place. If you don’t have a clear objective for your model, then it will be easy for your biases to creep back into your decision-making process.  

Another issue with many companies is that they tend to build their AI models in isolation from other parts of their business processes. The end result is that these models can become disconnected from reality and prone to bias when applied in practice. 

For example, it may be more appropriate for an account manager or salesperson who interacts directly with customers to use AI tools, rather than leaving them solely to a team removed from those customers.

In order for AI tools to be truly transformative, we must ensure they are used as part of a cohesive business process; otherwise, we run the risk of creating systems that are less effective at achieving our goals than traditional manual approaches would have been.

Transparency

A third strategy worth considering is transparency - ensuring everyone involved knows how decisions will impact people and understands any potential consequences (e.g., a loan being approved may cause the customer to spend more than they otherwise would).

This type of transparency can help ensure that machine learning models are not applied in isolation but instead work hand-in-hand with other parts of your business processes.

Using a transparent no-code AI system like Akkio to avoid bias

One way to help achieve adherence to data and AI regulations, in addition to building safe, trusted, and explainable AI, is to use transparent AI platforms with low barriers to entry.

Traditional means of building AI involve complex tools like Python, GCP, Jupyter Notebooks, and so on, so understanding those systems requires a high degree of technical expertise and often dedicated data scientists - not a problem for leaders in the big data field, but a significant expense for startups. In other words, business users, ethics researchers, and the like have a hard time truly understanding traditional AI systems, leading to a trade-off between efficacy and utility.

With tools like Akkio, anyone can build and deploy AI systems in minutes, clearly showing what data was used, where the system is deployed, the accuracy metrics of the model, and so on. By dramatically lowering the barriers to entry, Akkio makes AI systems far more transparent.

Closely related to building transparent AI is the idea of collaborative AI. With Akkio, it’s easy to invite employees at any skill level to collaborate on an AI flow, whereas traditional means of building AI were disjointed and siloed, involving files spread across devices. Now, a broad collaboration involving multiple stakeholders, which can help drive safe and ethical applications, is easier than ever before.

Conclusion

Machine learning is an exciting area of technology, and we are just scratching the surface of what it can achieve. While there are many benefits associated with applying machine learning models, we should also be mindful of the potential for bias in these systems when building them.

The good news is that if we understand how to build AI models with human oversight, then we can reduce bias in our decision-making processes. This will help ensure that AI tools have a positive impact on businesses - both now and into the future.

Ultimately, bias in machine learning models is a human problem. We have to be aware of the challenges we face, and ensure that we are applying machine learning in an ethical and transparent way. Using transparent, collaborative AI platforms helps with this. Sign up for a free trial of Akkio to see how it’s done.
