AI models can exhibit bias and discrimination, and we have a responsibility to address and resolve these issues.

The self-perpetuating bias of AI models poses a significant threat, with severe consequences for health, employment opportunities, access to information, and democracy. The question is how we can correct for data that reflects deeply ingrained societal biases. Algorithmic accountability is currently largely unregulated, but some organizations and governments are taking steps to address this issue.

Artificial intelligence has revolutionized various fields, from wealth management to population health to touchless retail operations. However, its benefits come with downsides.

When asked about the threat that AI poses, different individuals have different responses. Some believe that robots are taking over jobs, while others feel that Big Brother is always watching. While these are legitimate concerns, the most significant challenge that AI poses is the self-perpetuating bias present in its algorithms.

As AI technology advances at an exponential rate, it risks reinforcing existing societal inequalities and causing irreversible harm if left unchecked. AI systems are not created in isolation; they are designed, built, and deployed by people, meaning that their behaviors reflect the best and the worst of human characteristics.

Consequently, AI models inherit deeply ingrained biases from the data they are trained on and the algorithms that process it.

Continuing to propagate human biases

Businesses increasingly use data-driven models to make decisions that shape individuals’ lives and prospects, such as loan approvals, job shortlisting, and parole recommendations. This is a serious concern: AI models can deepen discrimination and widen inequalities by perpetuating human biases while making them harder to identify.

To address this issue, it is crucial to comprehend how computerized decision-making can lead to bias and establish governance mechanisms to identify and prevent discrimination. Algorithms quickly absorb prejudices associated with attributes and identifiers such as gender, race, or disability status, regardless of whether sensitive information is intentionally collected or not.

This information is usually embedded in vast datasets, and AI models learn those correlations when trained on historical data. When an algorithm is fed social-category information, or features that correlate with it, without explicit safeguards against discrimination, bias seeps in.
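
To make the mechanism concrete, here is a minimal sketch with synthetic data and hypothetical feature names (group, proxy, skill), using scikit-learn. It is not any particular vendor's system; it simply shows how a model trained on biased historical decisions can reproduce that bias even when the sensitive attribute itself is withheld, because a correlated proxy feature carries the same signal into the predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)           # sensitive attribute (0/1), never shown to the model
proxy = group + rng.normal(0, 0.3, n)   # hypothetical feature correlated with group (e.g. postcode-like)
skill = rng.normal(0, 1, n)             # a legitimately job-relevant feature

# Historical labels encode human bias: group 1 was approved less often at equal skill.
approved = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the sensitive attribute -- only the skill score and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Predicted approval rates still differ by group: the bias leaked in via the proxy.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Dropping the group column changes nothing important here: the proxy stands in for it, and the model faithfully reproduces the historical disparity.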

Discrimination for hire

One prominent example of the perils of algorithmic bias can be seen in the use of automated tools to facilitate recruitment screening. 

Although the intention is good, machine learning algorithms frequently amplify systemic biases in ways that would be unacceptable if humans were making decisions.

For instance, Amazon discovered in 2018 that its AI hiring software downgraded résumés containing the word “women’s” and candidates from all-women’s colleges, because the company had a limited track record of hiring female engineers and computer scientists. Similarly, a study in the same year found that Microsoft’s facial analysis software assigned more negative emotions to Black men than to their white counterparts. As a result of such biases, automated systems unjustifiably deny opportunities to people from historically disadvantaged groups.

The justice system is not exempt from algorithmic racism either: the COMPAS recidivism algorithm was found to discriminate against people of color. Biases have also been observed in healthcare, where an algorithm used to predict which patients need extra medical care favored white patients over Black patients because it relied on healthcare costs as a proxy for need.

Some statistical skews are deliberate and necessary for accuracy, such as developing breast cancer algorithms using almost exclusively female patient data, but there is a moral responsibility to ensure that AI is fair. There is also a business imperative: as socially aware consumers learn more about the implications of AI bias, adoption of demonstrably biased technology is likely to decline.

Acceptable judgment

What steps can we take to address bias in AI models? It’s not enough to simply remove sensitive features like gender and race, because models can still internalize stereotypes through correlated proxy features. Nor is model transparency alone a solution: interpretability and explainability help, but they won’t eliminate bias by themselves.
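
One practical step, sketched below, is to audit outcomes directly rather than relying on transparency alone. The snippet assumes you have binary predictions and group membership for a held-out audit set (for example, the arrays produced in the earlier sketch) and computes two common group fairness measures: the demographic parity difference and the disparate impact ratio, the basis of the “four-fifths rule” used in US employment guidance. The function name and toy arrays are illustrative, not part of any standard library.

```python
import numpy as np

def group_fairness_report(pred: np.ndarray, group: np.ndarray) -> None:
    """Compare selection rates across groups for a binary decision."""
    rates = {int(g): float(pred[group == g].mean()) for g in np.unique(group)}
    lo, hi = min(rates.values()), max(rates.values())
    print("selection rate per group:", {g: round(r, 3) for g, r in rates.items()})
    print(f"demographic parity difference: {hi - lo:.3f}")  # 0.0 means equal selection rates
    print(f"disparate impact ratio:        {lo / hi:.3f}")  # below 0.8 is a common red flag

# Toy audit: predictions for eight candidates, four from each group.
pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
group_fairness_report(pred, group)
```

Checks like this do not fix a biased model on their own, but they turn a vague worry into a measurable quantity that governance processes can track and act on.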

Actions towards establishing standards

I am worried about the absence of a comprehensive, internationally enforceable standard or guidance to ensure that AI is used in a safe, fair, robust and equitable way.

Despite the existence of standards in different countries, algorithms operate globally, and a unified approach is required. To stop bias in AI, we need global leadership that can give policymakers and stakeholders the necessary tools and a broader perspective. The Singapore Model AI Governance Framework, the proposed Algorithmic Accountability Act in the US, and the General Data Protection Regulation in the EU are some examples of existing efforts.

However, we still have a long way to go. The digital landscape is expanding rapidly, and we are facing challenges in data collection, model inequalities, privacy concerns, and shifting norms in AI governance.

Nonetheless, I remain optimistic that we can overcome these challenges and make AI work for the betterment of humanity.

Thank you for reading. For continued insights and in-depth discussions, please follow our blogs at Ezeiatech.
