
How to Mitigate Negative Algorithmic Biases in Machine Learning – Your Weekend Long Reads

By Guest Writer on February 2, 2019


Machine learning models have shown over the past few years that they can exhibit human biases such as racism and sexism, for example by misidentifying black people as gorillas (Barr, 2015) or by perpetuating gender income inequality through targeted ad suggestions (Datta et al., 2015).

Algorithmic bias, however, is not inherently problematic.

  • A model that discriminates between those who are creditworthy and those who are not is desirable.
  • An algorithm that discriminates negatively against people of colour when attempting to determine rates of criminal recidivism is not (Kirkpatrick, 2016).

Given the potential harm machine learning can cause, how can South African organisations mitigate against problematic algorithmic bias in their data and models?

Three Algorithmic Biases in Data and Models

This essay will use the taxonomy of algorithmic bias created by Danks and London (2017) to differentiate between the various types of algorithmic bias and give examples of how problematic bias might perpetuate immoral discrimination within a South African context.

Specifically, it will examine training data bias, algorithmic focus bias and transfer context bias.

It will then show that in order for South African organisations that use machine learning to mitigate against this bias in their data and models, they need to implement measures that broadly fall under the following steps:

  • Increase scrutiny of data and models through auditing and increasing public accessibility to models.
  • Fill relevant gaps in available data and re-allocate resources.

1. Training Data Algorithmic Bias

The most intuitive bias is training data bias: if biased data are used, the resulting model reflects that bias. Underrepresentation of certain groups in the data is particularly dangerous in perpetuating societal bias.

For example, while South Africa has eleven official languages, English is the only one that offers data of sufficient quantity and quality, owing to historical conditions in South Africa and global technology priorities. The written history of English stretches back centuries; a comparable wealth of historical data for the other South African languages does not exist.

Natural Language Processing (NLP), a sub-discipline of machine learning, seeks to interpret, manipulate and potentially generate human language. In order to build models that might one day be used to triage the urgency of messages, one must first recognise the language being used.

NLP relies on what is known as a corpus, a large body of words and texts that can be used to train models to interpret meaning or emotion, or even to transcribe speech. For the English language this is not a problem: there exists an enormous quantity of open-source data sets (“Corpus of Contemporary American English (COCA),” 2019) and books (“Project Gutenberg,” 2019).
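To make the corpus idea concrete, here is a minimal sketch in Python of how a labelled corpus might be used to train a simple language-identification model, the kind of component needed before triaging messages. It assumes scikit-learn is installed; the placeholder sentences and language codes are illustrative and not drawn from any real corpus.

```python
# Minimal sketch: corpus-based language identification.
# The sentences below are placeholders, not real corpus entries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

corpus = [
    ("please confirm your clinic appointment", "eng"),
    ("<labelled isiXhosa sentence from the corpus>", "xho"),
    ("<labelled isiZulu sentence from the corpus>", "zul"),
    # ... thousands more labelled sentences are needed in practice
]
texts, labels = zip(*corpus)

# Character n-grams are a common, simple signal for telling languages apart.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["an incoming message whose language we want to detect"]))
```

In practice such a model needs many thousands of labelled sentences per language, which is exactly where the corpus gap described below begins to bite.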

For a number of South African languages, which have primarily oral traditions, there is no comparably large corpus of materials from which a model might learn.

One set of publicly accessible, consistent corpora covering all ten South African languages other than English was put together by the Centre for Text Technology (CTexT) at North-West University (Eiselen and Puttkammer, 2014). On its own it is inadequate, particularly as it consists of formal rather than conversational texts. This disadvantages those who do not speak English or who opt to receive messages in their native tongue.

2. Algorithmic Focus Bias

Algorithmic focus bias refers to the deliberate selection or rejection of certain types of input data. For example, it is illegal to discriminate against someone based on their race, so race may be excluded as an input; in this case, one introduces statistical algorithmic bias in an attempt to prevent societal discrimination.

Simply removing certain features does not always solve the problem, however. Parry and Eeden (2015) show that South Africa is racially segregated along geographical divides, which means one would need to account carefully for how location or address might act as a proxy for race in a model.
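One way to test for such a proxy effect, sketched below, is to measure how well the supposedly neutral feature predicts the protected attribute; if location predicts race far better than chance, dropping the race column alone will not prevent racial discrimination. The file and column names (applicants.csv, suburb, race) are hypothetical, and this is an illustration rather than a complete fairness audit.

```python
# Sketch: testing whether 'suburb' acts as a proxy for the protected
# attribute 'race' in a hypothetical dataset. Names are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("applicants.csv")  # hypothetical file with 'suburb' and 'race'

proxy_check = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    LogisticRegression(max_iter=1000),
)

# If suburb alone predicts race far better than chance, it is a proxy:
# removing 'race' from the credit model will not prevent racial discrimination.
scores = cross_val_score(proxy_check, df[["suburb"]], df["race"], cv=5)
print(f"Mean accuracy predicting race from suburb: {scores.mean():.2f}")
```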

3. Transfer Context Bias

Finally, one should consider transfer context bias. In this case, the problem lies less with the model itself and more with how it is deployed by those who wish to use it.

For example, suppose an autonomous system for allocating health care resources is trained on data generated within a specific context, such as urban hospitals. When that model is used within a different context, such as rural hospitals, it will not perform optimally.

The assumptions inherent in the design of such a system would not hold, and it would provide an inequitable service.
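A simple guard against transfer context bias is to evaluate the trained model separately on data from each deployment context before rolling it out more widely. The sketch below assumes a hypothetical referrals data set with a context column marking urban and rural records; it is illustrative only.

```python
# Sketch: checking for transfer context bias by evaluating per context.
# File and column names ('context' marking 'urban'/'rural') are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

records = pd.read_csv("referrals.csv")
train = records[records["context"] == "urban"]   # data the model was built on
features = ["age", "distance_to_clinic_km", "prior_visits"]

model = RandomForestClassifier(random_state=0)
model.fit(train[features], train["admitted"])

# Compare performance across contexts before deploying more widely.
for context, group in records.groupby("context"):
    acc = accuracy_score(group["admitted"], model.predict(group[features]))
    print(f"{context}: accuracy {acc:.2f}")
```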

How to Reduce Algorithmic Bias?

Increasing scrutiny of the data, and of the processes used to generate models from that data, can help to reduce algorithmic bias. This can be accomplished in three ways, with varying degrees of efficacy and expense.

A. Hiring external firms as data and model auditors

Any internal organisational bias would be more likely to be exposed by an impartial third party, and auditors would be able to point out where processes had failed and suggest improvements to current and future projects (Hempel, 2018).

The number of organisations that specifically deal with these issues is limited, but this is due to the relative newness of the industry. Increasing demand should incentivize more organisations to offer algorithmic scrutiny as a service.

B. Making algorithms publicly accessible for scrutiny

Allowing external parties to feed in their own data sets and examine the results would create an accountability mechanism and enable the reporting of cases where a model does not behave fairly.

This was demonstrated when ProPublica was able to use an available algorithm that predicted recidivism rates to determine that it unfairly favoured white inmates (Larson et al., 2016).

Furthermore, there is a growing body of work that seeks to investigate bias in ‘black box’ algorithms, meaning that external parties would not need access to the internal workings of an algorithm in order to detect bias.

Tan et al. (2018) offer a method of auditing models by attempting to emulate their results with a transparent model, and Adler et al. (2016) provide a method for interrogating features that might indirectly influence outcomes, even when not included in the model.
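The surrogate-model idea can be illustrated in a few lines: query the opaque model through whatever prediction interface it exposes, fit a transparent model to its outputs, and inspect what the mimic relies on. The sketch below is only a rough illustration of that approach, not the exact method of Tan et al. (2018); the black_box_predict function, the probe data and the feature names are all assumed.

```python
# Sketch of a surrogate ("mimic") audit: fit a transparent model to the
# predictions of a black-box model, then inspect what drives the mimic.
# 'black_box_predict' stands in for any prediction API we can query.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

def black_box_predict(X: pd.DataFrame) -> np.ndarray:
    """Placeholder for the opaque model under audit (e.g. a remote API)."""
    return (X["prior_offences"] > 2).astype(int).to_numpy()

# Probe data assembled by the auditor (here: random, for illustration only).
rng = np.random.default_rng(0)
probe = pd.DataFrame({
    "age": rng.integers(18, 70, 1000),
    "prior_offences": rng.integers(0, 10, 1000),
    "postal_code": rng.integers(1000, 9999, 1000),
})

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(probe, black_box_predict(probe))

# The tree's structure shows which features the black box appears to rely on.
print(export_text(surrogate, feature_names=list(probe.columns)))
```

If the surrogate leans heavily on a feature such as postal code, that is a signal worth reporting and investigating further.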

It could be argued that this is impractical, as it would essentially provide free use of the company’s intellectual property, but use can be limited, either by rate-limiting access to the model’s API or by requiring an auditing agreement with users, similar to the terms of a free trial.

Even so, putting something in the public domain does not mean that it will actually be scrutinized, in the same way that open-source software is not guaranteed to be interrogated by external developers.

Regardless, it still provides a method of accountability for those who would like to know why they received a particular result. This is in contrast to closed systems without any method of investigation, such as a flawed teacher performance evaluator which left its subjects with no recourse to appeal or follow up (O’Neil, 2016).

C. Making data sets available to the public

The benefit is that this would reduce the scarcity of representative data that contributes to the under- or misrepresentation of certain groups when models are built. It would also enable other organisations to use those data sets to build less biased models.

A direct benefit to the organisation would be the positive signal it sends to prospective employees: that this is a company that cares about algorithmic bias and is actively attempting to combat it.

It could be argued that sharing data violates the ethical stewardship of users’ data, and that this outweighs the responsibility to contribute to reducing negative bias.

Releasing data sets could actively harm the users to whom the data belongs, violate their trust (Hern, 2018) and damage the reputations of the organisations that share the data. While this is certainly a concern, methods already exist for anonymizing data and for sharing data sets ethically.
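As a rough illustration of what pre-release de-identification might involve, the sketch below drops direct identifiers, pseudonymizes user IDs and coarsens quasi-identifiers in a hypothetical data set. Real anonymization requires much more than this, including formal techniques such as k-anonymity or differential privacy and legal review; the file and column names are assumptions.

```python
# Sketch only: basic pre-release de-identification of a hypothetical data set.
# Real anonymization needs formal guarantees and legal review; this simply
# drops, pseudonymizes and coarsens a few columns.
import hashlib
import pandas as pd

df = pd.read_csv("messages.csv")  # hypothetical columns used below

# Drop direct identifiers entirely.
df = df.drop(columns=["name", "phone_number"])

# Replace user IDs with salted, irreversible pseudonyms.
SALT = "rotate-and-store-this-secret-separately"
df["user_id"] = df["user_id"].apply(
    lambda uid: hashlib.sha256(f"{SALT}{uid}".encode()).hexdigest()[:16]
)

# Coarsen quasi-identifiers that could re-identify people in combination.
df["age_band"] = pd.cut(df["age"], bins=[0, 18, 30, 45, 60, 120])
df = df.drop(columns=["age", "street_address"])

df.to_csv("messages_deidentified.csv", index=False)
```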

Nonetheless, this is challenging to do as it requires collaboration across legal, data and security spheres. Poor regulatory frameworks in South Africa make it difficult to assess what can be shared, and there is very little upside for an organisation to take on the risk.

This means that while collectively organisations would benefit from sharing of data sets, individual organisations do not currently have the benefit of structures that would allow them to do so.

Funding for Mitigating Algorithmic Bias

The above points stress the need to commit resources specifically to detecting and mitigating algorithmic bias. Even where business models rely on funders or clients, contractual provisions must specifically set out deliverables and allocations for algorithm development, testing and operational costs.

Unless funds are specifically allocated to check for bias, organisations may simply not have the resources to scrutinize data and models, reflect on how design choices might introduce bias, or develop the diverse teams and testers who are more likely to flush out systematic problems.

Checklists and frameworks that attempt to counter bias through examining processes and individuals involved in projects are available for free (Loukides et al., 2018), but allocating time to implement them is not.

It could be argued that committing additional resources to fixing algorithmic bias diverts them from other necessary elements and increases the likelihood of project failure, particularly in the technology sector, where one is encouraged to deliver value as quickly as possible in the form of a minimum viable product.

One could counter this by defining success over the longer term, where creating robust and ‘fair’ models lessens the chance of failing to maximize impact or of failing to serve underrepresented groups.

We Need to Reduce Negative Algorithmic Biases

Combatting problematic algorithmic bias is not a simple task with direct solutions. This essay has made the point that bias is not inherently negative, distinguished amongst the various types of algorithmic discrimination and given examples of how negative bias might manifest itself in a South African context.

It has critically evaluated practical steps and examples that organisations can take to mitigate problematic algorithmic bias and given examples of negative outcomes when they have not been followed.

It has shown that scrutiny by external parties is vital to creating processes and organisations that reduce the negative consequences of these models; that while sharing data is in the best interests of society, it remains challenging for individual organisations; and, finally, that these steps require specific resource allocations in order to be carried out.

Further research is needed into how society can audit algorithms for problematic bias, and it is also worth discussing in which cases the benefits of using machine learning outweigh the potential harms of its use.

By Nathan Begbie, Data Scientist at Praekelt.org, and originally published as “Problematic algorithmic bias; on manifestation in a South African context and methods for mitigation”.

Bibliography

Adler, P., Falk, C., Friedler, S.A., Rybeck, G., Scheidegger, C., Smith, B., Venkatasubramanian, S., 2016. Auditing Black-box Models for Indirect Influence. ArXiv160207043 Cs Stat.

Barr, A., 2015. Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms [WWW Document]. Wall Str. J. URL https://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/ (accessed 1.23.19).

Corpus of Contemporary American English (COCA) [WWW Document], 2019. URL https://corpus.byu.edu/coca/ (accessed 1.22.19).

Datta, Amit, Tschantz, M.C., Datta, Anupam, 2015. Automated Experiments on Ad Privacy Settings. Proc. Priv. Enhancing Technol. 2015, 92–112. https://doi.org/10.1515/popets-2015-0007

Eiselen, R., Puttkammer, M.J., 2014. Developing Text Resources for Ten South African Languages., in: LREC. pp. 3698–3703.

Hempel, J., 2018. Want to Prove Your Business Is Fair? Audit Your Algorithm. Wired.

Hern, A., 2018. Fitness tracking app Strava gives away location of secret US army bases. The Guardian.

Kirkpatrick, K., 2016. Battling algorithmic bias: how do we ensure algorithms treat us fairly? Commun. ACM 59, 16–17. https://doi.org/10.1145/2983270

Larson, J., Mattu, S., Kirchner, L., Angwin, J., 2016. How We Analyzed the COMPAS Recidivism Algorithm [WWW Document]. ProPublica. URL https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (accessed 1.23.19).

Loukides, M., Mason, H., Patil, D., 2018. Ethics and Data Science, 1st ed. O’Reilly.

O’Neil, C., 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 1st ed. Crown, New York.

Parry, K., Eeden, A. van, 2015. Measuring racial residential segregation at different geographic scales in Cape Town and Johannesburg. South Afr. Geogr. J. 97, 31–49. https://doi.org/10.1080/03736245.2014.924868

Project Gutenberg [WWW Document], 2019. Proj. Gutenberg. URL http://www.gutenberg.org/ (accessed 1.22.19).

Tan, S., Caruana, R., Hooker, G., Lou, Y., 2018. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. Proc. 2018 AAAI/ACM Conf. on AI, Ethics, and Society (AIES ’18), 303–310. https://doi.org/10.1145/3278721.3278725


