Daish, Peter

Start date:
October 2020
Research Topic:
User Centred Design Approach to Designing and Evaluating Fair Unbiased AI-Driven Models in Socio-Technical Decision Support Systems
Research pathway:
Research Supervisor:
Dr Matthew Roach
Supervising school:
Department of Computer Science
Primary funding source:
ESRC DTP Studentship

Following a call in the UK government's "Industrial Strategy" to put the UK at the forefront of the AI and data revolution, there has been a significant response, with initiatives to innovate, develop and integrate such solutions into social decision-making systems. However, capitalising on the potential of AI must be tempered with caution. Cases from Microsoft and Google have shown the potential for harm: Microsoft's AI Twitter chatbot, Tay, began to tweet racist messages, and Google's automatic image classification software misclassified photographs of people based on their race, because insufficient regard was given to biases hidden within the datasets used to train the underlying AI algorithms. Further integration of AI technology within society, for example in making judicial decisions, must therefore be designed with minimal bias so as to combat social injustice.

This project will adopt a mixed-methods approach to reducing the potential for harm caused by bias in AI algorithms. It will engage stakeholders, from domain experts to end users, in a user-centred design process to establish a workable definition of, and acceptance criteria for, the appropriate adoption of an unbiased algorithm. Through focus groups, prototyping and testing, users will debate and evaluate the effectiveness of removing bias from AI algorithms in a given social context. Example contexts will include those where widely accepted historical data contains bias, e.g. social care, finance and legal decisions.

Digital prototyping will focus on the use of a popular recent computational method: Generative Adversarial Nets (GANs). We aim to use GANs to minimise bias in the models and datasets used to train decision-making algorithms, and so address social injustice in the adoption of AI. A GAN is an algorithmic architecture, typically built from deep learning components, that generates synthetic instances of data that can pass for real data. This is achieved by pitting two learning algorithms against each other (hence "adversarial") so that each drives the other to perform better: the generative algorithm aims to fool a classifying (discriminator) algorithm by producing synthetic examples that the classifier cannot distinguish from real data.
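
To make the adversarial setup concrete, the sketch below trains a toy GAN on a simple two-dimensional distribution. It is a minimal illustration, assuming PyTorch as the framework; the data distribution, network sizes and hyperparameters are illustrative choices rather than the project's actual design.

```python
# Minimal GAN sketch (illustrative assumptions throughout: PyTorch,
# a toy Gaussian "real" distribution, small fully connected networks).
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data: a 2-D Gaussian the generator must learn to imitate.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

latent_dim = 8
# Generator: maps random noise to synthetic data points.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator (the "classifying" algorithm): scores real vs. synthetic.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: learn to tell real samples from generated ones.
    real = real_batch(64)
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, G(noise) yields synthetic points resembling the real data.
print(G(torch.randn(5, latent_dim)))
```

The same adversarial pattern scales to the tabular, socially sensitive datasets the project targets, with the generator producing synthetic records rather than two-dimensional points.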

The proposal here is to investigate how GANs can be used to create unbiased datasets on which algorithms can be trained, yielding models that are largely free from bias for decision-making support in socio-technical systems.
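
As one illustration of what an acceptance criterion for such models might look like, the sketch below computes a demographic parity gap: the difference in positive-decision rates between two groups. The function and data are assumptions for illustration only; the project's actual criteria will be established with stakeholders through the user-centred process described above.

```python
# Hypothetical fairness check (illustrative, not the project's agreed
# criterion): demographic parity gap of a model's binary decisions.
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between two groups.

    decisions: binary model outputs (0/1); group: binary protected attribute.
    A gap near 0 suggests the training data (e.g. GAN-generated synthetic
    records) did not reproduce the historical bias; a large gap flags it.
    """
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Example: a decision pattern that favours group 0 yields a large gap.
print(demographic_parity_gap([1, 1, 1, 0, 0, 0, 0, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1]))  # 0.75
```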