Racist AI behaviour is not a new problem

Professor Margaret Boden, an AI and cognitive science researcher, took the time to speak to me in 2010 about computers, AI, morality and the future. One of the stories she told me comes back to me every now and then, most recently prompted by Microsoft's failure to anticipate what would happen when it released its chat bot, Tay, into the world, only to watch it turn racist in less than 24 hours.

In the early ’80s St George’s Hospital Medical School in London decided to automate part of its admissions workflow. The existing process was time-consuming and expensive. Before applicants were invited to interview, their initial suitability was assessed based on their grades, the classes they had taken and other criteria. It was this stage that was to be automated.

A statistical system for weighting those different criteria was devised. The program would select the highest-ranking candidates to invite for interview. After the initial design was complete, the program was trained to adjust the weighting it gave to each criterion using previous years’ admissions data. In other words, it was taught to reproduce the decisions human staff had made when selecting potential students.
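The failure mode is easy to reproduce. Below is a minimal sketch, not St George’s actual system: the features, data and training method are all invented for illustration. A simple linear scorer is trained on synthetic "historical" decisions in which human selectors penalised a proxy feature (think postcode) regardless of grades; the trained model absorbs that penalty as a negative weight.

```python
import random

random.seed(0)

def make_history(n=1000):
    """Synthetic past admissions decisions made by biased human selectors."""
    data = []
    for _ in range(n):
        grades = random.uniform(0, 1)
        proxy = random.randint(0, 1)  # 1 = postcode the selectors disfavoured
        # The humans penalise the proxy group regardless of grades.
        selected = 1 if (grades - 0.3 * proxy) > 0.5 else 0
        data.append((grades, proxy, selected))
    return data

def train_perceptron(data, epochs=50, lr=0.1):
    """Fit a linear scorer to replicate the historical decisions."""
    w_grades, w_proxy, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for grades, proxy, label in data:
            pred = 1 if (w_grades * grades + w_proxy * proxy + bias) > 0 else 0
            err = label - pred
            w_grades += lr * err * grades
            w_proxy += lr * err * proxy
            bias += lr * err
    return w_grades, w_proxy, bias

history = make_history()
w_grades, w_proxy, bias = train_perceptron(history)
print(f"grade weight: {w_grades:.2f}, proxy weight: {w_proxy:.2f}")
```

Nothing in the training loop mentions race, gender or class; the model simply learns whatever weights best reproduce the labels, and since the labels encode a penalty on the proxy feature, the learned proxy weight comes out negative. That is the whole mechanism behind the St George’s story.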

The program was used from 1982 until 1986, when two members of staff complained to the Commission for Racial Equality. The program had learnt to discriminate against non-white and female applicants. Those whose postcodes betrayed a working-class background were also given lower priority in the selection process.

The story making the news: The Age, 26th February 1988.

The quote from the spokesperson for the Commission for Racial Equality at the time was damning:

St George’s computer program merely replicated the discrimination that was already being practised by the selectors, and there is no reason to believe that the selectors at St George’s were any more discriminatory than selectors elsewhere.

The Age, 26th February 1988, (emphasis mine)

That was in the ’80s. Plus ça change.