As detailed there, the recent revolution in AI has been driven primarily by the development of new machine learning techniques (plus a massive increase in the availability of data for them to act on). Unlike traditional approaches to AI, which required every rule governing an algorithm's operation to be specified up front, machine learning allows you to specify an intended goal and then let the algorithm develop its own approach: working through vast quantities of data and progressively refining its behaviour to maximise performance against that goal.
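As a rough illustration of that shift (a minimal, toy sketch rather than how any production system works), the snippet below never encodes a rule for turning inputs into outputs. It only defines a goal, minimising prediction error on example data, and lets the program adjust its own parameters until it performs well against that goal.

```python
import numpy as np

# Toy illustration of the machine-learning paradigm: no hand-written rules,
# only a goal (minimise squared error) and example data to learn from.

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)              # example inputs
y = 3.0 * x + 7.0 + rng.normal(0, 1, 200)     # observed outputs (with noise)

w, b = 0.0, 0.0        # the model starts out knowing nothing
learning_rate = 0.01

for step in range(2000):
    prediction = w * x + b
    error = prediction - y
    # Gradient of the squared-error goal with respect to each parameter
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Progressively refine the parameters to improve performance on the goal
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned rule: y ≈ {w:.2f} * x + {b:.2f}")   # approaches y ≈ 3x + 7
```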
However, this doesn’t mean that there is suddenly no role for humans. Many of these algorithms still require heavy supervision, particularly in their early stages, in order to know when they are getting things right or wrong. And whilst some algorithms may be able to learn in an unsupervised way, there are good reasons to be cautious about letting them do so unchecked. An increasing number of researchers and commentators have highlighted that algorithms trained on historical data sets containing strong statistical biases are likely to reproduce, and even amplify, those biases in their own operation. For example, algorithms used in the US justice system to inform bail decisions for defendants have been found to exhibit stark racial biases.
To ensure that machine learning algorithms develop properly, and to counteract the danger of algorithmic bias, there is going to be an important role for “algorithm trainers” who can spot the early signs of undesirable behaviour and intervene before it becomes entrenched. For charities that represent vulnerable individuals and groups who may find themselves on the receiving end of algorithmic bias, this is likely to be a vital part of their work, and a simple sketch of what such a check might look like follows below.
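One of the most basic early-warning checks an algorithm trainer or auditor can run is to compare how an algorithm's decisions fall across demographic groups. The sketch below is purely illustrative: the data is made up, the column names ("group", "refused") are hypothetical, and this is not the method used in any real justice-system audit. A large gap between groups is not proof of unfairness on its own, but it flags where to look more closely.

```python
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Compare how often an algorithm's decision falls on each group.

    A large gap between groups is an early warning sign worth investigating.
    """
    return df.groupby(group_col)[outcome_col].mean().sort_values()

# Illustrative, made-up data: 1 = the algorithm recommended refusing bail
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "refused": [0,   0,   1,   0,   1,   1,   0,   1],
})

print(outcome_rates_by_group(decisions, "group", "refused"))
# Group A is refused bail 25% of the time, group B 75% of the time:
# a disparity worth investigating before the system is relied upon.
```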