Making existing information more accessible
Perhaps the most prominent example of AI in a charity context so far is the use of chatbots. These are AI-powered, text-based conversational interfaces, which most of us will by now be familiar with through their widespread use in commercial customer service settings. A number of charities are already harnessing them to provide services of various kinds.
For example, Arthritis Research UK has partnered with IBM to develop a bespoke ‘virtual personal assistant’, built on the Watson platform, that can provide information and advice to people living with arthritis. Other charities are tapping into more “off-the-shelf” versions of the technology by piggy-backing on existing platforms: WaterAid, for instance, has developed a chatbot aimed at awareness-raising and engagement that can be accessed through Facebook.
There is a potential long-term financial incentive here: charities might be able to reduce costs significantly if they can provide services via chatbots 24 hours a day, 365 days a year, rather than having to employ large numbers of staff. However, as with any example of automation, this may prove controversial and raises questions about the responsibilities of organisations towards their human employees in the future.

Cost-cutting is not the only reason to consider chatbots, though: there are also potential advantages for service users and supporters. One is that advice can be tailored to the user’s stated needs. Another is that the time it takes to find relevant information is significantly reduced. For advice on issues such as acute mental health problems, where there is a premium on people being able to access help at crisis moments (which might well be in the middle of the night), this could offer a real advantage over services delivered by human operators.
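To make the idea of tailoring advice to a user’s stated needs a little more concrete, the sketch below shows a deliberately minimal keyword-based chatbot in Python. The intents, keywords and responses are invented for illustration; real charity chatbots built on platforms such as IBM Watson or Facebook Messenger rely on trained natural-language models rather than hand-written keyword lists.

```python
# Minimal sketch of a keyword-based chatbot, for illustration only.
# Real charity chatbots use trained natural-language models rather than
# keyword lists; the intents and responses below are invented examples.

INTENTS = {
    "pain_management": {
        "keywords": {"pain", "ache", "aches", "hurts", "sore"},
        "response": "Here is our guide to managing joint pain day to day...",
    },
    "exercise": {
        "keywords": {"exercise", "walking", "swimming", "activity"},
        "response": "Gentle, regular exercise can help. See our exercise advice...",
    },
}

FALLBACK = "Sorry, I didn't catch that. Could you tell me a bit more?"


def reply(message: str) -> str:
    """Return the response for the first intent whose keywords match."""
    words = set(message.lower().split())
    for intent in INTENTS.values():
        if words & intent["keywords"]:
            return intent["response"]
    return FALLBACK


if __name__ == "__main__":
    print(reply("My knees ache in the morning"))  # pain_management response
    print(reply("Is swimming a good idea?"))      # exercise response
```

Even this toy version illustrates the basic pattern: the user’s message is mapped to an intent, and the intent determines which piece of the charity’s existing guidance is served up, at any hour of the day.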
One step beyond site-specific, text-based chatbots are conversational assistants. Many of us will now be aware of these more generalised voice-operated interfaces as a result of their presence on our mobile phones or, increasingly, in our homes in the form of devices like the Amazon Echo and Google Home. One immediate possibility this offers is to make the internet (and, as a result, other technologies) much more accessible to certain groups. This is most obvious in the case of those with visual impairments. However, it could also benefit those who have no particular physical disability but are simply less comfortable with technology, because conversational interfaces mirror more closely the ways in which people are used to getting information offline.
Could we see an end to language barriers?
Another area in which some charities have been able to harness existing applications of AI is language translation. Many of us will have used Google Translate or another text-based translation tool at one point or another. Despite the obvious power of these tools, they have until now been seen as somewhat inaccurate and not a valid replacement for human translation services. But that may be about to change: we are starting to see new AI-based translation tools that are claimed not only to match human performance on some tasks but to outstrip it. It may only be a matter of time until AI translation becomes the standard.
The crucial thing is that this does not apply only to written translation: the development of powerful speech recognition and natural language processing algorithms has made it possible to achieve highly accurate voice translation in real time, and products offering this are already available commercially. This could be of huge benefit to charities that deliver services, in terms of ensuring those services are as accessible as possible.
For example, The Children’s Society reported in 2017 that it was using Microsoft’s Translator app to facilitate some of its interactions with refugee and migrant young people. In addition to reducing the cost and administrative burden of having to procure professional translators, charity workers who used the app also reported that in some cases it had made the interaction easier, as the young people were more comfortable talking about sensitive and difficult issues without an unfamiliar third party in the room.
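As a rough illustration of how such a voice translation pipeline fits together, the sketch below chains off-the-shelf speech recognition with a machine translation model in Python. The specific libraries used here (the SpeechRecognition package and a Hugging Face translation pipeline) are illustrative assumptions rather than the components of any particular commercial product or the tools used by The Children’s Society.

```python
# Minimal sketch of a two-stage voice translation pipeline:
#   1. speech recognition turns spoken Spanish into text
#   2. a machine translation model turns that text into English
# Library choices are illustrative only; commercial real-time translators
# use their own tightly integrated models.
#
# Requires: pip install SpeechRecognition pyaudio transformers sentencepiece

import speech_recognition as sr
from transformers import pipeline

recogniser = sr.Recognizer()
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

# Capture a short utterance from the default microphone.
with sr.Microphone() as source:
    print("Speak now (Spanish)...")
    audio = recogniser.listen(source)

# Stage 1: audio -> Spanish transcript (uses Google's free web speech API).
spanish_text = recogniser.recognize_google(audio, language="es-ES")

# Stage 2: Spanish transcript -> English translation.
english_text = translator(spanish_text)[0]["translation_text"]

print("Heard:     ", spanish_text)
print("Translated:", english_text)
```

In a genuinely real-time product these two stages run continuously on short chunks of audio, often with a third stage of speech synthesis so the translation is spoken back rather than displayed, but the underlying recognise-then-translate structure is the same.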
To see where things are likely to go next, it is worth noting that in late 2017 Google announced the launch of its new Pixel Buds: in-ear headphones that are able to use AI to live-translate 40 languages directly into the ear of the user. As devices like these become more widely available, and the technology becomes embedded in other products and services, it may well be that we will see an effective end to language barriers in the foreseeable future.