Impact on organisations and funding

AI is likely to have a profound effect on the way organisations, and even entire industries, operate. This will apply to civil society just as much as to any other sector, so there is the potential for significant disruption to the governance and business models of charities and non-profits, as well as to the means by which people engage with the causes they care about. In this section, we consider just a few of the ways in which this disruption might manifest.

Robotic Process Automation (RPA) & automated call centres

Perhaps the most straightforward disruption will come from automation. This has attracted widespread attention and concern recently because AI extends the potential for automation far beyond the traditional blue-collar, manual jobs that have long been under threat from industrial machinery and robots. The advent of machine learning and cognitive computing means that it is now possible (or soon will be) to automate even the kind of skilled or knowledge-based jobs that were long thought impervious to such a threat.

We will consider the wider social impact of mass automation, and the role that civil society will have to play in managing this transition, in the next section. Here, let us focus on the impact that automation will have on CSOs themselves as organisations. Two areas we can already identify as low-hanging fruit are repetitive, data-heavy processes and services that require telephone operators. In terms of the former, there is already an entire field of Robotic Process Automation (RPA) focused on using software “robots” to automate and improve clerical processes, and it is almost certain that CSOs could benefit from much of what is being developed within it. For instance, a 2017 blog post from Nesta’s Geoff Mulgan outlined ways in which the grant application and selection process used by philanthropic foundations might be streamlined and rationalised using AI.
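
To give a flavour of what these software “robots” do in practice, below is a minimal illustrative sketch (in Python) of the kind of eligibility triage an RPA tool might perform in a grants process. The criteria and field names are invented for illustration only, not drawn from any real foundation’s process.

```python
# Hypothetical rule-based triage of grant applications: the kind of
# repetitive, data-heavy clerical check that RPA tools automate.
# All criteria and field names below are invented for illustration.

def eligible(application: dict) -> tuple[bool, list[str]]:
    """Check an application against a foundation's (assumed) rules.

    Returns (is_eligible, reasons_for_rejection).
    """
    reasons = []
    if application.get("amount_requested", 0) > 50_000:
        reasons.append("exceeds maximum grant size")
    if application.get("region") not in {"UK", "Ireland"}:
        reasons.append("outside funding region")
    if not application.get("registered_charity", False):
        reasons.append("not a registered charity")
    return (not reasons, reasons)

app = {"amount_requested": 20_000, "region": "UK", "registered_charity": True}
ok, why = eligible(app)
print("forward to human reviewer" if ok else f"auto-decline: {why}")
```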

In terms of call centres and telephone operators, we considered in the previous section the use of AI-powered chatbots by nonprofits to deliver advice services in furtherance of their mission. But some of these organisations also operate quasi-commercial call centres focused on helping people access products and services, and as such could benefit from customer-service chatbots too.

Autonomous vehicles

Whilst there have been some much-publicised setbacks in their development recently, it seems certain that within the next decade we will see driverless cars and freight vehicles on our roads. The introduction of autonomous vehicles is likely to have wider societal implications. Many, for instance, have speculated that it will lead to a significant shift in our relationship with cars, as we move from a norm of owning our own vehicle to one of having access to a shared vehicle of some sort.

This could radically alter the nature of some CSOs’ operations. Organisations that currently run specialist services for those with mobility needs, or whose physical or mental impairment prevents them from driving or using public transport, could instead help those people access mainstream driverless services. Similarly, organisations with complex transport and logistics requirements (e.g. many humanitarian aid and international development NGOs) might be able to access those services without bearing the capital costs of owning and operating large fleets of vehicles.

RegTech

CSOs are subject to many different kinds of regulation by a wide range of agencies around the world. The adoption of AI by regulators, therefore, could have a significant impact, and we are already seeing examples of this as part of the wider ‘RegTech’ field. For example, the UK tax authority HMRC is trialling the use of software robots to check tax returns and chatbots for customer service. The possibilities go much further, however: for instance, machine learning could be applied to large volumes of tax data or companies’ financial data to identify patterns that could be used to develop ‘early-warning systems’. This could enable a shift towards more preventative regulation, where potential issues are identified early and dealt with before they come to pass, which would be both more effective and cheaper than traditional enforcement after the fact.
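
As an illustration of the ‘early-warning’ idea, the sketch below flags filings whose figures deviate sharply from the norm for comparable organisations. A simple statistical test stands in for the far richer ML models envisaged above, and all the data is invented.

```python
# Toy "early-warning system": flag organisations whose expense ratio is
# unusually far from the group norm. A plain statistical test stands in
# here for the ML models a real regulator might train; data is invented.
from statistics import mean, stdev

def flag_outliers(filings, threshold=2.0):
    """Return orgs whose expenses/income ratio is more than
    `threshold` standard deviations from the group mean."""
    ratios = [f["expenses"] / f["income"] for f in filings]
    mu, sigma = mean(ratios), stdev(ratios)
    return [f["org"] for f, r in zip(filings, ratios)
            if abs(r - mu) > threshold * sigma]

filings = [
    {"org": "Org A", "income": 100, "expenses": 60},
    {"org": "Org B", "income": 100, "expenses": 62},
    {"org": "Org C", "income": 200, "expenses": 130},
    {"org": "Org D", "income": 100, "expenses": 61},
    {"org": "Org E", "income": 100, "expenses": 63},
    {"org": "Org F", "income": 100, "expenses": 98},  # unusually high
]
print(flag_outliers(filings, threshold=1.5))  # -> ['Org F']
```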

The same principles could apply to charity regulation in the future. As with anything else to do with ML, this will put a huge emphasis on data. It will also raise many of the questions about fairness, transparency and accountability that we will come on to in section 4. 


Case study: HMRC deploys robots to check tax returns

  • The Times

HM Revenue & Customs (HMRC) wants to automate ten million tasks by the end of 2018, ranging from complex tax cases to customer service on Twitter.

Case study: Robo-Advisory in Wealth Management

  • Deloitte

As robots become an emerging trend in the classic field of Wealth Management, we look closely at the German Robo-Advisory market.

AI & Philanthropy

Giving to charity is (according to classical economics, at least) an inherently irrational act. However, there have always been those who have sought to remedy this perceived failing and to make philanthropy more rational, so that it becomes a better tool for redistribution within our society. AI could offer new ways of pursuing that goal, and could have a profound impact on the ways in which people are able to give to charity.

One way in which this impact could be felt is through the use of AI to turn philanthropy advice into a mass-market product. There are already numerous examples of financial services companies developing “robo-advisors” to give advice to customers. One of the key benefits claimed for this approach is that it makes such services more cost-effective, so they can be offered to a wider base of clients. If AI could be applied to automate philanthropy advice in the same way that it has been used to automate financial advice, it could make such advice a feasible mass-market product, which could have a massive impact on the ways in which people give.

There are various ways in which AI could be applied to offer philanthropy advice. One is to use the same sort of tailored recommendations, based on past behaviour or peer-group activity, that underpin the algorithms used by Facebook or Amazon to present you with new content or products (i.e. “if you liked X, why not try Y?”, or “your friends are all doing Z, why not join them?”). Facebook itself has enabled giving to charity via Facebook Messenger. And along similar lines, Salesforce has partnered with United Way in the US to add an advice function to its workplace giving platform based on its AI-powered “Einstein”.
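
To make the mechanics concrete, here is a minimal sketch of the peer-group recommendation logic described above. The data structures and names are invented; no real platform’s algorithm is this simple.

```python
# Toy "your friends are all doing Z, why not join them?" recommender.
# All data structures and names are invented for illustration.
from collections import Counter

def recommend_charities(donor, friends_of, donations_by, top_n=3):
    """Suggest the charities most popular among a donor's friends
    that the donor does not already support."""
    already_supported = donations_by.get(donor, set())
    peer_counts = Counter()
    for friend in friends_of.get(donor, set()):
        for charity in donations_by.get(friend, set()):
            if charity not in already_supported:
                peer_counts[charity] += 1
    return [charity for charity, _ in peer_counts.most_common(top_n)]

friends = {"alice": {"bob", "carol"}}
gifts = {"alice": {"Food Bank"},
         "bob": {"Food Bank", "Medical Research"},
         "carol": {"Medical Research", "Animal Shelter"}}
print(recommend_charities("alice", friends, gifts))
# -> ['Medical Research', 'Animal Shelter']
```

Even this toy version exhibits the bias discussed below: a charity that none of a donor’s peers already supports can never be recommended.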

The obvious appeal of this is, firstly, that it fits well with existing platforms; and secondly, that social cues are an important part of philanthropy, so harnessing peer-group effects is potentially a powerful way of getting people to give. However, there are also clear reasons to be wary. The main one is that algorithms based on prior behaviour or peer-group activity simply tailor information to fit existing biases. When it comes to charitable giving, this means they are likely to result in well-understood causes and well-known organisations being promoted at the expense of less well-known ones. This should be a serious source of concern for a sector where there are already worries about the balance between large and small organisations and the difficulty of fundraising for unpopular causes.

A more sophisticated way to offer philanthropy advice, and one that would go further in terms of the challenge of making philanthropy more rational, would be to apply ML to data on social and environmental needs (much of which is out there already, although probably sitting in siloes in the public and private sector) and to data on the social impact of CSOs and interventions. This would enable identification of where the most pressing needs were at any given time, as well as the most effective ways of addressing those needs through philanthropy, and thus allow a rational matching of supply and demand. We have previously coined the term “philgorithms” for algorithms of this kind.

AI advising charity donors

In the short term, even if philgorithms are developed, the most likely scenario is that human philanthropy advisers will play a role in mediating interactions with them. Donors would work with human experts to define the criteria for their donations (e.g. cause area, geography, organisational size), and those experts would then employ philgorithms to provide advice and recommendations about the most effective giving strategy.
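
As a sketch of how this might work, under heavy assumptions, the example below filters charities on a donor’s criteria and then ranks them by a naive need-times-effectiveness score. The data fields and weighting are invented; a real philgorithm would need far richer data and a defensible model of impact.

```python
# Hypothetical "philgorithm": filter on donor criteria, then rank by a
# naive need x effectiveness score. All fields and figures are invented.
from dataclasses import dataclass

@dataclass
class Charity:
    name: str
    cause: str
    region: str
    annual_income: float   # proxy for organisational size
    need_score: float      # 0-1: urgency of the need addressed (assumed data)
    effectiveness: float   # 0-1: evidence of impact per pound (assumed data)

def philgorithm(charities, cause, region, max_income, top_n=3):
    """Filter charities on the donor's criteria, rank by combined score."""
    matching = [c for c in charities
                if c.cause == cause
                and c.region == region
                and c.annual_income <= max_income]
    return sorted(matching,
                  key=lambda c: c.need_score * c.effectiveness,
                  reverse=True)[:top_n]

portfolio = [
    Charity("Shelter Now", "homelessness", "UK", 2e6, 0.9, 0.7),
    Charity("Warm Beds", "homelessness", "UK", 4e5, 0.9, 0.8),
    Charity("Big Homeless Org", "homelessness", "UK", 9e7, 0.9, 0.6),
]
for c in philgorithm(portfolio, "homelessness", "UK", max_income=5e6):
    print(c.name)
# -> Warm Beds, then Shelter Now (Big Homeless Org fails the size criterion)
```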
 
Some might argue that donors are unlikely to embrace this kind of approach on the basis that it is dehumanising. However, the success of the Effective Altruism movement and the growing focus on social impact measurement in many areas of philanthropy suggest that there is a market for data-driven advice. In any case, donors would be at liberty to ignore any advice they received, so the element of donor choice would remain. The key difference is that, having been given the evidence, they would have to make an active choice to ignore it, rather than simply not having the information in the first place.

In the longer term, if we are able to use AI to determine the most rational allocation of philanthropic resources, can we not take things one step further and simply automate the process of making donations on that basis, removing the element of human involvement altogether? The immediate response of many is likely to be that this is absurd, as people are not going to outsource their charitable giving to machines. However, there are two reasons to think that in the medium term the idea might not be that crazy after all.

Firstly, we are all going to become much more used to relying on AI to make decisions for us and to offer us recommendations. We are already seeing this in our reliance on the algorithms of platforms like Spotify and Netflix to find new content, and it is likely to seep into many other areas of our lives. Even more profoundly, the development of new types of interfaces, such as conversational interfaces (e.g. Google Home, Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana) or virtual/augmented reality (VR/AR) interfaces, will have a massive impact.

These types of interfaces are underpinned by algorithmic processes that mediate your experience of the world and determine what information you are presented with. As they become ubiquitous, it is likely that we will become accustomed to asking for recommendations or expecting our interfaces to present them to us without us even having to ask. In this context, it will seem peculiar if giving to charity is one of the few places where we are not able to access advice in this way.

Inevitable automation

The second reason to think that the idea of philgorithms is worth paying attention to is that in the future there are likely to be new kinds of contexts in which there is the possibility of directing money towards good causes, but the only feasible way of doing so is to use some kind of automated process. For instance, as the Internet of Things (IoT) expands and develops it is likely to converge with other technologies such as blockchain and create a ‘machine-to-machine’ (M2M) economy, in which smart devices with embedded AI interact directly with one another without human intermediation.

A large part of this M2M economy will be made up of a huge volume of high-frequency, low-value micropayments (or even nano- or pico-payments, if the currency is suitably divisible, as many cryptocurrencies are). There may be an opportunity to harness some of the value in these transactions for good causes, but it will be impossible to do so in any way that requires human oversight of each donation.

One option would be simply for an individual or company to specify a particular charity or group of charities upfront that are to receive donations, in much the same way that existing models of electronic rounding or ATM giving currently operate.
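
A minimal sketch of how that upfront specification might operate for machine-generated micropayments, assuming a simple round-up rule of the kind used in existing electronic rounding schemes (amounts and recipients are invented):

```python
# Toy round-up giving for machine-to-machine micropayments: each payment
# is rounded up to the nearest penny and the difference is earmarked for
# charities a human specified upfront. All figures are invented.
from decimal import Decimal, ROUND_UP

BENEFICIARIES = ["Charity A", "Charity B"]  # chosen upfront by a human
donation_pot = Decimal("0")

def settle(amount: str) -> Decimal:
    """Round a micropayment up to the nearest penny and bank the change."""
    global donation_pot
    paid = Decimal(amount)
    rounded = paid.quantize(Decimal("0.01"), rounding=ROUND_UP)
    donation_pot += rounded - paid
    return rounded

for micro_payment in ["0.00137", "0.00042", "0.00990"]:
    settle(micro_payment)

print(f"Accrued for good causes: {donation_pot}, "
      f"split {donation_pot / len(BENEFICIARIES)} per charity")
```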

However, this might be easier said than done: one consequence of the development of a full-scale IoT is likely to be a shift away from traditional notions of ownership and property towards a notion of access to shared objects. Some experts have even suggested that smart objects will become autonomous agents, able to earn and spend their own money. In this scenario, the question of which human individual or organisation has the right to select charities on behalf of a smart object is likely to be increasingly complex. A further reason to think that the model of human specification of particular charities is unlikely to work in an M2M context is simply that it would seem remarkably inflexible and anachronistic in a world where so many other processes are governed by adaptable and ever-changing algorithms.

It seems, then, as though some degree of automation would be inevitable in this future context. And whilst this could still be automation based on processes directly specified by humans, it is likely that we will start to look for algorithms that can grow and adapt based on changing information and conditions. Hence it will be a natural context in which to implement philgorithms.

Algorithmic Creativity, Value Alignment and Unintended Consequences

There are many tasks, such as recognising images or conducting natural-language conversations, which humans find easy to do but find extremely difficult (if not impossible) to explain. Historically, this was a huge barrier to developing AI systems capable of performing these tasks, as it was assumed that one would need to understand all the rules in order to program them into the algorithms directly.

One of the key reasons that we have seen a massive step forward in AI technology is that machine learning enables us to overcome this barrier. In circumstances where we are not able to program systems directly because we don’t know the relevant ‘rules’, we can instead create algorithms that are able to adapt and learn for themselves.

One of the intriguing aspects of the development of ML is that when systems are trained to perform specific tasks or achieve particular goals, even when they do so successfully, they often do it in totally unexpected ways. For instance, when Google DeepMind’s AlphaGo system beat the human champion Lee Sedol at the ancient and complex game of Go, the winning strategies it employed astounded even the most seasoned human experts.

There is clearly huge potential here, if new forms of AI can allow us to tackle problems in ways that we have never thought of before (and perhaps which humans could never have thought of). However, we should also strike a note of caution. In many of the examples in which AI systems display “creative” approaches to problem solving or goal attainment, the solutions they come up with are at best weird and at worst quite worrying. For instance, in many of the cases in which ML systems have been trained to win at computer arcade games, they do so by discovering glitches in the underlying programming which enable them to amass huge scores rather than by winning the game in any traditional sense.

Destructive problem solving

More alarmingly, there are many examples of “destructive problem solving”, in which ML algorithms concentrate all their efforts on learning to crash the system so that they “win” by default, or simply redefine the parameters used to evaluate success in their favour.

This highlights the challenge of ensuring that ML algorithms develop strategies and approaches that reflect our original human intentions. When we see perverse results, this usually shows us that our original framing of the problem was not precise enough or contained rules that could be interpreted ambiguously. Whilst this is not such a big deal if we are merely trying to teach the AI to play 1980s arcade games, it will be a far bigger problem in the future when we try to develop algorithms in areas of life that touch on issues of human safety or ethics. This is sometimes known as the “AI alignment problem”: i.e. how to ensure algorithmic systems “share” our values and goals, and aren’t going to produce unexpected negative consequences.
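
A deliberately toy illustration (entirely invented, not drawn from any real system) of how an imprecise framing gets gamed: the proxy metric for “cleaning a room” below credits every speck of dust picked up but never debits dumping dust back out, so a brute-force optimiser discovers a dump-and-recollect loop rather than actually cleaning.

```python
# Toy reward misspecification: the metric rewards picking dust up but
# never penalises dumping it back, so the "best" policy games the metric.
import itertools

def dust_collected(actions):
    """Buggy proxy for 'room cleaned': total dust picked up."""
    room_dust, hoover, collected = 3, 0, 0
    for a in actions:
        if a == "suck" and room_dust > 0:
            room_dust -= 1
            hoover += 1
            collected += 1       # metric credits every pick-up...
        elif a == "dump" and hoover > 0:
            room_dust += hoover  # ...but never debits dumping it back
            hoover = 0
    return collected

# Exhaustive "learner": try every 8-step policy, keep the highest-scoring
best = max(itertools.product(["suck", "dump"], repeat=8), key=dust_collected)
print(best, dust_collected(best))
# Scores 6 by recycling dust, despite only 3 specks being in the room
```

The room ends up no cleaner, but the metric says the job was done twice over; precisely the gap between stated objective and intended outcome that the alignment problem describes.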

Values alignment is likely to be a significant challenge for the future development of philgorithms. Not only do we lack a set of “rules” for philanthropy; we also lack any sort of consensus on what exactly philanthropy is or what its goals should be. There have been attempts to build such a framework, but many critics would argue that these represent a narrow view of philanthropy that does not reflect its actual practice.

The challenge for these critics, however, is to provide a better answer to the question “what should be the goal of a philanthropy algorithm?” And the stakes here are significant: it is easy to conceive of many ways any such proposed algorithm could go wrong, and the cost to the people and communities that civil society is meant to serve could be high.
   

