Rhodri Davies, Programme Leader, Giving Thought


Charities Aid Foundation

The role of giving


22 May 2017

We have previously explored in CAF Giving Thought blogs and discussion papers some of the opportunities and challenges that Artificial Intelligence (AI) might present for philanthropy and the work of charities. This has ranged from longer-term speculation about the development of a new form of hyper-rational, data-driven AI philanthropy that could capitalise on the likely explosion of machine-to-machine transactions, to more immediate or near-term issues such as the consequences of algorithmic bias for charities, or how to start building the data set on social impact that will be necessary if we are to maximise the potential of AI in the future.

In this blog I want to consider another near-term question, and a largely positive one: could AI lead to the development of effective, low-cost philanthropy advice, and thus drive better and more effective giving?

It was not that long ago that AI was the preserve of sci-fi and niche academic conferences, but recently it has become firmly established as part of the mainstream. In part this is because the vast increase in the availability of data, combined with the development of new “deep learning” algorithms, has resulted in a step change in the sophistication of AI.

It is also a result of growing awareness of the extent to which AI is prevalent in many aspects of our lives, and of the opportunities and challenges this brings. Many people will by now have interacted with at least one AI bot on the internet, and thus have experienced the way in which these fairly simple applications of AI can improve customer services by making them more responsive and accessible (no engaged phone lines, no office hours, etc.). Others may have used a conversational AI assistant such as Amazon's Alexa, Apple's Siri or Microsoft's Cortana. Many will also be aware of recent controversies around AI, such as the ongoing questions about the role that Cambridge Analytica and its algorithmically targeted advertising played in bringing about both Brexit and the election of Donald Trump, or the way in which algorithms can entrench existing social biases in terms of race, gender and so on.

Awareness that AI is not a technology of the future but something that is happening right now is starting to filter into the world of charity and philanthropy. Some organisations are already using AI to deliver more effective interventions to their beneficiaries. But one area where it has not yet had an impact is philanthropy advice - despite the fact that there are potentially enormous opportunities here to use the technology to drive greater amounts of giving and to make it more effective (as I will argue below).

Philanthropy advice has a long history. As far back as the 17th century, the London merchant and philanthropist Thomas Firmin became so famous for his knowledge of the needs of the poor in the capital, and his insights into how to give effectively, that he became known as the “almoner general for the poor”, and other wealthy donors would seek him out for advice and guidance on their own giving. And during the heyday of British philanthropy in the Victorian era, no less a figure than Charles Dickens applied the knowledge of social issues he had amassed through his work as a crusading journalist to philanthropy, through his role as an adviser and confidant of the philanthropist Angela Burdett-Coutts. (You can read more about the history of philanthropy advice in my book, Public Good by Private Means, and the case study on Angela Burdett-Coutts is available here.)

Philanthropy advice can take all sorts of forms, but in broad terms it usually consists of a combination of some or all of the following:

  • Identifying the donor's aims based on their values and experience
  • Identifying the most pressing needs within particular cause areas or geographic locations
  • Identifying which organisations are working to address those needs
  • Identifying which of these organisations is most effective
  • Guidance on the practicalities of giving (models, tax implications, legal considerations etc.)

Before we consider the various ways in which AI could transform philanthropy advice, we need to draw a couple of important distinctions in the approach one could take to giving such advice. Firstly, there is always a question of whether the primary aim is to meet the needs of the donor or the needs of society. In some cases these may be perfectly aligned, but in most cases there will be a mismatch, and the degree to which one attempts to map one onto the other (and in which direction) is a matter of choice. That is, one can either decide that the primary goal is to satisfy the donor, and thus try to find the interventions that give them most satisfaction whilst also meeting existing needs as effectively as possible; or one can decide that the primary goal is to meet the needs of society, and thus try to shape the donor's priorities and approach so that addressing those needs gives them as much satisfaction as possible.

Secondly, we should draw a distinction between approaches that prioritise objective criteria and those that prioritise subjective ones. When it comes to identifying which causes to focus on, an objective approach would prioritise analysis of evidence and data about where the greatest need is. (The extreme form of this, where there is no room for taking the donor's own views or values into account, is Effective Altruism.) A subjective approach would prioritise what the donor themselves says about their values and what they want to give to, for personal or emotional reasons. Likewise, when it comes to identifying the most appropriate interventions, one can prioritise objective information about which is most effective and produces the best social outcomes, or one can prioritise considerations about which approach is likely to fit most neatly with the donor's beliefs and values and thus give them the greatest satisfaction.

So let's look at how AI could transform philanthropy advice.


This is probably the most obvious impact that AI could have on philanthropy advice. At a retail level, donors often find it difficult, if not impossible, to get information and guidance on identifying needs, finding effective charities or choosing giving methods. In some markets, such as the US, there are at least partial solutions - such as charity rating services like Charity Navigator or GuideStar - but these are not without controversy.

Currently, tailored advice on philanthropy is the preserve of the wealthy because the costs involved in providing such a service in proportion to the value mean that it only makes economic sense above a certain level of giving. And even at higher levels of wealth, making the case for the added value of philanthropy advice can often be hard.

Putting aside for a moment the question of exactly how AI could be applied to philanthropy advice (which we will cover below), and assuming that it can be applied in at least some way, one thing that seems certain is that the technology could radically bring down the cost of providing such a service and thus open it up to a mass-market audience. As a recent PwC report on AI put it: “AI has the potential to become a great equalizer. Access to services that were traditionally reserved for a privileged few can be extended to the masses.”


AI could radically bring down the cost of philanthropy advice and thus open it up to a mass-market audience.


That report also surveyed people for their views on how likely it was that AI would replace human-based services in various industries in the near future, and in some cases the results were striking: more than half of those surveyed, for instance, thought that the roles of travel agents and tax preparers would become automated within the next five years. It is interesting to note that the PwC survey found greater resistance to the idea of AI taking over in areas closer to our current focus on philanthropy advice, such as financial advice (where only 41% thought AI would become dominant within five years). A recent ING survey also found that very few people were yet comfortable with the idea of a "robo-adviser" making decisions for them.

However, even if it takes slightly longer in these areas, the signs are already there that it will happen: an increasing number of banks, for instance, have already introduced AI advice services, which are clearly intended to extend the availability of the service beyond the very wealthy. Obviously there is a capital cost involved in setting up these systems, but once they are up and running they are relatively inexpensive, and they offer a significant additional advantage in not being dependent on human staff (hence, for instance, a customer can access them whenever they want, even if that happens to be the middle of the night).

A sceptic might argue that this is fine for fairly generic, objective advice based on number crunching, but will never be able to replicate the value of human philanthropy advice in terms of understanding a donor’s personal history and values. I used to think something along these lines, but I have now come round to thinking that I was wrong and that this underestimates the potential of AI, as I shall explain further on.

In any case, given that the vast majority of giving at the moment is unmediated by any sort of advice (or even information), even a relatively unsophisticated AI-powered advice service could add a huge amount of value. For anyone interested in encouraging more giving and making that giving more effective (and given that this is basically CAF's core mission, I count myself in this category), this is a potential game changer.

The question, then, is what could this advice actually look like?


Let’s start with one of the more mundane possibilities: AI could be used to help people identify and choose between the various methods available for giving (or, more broadly, achieving social good). What I mean here is that we are not talking about choosing causes or beneficiaries (let’s assume that this has already been done), but just about the various models available for getting money to them.

This is not much more complicated than the many instances of customer service bots already in operation, as it basically just involves taking information that could be provided in a series of factsheets and integrating it into a responsive AI framework. This is almost certainly not going to result in a step change in giving, but on the basis that many people don’t get even this level of advice currently, it would add some value.
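To illustrate quite how simple such a service could be, here is a minimal sketch in Python of a rules-based "giving methods" bot. The questions, giving vehicles and matching rules are all invented placeholders for illustration, not a description of any real product:

```python
# A minimal, rules-based sketch of a "giving methods" advice bot.
# The questions and recommendations below are illustrative placeholders only.

def recommend_giving_method(answers: dict) -> str:
    """Map a donor's answers to a plausible giving vehicle."""
    if answers.get("wants_tax_efficiency") and answers.get("employed"):
        return "payroll giving"
    if answers.get("gift_size") == "large" and answers.get("wants_ongoing_involvement"):
        return "donor-advised fund"
    if answers.get("gift_size") == "large":
        return "charitable trust"
    return "direct one-off donation"

# Example conversation state collected by the bot:
donor = {"wants_tax_efficiency": True, "employed": True}
print(recommend_giving_method(donor))  # payroll giving
```

A real implementation would wrap rules like these (or a trained model) in a conversational interface, but the underlying mapping from answers to factsheet content need not be any more sophisticated than this.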


Another important part of philanthropy advice is helping donors to understand what the most pressing needs are within society or their local area. This is somewhere that AI could have a major impact. It is fairly obvious that an AI which has access to vast quantities of data, and the ability to analyse it at greater depth and speed than a human ever could, is going to add value when it comes to identifying the most acute pressure points in terms of social or environmental needs at any given time.

An AI with access to vast quantities of data and the ability to analyse it at greater depth and speed than a human ever could is going to add value when it comes to identifying the most acute pressure points in terms of social or environmental needs at any given time.


A human adviser would, at best, be able to access the same information or analysis via AI and relay it to a client. And in the short term this may be what happens: people are likely to be more receptive to the idea of human services augmented by AI in many areas, rather than having an interaction solely with a machine. As the previously-mentioned PwC report argues, “While they are eager to see increased affordability and access in transactional services like hailing a taxi, consumers still crave human insight and connection when it comes to more long-term or impactful decisions”. However, over time as people become more used to interacting with AI in all aspects of their life, and begin to rely on its capabilities, this attitude is likely to change.

There is a clear challenge in the short term to applying AI to identify and prioritise needs, which is the same challenge it encounters in many other contexts: namely the availability of data. Deep learning algorithms are hugely powerful, but they require enormous data sets to operate on so that they can improve to the point where they are useful. At the moment, even where there is data that could be used to identify social or environmental needs, it is often tied up in silos or recorded in a wide array of inconsistent or incompatible forms. This problem will eventually diminish as the long-term trend towards data sharing takes hold, but in the short term it may create a bottleneck.


There is a clear challenge in the short term to applying AI to identify and prioritise needs, which is the same challenge it encounters in many other contexts: namely the availability of data.


But imagine for a second that this challenge has been overcome, and (appropriately anonymised) data from healthcare providers, charities, government welfare providers, aid agencies, lifestyle companies, personal fitness devices (Fitbit etc.) and so on is recorded somewhere (probably on the blockchain, as per my previous thoughts) and free to be acted on by algorithms. In this scenario, it is quite possible that an AI might be able to determine not only which needs were most pressing at any given point, but even which specific communities or individuals were in greatest need of help.


Unless we assume that donors are going to take their own views and wishes entirely out of the picture (which is unlikely unless they take a full-blown Effective Altruism approach), another important part of philanthropy advice is to understand what donors themselves want to get out of their giving and to use this to inform recommendations about where and how to give. This could involve simply asking the donor explicitly about their values and what causes they most care about, but given that many people are not especially good at self-analysis it may also involve a longer process of relationship development and working with the donor to help them understand how they want to approach their philanthropy and what they want to get out of it.

Many people will argue that AI will not be able to match humans when it comes to this sort of values-based conversation. Again, they may well be right in the short term. At this point, when no AI has successfully passed the Turing test and robots and visual representations of AI are either conspicuously non-human or fall foul of the “uncanny valley” effect, a lot of people will almost certainly still prefer to deal with a human adviser (or at least assume that they would prefer that). However, as I argued before, this attitude is likely to change as we all become more accustomed to interacting with AI in other aspects of our lives.

Our attitude towards taking philanthropy advice from an AI is likely to change as we all become more accustomed to interacting with AI in other aspects of our lives.


One possibility when it comes to identifying which causes will best match a donor's values is to use analysis of their online social interactions and peer network. This is what Facebook currently does, using its own deep learning algorithms to analyse the vast reams of data generated by its users in order to provide them with ever more targeted content. Given that Facebook has already begun to dip its toe into the world of charitable giving, it seems extremely likely that it (or other similar organisations) will seek to leverage its huge store of social data in order to provide some sort of advice or recommendation service to its users.
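At its simplest, this kind of peer-network recommendation is a form of collaborative filtering. Here is a minimal sketch in Python, with entirely invented donors and giving histories, that suggests causes supported by the donors whose histories most resemble yours:

```python
# Sketch of peer-network cause recommendation: suggest causes supported by
# donors with the most similar giving histories (simple collaborative
# filtering). All names and histories below are invented for illustration.

def jaccard(a: set, b: set) -> float:
    """Similarity of two sets: size of overlap over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_causes(donor: str, histories: dict, top_n: int = 2) -> list:
    """Rank causes the donor hasn't yet supported by peer similarity."""
    mine = histories[donor]
    scores = {}
    for peer, causes in histories.items():
        if peer == donor:
            continue
        sim = jaccard(mine, causes)
        for cause in causes - mine:  # only causes new to this donor
            scores[cause] = scores.get(cause, 0.0) + sim
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [cause for cause, score in ranked if score > 0][:top_n]

histories = {
    "alice": {"homelessness", "animal welfare"},
    "bob": {"homelessness", "animal welfare", "medical research"},
    "carol": {"arts funding"},
}
print(recommend_causes("alice", histories))  # ['medical research']
```

Note that the sketch already exhibits the problem discussed below: a cause supported only by dissimilar peers (here, carol's "arts funding") never surfaces, however worthy it may be.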

This worries me, as I think that there are significant risks inherent in this kind of approach. For one thing, if people are given recommendations about which charitable organisations to support on the basis of their past preferences and behaviour and those of their peers, there is an obvious danger of simply reinforcing existing biases (as my colleague Adam has argued) and exacerbating the “filter bubble” effect that we have heard plenty about in recent times (including my thoughts on its impact on philanthropy). This could lead to unpopular causes being sidelined and to those charities with the biggest profile and brand recognition hoovering up an increasing share of donations at the expense of smaller, less well-known organisations.


There is a danger that algorithms simply reinforce existing biases. This could lead to unpopular causes being sidelined and to those charities with the biggest profile and brand recognition hoovering up an increasing share of donations at the expense of smaller, less well-known organisations.


We have already considered the way in which AI could be used to identify the most pressing social and environmental problems based on analysis of data. But it could also be used to identify the best solutions to those problems by analysing data on the social impact of particular interventions and organisations. These two things need not go together, of course. It would be entirely possible (in fact, probably normal) for a donor to choose what outcomes to focus on based on subjective criteria such as family history or religious values (or on a blend of these and objective criteria), and then to apply AI to identify the interventions that could deliver these outcomes most effectively.

Some donors in the future might choose to put subjective considerations aside and take a purely data-driven approach to giving. That is, an AI would determine both the most pressing areas of need and the most effective interventions based on analysis of data, and match them up. Of course, if one is willing to take things this far, there isn’t really any need to have reference to the donor at all, so the entire process could be fully automated. This kind of hyper-rational ‘AI Philanthropy’ is something we have explored before.
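To make the matching step concrete, here is a minimal sketch in Python. The severity and cost-effectiveness figures are entirely invented, and the scoring rule (severity multiplied by effectiveness) is just one plausible choice, but it shows the shape of a fully automated allocation:

```python
# Sketch of fully automated "AI philanthropy": rank needs by severity,
# rank interventions by measured effectiveness, and direct funds to the
# pairing with the highest expected impact. All figures are invented.

needs = {"malaria": 0.9, "homelessness": 0.6}  # severity scores (0-1)
interventions = {                              # cost-effectiveness per pound
    "malaria": {"bed nets": 0.8, "awareness campaign": 0.3},
    "homelessness": {"housing first": 0.7, "shelter beds": 0.4},
}

def best_allocation(needs: dict, interventions: dict) -> tuple:
    """Return (need, intervention, impact) maximising severity * effectiveness."""
    candidates = [
        (need, name, needs[need] * eff)
        for need in needs
        for name, eff in interventions[need].items()
    ]
    return max(candidates, key=lambda c: c[2])

need, intervention, impact = best_allocation(needs, interventions)
print(need, intervention, round(impact, 2))  # malaria bed nets 0.72
```

In practice, of course, the hard part is not this final maximisation but producing trustworthy severity and effectiveness scores in the first place, which is precisely the data problem described earlier.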

Whilst the idea of fully automated giving may seem fanciful to some, who believe that the element of human choice will never be removed from philanthropy, it is worth bearing in mind the future likelihood that billions of smart objects and AIs will be transacting directly with each other all the time. In this context, there will be an opportunity to direct a small fraction of the value of some or all of these transactions towards charity (a bit like a turbo-charged version of the electronic rounding schemes currently in operation), but it will be totally impractical for humans to choose the beneficiary of each micro (or indeed nano) donation. One option would be to have a pre-selected list of organisations that receive the funds (a bit like ATM giving), but that is likely to seem extremely old-fashioned and unresponsive in a world where AI would make it possible to have an adaptive, needs-based automated philanthropy strategy instead. I know which one I would prefer, for sure.
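The adaptive version of such a rounding scheme can be sketched in a few lines of Python. The cause names and need scores are invented; the point is simply that each payment's spare change is routed to whichever cause the needs-analysis layer currently ranks highest, rather than to a fixed list:

```python
# Sketch of an adaptive micro-donation "round-up": each machine-to-machine
# payment is rounded up and the spare fraction routed to whichever cause an
# algorithm currently ranks as most in need. Causes and scores are invented.

import math

def round_up_donation(amount: float, unit: float = 0.01) -> float:
    """Spare change created by rounding a payment up to the nearest unit."""
    return round(math.ceil(amount / unit) * unit - amount, 10)

def route(amount: float, need_scores: dict) -> tuple:
    """Send the round-up from one payment to the most pressing cause."""
    cause = max(need_scores, key=need_scores.get)
    return cause, round_up_donation(amount)

# need_scores would be refreshed continuously by the needs-analysis layer:
print(route(4.996, {"flood relief": 0.9, "literacy": 0.4}))
```

Swap the static dictionary for a live feed of need scores and the scheme becomes the "adaptive, needs-based automated philanthropy strategy" described above.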


The last aspect of philanthropy advice to consider is choosing interventions or organisations to recommend based on judgment of which would give the donor greatest satisfaction. Once again, this could be in the context of having identified areas of need objectively using AI analysis or in the context of having identified them subjectively based on the donor’s own expressed preferences.

So what role could AI play here? Well, most basically, it could lead the donor through a process of trying to determine their own views and feelings about the kinds of organisations they want to give to (e.g. small vs large, national vs local, advocacy vs direct service etc.). This is something a human adviser could also do, so the primary benefit of using AI here is making the service more accessible. Obviously this is a subjective method of identifying organisations, as it is based on the donor’s self-reporting of what they think will make them most satisfied. However, it may also be possible to determine likely donor satisfaction objectively in future, and that is where things get really interesting.
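One simple way to operationalise that elicitation process is to score candidate organisations against the donor's stated position on each dimension. The dimensions, organisations and numbers below are invented for illustration:

```python
# Sketch of matching a donor's self-reported preferences to organisation
# profiles. Dimensions, organisations and scores are illustrative only.
# Each dimension runs 0.0 -> 1.0 (e.g. size: 0 = small grassroots, 1 = large national).

donor_prefs = {"size": 0.2, "advocacy_vs_service": 0.8, "local_vs_national": 0.1}

org_profiles = {
    "Local Campaigners": {"size": 0.1, "advocacy_vs_service": 0.9, "local_vs_national": 0.0},
    "BigAid National":   {"size": 0.9, "advocacy_vs_service": 0.2, "local_vs_national": 1.0},
}

def match_score(prefs: dict, profile: dict) -> float:
    """Closeness of fit: 1.0 is a perfect match on every dimension."""
    gaps = [abs(prefs[d] - profile[d]) for d in prefs]
    return round(1.0 - sum(gaps) / len(gaps), 3)

ranked = sorted(org_profiles,
                key=lambda o: match_score(donor_prefs, org_profiles[o]),
                reverse=True)
print(ranked[0])  # Local Campaigners
```

The AI's contribution here is less the arithmetic than the conversational process of filling in `donor_prefs` in the first place, which is exactly what a human adviser currently does over a series of meetings.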

One way of doing this would be to use the sort of peer network and past behaviour analysis highlighted earlier (i.e. the Facebook approach) to determine which donations would be most likely to give the greatest degree of satisfaction in the future, based on what the individual themselves and their peers have given to in the past. (We might call this the ‘objective social’ approach.) If we assume a sufficiently large data set, and we also assume that past behaviour and peer group behaviour are good proxies for what a person likes or values, then we could argue that this sort of objective analysis will converge on a pretty accurate prediction of what is likely to give them satisfaction in the future. Of course, all of the problems I previously outlined with this sort of approach, in terms of entrenching bias and creating filter bubbles, once again hold true.

A more intriguing possibility, however, is that of determining donor satisfaction objectively at the individual level (the “objective individual” approach). This is something that occurred to me after reading Homo Deus, the recent bestseller in which Yuval Noah Harari (author of the similarly best-selling history of humanity Sapiens) outlines his vision of what the future of humanity might look like. One of the key ideas that Harari identifies is that “non-conscious but highly intelligent algorithms may soon know us better than we know ourselves.”

The argument for this conclusion is fairly complex, but in basic terms it rests on the idea that modern neuroscience thinks that we all have at least two “selves”: an experiential self and a narrative self. The role of the latter is to weave the story that we tell ourselves about our experience of the world in order to systematise and understand it. And for a long time this mechanism was the best (if not only) way of understanding our own needs, wants and desires. However, Harari’s argument is that in the future AI may actually become better at analysing and interpreting our experiential self than our own narrative self currently is; so it would literally understand us better than we can understand ourselves.

In the future AI may actually become better at analysing and interpreting our experience than we currently are; so it would literally understand us better than we can understand ourselves.

How could this work in the context of philanthropy advice? Well, imagine for a second that an AI had access not just to data about your past interactions on a particular social media platform, but to data about all of your social interactions since you were a child (perhaps you started wearing smart glasses at that point, or got a retinal implant). Everything you had ever said and heard and the way you responded in any given context you had experienced. It might even have access to physiological data provided by neural sensors within your body and so on. In this scenario an AI could determine which donations were likely to give you the most satisfaction and happiness not on the basis of your own subjective interpretation of what makes you happy, but on the basis of objective evidence about what has made you happy in the past.

Economists often talk about the idea of a donor getting a “warm glow” from charitable giving (largely because altruistic behaviour doesn’t make any sense within classical economic theory, so you have to find a way of making it selfish by introducing a personal benefit that the individual is trying to maximise, i.e. the warm glow). And this is corroborated by neurological experiments using functional magnetic resonance imaging, which show that charitable giving stimulates the parts of the brain associated with pleasure and reward (e.g. by releasing the neurotransmitter dopamine). An AI with access to bio-implants or wearable tech in the future (just think of a more invasive Fitbit) could monitor your dopamine levels when you give to various causes or organisations and see which gave you the greatest “warm glow”.
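The decision logic such a system would run is almost trivially simple once the (for now, entirely hypothetical) sensor data exists. Here is a sketch in Python, with invented readings standing in for physiological measurements:

```python
# Sketch of the "objective individual" approach: rank causes by the average
# measured physiological response ("warm glow") recorded after past gifts.
# The readings below are invented stand-ins for wearable/implant data.

from statistics import mean

# (cause, measured response after giving) pairs from a hypothetical sensor log:
glow_log = [
    ("animal welfare", 0.62), ("animal welfare", 0.70),
    ("medical research", 0.55), ("arts funding", 0.30),
]

def warmest_cause(log: list) -> str:
    """Return the cause with the highest average measured response."""
    by_cause = {}
    for cause, reading in log:
        by_cause.setdefault(cause, []).append(reading)
    return max(by_cause, key=lambda c: mean(by_cause[c]))

print(warmest_cause(glow_log))  # animal welfare
```

All of the speculative difficulty lies in the measurement, not the code: the sketch simply assumes that a clean, comparable "warm glow" signal per donation could ever be captured.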

This will probably make a lot of people very uneasy, as it seems like the erosion of free will. However, our attitude to that may well change in the future as our relationship with AI and our reliance on it grows. Also, Harari argues that free will is essentially a myth because our actions are entirely determined by biological “algorithms” anyway, and the sense of individual choice we feel is merely a by-product of the way in which our narrative self makes sense of experience (cheery thought, isn’t it?). So handing over decision making to an AI algorithm shouldn’t make any difference, except that it might be more effective.

One interesting question is whether people’s attachment to the idea of free will means that, when presented with objective evidence of what will give them most pleasure, they will try to resist it: either by making a different subjective choice, or by opting for a purely rational approach. Is the sense of personal agency and donor choice in philanthropy sufficiently important to trump objective evidence about what is most effective and will also make you happiest? (Of course, an AI might also know that what would give you most pleasure is to have a sense of agency, and therefore have factored it into the information it presented in order to guide your choice… But let’s pull back from that particular rabbit hole.)

AI seems certain to present both opportunities and challenges for philanthropy, in the immediate future and in the longer term. As I have attempted to outline here, there is a range of different ways in which it could be applied to the provision of philanthropy advice, and these will raise significant and important questions about the balance between meeting the needs of donors and the needs of society. However AI is applied, one thing that seems certain is that it could make philanthropy advice a mass-market commodity. Assuming this is done in a way that avoids unintended negative consequences, it could have a transformative impact on the effectiveness of giving.