AI & Philanthropy
Giving to charity is (according to classical economics, at least) an inherently irrational act. However, there have always been those who have sought to remedy this perceived failing and to make philanthropy more rational, so that it becomes a better tool for redistribution within our society. AI could offer new ways of making giving more rational, and could have a profound impact on the ways in which people are able to give to charity.
One way in which this impact could be felt is through the use of AI to turn philanthropy advice into a mass-market product. There are already numerous examples of financial services companies developing “robo-advisors” to give advice to customers. One of the key benefits of doing this is often argued to be that it makes such services more cost-effective, so that they can be offered to a wider base of clients. If AI could be applied to automate philanthropy advice in the same way that it has been used to automate financial advice, it could become a feasible mass-market product, and this could have a massive impact on the ways in which people give.
There are various ways in which AI could be applied to offer philanthropy advice. One is to use the same sort of tailored recommendations, based on past behaviour or peer group activity, that underpin the algorithms Facebook or Amazon use to present you with new content or products (i.e. “if you liked X, why not try Y?” or “your friends are all doing Z, why not join them?”). Facebook itself has enabled giving to charity via its Facebook Messenger service, and along similar lines Salesforce has partnered with United Way in the US to add an advice function, based on its AI-powered “Einstein”, to its workplace giving platform.
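To make the “if you liked X, why not try Y?” idea concrete, the sketch below applies a simple item-based collaborative filtering approach to a small donor-charity matrix. It is purely illustrative: the data, function names and choice of cosine similarity are assumptions for the example, not a description of how Facebook's or Salesforce Einstein's systems actually work.

```python
# Illustrative sketch only: item-based collaborative filtering over a
# donor-charity matrix ("donors who gave to X also gave to Y").
import numpy as np

# Rows = donors, columns = charities; 1 means the donor has given to that charity.
donations = np.array([
    [1, 1, 0, 0],   # donor A gave to charities 0 and 1
    [1, 0, 1, 0],   # donor B gave to charities 0 and 2
    [0, 1, 1, 1],   # donor C gave to charities 1, 2 and 3
], dtype=float)

def charity_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between charities, based on who gives to them."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalised = matrix / np.where(norms == 0, 1, norms)
    return normalised.T @ normalised

def recommend(donor_row: np.ndarray, similarity: np.ndarray, top_n: int = 2) -> list[int]:
    """Score unseen charities by their similarity to ones the donor already supports."""
    scores = similarity @ donor_row
    scores[donor_row > 0] = -np.inf          # don't re-recommend past gifts
    return list(np.argsort(scores)[::-1][:top_n])

sim = charity_similarity(donations)
print(recommend(donations[0], sim))          # suggested charities for donor A
```

A real giving platform would of course work at vastly greater scale and blend many more signals (social graph, cause categories, timing), but the underlying logic of steering donors towards whatever resembles their past behaviour is the same, which is also the root of the concern discussed next.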
The obvious appeal of this is, firstly, that it fits well with existing platforms; and secondly that social cues are an important part of philanthropy, so harnessing peer group effects is potentially a powerful way of getting people to give. However, there are also clear reasons to be wary. The main one is that algorithms based on prior behaviour or peer group activity simply tailor information to fit existing biases. When it comes to charitable giving, this means they are likely to result in well-understood causes and well-known organisations being promoted at the expense of less well-known ones. This should be a serious source of concern for a sector in which there are already worries about the balance between large and small organisations and the difficulty of fundraising for unpopular causes.
A more sophisticated way to offer philanthropy advice, and one that would go further in addressing the challenge of making philanthropy more rational, would be to apply machine learning (ML) to data on social and environmental needs (much of which already exists, although probably sitting in silos across the public and private sectors) and to data on the social impact of civil society organisations (CSOs) and interventions. This would make it possible to identify where the most pressing needs were at any given time, as well as the most effective ways of addressing those needs through philanthropy, and thus allow a rational matching of supply and demand. We have previously coined the term “philgorithms” for algorithms of this kind.
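As a minimal sketch of what a “philgorithm” might do, the toy example below ranks causes by a blended score of current need and evidence of effectiveness, then splits a donation across the top-ranked ones. The cause names, indices and weighting are invented for illustration; an actual system would derive these scores from real needs data and impact evaluations using ML, rather than hand-coded numbers.

```python
# Toy "philgorithm": combine a need index and an effectiveness estimate into a
# priority score, then allocate a donation budget across the top-scoring causes.
from dataclasses import dataclass

@dataclass
class Cause:
    name: str
    need_index: float          # 0-1, e.g. derived from public needs data (illustrative)
    effectiveness: float       # 0-1, e.g. derived from impact evaluations (illustrative)

def score(cause: Cause, need_weight: float = 0.5) -> float:
    """Blend need and effectiveness into a single priority score."""
    return need_weight * cause.need_index + (1 - need_weight) * cause.effectiveness

def allocate(budget: float, causes: list[Cause], top_n: int = 2) -> dict[str, float]:
    """Give the budget to the top-scoring causes, in proportion to their scores."""
    ranked = sorted(causes, key=score, reverse=True)[:top_n]
    total = sum(score(c) for c in ranked)
    return {c.name: round(budget * score(c) / total, 2) for c in ranked}

causes = [
    Cause("flood relief", need_index=0.9, effectiveness=0.6),
    Cause("literacy programme", need_index=0.5, effectiveness=0.8),
    Cause("medical research", need_index=0.4, effectiveness=0.7),
]
print(allocate(1000.0, causes))   # proposed split of a £1,000 donation
```

Even in this stripped-down form, the design choices that matter are visible: how need and effectiveness are measured, and how heavily each is weighted, would determine which causes the algorithm favours, which is precisely why the data and assumptions behind any real philgorithm would need to be transparent.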