ARTIFICIAL INTELLIGENCE AND SOCIAL IMPACT MEASUREMENT
12 October 2016
HOW DO WE GET A GOOGLE ALGORITHM FOR PHILANTHROPY?
I have been thinking a lot about the intersection of new technological developments and the future of philanthropy in recent weeks, and as a result have been doing plenty of reading around the subject. I was struck by an anecdote in Kevin Kelly’s new book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (which I highly recommend, by the way) that led to a few ideas coalescing in my mind.
The anecdote concerns a conversation Kelly had with Google co-founder Larry Page back in 2002, in which he questioned why the company was investing so heavily in search when there were already plenty of search engines on the market. Page’s answer was that they weren’t really building a search engine: that was just a means to an end, and the real goal was to create an Artificial Intelligence (AI).
The point of the story is that at the heart of Google’s search capability sits a highly complex deep learning algorithm that governs how the search function operates: it determines how pages are ranked, what information is presented to each user, and so on.
Algorithms of this kind need nourishment in order to grow and develop, and that nourishment comes in the form of data. In Google’s case the search engine itself provides it, and has been doing so for nearly two decades: a vast store of data on people’s online search habits.
This got me thinking about whether there is an important lesson for philanthropy in terms of data being a means to an end, rather than necessarily an end in itself.
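To make that idea slightly more concrete, here is a minimal, purely illustrative sketch in Python. It is not meant to represent Google’s technology or any real impact-measurement system; the intervention names, the numbers, and the crude averaging “algorithm” are all invented. The point it demonstrates is simply that the same algorithm ranks options more reliably the more data it is fed, which is what it means for the data to be the means and the improving algorithm the end.

```python
# A deliberately toy sketch (not Google's actual system; all names and
# numbers below are invented) of the general pattern described above: the
# same simple "algorithm" gets better at ranking options the more data it
# is fed, so the accumulated data, not any one prediction, is the real asset.

import random

random.seed(0)

# Hypothetical "true" impact per pound donated, unknown to the algorithm.
TRUE_IMPACT = {"tutoring": 0.8, "school_meals": 0.5, "job_training": 0.9}

def observe(intervention):
    """One noisy data point: a measured outcome from a single funded project."""
    return TRUE_IMPACT[intervention] + random.gauss(0, 0.4)

def rank_interventions(observations):
    """The 'algorithm': estimate each intervention's impact from the data so far."""
    estimates = {
        name: sum(values) / len(values)
        for name, values in observations.items()
    }
    return sorted(estimates, key=estimates.get, reverse=True)

# Feed the algorithm progressively more data and watch the ranking improve.
for n in (3, 30, 300):
    observations = {name: [observe(name) for _ in range(n)] for name in TRUE_IMPACT}
    print(f"{n} data points per intervention -> {rank_interventions(observations)}")
```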
An intuitive argument
There is an awful lot of talk in philanthropy and charity circles about measuring social impact, but far less clarity about exactly why we are doing it. There is obviously an intuitive argument that we should want to know, as far as possible, whether the interventions we are using actually work. The flipside is that implementing new measurement systems is very time-consuming and expensive, so for cash-strapped organisations there may need to be some more tangible pay-off to convince them to invest scarce resources.
It is true that some institutional funders, such as government agencies or charitable foundations, increasingly demand rigorous metrics on how money is spent, so in these cases there is a clear imperative for putting the required measurement systems in place. (Although it is still worth bearing in mind that these may be specific to the needs of the particular funder in question, and thus not necessarily that applicable in other contexts.) However, the broader assumption that giving people more information on the impact of donations will lead them to give more is starting to be questioned.
The challenge, then, is to work out how to bridge the gap between the short-term reality of organisations operating on very tight budgets and the long-term opportunity that philanthropic deep learning algorithms might offer. I have suggested a few starting points, but there is clearly plenty more to do. As ever, any thoughts or feedback on any of these ideas would be heartily welcomed!