New challenges for civil society to address

We have looked at the potential for civil society organisations to harness AI to further their missions, and also at some of the opportunities and challenges the technology might bring for organisations and their wider operating environment. However, the power and likely ubiquity of AI systems means that they will also have a profound impact on society as a whole. And whilst some of that impact will be positive, there are also going to be unintended negative consequences.

Think you don’t need to care about AI?

Even if a CSO has little interest in harnessing AI, and is not concerned about the possible impact the technology might have on the organisation or its operating environment, it should be aware of the potential for AI to create new challenges for the people and communities it serves. If many existing social and environmental issues increasingly become technology issues in the future, but the CSOs whose missions are to address those problems fail to adapt, then civil society will fall short in its duty.

If, however, civil society rises to the challenge by engaging with these issues now, it can play an absolutely vital role: both in leading the debate about how AI can be developed to minimise the risks of damage to our society, and in ensuring that CSOs are well-placed to deal with any future challenges that cannot be avoided. In this section, we outline some of the possible future challenges that AI might bring, and the role civil society could play in minimising them.

Autonomous weaponry

The risks posed by the deliberate, malicious use of AI were starkly highlighted in a recent report, which outlined a range of challenges and scenarios, including the development of autonomous weapons, the use of AI for enhanced cyber-warfare, and the micro-targeting of propaganda and misinformation to undermine elections and the wider democratic process.

When it comes to autonomous weapons, the potential advantage to be gained from perfecting them is so great that there is a huge incentive to devote significant resources to research and development. And the concern is that this applies to malign actors and rogue states just as much as it does to recognised military powers. Given the historic role that civil society has played in advocating against militarisation – from campaigning against the use of nuclear weapons to calling for bans on the use of landmines and cluster munitions – it is critical that it stays abreast of developments in this field and is able to raise concerns.

Fake News, targeted propaganda and democracy

Another challenge, which spans both malicious intent and unintended consequences, is the impact of targeted misinformation in the form of ‘fake news’ and propaganda. Analysis of vast data sets on previous behaviour and social interaction has enabled the creation of algorithms that can target tailored information to an incredibly granular degree (down to the level of neighbourhoods, households and even individuals). This has been used by some organisations simply for commercial gain, but others have employed it with the deliberate aim of influencing election processes and subverting democracy. The scandal that erupted in early 2018 as a result of the revelations about the relationship between Facebook and Cambridge Analytica, which specialised in this kind of targeting, highlights the scale of this problem.
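As a crude illustration of the underlying mechanic, the sketch below is entirely our own invention: the segments, scores and messages are placeholders, and real systems model thousands of inferred traits. It shows how trait scores derived from behavioural data can be used to select a different message variant for each individual:

```python
# Hypothetical micro-targeting sketch: pick the message variant that best
# matches a user's inferred trait scores. All names and values are invented.
messages = {
    "concerned_about_crime": "Candidate X will put more police on your street.",
    "concerned_about_taxes": "Candidate X will cut your tax bill.",
}

def pick_message(trait_scores: dict) -> str:
    # In a real system these scores would come from models trained on vast
    # behavioural data sets; here they are hand-written placeholders.
    segment = max(trait_scores, key=trait_scores.get)
    return messages.get(segment, "Vote for Candidate X.")

print(pick_message({"concerned_about_crime": 0.8, "concerned_about_taxes": 0.3}))
# -> "Candidate X will put more police on your street."
```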

The coarsening of public discourse and the devaluation of notions of truth and fact pose significant challenges for civil society. Many organisations seek to work across societal divisions, but if people are unwilling to engage with others outside their immediate circle this will prove increasingly difficult. Similarly, CSOs often rely on evidence or expertise to support their advocacy work. If those in power or with vested interests in resisting change are able to question this evidence or to counter it with claims of their own, and there is no clear sense of objective fact, then the ability of civil society to campaign for social change will be hugely diminished.

Civil society can also help to challenge the culture of misinformation and fake news. Whether through supporting new models of funding journalism or through acting as focal points for efforts to build community cohesion and overcome differences, CSOs can play a key role in ensuring that issues are brought to light in a way that is fair, evidence-based and constructive.

Deepfakes

New applications of AI may, unfortunately, be about to make the task of combatting misinformation even more difficult. In particular, the development of “deepfakes” – artificially generated videos that are indistinguishable from real footage – could pose huge challenges.

To see why, imagine that someone claiming a prominent figure such as a politician had said something controversial could also provide believable AI-generated video evidence to back up their case. To help combat this, CSOs may in the future look to support initiatives to find ways of verifying video and photographic footage.
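One family of approaches is provenance-based: fingerprint footage at the point of capture and publish the fingerprint, so that any later copy can be checked against it. The sketch below is our own minimal illustration of that idea (the file name is a placeholder), not a description of any particular initiative:

```python
# Minimal provenance sketch: hash footage at capture so copies can be verified.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At capture: compute and publish fingerprint("original.mp4").
# At verification: a circulating clip matches only if its digest is
# identical; any tampering (or re-encoding) changes the hash entirely.
```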

Deepfakes also have implications at an individual level, as the technology is widely available. Unsurprisingly, it is already being used in the pornography industry, and it is not hard to see how this could cause problems in the near future. We are already seeing issues in terms of the widespread sharing of sexually explicit material online and the rise of “revenge porn”.

The future challenge will be that deepfake technology could make it possible to produce revenge porn even where an individual has never actually engaged in the act supposedly portrayed. For civil society organisations that deal with relationships or support the victims of sexual offences, this could pose significant new challenges.

The battle against 'fake news'

  • Giving Thought blog

The rise of misinformation and fake news could be immensely harmful to efforts to achieve social progress through campaigning and advocacy based on evidence and expert opinion.

Read about the challenges of 'truth decay'

Journalism and a healthy democracy

  • Alliance

Many are starting to ask whether general support for journalism as an end in itself should constitute a valid focus of philanthropy, to ensure a healthy democracy.

Should philanthropy fund the media?

Algorithmic Bias

One of the most widely-publicised unintended consequences of AI so far is algorithmic bias. This is the phenomenon whereby machine learning (ML) algorithms that learn from data sets containing historical biases relating to factors such as race or gender come to exhibit, and even amplify, those biases. There have already been numerous examples in which people from marginalised groups and communities have found themselves discriminated against by algorithmic decision processes.
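To make the mechanism concrete, here is a minimal sketch of our own (synthetic data, invented numbers, using scikit-learn) in which a model trained on biased historical hiring decisions reproduces the bias, penalising a candidate purely for group membership:

```python
# Sketch: a classifier trained on biased historical decisions learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # the genuinely job-relevant feature

# Historical decisions: the same skill threshold, but biased against group B.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill who differ only in group membership:
print(model.predict_proba([[1.0, 0]])[0, 1])  # group A: high probability
print(model.predict_proba([[1.0, 1]])[0, 1])  # group B: noticeably lower
```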

This is an issue of obvious relevance for civil society. CSOs need to understand the dangers of algorithmic bias and how they might affect the issues they work on and the people and communities they serve. They can then play a vital role in highlighting these challenges and dangers to the companies and organisations implementing new algorithms, as well as the policymakers responsible for formulating new laws and regulation designed to govern them.

Civil society can also play an important ongoing role in addressing the challenge of algorithmic bias. Much of the debate around how to solve this problem has so far centred on the need to make algorithms fair, accountable and transparent, and there is a growing literature on what this means. In terms of fairness, for instance, we need to ask questions such as: Is it fair to make a given AI system at all? Assuming that we are making it, is there a fair technical approach? And once we have made it, how do we test the system for fairness? Likewise, when it comes to accountability, there are important questions to be asked about where responsibility lies for the unintended consequences of algorithmic processes, and who is empowered to address them.
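Testing for fairness can start with something as simple as comparing outcome rates across groups (“demographic parity”, one of several competing definitions of fairness). A toy audit sketch, with invented decisions:

```python
# Sketch of a demographic parity audit: compare positive-decision rates.
import numpy as np

def demographic_parity_gap(decisions, group):
    """Difference in positive-decision rates between two groups."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Illustrative audit: 75% approval for group 0 vs 25% for group 1.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
group     = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, group))
# -> 0.5, far above any tolerance an auditor would be likely to accept
```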

But it is perhaps transparency that poses the biggest challenge. Many algorithmic processes operate as opaque “black boxes”, and it is argued that they need to be made more transparent. But what would straightforward transparency concerning their inner workings actually achieve? If most of us were presented with vast reams of technical data, would we actually be any the wiser when it comes to understanding why we had been refused health insurance or been identified as a possible suspect in a crime? Almost certainly not.

Perhaps, then, “explanation” is a more relevant concept than straightforward transparency. And this is an area in which some fascinating work is taking place: from algorithms that are able to “explain themselves” to the idea of using counterfactuals (i.e. statements of how things could have been different) to explain algorithmic decisions without having to “open the black box”.
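The sketch below illustrates the counterfactual idea in miniature (the loan scenario, decision rule and step size are all invented): the model is queried only as a black box, and the “explanation” is the nearest changed input that flips the outcome:

```python
# Sketch of a counterfactual explanation against a black-box model.
def counterfactual(predict, applicant, feature, step=500, max_steps=50):
    """Nudge one feature until the black-box decision flips."""
    probe = dict(applicant)
    for _ in range(max_steps):
        if predict(probe) != predict(applicant):
            return probe  # "had your income been X, you would have been approved"
        probe[feature] += step
    return None

# A hand-written decision rule standing in for a real, opaque model.
predict = lambda a: a["income"] > 30_000 and a["debt"] < 10_000
print(counterfactual(predict, {"income": 25_000, "debt": 5_000}, "income"))
# -> {'income': 30500, 'debt': 5000}
```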

Given the existing prevalence of algorithmic decision processes, and the high likelihood that they are going to affect ever more of our lives in coming years, it is vital that civil society is in a position to speak up about the issue of bias and to play a role in addressing it by ensuring the fairness, accountability and transparency of automated systems in the future.

Filter Bubbles

In addition to facilitating the spread of misinformation, as highlighted above, AI is already having a wider impact when it comes to social media. Algorithms tailor the content we receive based on our previous behaviour and preferences, so that we are continually given more of “what we want”. The danger is that this helps to create “filter bubbles” in which people are only ever given information that fits with their existing world view and only ever interact with those who share that view. Over time, the existing views and biases of people within these filter bubbles are reinforced and amplified.
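A stylised sketch of that feedback loop (our own illustration, with invented numbers; real recommender systems are far more sophisticated): items are ranked by similarity to a profile that is itself updated by whatever gets consumed, so recommendations quickly narrow to one region:

```python
# Sketch of preference reinforcement in a toy recommender system.
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(50, 2))      # 50 items placed in a 2-D "topic space"
profile = np.zeros(2)                 # the user starts with no strong leaning

for round_no in range(10):
    scores = items @ profile          # rank every item by fit with the profile
    pick = items[int(np.argmax(scores))]
    profile = 0.9 * profile + 0.1 * pick   # consumption reinforces the leaning
    print(round_no, np.round(pick, 2))     # picks quickly lock on to one region
```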

The growing use of non-traditional interfaces (e.g. conversational interfaces such as Amazon’s Alexa or Microsoft’s Cortana, or augmented/virtual reality interfaces in the near future) means that this effect is likely to be heightened. As a growing proportion of our experience becomes mediated by these AI-driven interfaces, the danger is that they will seek to present us with choices and interaction based on existing preferences and thus will limit our experience even further (perhaps without us even realising it). This will create new challenges for charities in terms of things like heightened social isolation and decreased community cohesion. It may also make it harder for charities to engage with potential supporters, both because they might struggle to break through the filter of the AI interface to make the first contact and because it may become harder to create an emotional connection if people’s empathy for those outside their realm of experience becomes diminished.

Automation & the Future of Work

One area that is getting an understandable amount of attention is the potential impact that AI will have on the future of the workplace. The convergence of AI with a range of other technologies such as robotics, the Internet of Things and autonomous vehicles, is opening up the possibility of automating a far wider range of tasks than ever before. As a result, there are fears that some professions, or even entire industries, will become effectively redundant in the near future.

There are those who challenge this view. Whilst they may concede that the disruption caused by the widespread adoption of AI is likely to be of an unprecedented pace and scale, they maintain that the outcome will not be that we all find ourselves out of work, but rather that the nature of the work we do will change, and we will increasingly find ourselves working alongside and utilising AI systems.

There are many, however, who are adamant that this disruption to the workplace will lead to a ‘post-work economy’, in which few people are employed in any traditional sense. But even amongst those who share this perspective, there are those who take a utopian view of what it might mean, and those who take a much more dystopian view. The utopians argue that we will have to change our economic models fundamentally - perhaps by implementing some form of universal basic income - and that if we do so, the freedom created by removing the need to earn a salary could lead to a flowering of the creative arts or scientific research as people are able to concentrate their efforts in those areas. The dystopians, meanwhile, argue that mass automation will lead to ever-increasing inequality and that our notions of purpose and self-worth are so tied up with ideas of economic productivity that many will struggle to adapt and find meaning in a post-work economy.

Of course, neither the utopian vision nor the dystopian vision is inevitable. As with any prediction, it is merely a version of how things could be. And it is in our power to shape the future we want by taking the right decisions now. This is where civil society once again has an absolutely vital role to play, as it must have a voice in representing the people and communities it serves in the debate over the future of work and must also be in a position to help them manage the transition to this future.


A new sense of purpose?

To avoid the dystopian outcome, CSOs must speak out about the dangers of increased inequality and the impact that automation could have on our sense of purpose. They can also play an active part in helping people to manage the transition to the future of work.

This might be by helping them to develop the skills they need to adapt to new roles and to work alongside automated systems. Or, if there is little prospect of those people finding new jobs, CSOs might help by offering them a different sense of purpose through engaging them in social action or collaborative projects. In this way, the shift to a post-work economy may see an explosion of voluntarism, or of new forms of social purpose activity that do not fit neatly within the traditional boundaries of “for-profit” and “not-for-profit”.

Inequality

A key question that civil society should be asking about any new technology is “will it reduce inequality or make it worse?”

AI has the potential to improve lives around the world, and to empower people and communities in ways that have never been possible before. But it also brings the risk of entrenching existing power structures and creating far worse inequality than we face today. This could include:

Inequality of access

If a technology is sufficiently powerful and widespread, then our ability to make use of it will become a fundamental dividing line. The internet, for instance, has become so important to our lives that the UN declared access to it a basic human right in 2016. The likelihood is that we will become similarly dependent on AI in the future; so there could be a stark inequality between those who are able to access the technology and those who are not.

Inequality of ownership

If access to AI is an important factor, then the question of who owns and operates the technology has even more profound implications. As AI becomes increasingly ubiquitous, and as more and more responsibility for decision-making is outsourced to algorithmic processes, enormous power will become concentrated in the hands of those who design and own the algorithms. The advantage gained by these individuals and organisations could introduce a form of inequality that is almost impossible to overcome.

Employment inequality

We highlighted above the challenges that AI might present for the future of employment, and the possibility that in the foreseeable future we will see a transition to a new economy in which there are fewer jobs and those that remain are radically different to the roles we have currently. This will obviously introduce inequality between those who are able to adapt and find employment in this new market, and those who are not.

Geographic inequality

The impact of AI may not be felt equally across geographies, either in terms of the benefits it brings or the ability to respond to the challenges it poses. This might introduce a new dimension of inequality between nations that have the infrastructure, talent pool and policy environment to adapt quickly and those that do not. Some argue that this will result in poorer or developing nations being hit harder, while others argue that developed nations may in fact suffer more, as they are able to invest more in automation and will thus see a greater short-term impact on their workforce. There may also be inequality within nations: perhaps between urban and rural areas, or between different regions.

Inequality is one of the defining challenges of our time. The central focus up to now has been on wealth inequality, although there are many other dimensions of inequality too (e.g. cultural inequality, inequality of opportunity). Civil society already plays a key role in highlighting and combatting these challenges. If technologies like AI have the potential to introduce new forms of inequality, this is something that CSOs must be aware of so that they can highlight the dangers and prepare themselves to deal with the consequences.

Impact of interaction with AI on human behaviour

In addition to the kinds of direct impact AI may have on our lives outlined above, there are interesting questions about the longer-term, indirect impact it may have on human interactions and development. For example:

Gender attitudes

It has been noted by some that where chatbots and conversational AI assistants have been developed with human characteristics, they are very often female. Given that the relationship between AI assistants and their human users is one of servitude, there is a real danger that this relationship carries wider connotations for gender dynamics.

De-sensitisation through distance

Some have raised concerns that if we are able to automate or outsource responsibility for decision-making to algorithmic processes, this will result in de-sensitisation and a lack of moral responsibility for our actions.

Child development

As conversational AI interfaces become ubiquitous in our homes, children are increasingly going to interact with them during formative stages of speech and social development.

Could the ways in which we converse with these AI interfaces fundamentally alter how we learn to speak and behave? For instance, will interacting from a very young age with a voice-operated assistant that is required to do our bidding lead children to expect the same in human interactions, or leave them able to speak only in commands?

And how we interact with AI systems could affect more than just the ways in which we speak. There is already evidence that prolonged interaction with robots can lead children to develop anti-social or abusive tendencies.

In response to the growing problem of “robot abuse”, researchers have developed a toy tortoise called Shelley to teach children not to harm robots.