Public Services > Central Government

The Government needs to act now to make sure AI is responsibly developed and used

Published 17 October 2017

Tina Hallett, Government and Public Services leader at PwC, says AI can be a force for good — but careful thought is needed to put the needs of public services users first.


Concerns about the impact of automation on jobs were at the forefront of debate at this autumn’s party conferences. While our own analysis highlights jobs in areas like public administration and defence as among those facing the highest potential for automation, in practice this may not happen for a variety of reasons. Indeed, the new technologies driving Artificial Intelligence (AI) and robotics will create new digital jobs and make other jobs much more digitally oriented.

However, as well as the potential to change the nature of jobs, a new area of debate has opened up: how AI is being used to make decisions, and how to challenge the outcomes when it is difficult to see inside the ‘black box’. As a result, the development and use of AI is posing new challenges for government, both in terms of policy and regulation and in the delivery of public services that use AI to augment, assist or, in some cases, replace human decision making.

How do you respond to a benefit claimant who challenges you to explain how a decision in their case was reached, and to show that it was unbiased and relevant to their needs? Or to a business which disagrees with a tax decision? Who has legal liability when the software used in the decision making is made by one company, relying on data from a range of other companies?

As AI develops, these questions will multiply and will require government at national and local levels to consider whether and how to regulate, for example:

  • the AI algorithms themselves in specific use cases;
  • the processes, people and engineers used to build AI driven systems; and
  • the final product that contains the AI (e.g. medical devices).

Qualifications, quality standards and laws can be developed and enforced by a range of regulatory bodies to meet some of these concerns, alongside appropriate incentives for adopting new approaches. In most safety-critical industries a regulatory framework already exists, so there is potential to augment or adapt these for AI. Indeed, more extensive practitioner regulation may be needed, such as a version of the ‘Hippocratic Oath’ for those developing and implementing AI.

In so many ways, AI can be a force for good. But this will need careful thought to put the needs of public services users first. AI and robotics should be seen as making public services like healthcare more efficient and affordable, and education more personalised; otherwise there is a risk that these technologies become the province of the well off. Trust and transparency are needed through the responsible implementation of technology to help society realise the huge potential gains that AI has to offer.

This is not just a question to be resolved in Westminster but in local communities. By working collaboratively with entrepreneurs, innovators and SMEs using modern technology to deliver innovative products and services that users really want, cities and regions can make the most of new technology and ‘Gov.Tech’ solutions to transform public services, deliver better for less and improve outcomes.

But this will require deeper, more widespread digital skills, and there is a risk that the economic benefits of AI are skewed towards those places with the skills to adapt to an increasingly digital economy. This puts a premium on education, both before entering the workplace and when the need to reskill arises. It is further recognised in an independent AI Review commissioned by government, which calls for new skills initiatives, such as an industry-funded Masters programme and conversion courses to grow AI expertise, as well as measures to increase the uptake of AI by building confidence that the “use of data for AI is safe, secure and fair.”

There is a range of potential policy responses, but two areas appear particularly important. One is working with employers and education providers locally to help guide investment in the most effective types of education and vocational training and to embed new digital skills, and responsible practices, in the workforce of the future. The other is for central and local government bodies, as part of the forthcoming Industrial Strategy, to develop a framework of support for digital sectors which helps spread the associated job creation opportunities across the UK.

Whatever choices are made, there is much work to be done to embed responsibility into the AI process and ensure that an ‘algorithmic’ government is one which the public can trust to deliver the public services it needs.

Tina Hallett is Government and Public Services leader at PwC
