
Report warns government over lack of AI and robotics strategy

Neil Merrett Published 12 October 2016

Science committee says failure to prepare for automated technologies, together with the delayed publication of the digital strategy, risks the UK missing out on potential benefits to public services, jobs and efficiency

 

Parliament’s Science and Technology Committee has called for the formation of a commission to examine the wider ethical, legal and technical impacts of artificial intelligence (AI) technologies, which are viewed as having significant potential in fields such as healthcare and public service provision.

In its findings reviewing current government thinking on AI and robotics, the committee warned that UK authorities had not set out a strategy for putting in place the skills training and legal frameworks needed to govern forthcoming advances in the field.

Following the EU referendum earlier this year, in which voters favoured the UK’s exit from the bloc, former Cabinet Office minister Matt Hancock pointed to AI and data science as a means of underpinning more accurate analyses across Whitehall.

Artificial intelligence was also identified last year as one of several key focus areas for the government’s third technology review, which will examine the advances most likely to reshape UK productivity and the delivery of services to the public.

The committee’s latest report said it was not yet possible to predict how a fourth industrial revolution, potentially driven by automated processes such as AI, might play out in terms of new services, jobs and improved efficiency.

At the same time, there was a possibility that key occupations could be lost as a result of AI-driven innovation.

However, the report found what it described as a lack of leadership by government in ensuring that the UK is prepared to play a key role in driving AI and robotics technologies, particularly with regard to “socially beneficial systems”.

“While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now,” said the report.

Conservative MP Dr Tania Mathias, who serves as interim chair of the committee, said that with technology leaders such as Google and Amazon having formed their own AI partnership to consider potential risks and benefits, the government must take on a similar responsibility.

Dr Mathias said one such commitment should be the establishment of a ‘Commission on Artificial Intelligence’ at the Alan Turing Institute. This would bring together technical, legal and wider expertise to set out principles for governing new technologies in this area.

She argued this would help foster public debate on the commitments and action needed. The report noted that this could include potential regulation to limit the progression of the technology, as well as close collaboration with the ‘Council of Data Ethics’ currently being set up by the government.

"Concerns about machines 'taking jobs' and eliminating the need for human labour have persisted for centuries. Nevertheless it is conceivable that we will see AI technology creating new jobs over the coming decades while at the same time displacing others. Since we cannot yet foresee exactly how these changes will play out, we must respond with a readiness to re-skill and up-skill,” said Dr Mathias.

“This requires a commitment by the government to ensure that our education and training systems are flexible, so that they can adapt as opportunities and demands on the workforce change. It is disappointing that the government has still not published its Digital Strategy and set out its plans for equipping the future workforce with the digital skills we will need.”

Among the other core conclusions of the committee’s report were ethical considerations around verification, validation and transparency in decision-making processes, as well as privacy and safety.

The findings argued that such factors would require ongoing monitoring, with the UK seen as well placed to provide “global intellectual leadership” in these developments.

Related articles:

Matt Hancock outlines policy design challenges for “Brexit Britain”

Parliament committee sets out delayed digital strategy concerns

Third government technology review expected this year

 






