Nesta publishes new code of conduct for public sector use of AI

Innovation foundation Nesta has published a code of conduct for the use of artificial intelligence (AI) by the public sector.

The code of conduct, ‘Code of Standards for Public Sector Algorithmic Decision Making’, contains 10 core principles, and Nesta hopes it will guide and regulate the public sector’s use of AI and algorithmic decision making.

Nesta said that a considerable amount of work has already been done to encourage and require good practice in the use of data and analytics techniques, and that this work will need to be applied to the new code of conduct.

The Director of Government Innovation at Nesta, Eddie Copeland, cited the example of the UK government’s Data Science Ethical Framework, which outlines six principles for responsible data science initiatives.

Copeland said data protection laws, such as the EU’s General Data Protection Regulation (GDPR), mandate certain practices around the use of personal data. He said GDPR will create a ‘right to explanation’, under which a user can ask for an explanation of an algorithmic decision made about them.

The ten principles in Nesta’s code of conduct range from transparency requirements around how algorithms are used and trained, to a risk scale against which each algorithm should be categorised.

The first principle states that every algorithm used by a public sector organisation must be accompanied by a description of its function, objectives and intended impact, made available to those who use it. Secondly, public sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases.

Nesta’s code of conduct also says that algorithms should be correctly categorised on an Algorithmic Risk Scale of 1-5, with five representing the highest impact on the individual and one the lowest. The fourth principle states that all the inputs required for an algorithm to make a decision must be published.

At a personal level, the fifth principle states that citizens must be informed when their treatment has been informed entirely or partly by an algorithm, and the sixth that every algorithm should have an identical sandbox version in which auditors can test the impact of different input conditions.

The remaining principles state that, when using third parties to create or run algorithms on their behalf, public sector organisations should only procure from organisations able to meet Principles 1-6. Additionally, a named member of staff should be held responsible for any actions taken as a result of an algorithmic decision.

The final two principles state that organisations wishing to adopt algorithmic decision making in high-risk areas should sign up to a dedicated insurance scheme, which would compensate individuals negatively impacted by the technology. Finally, organisations should commit to evaluating the impact of the algorithms they use in decision making, and to publishing the results.

Through the project, Nesta aims to achieve a better code of conduct for operating the technology across the public sector, keeping individuals safe whilst still allowing organisations to use the technology to operate more efficiently.