Minimizing machine learning risks

How to develop machine learning responsibly

Michal Gabrielczyk

Machine learning inevitably adds black boxes to automated systems and there is clearly an ethical debate about the acceptability of appropriating ML for a number of uses. The risks can be mitigated with five straightforward principles.

ML is inherently risky

Controversy arose not long ago when it became clear that Google had provided its TensorFlow APIs to the US Department of Defense’s Project Maven. Google’s assistance was needed to flag images from hours of drone footage for analyst review.

There is clearly an ethical debate about the acceptability of appropriating ML for military uses. More broadly, there is also an important debate about the unintended consequences of ML that has not been weaponized but simply behaves unexpectedly, which results in damage or loss.

Adding ML to a system inevitably introduces a black box and that generates risks. ML is most usefully applied to problems beyond the scope of human understanding where it can identify patterns that we never could. The tradeoff is that those patterns cannot be easily explained.

In current applications that are not fully autonomous and generally keep humans in the loop, the risks might be limited. Your Amazon Echo might just accidentally order cat food in response to a TV advert. However, as ML is deployed more widely and in more critical applications, and as those autonomous systems (or AIs) become faster and more efficient, the impact of errors also scales – structural discrimination in training data can be amplified into life-changing impacts entirely unintentionally.
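To make the training-data point concrete, here is a minimal sketch (with invented column names and numbers, not anything drawn from a real system) of a check a developer might run before training: comparing positive-label rates across groups to see whether the data is already skewed in a way a model could amplify.

```python
# Hypothetical example: measure whether training labels are already skewed
# across groups, before any model is trained on them.
import pandas as pd

def label_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-label rate per group; large gaps hint at structural bias."""
    return df.groupby(group_col)[label_col].mean()

# Made-up data: loan approvals that happen to be skewed against group "B".
train = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1],
})
rates = label_rate_by_group(train, "group", "approved")
print(rates)                        # A: 0.67, B: 0.33
print(rates.max() - rates.min())    # a crude disparity figure to monitor over time
```

A gap like this does not prove the data is unfair, but it is exactly the kind of signal that should be surfaced and reviewed rather than passed silently into training.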

Setting some rules

Since Asimov wrote his Three Laws of Robotics in 1942, philosophers have debated how to ensure that autonomous systems are safe from unintended consequences. As the capabilities of AI have grown, driven primarily by recent advances in ML, academics and industry leaders have stepped up their collaboration in this area, notably at the 2017 Asilomar conference on Beneficial AI (where attendees produced 23 principles to ensure AI is beneficial) and through the work of the Future of Life Institute and OpenAI.

As AI use cases and risks have become more clearly understood, the conversation has entered the political sphere. The Japanese government was an early proponent of harmonized rules for AI systems, proposing a set of 8 principles to members of the G7 in April 2016.

In December 2016, the White House published a report summarizing its work on “Artificial Intelligence, Automation, and the Economy”, which followed an earlier report titled “Preparing for the Future of Artificial Intelligence”. Together, these reports highlighted opportunities for the USA as well as areas that need further work.

In February 2017, the European Parliament Legal Affairs Committee made recommendations about EU-wide liability rules for AI and robotics. MEPs also asked the European Commission to consider establishing a European agency for robotics and AI, which would provide technical, ethical and regulatory expertise to public bodies.

The UK’s House of Commons conducted a Select Committee investigation into robotics and AI and concluded that it was too soon to set a legal or regulatory framework. However, it did highlight the following priorities that would require public dialogue and eventually standards or regulation: verification and validation; decision-making transparency; minimizing bias; privacy and consent; and accountability and liability. This is now being followed by a further House of Lords Select Committee investigation, which will report in Spring 2018.

The domain of autonomous vehicles, being somewhat more tangible than many other applications for AI, seems to have seen the most progress on developing rules. For example, the Singaporean, US and German governments have outlined how the regulatory framework for autonomous vehicles will operate. These are much more concrete than the general principles being talked about for other applications of AI.

Industry is filling the gap

In response to a perceived gap in the response from legislators, many businesses are putting in place their own standards to deal with legal and ethical concerns. At an individual business level, Google DeepMind has its own ethics board and Independent Reviewers. At an industry level, the Partnership on AI between Amazon, Apple, Google DeepMind, Facebook, IBM, and Microsoft was formed in early 2017 to study and share best practice. It has since been joined by academic institutions, further commercial partners such as eBay, Salesforce, Intel, McKinsey, SAP and Sony, and charities like UNICEF.

Standards are also being developed. The Institute of Electrical and Electronics Engineers (IEEE) has rolled out a standards project (“P7000 — Model Process for Addressing Ethical Concerns During System Design”) to guide how AI agents handle data and ultimately to ensure that AI will act ethically.

As long as these bottom-up, industry-led efforts prevent serious accidents and problems from happening, policymakers are unlikely to place much priority on setting laws and regulations. That, in turn, could benefit developers by preventing innovation from being stifled by potentially heavy-handed rules. On the other hand, this might just store up a knee-jerk reaction for later – accidents are perhaps inevitable and the goals of businesses and governments are not necessarily completely aligned.

Five principles for responsible AI

As the most significant approach in modern AI, ML development needs to abide by principles that mitigate its risks. It is not clear who will ultimately impose rules, if any are imposed at all. Nonetheless, some consensus seems to have emerged that the following principles, identified by the various groups above, are the important ones to capture in law and in working practices:

  • Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behavior. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
  • Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behavior is what it is; a brief sketch of one approach follows this list.
  • Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed.
  • Transparency: It needs to be possible to test, review (publicly or privately), criticize and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be made publicly available and explained.
  • Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviors from becoming embedded.
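As a brief illustration of the explainability principle, one widely used approach is to report which input features most influence a model’s predictions. The sketch below uses scikit-learn’s permutation importance on synthetic data; the choice of tooling is an assumption made for illustration, not something prescribed by any of the bodies mentioned in this article.

```python
# Permutation importance: how much does shuffling each feature hurt the model?
# Larger scores mean the feature drives predictions more strongly, which gives
# a simple, reviewable summary of the model's behavior.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

Feature importances are only a starting point for a lay explanation, but they give a reviewer or an affected person something concrete to question and challenge.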

Together, these principles, whether enshrined in standards, rules, or regulations, would give a framework for ML to flourish and continue to contribute to exciting applications whilst minimizing risks to society from unintended consequences. Putting them into practice would start with establishing a clear scope of work and a responsible person for each ML project. Developers will need to evaluate architectures that enable explainability to the maximum extent possible, and develop processes to filter out inaccurate and unreliable inputs from training and validation sets. This should be underpinned by audit procedures that can be understood and trusted.
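As a small illustration of that filtering step, the sketch below applies hypothetical validity rules and holds suspect records back for audit rather than silently discarding them; the field names and thresholds are invented for the example.

```python
# Hypothetical input filter: records failing basic validity checks are held
# back for review instead of flowing into training or validation sets.
from typing import Iterable

def is_valid(record: dict) -> bool:
    """Invented validity rules; a real project would define its own."""
    return (
        record.get("age") is not None and 0 < record["age"] < 120
        and record.get("income") is not None and record["income"] >= 0
    )

def filter_records(records: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into accepted and rejected so the rejects can be audited."""
    accepted, rejected = [], []
    for r in records:
        (accepted if is_valid(r) else rejected).append(r)
    return accepted, rejected

raw = [
    {"age": 34, "income": 52000},
    {"age": -5, "income": 40000},   # invalid age
    {"age": 70, "income": None},    # missing income
]
train_ready, for_review = filter_records(raw)
print(len(train_ready), "accepted;", len(for_review), "held for audit")
```

Keeping the rejected records, rather than dropping them, supports the audit trail that the responsibility and transparency principles call for.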

This is not an exhaustive list and continued debate is required to understand how data can be used fairly and how much explainability is required. Neither will all risks be eliminated but putting the above principles into practice will help minimize them.

Author

Michal Gabrielczyk

Michal Gabrielczyk is a Senior Technology Strategy Consultant at Cambridge Consultants.

