Can standards help a CIO address AI/ML risks?
As more organizations develop and deploy artificial intelligence (AI) and machine learning (ML) applications, questions about the reliability of their results are multiplying. Several high-profile AI/ML failures have given the technology a bad name, and the related media coverage has made CIOs and senior management nervous.
Some concrete examples that have undermined society’s confidence in AI/ML applications include:
- Risk assessment tools in the criminal justice system that amplify racial discrimination
- Wrongful arrests driven by faulty facial recognition
- Insurmountable barriers to accessing public services
- Unrecognized or uncorrected gender and racial biases
- Self-driving cars involved in traffic accidents during testing
- The environmental cost of the giant server farms that power AI/ML applications
To avoid potentially thorny issues and reputation-damaging headlines, CIOs and senior management need a way to assess the design and performance of their AI/ML applications.
“Our members and other organizations have indicated that our standard has helped them embed responsible AI into their AI/ML applications,” said Keith Jansa, executive director of the CIO Strategy Council (CIOSC).
CIOSC Accreditation by the Standards Council of Canada
CIOSC is a not-for-profit corporation that provides a forum for members to transform, shape and influence the Canadian information and technology ecosystem, and is a standards development organization (SDO) accredited by the Standards Council of Canada (SCC).
“Our public and private sector members see value in our standards in part because of the robustness of our process,” said Jansa. “We provide a neutral forum for standards development work using a consensus-based process that brings together a range of stakeholders and is accredited by SCC.”
CIOSC's accreditation signifies conformance with Annex 3 of the World Trade Organization (WTO) Agreement on Technical Barriers to Trade (TBT), the Code of Good Practice for the Preparation, Adoption and Application of Standards by Standardizing Bodies. This gives end users confidence that the “Ethical Design and Use of Automated Decision Systems” standard was developed using best practices.
CIO Strategy Council Standard
To help organizations achieve a reasonable level of assurance that the risks associated with their AI/ML applications are being comprehensively managed, the CIOSC has developed the standard titled “Ethical Design and Use of Automated Decision Systems (CAN/CIOSC 101:2019).” The standard provides organizations with an auditable framework for protecting human values and integrating ethics into the design and operation of automated decision-making systems.
The value of a professionally developed standard is that adopting it is much faster and cheaper than creating an equivalent framework from scratch.
Values-Based Principles for Responsible AI
The CIOSC standard provides a framework and process to help organizations apply the values-based principles of responsible AI. The Organisation for Economic Co-operation and Development (OECD) sets out a widely cited example of such principles in its Recommendation of the Council on Artificial Intelligence. The principles are:
- Inclusive growth, sustainable development and well-being.
- Human-centered values and fairness.
- Transparency and explainability.
- Robustness, security and safety.
- Accountability.
Being grounded in the Principles of Responsible AI developed by the OECD lends the CIOSC standard credibility.
The CIOSC standard framework
The CIOSC standard framework focuses on managing the risks associated with AI/ML applications by encouraging designers and operators to answer a detailed list of questions on the following topics related to automated decision systems:
- Risk management framework.
- Ethics by design.
- Monitoring and maintenance.
- Appeals and escalations of decisions rendered by the system.
The many detailed questions, developed by a CIOSC technical committee with diverse representation, give end users assurance that the standard's coverage is comprehensive.
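In practice, an organization working through the standard's questions needs some way to track which have been answered for each topic. The sketch below is purely illustrative and not part of CAN/CIOSC 101:2019: the topic names come from the list above, while the `Topic` and `Checklist` structures and the question counts are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """One topic area from the framework, with its assessment questions."""
    name: str
    questions_total: int
    questions_answered: int = 0

    def coverage(self) -> float:
        # Fraction of this topic's questions answered so far.
        return self.questions_answered / self.questions_total

@dataclass
class Checklist:
    """Tracks progress across all topic areas of an assessment."""
    topics: list[Topic] = field(default_factory=list)

    def overall_coverage(self) -> float:
        total = sum(t.questions_total for t in self.topics)
        answered = sum(t.questions_answered for t in self.topics)
        return answered / total if total else 0.0

# Topic names from the framework above; question counts are invented.
checklist = Checklist([
    Topic("Risk management framework", 10),
    Topic("Ethics by design", 8),
    Topic("Monitoring and maintenance", 6),
    Topic("Appeals and escalations", 5),
])
checklist.topics[0].questions_answered = 10  # first topic fully answered
print(f"Overall coverage: {checklist.overall_coverage():.0%}")
```

A real assessment would attach evidence and reviewer sign-off to each question rather than a simple count, but even this minimal structure makes gaps in coverage visible at a glance.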