Could standards help address AI trust, algorithmic bias?
- 25 June, 2019 06:30
Artificial intelligence (AI) technologies could help transform sectors including human services, financial services, agriculture, logistics and resources, but there are concerns that need to be addressed including trust, algorithmic bias, market dominance, privacy and security, according to Standards Australia.
The organisation has launched a wide-ranging consultation, seeking input from stakeholders across industry, government, civil society and academia on standards that could support the adoption of AI in Australia.
“Australians are fast adopters of new technologies, particularly in the home environment. Google Home, Alexa and Siri, for example, have become part of many people’s everyday lives,” said acting CEO of Standards Australia, Adrian O’Connell. “But the applications of AI are broader, ranging from the home, to the healthcare clinic and the factory floor, and present real opportunities for Australians in terms of our standard of living.”
“For this reason, standards in this space can help guide the rapid development of AI to meet our changing expectations as a community, in a way that brings industry, community and governments together,” O’Connell said.
The organisation has issued a discussion paper (PDF) and plans to hold a series of forums on the topic during June and July. Standards Australia intends to issue an AI roadmap in September.
The paper seeks feedback on whether it would be appropriate to develop technical standards, management systems standards (addressing issues such as assurance and safety) and/or governance standards for AI. Adoption of the body’s standards is not mandatory, although they are sometimes referred to in Australian legislation.
The paper notes that work on developing AI standards is still in its early stages. Standards Australia is participating in the work of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) on the issue.
In 2017 a joint ISO/IEC committee (JTC 1/SC 42) was formed to look at AI. Areas it is examining include use cases, assessing the robustness of neural networks, ethical and societal concerns, bias in AI systems, risk management, and concepts and terminology.
In late 2018, Standards Australia formed a mirror committee to JTC 1/SC 42 in order to provide an “Australian voice and vote,” enabling Australian input into the joint committee’s work, the discussion paper notes.
Locally, Standards Australia is not alone in its scrutiny of AI. The government has commissioned the CSIRO’s Data61 to develop an ethical framework for AI. A Data61 discussion paper outlines eight principles to guide the ethical construction of AI systems: generates net benefits; do no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explainability; contestability; and accountability. (Telco group Communications Alliance has suggested expanding that list of principles.)
The human rights implications of AI are also being examined as part of an Australian project.
Earlier this year, a group of prominent business leaders and academics called for an “urgent dialogue” on the issues raised by AI, including the need for a “governing body responsible for setting standards and guidelines for the ethical use of AI that would inform self-regulation and guide government regulation”.