New report reveals low public trust in the growing technology as government pledges stricter regulation for ‘high risk’ products
Tech companies could be asked to watermark or label content generated by artificial intelligence tools such as ChatGPT as the federal government grapples with “high risk” AI products evolving faster than legislation.
The industry and science minister, Ed Husic, will on Wednesday release the government’s response to a consultation process on safe and responsible AI in Australia, which cites McKinsey research suggesting that adopting AI and automation could increase Australia’s GDP by up to $600bn a year.
However, the response also notes public concern about the technology, and Husic said that while the government wanted to see “low risk” uses of AI continue to flourish, some applications – including self-driving cars and programs that assess job applications – needed new and stricter regulation.
“Australians understand the value of artificial intelligence but they want to see the risks identified and tackled,” Husic said. “We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”
The paper notes surveys showing only a third of Australians believe there are adequate “guardrails” for the design and development of AI, saying: “While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly.”
The interim response pledges that the government will immediately set up an expert advisory group on the development of AI policy, including further guardrails; develop a voluntary “AI safety standard” as a single source of guidance for businesses wanting to integrate AI into their systems; and begin consulting with industry on new transparency measures.
Also on the table for further consideration are mandatory safeguards, including pre-deployment risk and harm-prevention testing of new products, and accountability measures such as training standards for software developers.
The interim response paper refers several areas of reform to further consultation and review. One suggestion is to boost transparency around AI, such as through public reporting on the data an AI model is trained on.
The government will also “commence work with industry” on the merits of another suggestion: a voluntary code on watermarking or labelling AI-generated content.
The changes would come in addition to existing work from the federal government, such as communications minister Michelle Rowland’s pledge to change online safety laws and require tech companies to stamp out AI-created harmful material such as deepfake intimate images and hate speech. Reviews are also under way on the use of generative AI in schools and through the AI in Government taskforce.
The paper outlines specific concerns raised around “high risk” AI systems, such as those used “to predict a person’s likelihood of recidivism, suitability for a job, or in enabling a self-driving vehicle”. This contrasts with “low risk” uses of AI, such as programs that filter emails or support business operations.
The paper also highlights concerns about “frontier” AI systems. No specific programs were cited but the paper says such systems “can generate new content quickly and easily” and “be embedded in a wide variety of settings”.
“It was also highlighted that AI services are being developed and deployed at a speed and scale that could outpace the capacity of legislative frameworks, many of which have been designed to be technology-neutral,” it stated.
The submissions raised concerns about at least 10 areas of law that could need reform to address AI issues, including: whether the use of AI to generate deepfakes could amount to misleading or deceptive conduct under consumer law; whether AI models used in healthcare could create risks under privacy law; and how existing content can be used to train generative AI models, including whether that use infringes copyright and whether there should be remedies for it.
Generative AI models that create new content, such as the commercially popular ChatGPT text bot or the Dall-E image generator, are trained on existing content. This has prompted widespread concern among the sources of that content – from authors and artists to news outlets – about how their original work has been repurposed and whether they should be entitled to payment.
The New York Times last month sued OpenAI and Microsoft over the use of its content to train generative artificial intelligence and large language model systems.
“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Husic said. “We also want government to move faster with developments in tech, so we’ve appointed an advisory body which will bring the best brains on AI together to map the way forward for future responses.”