The chief technology officer of recently formed idoba has urged industry peers to adopt high ethical standards in developing and deploying artificial intelligence products and services, ahead of laws that will likely reflect long-standing social caution about, and even fear of, AI.
“AI represents a watershed moment for humanity,” said Matt Schneider, who was CEO of Optika Solutions before the company was acquired by ASX-listed Perenti Global as a cornerstone of its new idoba business.
Speaking at the first Machine Learning in Mining conference in Perth, Western Australia, Schneider said: “We stand today at a juncture. What we choose to do with the technology and how we are going to use it is ultimately your choice. My challenge to everyone in this room is to think about the work you are doing and think about the ethics of what you are doing, and how you are going to apply it.”
AI and data science are core capabilities of idoba, formed in July this year when multinational mining and drilling contractor Perenti combined three acquired tech firms. Schneider said the name was created with AI assistance, drawing on Greek and Japanese words.
“[The business] came into existence because Perenti realised that the future is going to be digital and they really wanted to change how they were going to approach it,” he said.
“We are looking at other companies as well, so stay tuned because the ecosystem we will be creating will be quite significant and we believe it will represent a global shift in how miners will be mining in the future.
“Ultimately what we are interested in is, how do you use data, how do you tell the story, but more importantly how do you combine that with people and make sure you are doing it ethically.
“For us it’s important to understand when you’re talking about ethics, it’s the ethics of what you’re doing and what your colleagues are doing, but also how your algorithms are going to be used in the real world. Remembering that people are at risk sometimes with some of the systems that we build. As we get more and more sophisticated and as we start bringing more people off site and become more and more automated, the risk profile will change.
“And we are at the foundation of that change.”
Schneider said questions about reliability, accuracy and bias in AI inputs and outputs remained paramount.
“These sorts of questions will become more and more important as we move forward,” he said.
Schneider said the company’s commercial stance on a “wonderful” machine learning program, designed to speed up interrogation of Western Australian mine safety and inspection regulations and of managers’ responsibilities and liabilities, had changed as testing advanced.
“It totally changed the commercial conversation around compliance. It also means we are able to mine more safely and identify any issues,” he said.
“We’re looking at linking it to live data and seeing if we can automate the mine site to the regulation. This will be the next step.
“[But the testing to date meant] we changed it to more of a decision-support tool and are now working with a few mining guys as we improve it. So we changed our stance from, it’s a tool that’s going to do this job to, this is a decision-support tool – be careful.
“We changed how it can be used, and also the level of risk that will be applied to it.
“It was the right thing to do.”
Schneider said the multiplying guidelines and frameworks for AI development and application were not yet laws.
“They will be in time,” he said.
“So the onus is on all of us to be ethical in what we do and what we claim [our products] can do. That’s really important.”