AI a key weapon in mining risk profile fight: BHP’s McCuaig


Richard Roberts

Editor in chief

Top image: BHP's Campbell McCuaig addresses the AusIMM 2023 Mineral Resource Estimation Conference

“I’m a fan,” BHP geoscience boss Cam McCuaig says of the artificial intelligence tools impacting the world’s biggest mining business. Of course, he attaches caveats, but adds: “If we use machine learning to ask the right questions, it’s super powerful. If we ask the wrong questions, we’ll fool ourselves.”

Addressing the 2023 AusIMM Mineral Resource Estimation Conference in Perth, Western Australia, McCuaig counted about 700 geoscientists in the company, “plus or minus a lot” given this month’s ingestion of OZ Minerals and its copper and nickel assets.

He didn’t tally the data architects and data scientists but it’s fair to say their numbers are growing fast as BHP continues its deep dive into the world of AI.

“We’re actually going through a fairly serious change in the risk profile of mining,” McCuaig said, highlighting the rising hurdles in discovery, project development, mining and mineral recovery.

“It requires that we actually get better at front-end loading knowledge of the subsurface into the decision-making process.

“We’re in a world where our ability to create datasets far exceeds our ability to fuse them together … to integrate and interpret them and get the knowledge out of them that we need to make a decision in the timeframes that are ever-accelerating in the business.

“That’s a real challenge. Artificial intelligence – machine learning and all of the subsets – absolutely has a role to play.

“We’ve got these fire hoses of data coming at us. How are we going to ingest that? How are we going to do real-time model updates? Clearly, machine learning has a role there.
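At its simplest, a real-time model update can be the kind of streaming revision sketched below. This is an illustrative toy, not a BHP workflow: a conjugate normal Bayesian model (all numbers assumed) that refines a block-grade estimate, and shrinks its uncertainty, as each new assay arrives.

```python
import math

# Illustrative toy: update a block-grade estimate as assays stream in,
# using a conjugate normal model (all values are assumed for the sketch).
prior_mean, prior_var = 1.2, 0.25   # prior grade belief: g/t and (g/t)^2
assay_var = 0.09                    # assumed per-sample measurement variance

mean, var = prior_mean, prior_var
for assay in [1.4, 0.9, 1.6, 1.1]:  # assays arriving one at a time
    k = var / (var + assay_var)     # gain: weight given to the new assay
    mean += k * (assay - mean)      # pull the estimate toward the data
    var *= (1 - k)                  # uncertainty shrinks with every sample
    print(f"assay {assay:.2f} g/t -> estimate {mean:.3f} +/- {math.sqrt(var):.3f}")
```

Each pass is a constant-time update, which is what makes this style of model revision compatible with the “fire hose” data rates McCuaig describes.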

“I am a receiver of the outputs of AI. I’m making decisions based on what people are delivering using machine learning and various techniques.

“So I’m coming at it from that perspective – as one who’s being asked, if not to officially sign off on, then at least in principle to endorse, what are billion-dollar decisions.

“If they’re based on ML, I want to be pretty sure that I’m comfortable, as you can imagine.”

Far from being afraid to embrace AI, the industry had been using “machine learning before it was cool”, the conference heard.

McCuaig said AI was “challenged” by the geospatial aspects of exploration, resource estimation and mining, and by the complexity and uniqueness of everything from deposit and ore types, to regional lithologies.

“We usually have sparse training points,” he said.

“There is variable resolution, or representativeness: when I have a point, a sample, how representative is that of the volume around it, and how much volume? That’s a continual problem.

“I still need a spatial model to join the dots and impute data between known points, even though those known points have their own error on them.
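That “join the dots” problem is, in geostatistical terms, interpolation from sparse and noisy samples. The sketch below is a minimal synthetic illustration – invented data and assumed parameters, not BHP code – of simple kriging with a nugget term standing in for the error on the known points.

```python
import numpy as np

# Minimal simple-kriging sketch: estimate grade between sparse, noisy
# samples along a 1-D transect. All locations, grades and parameters
# are invented for illustration.
x_obs = np.array([0.0, 120.0, 300.0, 410.0])            # sample locations (m)
y_obs = np.array([1.1, 1.8, 0.9, 1.4])                  # grades (g/t)
mean, sill, corr_range, nugget = 1.3, 0.2, 150.0, 0.02  # assumed model

def cov(a, b):
    # Exponential covariance: correlation decays with separation distance.
    return sill * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_range)

# The nugget term on the diagonal encodes the error on the known points.
K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))
x_new = np.linspace(0.0, 400.0, 5)                      # where we want estimates
k_star = cov(x_new, x_obs)
w = k_star @ np.linalg.inv(K)                           # kriging weights
est = mean + w @ (y_obs - mean)                         # interpolated grade
var = sill - np.sum(w * k_star, axis=1)                 # kriging variance
for x, e, v in zip(x_new, est, var):
    print(f"x={x:6.1f} m  grade {e:.2f} g/t  +/- {np.sqrt(max(v, 0.0)):.2f}")
```

The nugget keeps the model from treating noisy samples as exact – precisely the “error on the known points” McCuaig flags.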

“You do find, especially with these complex algorithms now, that you can have spurious correlations in your data that the machine learning sees as something that must be important because it’s happened in a number of realisations, and then all of a sudden it becomes the fact.

“You see that a lot with things like generative adversarial networks when they try to make images: they make things that can’t be true because they don’t know what the truth is.

“It’s found a spurious correlation.

“The other thing that happens is … we end up overfitting the data. We get a false sense of precision.

“It’s hard to have a learning loop. When you say to the people that did the ML – and we’ve done this – ‘okay, I want to know what dataset I’m leveraged to’, [the answer is] ‘well, we’re not certain, because of the way the workflow works’.

“So that’s something we have to focus on: how do we keep the learning until we get the level of validation to say we trust it?”

McCuaig said while exploration was an inductive process to a large degree, AI was predominantly deductive.

“It’s interpolating between points.

“Exploration is … a very complex multi-parameter space. Not only do you have multiple datasets and multiple geological processes and the uncertainties around those, [but] you are thinking around geological time.

“So it’s a complex space to be playing in.”

He said there was no simple way around this. Users had to know their input data and how representative it was, and always ask the machine the right question.

A BHP exploration project, covering an 800km-long area with circa-200 available datasets that produced 200,000 derivative feature sets, looked at data-driven targeting potential as well as other analytical insights. It drove home the point about how ML models could be interrogated, and how this could evolve.

“The issue here was that there were only 10 deposits in this area that we were even interested in,” McCuaig said.

“So we’ve got that many datasets and we’ve got 10 training points.

“Overfitting is a massive issue. And we knew that, so we did this deliberately to just see what we would learn.

“And what really came out were the insights, like where the data had common structure [for example]. That was really useful.

“Less so on the [probable location] of the next deposit.

“Machine learning is not made for big datasets; it’s made for big experience sets.”
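The arithmetic behind that warning is easy to reproduce. With 10 labelled points and a feature set at the scale quoted for the project, some purely random features will correlate almost perfectly with the target by chance alone. A toy demonstration, with no geology in it at all:

```python
import numpy as np

# Toy demonstration: with 10 training points and a huge feature set,
# pure noise produces near-perfect "predictors" by chance alone.
gen = np.random.default_rng(0)
n_deposits, n_features = 10, 200_000    # the scale quoted for the project
target = gen.standard_normal(n_deposits)                  # stand-in labels
features = gen.standard_normal((n_features, n_deposits))  # pure random noise

# Correlation of every feature with the target.
f = (features - features.mean(1, keepdims=True)) / features.std(1, keepdims=True)
t = (target - target.mean()) / target.std()
corr = f @ t / n_deposits

print(f"strongest chance correlation: {np.abs(corr).max():.3f}")  # ~0.95
print(f"features with |r| > 0.9: {(np.abs(corr) > 0.9).sum()}")   # dozens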

McCuaig referenced the “Super Mario algorithm”, where the algorithm learned through experience – moving and dying “a million times” in the virtual world – to play the popular Nintendo platform game better than any human.

“What we asked the computer to do is, we want you to play exploration better than any human, but you only get 10 tries,” he said.

“We didn’t have the experience set.

“[With a] resource it is the same type of issue.

“Every deposit, or even parts of deposits, looks very different.

“So issues that come up are the representativeness of my training data, stationarity in the domains [and] the spatial model. What spatial model am I using to interpolate and impute data between the dots?

“And even if I have a lot of data, I don’t have a big experience set.”

Despite the challenges, investment in AI across sectors is prodigious and change is rapid.

“Where is it going to help in the future?

“Things like visualisation,” McCuaig said.

“We can think comfortably in a few dimensions. The computer can think in however many dimensions you want. It can take a whole bunch of datasets, synthesise them, and allow you to see common features [and] structures. That’s really powerful.

“And where we can take that is in things like auto-domaining.

“We’ve already got auto-variography. The next step is another order of magnitude more complex: how do we use machine learning to help us with auto-domaining and finding stationarity?

“That would be really cool.”
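In its simplest data-driven form, auto-domaining reduces to unsupervised clustering of multi-element sample data into candidate stationary domains. The toy sketch below – synthetic assays and off-the-shelf k-means, emphatically not BHP’s method – shows the shape of the idea:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy auto-domaining sketch: cluster multi-element assays into candidate
# domains, then check each domain for a stable mean and spread. The three
# hidden "domains" and all assay values are synthetic.
gen = np.random.default_rng(1)
centres = [([1.2, 30.0, 2.0], 0.2),   # (Cu%, Fe%, S%) centre, spread
           ([0.3, 55.0, 0.5], 0.1),
           ([2.5, 15.0, 8.0], 0.5)]
X = np.vstack([c + s * gen.standard_normal((100, 3)) for c, s in centres])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    members = X[labels == k]
    print(f"domain {k}: n={len(members)}, "
          f"mean Cu/Fe/S = {members.mean(axis=0).round(2)}, "
          f"spread = {members.std(axis=0).round(2)}")
```

Real domaining also has to honour spatial contiguity and geological sense, which is why McCuaig rates it an order of magnitude harder than auto-variography.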

McCuaig agreed orebody (and waste) drilling densities generally remained inadequate across the industry for the type of “front-end loading” of data that could radically change decision-making across value chains.

“[But] we’re moving much more as an industry from just resource categorisation to resource characterisation,” he said.

“Our ability to get more out of each hole is there. We can get more chemistry, we can get geophysics better … We’re getting more data [and] different datasets.

“And where AI’s helping us is how to ingest that data and still see correlations.

“The challenge of imputing data between the points still remains, just as it does for the human.

“But I think the AI will help us in getting those correlations and if we can get them to help with things like the auto-domaining, and then the auto-variography which we’ve already got, then I think it will help us.”

He said AI could play a more important role in conditional simulation and evaluation of risk where, the conference heard, there was a mountain of work for the industry to do.

“When we do predictive models, as we were talking about before, they become deterministic and then they take on a life of their own, and then you’ve got somebody thinking that the orebody is actually a bunch of squares.

“What we really need to be able to do is to show quantitatively – honouring the data – how different it could be within our bounds of rules. And then, given that uncertainty, you can go to the end-user and say, are you happy with that?

“Because if you’re happy with that, we’re sweet. If you’re not, these are the areas of high risk; here’s how we think we can de-risk it. Your call.

“It’s just helping the business make decisions eyes wide-open, about the whole range of the ways things can play out – the upside and downside.

“Machine learning can help [with] synthesising a lot of datasets to help us find correlations to do the domaining, do the resource estimate, do the characterisation better … That will help us build better models.

“They’ll still be uncertain.

“Quantifying uncertainty and communicating uncertainty and allowing the rest of the value chain to be able to consume that uncertainty is actually the big challenge.”
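As a closing illustration of what consuming uncertainty can mean, the sketch below – reusing the same toy transect and assumed parameters as the kriging example above, so still synthetic – draws hundreds of conditional realisations that all honour the data, then reports percentile bounds rather than one deterministic number:

```python
import numpy as np

# Toy conditional simulation: draw many grade realisations that all honour
# the same sparse samples, then report percentiles instead of one number.
gen = np.random.default_rng(2)
x_obs = np.array([0.0, 120.0, 300.0, 410.0])
y_obs = np.array([1.1, 1.8, 0.9, 1.4])
x_grid = np.linspace(0.0, 400.0, 81)
mean, sill, corr_range, nugget = 1.3, 0.2, 150.0, 0.02  # assumed parameters

def cov(a, b):
    # Exponential covariance model, as in the kriging sketch above.
    return sill * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_range)

K_inv = np.linalg.inv(cov(x_obs, x_obs) + nugget * np.eye(len(x_obs)))
k_s = cov(x_grid, x_obs)
post_mean = mean + k_s @ K_inv @ (y_obs - mean)
post_cov = cov(x_grid, x_grid) - k_s @ K_inv @ k_s.T

# 500 equally probable grade profiles, each consistent with the samples.
sims = gen.multivariate_normal(post_mean,
                               post_cov + 1e-9 * np.eye(len(x_grid)),
                               size=500)
avg_grade = sims.mean(axis=1)   # average grade per realisation
p10, p50, p90 = np.percentile(avg_grade, [10, 50, 90])
print(f"transect average grade: P10={p10:.2f}  P50={p50:.2f}  P90={p90:.2f} g/t")
```

Handing the end-user a P10/P50/P90 spread rather than a single model is the eyes-wide-open framing McCuaig argues for.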

 
