Challenge puts spotlight on bias impacts

Top image: Scott Dunham addresses the AusIMM 2023 Mineral Resource Estimation Conference
‘It is clear that we have a big problem with the person-to-person variation’

A small team from junior copper company Hot Chili has been judged the clear-cut winner of the inaugural Parker Challenge, and of Rio Tinto’s A$55,000 cheque. Less clear are answers to important questions about the need for greater impartiality and consistency in the industry’s all-important resource estimation and classification processes.

Results of the challenge, named after former Rio Tinto resource modelling and geostatistics doyen Dr Harry Parker, were announced at the inaugural AusIMM Mineral Resource Estimation Conference in Perth, Western Australia. An aim of the competition was to probe human bias, or “person-to-person variance”, in the assessment and manipulation of fundamental data – in this case exploration data from Rio Tinto’s Hugo Dummett South copper-gold exploration project near the major Oyu Tolgoi mine in Mongolia.

Guidelines and methods adopted to characterise and ultimately measure the value of resources that underpin the worth of public exploration and mining companies – for investors and other stakeholders – are evergreen areas of debate within the industry.

Codes, regulations and approaches continue to evolve. The emerging role of artificial intelligence has brought the discussion to something of a tipping point.

The availability and calibre of the “competent persons” needed as the industry scrambles to identify and build certified stocks of so-called critical minerals, and as ESG factors increasingly spill into resource calculations, have become bigger areas of concern.

“Resource estimation as we’ve heard over the last two days is a complex process and there are lots of steps all through it, and lots of factors [influencing outcomes],” Scott Dunham, SD2 principal consultant and one of about a dozen industry experts who judged the competition entries, said at the conference.

“Where does our judgment matter and where doesn’t it matter?

“This issue is really something that we need to come to grips with.

“I think [Harry Parker] would be fascinated by what we found but I think he’d also be a bit disturbed.”

Hot Chili resource development manager Kirsty Sheerin and three of her team spent about three weeks putting together their response to the Parker Challenge, launched in January this year.

They were among nine finalists from the 29 submissions that emerged from circa-300 downloads of the original data provided by Rio Tinto. Entries came from mining company and consulting teams, academia and a few from left field (“unconventional estimates”).

Organisers were anticipating a higher volume of final submissions but were nonetheless kept busy by those that did come through. The winner was a stand-out, according to Dunham.

But given all participants had exactly the same data and guidelines to work with, he suggested the level of variability on display across the range of approaches taken, and particularly in the end results, was more than a little surprising and even “disturbing”.

“When estimation variation and classification variation combine the impact can be large,” he says.

“Extremely large.

“This should not be news to anyone. Yet for some reason we have this belief that calling something a measured resource or an indicated resource or an inferred resource absolves us from considering the person-to-person variation.

“I think it is clear that we have a big problem with the person-to-person variation in measured and indicated and inferred classification.

“We also have person-to-person variation in the estimation process itself.

“Starting with estimation domain interpretation and going down through all the little decisions we make around parameters.

“The loose standards around things like plus-or-minus-15% for three months’ production are not going to be appropriate when it comes to the very real differences we are seeing between people.

“Those metrics are looking at something quite different and it’s only the very tip of the iceberg.”
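Dunham’s point about estimation and classification differences compounding is easy to illustrate with a back-of-the-envelope comparison. The Python sketch below uses entirely invented block tonnages, grades and classifications – none of the figures come from the challenge – to show how two practitioners working from the same data can report materially different contained metal once their classification calls are applied.

```python
# Hypothetical comparison of two practitioners' models of the same blocks.
# Tonnages, grades and classifications are invented purely to show how
# estimation and classification differences compound; nothing here comes
# from the Parker Challenge dataset.
blocks = [
    # practitioner -> (tonnes, grade % Cu, classification)
    {"A": (2.0e6, 0.95, "Measured"),  "B": (1.8e6, 0.85, "Indicated")},
    {"A": (1.5e6, 0.70, "Indicated"), "B": (1.6e6, 0.75, "Indicated")},
    {"A": (1.0e6, 0.55, "Indicated"), "B": (0.7e6, 0.45, "Inferred")},
]

def contained_metal(blocks, practitioner, classes):
    """Tonnes of contained copper in blocks the practitioner put in `classes`."""
    total = 0.0
    for block in blocks:
        tonnes, grade, category = block[practitioner]
        if category in classes:
            total += tonnes * grade / 100.0
    return total

measured_indicated = {"Measured", "Indicated"}
for p in ("A", "B"):
    t_cu = contained_metal(blocks, p, measured_indicated)
    print(f"Practitioner {p}: {t_cu:,.0f} t contained Cu in Measured + Indicated")
```

In this made-up case the two practitioners end up a little over 20% apart in measured-plus-indicated contained copper, before any mining or economic assumptions are layered on top.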

(Left to right) Stephen Durkin of AusIMM with Hot Chili’s Kirsty Sheerin and Chris McKie, and Graham Crook and Munkhsukh Sukhbaatar of Rio Tinto

Competition judges and other experienced industry people at the AusIMM conference saw the exercise as valuable, even if the final number of quality submissions was somewhat restricted by time constraints and possibly the nature of the data offered for modelling.

Dunham said it underscored the human “noise” problem and opened a door to more work and understanding in the area. He went as far as to say the current JORC code reset simply had to reflect the “noise we are seeing between practitioners”.

“Unless there is some provision in the revised code that deals with human judgement variation and noise all the other mooted improvements will not matter one iota,” he said.

Drivers of the challenge are keen to take away key lessons from the exercise and reprise it.

“Most importantly, [we] need a data set,” said Dunham.

“Data is hard to come by.

“So if you work for an organisation that has got a mine that’s been mined out and finished, and it’s been put on the back burner, but you’ve still got all the drilling data and maybe some wireframes or some geological descriptions, and some structure, talk to me.

“If it had grade control, that’s great because it would actually help us with the judging.

“As a committee I think we learned that we probably asked too much. We’re asking lots of questions. We should be trying to focus in on one or two questions.”

“Should we look at just classifications? Should we say, here’s the framework, go and do the geostats and the classification?

“Should it be a competition with a prize; what would incite you to [enter]?”

Respected independent geologist and consultant Dale Sims congratulated the winning team: “Four people, three weeks; 12 weeks [total] on this thing. You just can’t compete with that … So no-one is going to re-enter!

“Let’s think about it a different way. If we’re trying to upskill people as well as [understand] the problem, let’s turn it into a hackathon, run it over a weekend with diverse teams and don’t give it an open end. Produce a model over a weekend.

“And everyone runs on the same playing field.

“Perhaps Rio can splash a lot more [cash] and then you’re guaranteed a payment.

“That would really make it easier [for more practitioners] to compete I’m sure.”

Dale Sims

In the interests of noise reduction, and a level playing field, participants might also use the same or similar tools.

“One of the interesting findings for me about this challenge was that I could tell what software package people had done their work in by looking at the workflow [and] the way they did it,” Dunham said.

“Now that’s wrong because software is a tool and we should be using it as a tool as opposed to it telling us how we should be doing the work.”

Dunham said the application of machine learning technology was currently “reasonably limited”.

“But it actually reminds me a lot of the [past] adoption of kriging,” he said.

“Back in the 80s you were getting people transitioning from inverse distance weighting, the best thing in the world back then, to this mysterious black-box kriging stuff.

“There was a lot of the same debate and discussion going on then that’s going on now around AI.

“I think we’re at that transition point where you’re going to see more and more adoption of this stuff and it’s going to get better.

“The trouble [evident] in the challenge results is that they were done by people that have never looked at resource models in their life, or if they had they’d not worked as a competent person on a mine site.

“If you combine that together – if you take somebody who’s got deep resource estimation, or traditional resource estimation, expertise and stick them with somebody who’s got deep machine learning expertise, you’re going to get a great outcome.”
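For readers who did not work through the transition Dunham describes, the difference between inverse distance weighting and ordinary kriging is straightforward to show on toy data. The sketch below is illustrative only: the composite positions, grades and variogram parameters are assumptions, not values from the challenge dataset.

```python
import numpy as np

# Toy 1D drill-hole composites: position along a section (m) and grade (% Cu).
# All values are invented for illustration.
x = np.array([0.0, 25.0, 60.0, 110.0, 150.0])
z = np.array([0.8, 1.1, 0.6, 1.4, 0.9])
x0 = 80.0  # point (block centroid) to estimate

def idw(x, z, x0, power=2.0):
    """Inverse distance weighting estimate at x0."""
    d = np.abs(x - x0)
    if np.any(d == 0):
        return float(z[d == 0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * z) / np.sum(w))

def gamma(h, nugget=0.05, sill=0.25, rng=100.0):
    """Exponential semivariogram model (parameters are assumptions)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.0 - np.exp(-3.0 * h / rng))
    return np.where(h == 0.0, 0.0, g)

def ordinary_kriging(x, z, x0):
    """Ordinary kriging estimate at x0 using the semivariogram above."""
    n = len(x)
    # Left-hand side: sample-to-sample semivariogram values plus the
    # unbiasedness constraint row/column (Lagrange multiplier).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.abs(x[:, None] - x[None, :]))
    A[n, n] = 0.0
    # Right-hand side: sample-to-target semivariogram values.
    b = np.ones(n + 1)
    b[:n] = gamma(np.abs(x - x0))
    w = np.linalg.solve(A, b)  # weights w[:n], Lagrange multiplier w[n]
    return float(np.dot(w[:n], z))

print(f"IDW estimate at {x0:.0f} m:              {idw(x, z, x0):.2f} % Cu")
print(f"Ordinary kriging estimate at {x0:.0f} m: {ordinary_kriging(x, z, x0):.2f} % Cu")
```

The “black box” Dunham recalls is, in the end, just a linear system: kriging weights the samples according to a fitted variogram model rather than raw distance – and fitting that model is one of the many parameter decisions where person-to-person judgement enters.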
