US mining software company Eclipse Mining Technologies says ontology and modelling features in its updated flagship enterprise data platform open it up to fast-evolving AI and other technologies that can help miners unlock more value from data.
The Tucson-based firm, founded by principals of a leading general mine planning software developer acquired by Sweden’s Hexagon in 2014, says the new features mark a “significant expansion” of the capabilities of its SourceOne product.
Susan Wick and Fred Banfield built Mintec into an international brand on the back of traditional mining desktop software before it became part of Hexagon. Eclipse was established in 2017 to shake things up at the enterprise level. The pair wanted to create a better product for smoothing integration of the myriad disparate data sources and generators within mining organisations, but one also positioned to figure prominently in the industry’s new augmented-intelligence and Autonomy 2.0 era.
“At the core of this latest release is a schema that acknowledges the need for data to undergo a transformative process in order to become prepared for its utilisation by artificial intelligence tools and for knowledge generation,” Eclipse said this week.
“The new release details include an enhanced integration layer to achieve genuine data integration, an ontology layer to enrich data meaning with extensive context, a modelling layer with knowledge graphs to capture and visualize relationships between data objects, enhanced automation to provide answers on demand, and the ability to incorporate advanced analytics – via external AI models – to unlock additional value.”
Wick said mining’s large and increasingly complex data inflows demanded frameworks such as those provided by Eclipse’s SourceOne Enterprise Knowledge Performance System (EKPS) to make them consumable and ultimately useful. She maintained the industry’s conventional data management systems were “never designed to achieve” this.
“Technology is now meeting these needs and SourceOne is defining the path,” she said.
Banfield said “guidance” enabled by smart ontology and knowledge graphs effectively tapped the “collective experience of past practitioners”, something the industry is trying to more broadly achieve as it transitions to a new generation of professionals and managers.
Sean Hunter, Eclipse’s director of product development, acknowledged past challenges with investment in effective ontology layering in a field such as mining, but said the industry “cannot afford to not take the next step”.
“Mining organisations have massive amounts of data that they can’t use right now,” he told InvestMETS.com.
“It [investment in the technology] has been and will continue to be prohibitive if done on a company-by-company basis, but SourceOne offers the tool that will facilitate the process of building ontology and knowledge graphs from your own data, while allowing the system to improve autonomously.
“We’re looking to have a very flexible ontology that can be created more directly from operational data, rather than creating things ontology-first. What that means is we are taking a dynamic approach where the client doesn’t have to do a lot of work upfront with whiteboarding etc. They can customise the ontology as they go.
“SourceOne has a data lake and is an ELT [extract, load, transform] tool, meaning that all sorts of data can be brought in, and then mapped to the ontology. This approach has a lot in common with things like virtual knowledge graphs, though our approach is a bit different.
“The result is a knowledge graph that can deeply understand the larger data set, with very rich types of underlying data.

“It’s also easy to work with, since there’s a separation of concerns between who is uploading data, who is doing the knowledge work, and who is then using tools derived from the knowledge graph, which can feed back into the system with new derived data.”
Hunter indicated a platform was in place for an ever-increasing range of AI applications.
“There’s graph-based AI that’s common in knowledge graphs, where you’re looking for relationships that may have been missed,” he said.
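A toy version of that graph-based relationship mining, assuming nothing about Eclipse’s implementation and with an invented example graph: score every unconnected pair of nodes by how many neighbours they share, the classic common-neighbours heuristic for suggesting missed links.

```python
from itertools import combinations

# Invented example graph: pits feeding a crusher, a stockpile and a mill.
edges = [("PitA", "Crusher1"), ("PitB", "Crusher1"),
         ("PitA", "Stockpile3"), ("PitB", "Mill2")]

graph: dict[str, set[str]] = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def missing_link_scores(g: dict[str, set[str]]) -> dict[tuple[str, str], int]:
    """Common-neighbour score for every node pair with no existing edge."""
    scores = {}
    for u, v in combinations(sorted(g), 2):
        if v not in g[u]:  # only pairs not already related
            shared = len(g[u] & g[v])
            if shared:
                scores[(u, v)] = shared
    return scores
```

Here PitA and PitB both feed Crusher1, so the pair is surfaced as a candidate relationship even though no direct edge between them was ever recorded.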
“Since we have the fully detailed underlying data, it can also be organised for more classical prediction-based AI, aided by the ontology to further segment and understand the data set.
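One hedged reading of that ontology-aided segmentation, with all names invented for illustration: group records by their ontology class before fitting a (here deliberately trivial) per-segment predictor, rather than one model over a mixed data set.

```python
from collections import defaultdict

# Invented records already typed with an ontology class.
records = [
    {"cls": "HaulEvent", "value": 210.0},
    {"cls": "HaulEvent", "value": 230.0},
    {"cls": "Assay", "value": 1.4},
]

def per_segment_means(rows: list[dict]) -> dict[str, float]:
    """Baseline predictor: the mean value within each ontology segment."""
    segments: dict[str, list[float]] = defaultdict(list)
    for r in rows:
        segments[r["cls"]].append(r["value"])
    return {cls: sum(vals) / len(vals) for cls, vals in segments.items()}
```

Pooling haul tonnages with assay grades in one model would be meaningless; segmenting by ontology class first is what makes a per-segment model sensible.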
“There’s generative AI, which can reduce hallucinations by using the knowledge graph, so it doesn’t make up its own facts. Since the knowledge graph is hooked up to your data, you can also make explicit rules, to make something like expert systems.
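The grounding-plus-rules pattern Hunter outlines might look like the following sketch, in which every name and triple is hypothetical: a generative model’s context is assembled only from facts retrieved from the knowledge graph, and an explicit rule fires over those same facts, expert-system style.

```python
# Invented knowledge-graph triples (subject, predicate, object).
triples = [
    ("Mill2", "feed_source", "Crusher1"),
    ("Mill2", "throughput_tph", 950),
    ("Crusher1", "status", "down"),
]

def grounded_context(entity: str) -> str:
    """Build an LLM prompt context containing only retrieved facts,
    so the model summarises known data instead of inventing its own."""
    return "\n".join(f"{s} {p} {o}" for s, p, o in triples if s == entity)

def mill_at_risk(mill: str) -> bool:
    """Explicit rule: a mill is at risk if any of its feed sources is down."""
    feeds = {o for s, p, o in triples if s == mill and p == "feed_source"}
    return any(p == "status" and o == "down" and s in feeds
               for s, p, o in triples)
```

The two functions draw on the same triples, which is the point: retrieval grounds the generative model while the rule gives a deterministic, auditable answer.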
“While you can do these things in isolation, the idea is that the cross-domain nature of the solution and having a toolkit at hand to do these things will make things much more feasible.”
Wick said bringing in lots of data from different domains had been core to SourceOne “from the get-go”.
“We’ve worked to be able to bring in data from lots of sources, do data governance, etc.,” she said.
“Beyond the data lake, there can be shared definitions of types and rules in the ontology, as well as equivalencies across sites, or even departments.
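Those cross-site equivalencies could, in the simplest possible sketch (terms invented here, not drawn from SourceOne), amount to an ontology-level mapping from each site’s local vocabulary to one canonical property:

```python
# Invented vocabulary mapping: two sites record the same concept
# under different names; the ontology declares them equivalent.
EQUIVALENT_TO = {
    "SiteA:ore_grade": "grade_au_gpt",
    "SiteB:gold_ppm": "grade_au_gpt",  # g/t and ppm are numerically equal
}

def canonical(term: str) -> str:
    """Resolve a site-local term to its shared ontology property."""
    return EQUIVALENT_TO.get(term, term)

rows = [("SiteA:ore_grade", 1.2), ("SiteB:gold_ppm", 0.9)]
merged = [(canonical(term), value) for term, value in rows]
# Both rows now carry the canonical property, so one query covers
# every site regardless of local naming.
```

Declaring the equivalence once in the ontology, rather than in each consuming report or model, is what lets definitions be shared across sites and departments as described.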
“The overall vision for SourceOne has remained the same since day one and that is one of creating a system where no data is ever lost, where data can be easily accessible to all who need it and where results can be easily recreated using original inputs.”