How to use big data in resource estimation

The previous article in this series discussed the challenges of managing big data for resource estimation. Acquiring, validating and analysing the base data for resource estimation is time consuming and expensive, so mining companies must weigh the value of the information and knowledge derived from that data when deciding how to store it. There is also the question of how to ensure reliable, easy access to data that comes in a variety of forms (core scans, text files, Excel spreadsheets, resource models in proprietary binary format files, etc.) over a span of decades.

However, while these are significant challenges, they are not insurmountable, and the case for using soft data (the ‘big data’ component) in resource modelling and estimation is clear.

Why big data is critical

The base data for geological modelling and resource estimation can be classified as either hard data (data that is directly observed and measured), or soft data from other sources.

Using soft data can help detect correlations between variables that might not be immediately obvious from hard data alone, such as a subtle alteration pattern evident from hyperspectral core scans but not evident in assay results. Including additional geometallurgical-related parameters, such as hardness or grindability, acid consumption, moisture content or clay minerals, can also:

  • highlight potential processing issues or abnormal values that wouldn’t be recognised with a more limited dataset
  • help define trend surfaces, such as gradual changes in mean values that can be removed from the data to improve the quality of estimates, and
  • identify variables to be estimated that might not normally be included in the block models that represent the material to be mined.
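The first of these points — detecting correlations between soft data and assays — can be sketched with a simple Pearson correlation check. The data below is entirely hypothetical (a made-up hyperspectral alteration index paired with gold assays); a real workflow would run this across many variables and far more samples.

```python
# Sketch (hypothetical data): Pearson correlation between a hyperspectral
# alteration index from core scans and gold assays. A strong correlation
# in the soft data can flag a control on grade that assay results alone
# would not reveal.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

alteration_index = [0.1, 0.3, 0.2, 0.6, 0.8, 0.7]   # from core scans
au_gpt           = [0.5, 1.1, 0.9, 2.0, 2.6, 2.3]   # gold assays (g/t)

r = pearson(alteration_index, au_gpt)
print(round(r, 3))   # close to 1 => strong positive correlation
```

A correlation this strong would prompt a geologist to investigate whether the alteration pattern is a genuine control on mineralisation or an artefact of sampling.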

Big data and domaining

Including additional geometallurgical and other parameters — through the use of self-organising maps, for example — also contributes to better domaining of the mineralisation because it allows geologists to consider many more characteristics as they define which volumes of material share similar characteristics and which are distinct.

By being able to limit the estimation process to volumes that have similar characteristics, geologists can then ensure that both the mean and the variance of the sub-datasets being used for the estimates are stable and there is no trend present, resulting in a higher quality resource estimate.
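The grouping step behind this kind of domaining can be illustrated in miniature. A production workflow might use a full self-organising map; the sketch below reduces the idea to winner-take-all competitive learning (a SOM without the neighbourhood update), and all sample values and attribute names are hypothetical.

```python
# Toy sketch of domaining from multi-attribute samples: prototypes are
# pulled toward the samples they win, so samples with similar combined
# characteristics end up in the same candidate domain.

def nearest(prototypes, sample):
    """Index of the prototype closest to the sample (squared distance)."""
    return min(range(len(prototypes)),
               key=lambda i: sum((p - s) ** 2
                                 for p, s in zip(prototypes[i], sample)))

def train(samples, epochs=50, lr=0.5):
    """Move each winning prototype toward its samples until the
    prototypes settle on the distinct clusters (candidate domains)."""
    protos = [list(samples[0]), list(samples[-1])]  # deterministic start
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)            # decaying learning rate
        for s in samples:
            w = nearest(protos, s)
            protos[w] = [p + rate * (x - p) for p, x in zip(protos[w], s)]
    return protos

# Each sample: (Au g/t, hardness index, clay %) — grade plus
# geometallurgical attributes considered together.
samples = [(1.0, 5.0, 2.0), (1.1, 5.2, 2.1), (1.05, 4.9, 1.9),
           (3.0, 9.0, 8.0), (3.2, 9.1, 8.3), (2.9, 8.8, 7.9)]

protos = train(samples)
labels = [nearest(protos, s) for s in samples]
print(labels)   # the two clusters fall into separate domains
```

The more attributes that feed the distance calculation, the more characteristics the resulting domains share — which is exactly the benefit the article describes.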

Geologists can also use big data to help confirm if the domains they have defined have hard or soft boundaries — not an easy task in cases where there are no clear contacts between different areas of the mineralised envelope. With more information from a variety of sources available to supplement directly measured data, geologists can determine:

  • if there are gradual changes in attributes between domains, which indicates a soft boundary, or
  • if there are abrupt changes in attributes between domains, which indicates a hard boundary.

A soft boundary means that attributes from adjacent domains can inform estimates in a specific domain. A hard boundary means that no attributes from outside the domain should be used in the estimation as these will unduly influence the estimates (making them either too high or too low).
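One simple way to test this is to compare the grade step across the contact with the typical step between neighbouring samples in the same hole. The sketch below uses hypothetical drillhole profiles and an arbitrary threshold factor; real contact analysis would consider many holes and attributes.

```python
# Sketch (hypothetical drillhole data): classifying a domain contact as
# hard or soft from the size of the grade step at the contact.

def boundary_type(profile, contact_index, factor=3.0):
    """profile: grades ordered down-hole; contact_index: first sample of
    the second domain. 'hard' if the step at the contact is much larger
    than the median step elsewhere, 'soft' (gradational) otherwise."""
    steps = [abs(b - a) for a, b in zip(profile, profile[1:])]
    contact_step = steps[contact_index - 1]
    others = sorted(steps[:contact_index - 1] + steps[contact_index:])
    typical = others[len(others) // 2]            # median of other steps
    return "hard" if contact_step > factor * typical else "soft"

gradual = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]  # smooth transition
abrupt  = [1.0, 1.1, 1.0, 1.1, 3.0, 3.1, 3.0, 3.1]  # sharp contact

print(boundary_type(gradual, 4), boundary_type(abrupt, 4))  # soft hard
```

In the soft case, samples from the adjacent domain could legitimately inform estimates near the contact; in the hard case they should be excluded.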

Big data and local estimates

Typically, resource estimates based just on hard data are used for long-term strategic mine planning but not for short-term tactical planning because, when kriging is used as the basis for estimation with a large search volume and number of samples, estimates are relatively smooth, with little variation over a given area or volume. Good tactical plans require more detail.

Adding in big data — via techniques such as co-kriging using secondary variables — can help produce estimates that take more localised (at a selected mining unit scale) variations in the mineralisation into account, while still:

  • achieving acceptable slope of regression (a standardised measure of the quality of the estimates), and
  • minimising conditional bias (the tendency for the true value to fall below the estimate when the estimate is high, and above it when the estimate is low).
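The slope of regression mentioned above can be demonstrated on synthetic values. The sketch below computes the slope of the regression of true grades on estimated grades; a slope near 1 indicates conditionally unbiased estimates, while a slope well below 1 shows the bias pattern described in the second bullet. The numbers are invented purely for illustration.

```python
# Sketch (synthetic values): slope of the regression of true grades on
# estimates, cov(estimate, true) / var(estimate).

def regression_slope(estimates, trues):
    """Slope near 1 => conditionally unbiased; slope well below 1 =>
    high estimates overstate and low estimates understate the truth."""
    n = len(estimates)
    me, mt = sum(estimates) / n, sum(trues) / n
    cov = sum((e - me) * (t - mt) for e, t in zip(estimates, trues))
    var = sum((e - me) ** 2 for e in estimates)
    return cov / var

trues      = [1.0, 2.0, 3.0, 4.0, 5.0]
good       = [1.0, 2.0, 3.0, 4.0, 5.0]    # estimates match the truth
overspread = [-1.0, 1.0, 3.0, 5.0, 7.0]   # estimates exaggerate extremes

print(regression_slope(good, trues))        # 1.0
print(regression_slope(overspread, trues))  # 0.5 -> conditionally biased
```

Tightening search parameters or adding secondary variables aims to push this slope back toward 1 while retaining local detail.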

Big data and sample analysis

Mining companies do not always complete a full analysis of all the attributes of all physical samples because of the time and/or expense involved. As a result, the few samples that have been fully analysed are too widely spaced for some parameters to be estimated reliably.

Geologists can use big data to fill in (impute) those missing values, using estimation techniques, proxy formulas or correlations. Then, once they have all desirable attributes available for each sample, they can return to more conventional techniques, such as kriging, to produce estimates or simulations — a set of equally probable realisations of the estimates — for all required parameters.
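The proxy-formula approach to imputation can be sketched as a simple linear fit. The attribute pairing below (Fe assay as a proxy for hardness) and all values are hypothetical; a real workflow would validate the proxy relationship before imputing.

```python
# Sketch (hypothetical proxy): imputing a sparsely measured attribute
# (hardness) from a routinely assayed one (Fe %) via a linear relation
# fitted on the samples where both were measured.

def fit_line(xs, ys):
    """Least-squares fit y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Samples with both Fe assay and hardness measured:
fe_pct   = [20.0, 30.0, 40.0, 50.0]
hardness = [10.0, 15.0, 20.0, 25.0]
a, b = fit_line(fe_pct, hardness)

# Sample with an Fe assay but no hardness test — impute it:
imputed = a + b * 35.0
print(imputed)   # 17.5
```

Once every sample carries a value (measured or imputed) for each attribute, conventional techniques such as kriging can be applied across the full set of parameters.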

Big data and resource estimation workflow

Incorporating big data into the resource modelling and estimation workflow also makes it easier for geologists to:

  • highlight areas of higher risk (with, for example, elevated levels of deleterious elements or material with potential processing problems) that could be subject to additional environmental or social considerations
  • adopt the industry best practice scorecard approach to the classification of the Mineral Resource estimates (from low to high confidence: Inferred, Indicated and Measured), and
  • improve mine site safety by identifying zones with potentially poor ground conditions, or zones that require a change to standard mining practices (and therefore introduce non-standard or unexpected behaviour).

What’s next

With the benefits of using big data in resource modelling and estimation clearly outweighing those of conventional practice, the next step for mining companies is to adopt a technology platform suitable for managing it.

The final article in this technical series will discuss how to choose a platform technology that makes integrating, retrieving and using big, complex resource data from multiple sources easier — and results in improved orebody knowledge and understanding of the controls on mineralisation.

A related technical series, called How to Use Machine Learning in Resource Estimation, will follow after this one. Topics in this second series include what machine learning is and how it works, along with how it can be used in automatic data domaining to provide geologists with the most suitable sub-datasets to use in estimating distinct volumes of the orebody.

Author

Michael Mattera is a Mining Industry Process Consultant at Dassault Systèmes GEOVIA with 30 years of experience in industry. Michael holds an MSc (Engineering) in Mineral Economics from the University of the Witwatersrand. He has experience across a wide range of commodities and geographies, giving him a broad understanding of multiple mining disciplines and their associated technical systems. This experience includes resource modelling and estimation; multi-disciplinary project reviews focusing on Mineral Resources (PFS to post-investment stages); public reporting of Mineral Resources and Ore Reserves (R&R) in multiple jurisdictions; the associated governance and assurance processes; and the development of multiple R&R reporting systems.