Owned by: China Association for Science and Technology
Sponsored by: China Coal Society
Published by: Springer Nature

The International Journal of Coal Science & Technology is a peer-reviewed open access journal. It focuses on key topics of coal scientific research and mining development, serving as a forum for scientists to present research findings and discuss challenging issues.


Coverage includes original research articles, new developments, case studies and critical reviews in all aspects of scientific and engineering research on coal, coal utilizations and coal mining. Among the broad topics receiving attention are coal geology, geochemistry, geophysics, mineralogy, and petrology; coal mining theory, technology and engineering; coal processing, utilization and conversion; coal mining environment and reclamation and related aspects.


The International Journal of Coal Science & Technology is published with the China Coal Society, which also covers the publication costs, so authors do not need to pay an article-processing charge.


The journal operates a single-blind peer-review system, where the reviewers are aware of the names and affiliations of the authors, but the reviewer reports provided to authors are anonymous.


  • A forum for new research findings, case studies and discussion of important challenges in coal science and mining development

  • Offers an international perspective on coal geology, coal mining, technology and engineering, coal processing, utilization and conversion, coal mining environment and reclamation and more

  • Published with the China Coal Society

Editors-in-Chief
Suping Peng, Shimin Liu
Managing Editor
Wanjie Wang
Associate Editors
Bo Hyun Kim, Dongjie Xue, Pedram Roghanchi, Wu Xiao, Zhiqiang Wu
Publishing model: Open Access

Updating the coal quality parameters in multiple production benches based on combined material measurement: a full case study

Research Article

Open Access

Published: 09 February 2017


International Journal of Coal Science & Technology, Volume 4, pages 159–171 (2017)

Abstract

An efficient resource model updating framework was proposed with the aim of improving raw material quality control and process efficiency in any type of mining operation. The concept integrates sensor data measured online on the production line into the resource or grade/quality control model and continuously provides locally more accurate estimates. The concept has been applied in a lignite field with the aim of identifying local impurities in a coal seam and improving the prediction of coal quality attributes in neighbouring blocks. A significant improvement was demonstrated, which led to better coal quality management. So far, the proposed concept and its application in coal mining were limited to a case where online measurements were unambiguously trackable, because a single extraction face was the point of origin of the material. This contribution presents an extension to the case where characteristics of blended material, originating from two or three simultaneously operating extraction faces, are measured. The challenge tackled in this contribution is the updating of local coal quality estimates in different production benches based on measurements of a blended material stream. For a practical application of the updating concept, which is based on the Ensemble Kalman Filter, a simple method for generating prior ensemble members based on the block geometries defined in the short-term model and the variogram is discussed. This method allows for a fast, semi-automated and rather simple generation of prior models instead of generating a fully simulated deposit model using conditional simulation in geostatistics. It should foster operational implementation in an industrial environment. The main purpose of this article is to investigate the applicability of the developed framework with a simplified prior resource model. In addition, model improvements due to the integration of sensor data obtained by observing a blend of coal from multiple extraction faces are investigated.

1. Introduction

One of the main challenges in mining is the control of product quality, which is impacted by impurities in the deposit, such as waste intrusions in coal seams. In lignite operations, these impurities can lead to high ash values (e.g. more than 15% ash) and cannot be completely localized by exploration data or captured in the predicted deposit models.

By utilizing online-sensor techniques for coal quality characterization in combination with rapid resource model updating, a faster reaction to unexpected deviations can be implemented during operations, leading to increased production efficiency. This concept was first proposed as a closed-loop framework by Benndorf et al. (2015). The developed framework is based on the concept of data assimilation, in particular the Ensemble Kalman filter (EnKF) (Evensen 1994; Evensen and Van Leeuwen 1996; Evensen 1997a, b; Burgers et al. 1998; Evensen and Van Leeuwen 2000). It integrates online-sensor data obtained during the extraction process, e.g. measured on a belt conveyor, into the resource model as soon as they are obtained.

The first investigation (Benndorf 2015) proved that the approach works well in a synthetic case study under a variation of several control parameters (number of excavators, precision of the sensor, update interval, measurement interval, extraction mode/production rate). Wambeke and Benndorf (2016) extended the framework for practical application, including the handling of attributes and measurements showing a non-Gaussian distribution, dealing with localization and inbreeding issues, avoiding spurious correlations and increasing the computational efficiency. The third investigation (Yüksel et al. 2016) implemented the framework in a full case study by adapting the implementation details for coal quality attributes in a continuous mining environment. The applicability of the framework for a full-scale lignite production environment was validated and significant improvements were demonstrated. These results were achieved in a test case where one sensor was placed on the excavator; this sensor observed the material produced by that excavator, and its data were used to update the neighbouring blocks around the mined blocks. Thereafter, Yüksel and Benndorf (2016) investigated the performance of the resource model updating framework with respect to its main parameters, namely the ensemble size, the localization and neighbourhood strategies and the sensor precision.

In many mining operations, material quality control measurements are taken at central locations in the downstream process, such as on a central conveyor belt or from the trains that are loaded after the coal blending yard. In this case the measurements represent a blend, or combination, of material originating from multiple extraction faces. A single sample measurement cannot be traced back to one origin of the material; however, a collection of multiple measurements over time allows this ambiguity to be resolved. In this contribution the updating framework is applied while multiple excavators are producing at different benches. This is done in order to understand the updating performance when the blended coal observations are fed back to the multiple excavator locations from which the production originates.

A second aspect discussed here is the practical implementation in an operational environment. The resource model updating concept is based on the EnKF, which requires ensemble members (or realizations). These can be obtained by conditional simulation (Benndorf 2013; Pardo-Igúzquiza et al. 2013; Srivastava 2013; Tercan and Sohrabian 2013), which can be a time-consuming effort requiring some expert knowledge. For operational implementation, the process should be rather simple and robust. Therefore, the aim is to investigate whether realizations of a prior model can be obtained rather simply, without loss of updating performance.

This study presents a new application of the framework in a full-scale lignite production environment, where the generation of the initial resource model is automated based on a short-term model. This would immediately increase the production efficiency in a real mining environment by providing the opportunity to react to changes in the resource model with newly gained information. Moreover, using the real-time updating framework would also decrease the frequency of material misallocation: an improved resource model means that a smaller amount of actual lignite is incorrectly allocated to the waste dump and, similarly, a smaller amount of actual waste is sent to the stockpile.

In the following, a real lignite mining case study is presented in order to compare the updating framework's performance for a prior model based on conditional simulation and for a prior model based on a short-term model. The comparison experiments are performed for different time intervals and different numbers of excavators. The results of this investigation should support the automation process when the framework is used in a real mining environment.

2. Updating coal attributes in a resource model based on online-sensor data

For rapid updating of the resource model, sequentially observed data have to be integrated with prediction models in an efficient way. This is done by using sequential data assimilation methods, namely EnKF-based methods.

Let \(\varvec{Z}\left( \varvec{x} \right)\) be the state of a stochastic process modelling the spatial distribution, where Z refers to the local ash content at excavation locations x. The developed framework uses a geostatistical simulation technique (e.g. conditional simulation) to create the ensemble of realizations, also called the prior ensemble \(\varvec{Z}_{0} \left( \varvec{x} \right)^{e}\), where \(e = 1, \ldots ,N\) indexes the realizations. The updated ensemble of resource models, \(\varvec{Z}^{*} \left( \varvec{x} \right)^{e}\), is then calculated by the following equation:

$$\varvec{Z}^{*} \left( \varvec{x} \right)^{e} = \varvec{Z}_{0} \left( \varvec{x} \right)^{e} + \varvec{K}^{e} \left[ {\varvec{l}^{e} - \varvec{AZ}_{0} \left( \varvec{x} \right)^{e} } \right]$$
(1)

where \(\varvec{Z}_{0} \left( \varvec{x} \right)^{e}\) and \(\varvec{l}^{e}\) denote, respectively, the ensemble of prior realizations and the sensor-based measurements; A represents a forward simulator of the production sequence, so the term \(\varvec{AZ}_{0} \left( \varvec{x} \right)^{e}\) represents the predicted measurements based on the prior block model. In Eq. (2), \(\varvec{C}_{{\varvec{zz}}}^{*e}\) refers to the updated error covariance of the resource model, where the overbar denotes the expected value over the ensemble.

$$\varvec{C}_{{\varvec{zz}}}^{*e} = \overline{{\left[ {\varvec{Z}\left( \varvec{x} \right)^{e} - \overline{{\varvec{Z}\left( \varvec{x} \right)^{e} }} } \right]\left[ {\varvec{Z}\left( \varvec{x} \right)^{e} - \overline{{\varvec{Z}\left( \varvec{x} \right)^{e} }} } \right]^{\text{T}} }}$$
(2)

The Kalman gain K provides a weighting factor that reflects the reliability of the measurements and decides “how much to change the prior model given a measurement”. The covariance matrices represent the whole ensemble, and the ensemble Kalman gain \(\varvec{K}^{e}\) is derived from them:

$$\varvec{K}^{e} = \left( {\varvec{A}^{\text{T}} \varvec{C}_{{\varvec{zz}}}^{e} \varvec{A} + \varvec{C}_{{\varvec{ll}}}^{e} } \right)^{ - 1} \varvec{A}^{\text{T}} \varvec{C}_{{\varvec{zz}}}^{e}$$
(3)

Two measures were implemented by Wambeke and Benndorf (2016) to reduce the computational time of the Kalman gain calculation. The first measure is related to the neighbourhood: the size of the \(\varvec{C}_{{\varvec{zz}}}\) matrix is of the order of the number of blocks in the defined updating neighbourhood. The second measure is a Cholesky decomposition, implemented to avoid an explicit computation of the inverse in Eq. (3). This results in significant computational speed-ups.

Blended measurements and differences in the scale of support are dealt with through the empirical calculation of the covariances. These covariances (\(\varvec{A}^{\text{T}} \varvec{C}_{{\varvec{zz}}}^{e}\)) mathematically describe the relations between the blended measurements and the individual source locations.
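
To make Eqs. (1)–(3) concrete, the following minimal Python sketch performs one ensemble update for a set of blended measurements. It is not the authors' implementation: the array names (Z0, A, obs) are illustrative, the gain is applied in the standard ensemble form corresponding to Eq. (3), and a Cholesky-based solve replaces the explicit inverse, as noted above.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def enkf_update(Z0, A, obs, obs_std, rng=None):
    """One EnKF update step in the spirit of Eqs. (1)-(3).

    Z0      : (n_blocks, n_ens) prior ensemble of block ash values
    A       : (n_meas, n_blocks) forward operator; each row holds the mass
              fractions of the blocks blended into one measurement
    obs     : (n_meas,) blended sensor (RGI) measurements
    obs_std : scalar measurement standard deviation
    """
    rng = np.random.default_rng() if rng is None else rng
    obs = np.asarray(obs, dtype=float)
    n_meas, n_ens = len(obs), Z0.shape[1]

    Y = A @ Z0                                    # predicted measurements A Z0^e
    Zp = Z0 - Z0.mean(axis=1, keepdims=True)      # ensemble anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)

    Czz_At = Zp @ Yp.T / (n_ens - 1)              # C_zz A^T estimated from the ensemble (Eq. 2)
    S = Yp @ Yp.T / (n_ens - 1)                   # A C_zz A^T
    Cll = obs_std ** 2 * np.eye(n_meas)           # measurement error covariance

    # Perturbed observations l^e, one set per realization
    L = obs[:, None] + rng.normal(0.0, obs_std, size=(n_meas, n_ens))

    # Kalman gain applied through a Cholesky solve instead of an explicit inverse
    factor = cho_factor(S + Cll)
    return Z0 + Czz_At @ cho_solve(factor, L - Y)  # Eq. (1): posterior ensemble
```

For a single blended measurement per time span, A reduces to one row whose entries are the mass fractions reported in the production data.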

With the goal of continuously updatable coal quality attributes in a resource model, a framework based on the normal-score ensemble Kalman filter (NS-EnKF) approach (Zhou et al. 2011) was tailored to large-scale mining applications. The NS-EnKF is chosen to deal with the non-Gaussianity of the data by applying a normal-score transformation to each variable, for all locations and all time steps, prior to performing the EnKF updating step.
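
As an illustration of the normal-score step, the sketch below rank-transforms a set of values to standard-normal scores and back. It is a generic textbook transform rather than the exact NS-EnKF implementation of Zhou et al. (2011), and the function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def normal_score(values):
    """Map values to standard-normal scores via their empirical ranks."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    ranks = np.empty(n)
    ranks[np.argsort(values)] = np.arange(1, n + 1)
    p = (ranks - 0.5) / n                      # plotting positions strictly inside (0, 1)
    return norm.ppf(p), np.sort(values)        # scores and stored reference quantiles

def back_transform(scores, reference_quantiles):
    """Invert the transform by interpolating the stored reference quantiles."""
    n = len(reference_quantiles)
    grid = (np.arange(1, n + 1) - 0.5) / n
    return np.interp(norm.cdf(scores), grid, reference_quantiles)
```

In the NS-EnKF setting, such a transform is applied per variable across the ensemble before the update and inverted afterwards.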

Figure 1 gives a general overview of the operations performed to apply the resource model updating framework. The concept starts with resource modelling, traditionally by conditional simulation; this provides the first required data set, consisting of the ensemble members to be updated. The second data set consists of the production data and the related actual sensor measurements. The production data provide the excavated block information, e.g. names and quantities; the actual online-sensor measurement values are collected during lignite production and represent the excavated material. The third data set consists of a collection of actual and predicted sensor measurements, where the predicted measurements are obtained by applying the production sequence as a forward predictor to the prior resource model realizations. Once all of the input data are provided, the updated posterior resource model is obtained. This process continues as long as new online-sensor measurement data are received.
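
The overall flow of Fig. 1 can be summarised by a short driver loop. This is a hedged sketch reusing the enkf_update function from the earlier example; measurement_batches is an illustrative stand-in for the combination of the second and third data sets.

```python
def run_updating(prior_ensemble, measurement_batches, sensor_std):
    """Assimilate successive batches of blended RGI measurements.

    prior_ensemble      : (n_blocks, n_ens) realizations (data set one)
    measurement_batches : iterable of (A, rgi_values) pairs built from the
                          production data and sensor readings (data sets two and three)
    """
    ensemble = prior_ensemble
    for A, rgi in measurement_batches:
        # Each new batch of online-sensor data triggers one posterior update.
        ensemble = enkf_update(ensemble, A, rgi, sensor_std)
    return ensemble
```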

Fig. 1 Configuration of the real-time resource model updating framework (modified from Wambeke and Benndorf 2015)

3. A simplified prior model

As mentioned earlier, the first data set required to apply the resource model updating framework is a collection of resource model realizations, also called the prior model. Traditionally, this is generated by sophisticated geostatistical methods, such as conditional simulation. However, this requires some expert knowledge and adds an additional step prior to using the updating framework. Moreover, generating a new resource model might create discrepancies between the geology and mine planning departments of the company, since they already have a resource model created by their own team. For these reasons, a more practical and simplified application of the framework is required in order to apply the updating framework in a real mining environment. The proposed simplification obtains the required prior model realizations by adding fluctuations around the company’s short-term mining plan. This short-term model is created by the mining engineers by applying the defined block geometries (Fig. 2) to the company’s estimated block model. In this way, each block has an estimated ash value. Figure 3 compares the two prior model generation processes.

Fig. 2 Planned block geometries in the production benches

Fig. 3 Flow chart of prior model generation

In order to create the quality model based on the short-term model, the following strategy is employed:

  (1) Short-term block model values are generally available for each block and deliver the prior estimate of the block attributes (E-type estimate).

  (2) A conditional simulation is applied to the production blocks defined in the short-term block model. For this step, the previously calculated block-scaled variogram model is used, and drill hole locations with zero ash content are used as reference points while running the simulations.

  (3) The simulated values on the production blocks represent the uncertainty and are added to the prior estimates of the block attributes.

  (4) The short-term model based on these simulations is now ready to be imported into the algorithm as the first main component (the prior model).

  (5) The updated resource model (posterior model) is split into a mean part, which is written back to update the short-term block model, and an uncertainty-related part, which is written back into the ensemble.

As long as new measurement data are obtained, these steps are applied recursively. The process can easily be automated by using a previously calculated variogram model and some interfaces. In this way, there is no need for the additional, complex process of creating conditional simulations, which is not part of the daily workflow. Moreover, there is no discrepancy between the company’s short-term model and the input prior model of the updating approach; the integration is smooth. In addition, no expert knowledge is required when applying the framework, owing to the automated process, in contrast to the application of conditional simulation. These simplifications are significant because they make it practical to benefit from the framework in a real mining environment.
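
A minimal sketch of this strategy is given below. It is an illustration under simplifying assumptions: an exponential variogram with anisotropic ranges is hard-coded, the conditioning to drill hole locations mentioned in step (2) is omitted, and all names and parameter values are hypothetical.

```python
import numpy as np

def exponential_covariance(coords, sill, ranges):
    """Covariance implied by an exponential variogram with anisotropic ranges (x, y, z)."""
    scaled = np.asarray(coords, dtype=float) / np.asarray(ranges, dtype=float)
    d = np.linalg.norm(scaled[:, None, :] - scaled[None, :, :], axis=-1)
    return sill * np.exp(-3.0 * d)          # "practical range" convention

def prior_from_short_term_model(etype, coords, sill, ranges, n_ens, seed=0):
    """Prior realizations = short-term (E-type) estimates + correlated fluctuations."""
    rng = np.random.default_rng(seed)
    C = exponential_covariance(coords, sill, ranges)
    L = np.linalg.cholesky(C + 1e-8 * np.eye(len(etype)))     # jitter for numerical stability
    fluctuations = L @ rng.standard_normal((len(etype), n_ens))
    return np.asarray(etype, dtype=float)[:, None] + fluctuations   # (n_blocks, n_ens)
```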

4. Application in a full-scale lignite production using multiple excavators

This case study addresses two different aspects. First, it tests the performance of the resource model updating framework while the sensor observes a blend of coal produced by multiple excavators. Second, it simplifies and semi-automates the framework for easier application in a real mining environment.

4.1 Case description

The case study is performed on a lignite mining operation in Germany, where the geology of the field is complex, including multiple split seams with strongly varying seam geometry and coal quality distribution (Fig. 4). The challenge originates from this complicated geology, which leads to geological uncertainty in the detailed knowledge of the coal deposit. This uncertainty causes deviations from the expected process performance and affects the sustainable supply of lignite to the customers. The aim is to improve the knowledge of the coal deposit and to increase the process performance by applying the resource model updating framework.

Fig. 4 Complicated geology in the lignite mine

For the case study, the target area is defined as an already mined-out area of 25 km², covered by about 3000 drill holes. Mining operations are executed by six excavators, each working on a different bench. Of these six excavators, only five are continuously working on a lignite seam, and generally at most three are working at the same time. For this reason, the case study considers cases where one, two or three excavators are working simultaneously.

The produced material is transported on conveyor belts. All conveyor belts merge onto a central conveyor belt leading to the coal stock and blending yard, which is further connected to a train loading point. Figure 5 presents the six benches as black-lined blocks, the conveyor belts as blue lines, the drill holes as green points and the online measuring system as an orange point (Fig. 6).

Fig. 5 Production benches, belt system and drill holes on the study area

Fig. 6 Radiometric sensor measurement device, installed on the conveyor belt, measuring blend of coal resulting from multiple excavators, just before the stock pile

A radiometric sensor measurement (RGI) system is installed on the central conveyor belt just before the coal stock and blending yard. This system allows an online determination of the ash content of the blended mass flow directly on the conveyor belt, without requiring any sampling or sample processing. For demonstration purposes, this case study assumes the RGI values to be accurate.

4.2 Data preparation

To apply the resource model updating framework, preparation of input data is required (Fig. 1). The first data set is the prior model, which contains a collection of the resource model realizations. For the case study, two different prior models are prepared based on different approaches.

4.2.1 Prior model: based on drill hole data

A prior model based on drill hole data refers to the generation of prior realizations by conditional simulation from the given drill hole data. First, the geological model of the defined coal seam is created on a block model with 25 m × 25 m × 1 m blocks, based on the roof and floor information of the lignite seam. Second, a quality model of the same block dimensions, capturing the wet ash content in percent, is generated by 25 simulations based on the provided drill hole data. The simulated ash values are then merged with the previously defined coal seam. After this, the block model realizations are ready to be imported into the algorithm as the first input.
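
A conditional-simulation prior of this kind could, for example, be sketched with a third-party geostatistics package. The sketch below assumes the open-source gstools library and an exponential variogram; the drill hole file, grid extent and parameter values are placeholders rather than the project's actual model.

```python
import numpy as np
import gstools as gs

# Placeholder drill hole data: coordinates and wet ash content (%)
dx, dy, dz, ash = np.loadtxt("drillholes.csv", delimiter=",", unpack=True)

# Block centroids of a 25 m x 25 m x 1 m quality model (illustrative grid extent)
gx, gy, gz = np.meshgrid(np.arange(0.0, 1000.0, 25.0),
                         np.arange(0.0, 1000.0, 25.0),
                         np.arange(0.0, 10.0, 1.0), indexing="ij")
pos = (gx.ravel(), gy.ravel(), gz.ravel())

# Exponential variogram model (illustrative sill and ranges)
model = gs.Exponential(dim=3, var=1.0, len_scale=[300.0, 300.0, 5.0])
krige = gs.krige.Ordinary(model, cond_pos=(dx, dy, dz), cond_val=ash)
cond_srf = gs.CondSRF(krige)

# 25 conditional realizations of the ash content, one column per realization
prior = np.column_stack([cond_srf(pos, seed=e) for e in range(25)])
```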

4.2.2 Prior model: based on short-term model

A prior model based on the short-term model refers to the generation of prior realizations by adding fluctuations to the company’s short-term mining model. A detailed explanation of this approach was given in the previous section.

The updating experiments are performed both for drill-hole-based and for short-term-model-based prior realizations, in order to compare the performance of the updating framework when updating differently generated prior models. The aim of this comparison is to investigate the question: “If the updating framework uses a non-geostatistical set of simulations as a prior model, would the updated models still be improved?”

The second data set consists of the production data and the related actual sensor measurements. The material travelling time from each production location (excavator and bench location) to the RGI location is calculated. In order to determine the origin of the received RGI measurement data, in other words to track back where the measured material comes from, the production data are linked with the RGI data based on the given timecodes, taking the material travel delays into account. The second input for the algorithm is written to a file containing the following information: timecode, actual sensor measurement (RGI data), excavated block 1 id, excavated block 1 mass, excavated block 2 id, excavated block 2 mass, …, excavated block n id, excavated block n mass; where \(b = 1, \ldots ,n\) indexes the blocks excavated in the given time span.
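
A hedged sketch of this time-based linking is shown below, assuming the data are available as pandas DataFrames; the column names and the travel_minutes lookup are hypothetical stand-ins for the plant's actual material tracking information.

```python
import pandas as pd

def link_rgi_to_production(rgi, production, travel_minutes):
    """Attach each RGI reading to the blocks whose material it measured.

    rgi            : DataFrame with columns ['timecode', 'rgi_ash']
    production     : DataFrame with ['timecode', 'excavator', 'block_id', 'mass']
    travel_minutes : dict mapping excavator id -> belt travel time to the sensor
    """
    prod = production.copy()
    # Shift each excavator's production records by its belt travel time so the
    # timecode matches the moment the material passes the RGI sensor.
    offsets = pd.to_timedelta(prod["excavator"].map(travel_minutes), unit="min")
    prod["arrival"] = (prod["timecode"] + offsets).dt.floor("min")
    sensed = rgi.assign(arrival=rgi["timecode"].dt.floor("min"))
    # One row per measurement minute and contributing block, as in the second input file
    return sensed.merge(prod[["arrival", "excavator", "block_id", "mass"]],
                        on="arrival", how="inner")
```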

The third data set consists of a collection of actual and predicted sensor measurements. An ensemble of predicted values is obtained by the forward simulator, which applies the digging location and the material transport model to each realization. The third input for the algorithm is written to a file containing the following information: the block ID, the central block location (X, Y, Z coordinates), and a series of real and predicted measurements.
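
The predicted counterpart of each measurement can be obtained by blending the contributing block values of every realization. The sketch below is illustrative; the batch structure mirrors the linked production records from the previous example.

```python
import numpy as np

def predict_blended_measurements(ensemble, block_index, batches):
    """Forward-simulate the blended sensor reading for every realization.

    ensemble    : (n_blocks, n_ens) ash realizations
    block_index : dict mapping block id -> row index in `ensemble`
    batches     : list of [(block_id, mass), ...] lists, one list per time span
    """
    predicted = np.zeros((len(batches), ensemble.shape[1]))
    for t, batch in enumerate(batches):
        rows = [block_index[b] for b, _ in batch]
        masses = np.array([m for _, m in batch], dtype=float)
        weights = masses / masses.sum()
        predicted[t] = weights @ ensemble[rows, :]   # mass-weighted blend per realization
    return predicted
```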

4.3 Experimental set-up

The experiments, which are performed both with drill-hole-based and with short-term-plan-based prior model realizations, fall into two categories. The first category comprises experiments based on different time spans, where updating of the prior model is performed every 2 h, 1 h, 30 min, 15 min and 10 min. For these experiments, the related RGI and production data are linked to each other (for every minute) and averaged over each indicated time span.

The second category comprises experiments based on the number of excavators producing coal in a given time period. It investigates the capability of the updating framework when updating multiple benches based on measurements of blended material. For these experiments, the data set prepared for the 2 h updating interval is taken as the base data and divided into three different data sets according to the number of excavators producing coal in a given time span: 1 excavator, 2 excavators or 3 excavators.
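
A sketch of how these experiment data sets could be derived from the linked table is shown below, assuming pandas; the 2 h base set is split by the number of excavators active in each span, and the column names are hypothetical.

```python
import pandas as pd

def build_experiment_sets(linked, spans=("2H", "1H", "30min", "15min", "10min")):
    """Average the linked RGI/production table over each update interval and
    split the 2 h set by the number of active excavators."""
    by_span = {}
    for span in spans:
        grouped = linked.set_index("arrival").groupby(pd.Grouper(freq=span))
        by_span[span] = grouped.agg(
            rgi_ash=("rgi_ash", "mean"),
            n_excavators=("excavator", "nunique"),
        ).dropna()
    base = by_span["2H"]
    by_excavators = {k: base[base["n_excavators"] == k] for k in (1, 2, 3)}
    return by_span, by_excavators
```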

For each of the criteria introduced above, an experiment is performed. Each experiment first updates the prior model over a four-day period. Based on the resulting posterior model, the forward simulator is used to generate predicted values for the future mining operations (the next two days). These predictions are then compared with the related RGI data. The chosen time spans are representative of any time span that might be chosen in the future.

The updating neighbourhood size is chosen as 900 m × 900 m × 10 m in the X, Y and Z directions, based on the variogram model range calculated during geological modelling. A localization of 225 m × 225 m × 5 m is applied in each experiment in order to prevent long-range spurious correlations, based on a previously performed sensitivity analysis (Yüksel and Benndorf 2016).
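
A minimal sketch of such a localization is shown below, assuming a hard rectangular cut-off with the stated dimensions interpreted as half-widths (the published approach may use a tapering function instead); all names are illustrative.

```python
import numpy as np

def box_localization_mask(block_xyz, source_xyz, half_widths=(225.0, 225.0, 5.0)):
    """True where a block is close enough to a measurement's source location
    to be allowed to receive an update.

    block_xyz  : (n_blocks, 3) block centroid coordinates
    source_xyz : (n_meas, 3) representative coordinates of each measurement's origin
    """
    diff = np.abs(block_xyz[:, None, :] - source_xyz[None, :, :])
    return np.all(diff <= np.asarray(half_widths), axis=-1)   # (n_blocks, n_meas)

# Applied elementwise to the ensemble covariance C_zz A^T before forming the gain,
# this suppresses long-range spurious correlations:
#   Czz_At = Czz_At * box_localization_mask(block_xyz, source_xyz)
```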

5. Results and discussion

5.1 Results

This section presents representative results of the previously defined experiments. In the following graphs, the X-axis refers to the mentioned time spans, \(i = 0,1,2, \ldots ,n\). Instead of writing the full date and time information, time span codes are used for simplicity. For example, for the case where updating is performed every 2 h, if i = 0 refers to 01.01.2014 at 00:00, then i = 10 refers to 20 h later, i.e. 01.01.2014 at 20:00. The Y-axis refers to the ash content in %. The presented graphs contain the following elements:

  (1) Posterior Model Box Plots: box plot representation of the posterior model simulations, which are updated based on a given criterion (e.g. updating every 2 h, 1 h, 30 min, 15 min or 10 min; or updating while 1, 2 or 3 excavators are producing).

  (2) Posterior Mean: the mean of the updated models in the learning period; essentially, the mean of the posterior model that is updated based on the given criterion.

  (3) Predicted Mean: the mean of the predictions in the prediction period; basically, the prediction of future mining blocks based on the model updated over four days.

  (4) Prior Mean: the mean of the prior model created from either the drill hole data or the short-term model. It is mined through by the different operation files based on the given criterion (e.g. updating every 2 h, 1 h, 30 min, 15 min or 10 min; or updating while 1, 2 or 3 excavators are producing).

  (5) RGI: the averaged RGI data for a given time span.

  (6) White area: represents the learning period, where posterior models are produced by updating the prior model using the RGI data.

  (7) Green area: represents the prediction period, where the mining operations are executed on the model updated over four days.

In these graphs the prior model is updated for four days. Based on this updated prior model (the posterior model), further mining operations are performed for the next two days. The operation file mines through the posterior model, and this period is highlighted in green.

5.1.1 Using a prior model that is based on conditional simulation

The achieved improvements are numerically evaluated using an absolute error measure. The absolute error is defined as the absolute difference between the measured value of a quantity and the corresponding model value. In our case, the absolute error refers to the absolute difference between the measured RGI value l of the coal produced in a given time span and its prior \(\varvec{Z}_{0} \left( \varvec{x} \right)\) (or posterior \(\varvec{Z}^{*} \left( \varvec{x} \right)\)) value, calculated by the following equation:

$$AE = \frac{1}{n}\mathop \sum \limits_{i = 0}^{n} \left| {l_{i} - \varvec{Z}^{*} \left( \varvec{x} \right)_{i} } \right|$$
(4)

The absolute error values are calculated for each experiment iteration at a given time span \(i = 0, \ldots ,n\) and eventually averaged when the update of the block model is completed for the defined study case.

Table 1 provides the calculated absolute errors for the prior models and for the predictions illustrated in the green area of the graphs. In addition, it indicates the improvement (IMPROV) in percent obtained by comparing the absolute errors of the prior and of the predictions. The improvement indicates the decrease of the absolute error and is calculated as:

$${\text{IMPROV}}\,(\% ) = \frac{{{\text{PriorModel}}_{\text{AbsoluteError}} - {\text{PredictedModel}}_{\text{AbsoluteError}} }}{{{\text{PriorModel}}_{\text{AbsoluteError}} }}$$
(5)
Table 1 Calculated absolute errors for predictions—Prior model is based on drill hole data

Time      Prior model       Predictions originated from posterior model
          Absolute error    Absolute error    IMPROV (%)
2 h       2.25              0.82              64
1 h       2.82              1.03              43
30 min    1.20              1.14              5
15 min    2.59              2.08              20
10 min    2.57              2.39              7
1 Exc     2.22              1.87              16
2 Exc     1.72              0.96              44
3 Exc     1.12              0.92              18
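
Eqs. (4) and (5) reduce to a few lines of code; the sketch below is illustrative, with the improvement expressed directly in percent.

```python
import numpy as np

def absolute_error(rgi, model_values):
    """Eq. (4): mean absolute difference between RGI data and model values."""
    return float(np.mean(np.abs(np.asarray(rgi) - np.asarray(model_values))))

def improvement_percent(prior_ae, predicted_ae):
    """Eq. (5): relative decrease of the absolute error, expressed in percent."""
    return 100.0 * (prior_ae - predicted_ae) / prior_ae

# Example reproducing the 2 h row of Table 1: improvement_percent(2.25, 0.82) -> 63.6, i.e. ~64%
```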

Moreover, Fig. 10 presents the calculated absolute error for the following two days after updating the prior model every 2 h for four days. Red dots illustrate the calculated absolute errors for each time span.

5.1.2 Using a prior model that is based on the short-term model

The achieved improvements are presented using “absolute error” as previously introduced. Table 2 provides the calculated absolute errors for prior models and predictions. The improvement percentages (IMPROV) are calculated by comparing the absolute errors of prior and prediction models.

Table 2 Calculated absolute errors for predictions—Prior model is based on short-term model

Time      Prior model       Predictions originated from posterior model
          Absolute error    Absolute error    IMPROV (%)
2 h       1.09              0.70              36
1 h       0.99              0.84              14
30 min    1.49              1.27              15
15 min    4.37              1.17              73
10 min    4.24              1.59              63
1 Exc     2.10              1.36              35
2 Exc     1.13              0.91              19
3 Exc     1.18              0.89              25

Moreover, Fig. 14 presents the calculated absolute error for the following two days after updating the prior model every 2 h for four days. Red dots illustrate the calculated absolute errors for each time span.

6. Discussion

6.1 Improvements of predictions

Figures 7, 8 and 9 illustrate the improvement of the ash % predictions in the posterior model, where updating of the prior model (developed from drill hole data) is performed using RGI data over four days. The resulting posterior model is mined through based on the production data. The predictions of the posterior model while mining the neighbouring blocks are then compared with the actual ash % (in this case the RGI measurements are assumed to be reality) and with the prior model. Representative graphs are provided (Fig. 10).

Fig. 7 Results based on conditional simulation: updating every 2 h for 4 days. The green area represents the prediction period. The white area represents the learning period

Fig. 8 Results based on conditional simulation: updating every 2 h for 4 days, 1 excavator producing. The green area represents the prediction period. The white area represents the learning period

Fig. 9 Results based on conditional simulation: updating every 2 h for 4 days, 2 excavators producing. The green area represents the prediction period. The white area represents the learning period

Fig. 10 Absolute error predictions (for the next 2 days) after updating every 2 h for 4 days

Figure 7 presents the case where the prior model (based on drill hole data) is updated every 2 h for four days. The following green area represents a period of two days, where the orange points represent the averaged prediction behaviour of the posterior model that was updated for four days. In this period, the posterior model predictions mostly follow the trend of the RGI data (red lines). Moreover, when comparing the posterior model predictions with the prior model (blue square points), significant improvements are observed in the posterior predictions. Based on Table 1, the averaged absolute error for these predictions is 0.82, while it is 2.25 for the prior model. This indicates a 64% improvement.

Figure 8 presents the case where the prior model (based on drill hole data) is updated every 2 h over four days, using only the periods in which one excavator is operating. Similarly, in the green area, the orange points represent the averaged prediction behaviour. Between the 50th–52nd and the 55th–58th timecodes, the posterior model predictions remain stable because the same mining block is being produced at each time step; this stable prediction averages around the reality (RGI data). Moreover, the uncertainty of the predictions (box plot whiskers) covers the reality (RGI data) better than the prior model. After the 63rd timecode, the posterior model predictions follow a trend similar to the prior model due to the spatial variability of the lignite seam. Furthermore, since this experiment focused on a case with only one excavator producing, the application was not limited to only one bench. As a result, after updating for four days in three benches, using only the times when one excavator is working could only improve the future predictions for a limited time. The authors believe that for this case the quality and the lifetime of the predictions can be improved by extending the learning phase (to more than four days).

Figure 9 presents the case where the prior model (based on drill hole data) is updated every 2 h over four days, using the periods in which two excavators are operating. By using two excavators at the same time, more information becomes available about the lignite seams being worked, and this leads to a longer period of good quality improvements. This can be seen by comparing Figs. 8 and 9.

Figures 11, 12 and 13 show the same experiments as above; however, in these figures the prior model is based on the short-term model. These experiments achieve results similar to those obtained before. In Fig. 11, where the prior model is updated every 2 h, the predicted ash % values are almost always aligned with the reality (RGI data). Figure 12 presents a case with one excavator and Fig. 13 a case with two excavators. As above, comparing these two graphs shows that better predictions are obtained when using two excavators.

Fig. 11 Results based on short-term model: updating every 2 h for 4 days. The green area represents the prediction period. The white area represents the learning period

Fig. 12 Results based on short-term model: updating every 2 h for 4 days, 1 excavator producing. The green area represents the prediction period. The white area represents the learning period

Fig. 13 Results based on short-term model: updating every 2 h for 4 days, 2 excavators producing. The green area represents the prediction period. The white area represents the learning period

For both cases, Figs. 10 and 14 are provided in order to investigate the behaviour of the absolute error values obtained from the predictions. The absolute error values are initially very low, but after approximately one day they show an increase over time. When the distance between the mined block and the neighbouring blocks increases, less improvement is expected for the neighbouring blocks, due to the lower spatial correlation. Moreover, when predicting the neighbouring blocks there might be some blocks that were not updated in the learning period. This causes not only an increase in the absolute error over time, but also outliers in the early phases of the prediction period; see, for example, timecode 55 in Fig. 10 or timecodes 51 and 54 in Fig. 14. The reason these outliers occur can be explained as follows: at each individual timestamp, different blocks are being mined from different benches. If a block is mined in the prediction period but was not mined in the learning period, nor located in the neighbourhood of any other mined block, it has never been updated and therefore still carries the prior model’s value. This results in a prior-biased prediction and an increase of the absolute error.

Fig. 14 Absolute error predictions (for the next 2 days) after updating every 2 h for 4 days

6.2 Time-based experiments

Experiments with different time spans (updating every 2 h, 1 h, 30 min, 15 min and 10 min) are performed for the prior model based on drill hole data. Overall, significant improvements (up to 64%) are obtained when updating the prior model with measured RGI values and predicting the qualities of the neighbouring blocks (Table 1). A comparison of the performed experiments, from the most frequent update (every 10 min) to the least frequent update (every 2 h), shows that the highest improvements are achieved by the least frequent updates in this case (the 1 h and 2 h updating cases).

Similarly, experiments with different time spans are also performed for the prior model based on the short-term model. All of these experiments show satisfactory improvements (up to 73%) (Table 2). In this case, the highest improvements are achieved by the most frequent updates (every 15 and 30 min).

However, calculating these absolute errors does not necessarily indicate the best parameters to use; it only validates the applicability of the method for the given parameters. It should not be forgotten that the calculated absolute errors for the predictions can vary depending on the quality of the posterior model chosen as the basis of the predictions. For each case, this paper has chosen the posterior models obtained after four days of updating the prior model. Other experiments were also performed to test this issue, and they all recorded significant but varying amounts of improvement.

6.3 Excavator-number-based experiments

Experiments based on different numbers of working excavators are performed in order to investigate the capability of the updating framework. The previous case study (Yüksel and Benndorf 2016), in which the study area was limited to one bench and one producing excavator, produced successful results. There, the online RGI sensor was positioned on the producing excavator, so that the measured material was the material produced by that excavator. In the present case study, however, there are three different benches and three different producing excavators (one excavator per bench). The online RGI sensor is positioned on one of the conveyor belts just before the stock yard; therefore, the RGI sensor measures blended material produced from different benches. The aim of the experiments in this section is to test the performance of the updating algorithm when the observations are measured from a blended flow.

Table 1 shows that an improvement in the range of 16%–44% is observed when using varying numbers of excavators in the updating experiments with the drill-hole-based prior model. This shows that the algorithm can handle a situation where the blended measurement data are fed back to the different benches where the material was originally produced.

Table 2 shows that an improvement in the range of 19%–35% is observed when using varying numbers of excavators in the updating experiments with the short-term-model-based prior model. The obtained improvements are significant, considering the benefits of automation when using a short-term-model-based prior model. Once again, the results indicate that the algorithm can handle a situation where the blended measurement data are fed back to the different benches where the material was originally produced.

7. Conclusions

This study provides a full-scale case study on the application of an Ensemble Kalman filter-based resource model updating framework, with the aim of simplifying the application process.

To enable an easy application of the updating framework in a real mining environment, a simplified application method was created, which involves creating the prior realizations based on the company’s short-term model. The improvement percentages were, on average, not significantly different when the case study results were compared with the results obtained from a case study in which the prior realizations were generated with geostatistical simulations. This paper validates that the automation of the developed framework in real applications can be based on a short-term model, without any additional process being required to prepare the prior model.

Moreover, significant improvements are observed while using blended material measurement data in order to update different production locations in different benches. This provides great flexibility for future applications.

The authors would like to point out that this method can be applied to any bulk mining operation, without changing the core method and the improvements, provided that a material tracking system, a grade or quality control model and an online sensor measurement system are in place.

Future studies will focus on the value of introducing additional information in the short-term model during the production phase.

References

[1] Benndorf J (2013) Application of efficient methods of conditional simulation for optimising coal blending strategies in large continuous open pit mining operations. Int J Coal Geol 112:141–153
[2] Benndorf J (2015) Making use of online production data: sequential updating of mineral resource models. Math Geosci 47:547–563
[3] Benndorf J et al (2015) RTRO–coal: real-time resource-reconciliation and optimization for exploitation of coal deposits. Minerals 5:546–569
[4] Burgers G, van Leeuwen PJ, Evensen G (1998) Analysis scheme in the ensemble Kalman filter. Mon Weather Rev 126:1719–1724
[5] Evensen G (1994) Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J Geophys Res: Oceans 99:10143–10162
[6] Evensen G (1997a) Advanced data assimilation for strongly nonlinear dynamics. Mon Weather Rev 125:1342–1354
[7] Evensen G (1997b) Application of ensemble integrations for predictability studies and data assimilation. In: Monte Carlo simulations in oceanography proceedings’ Aha Huliko’a Hawaiian Winter Workshop, University of Hawaii at Manoa, Citeseer, pp 14–17
[8] Evensen G, Van Leeuwen PJ (1996) Assimilation of Geosat altimeter data for the Agulhas current using the ensemble Kalman filter with a quasigeostrophic model. Mon Weather Rev 124:85–96
[9] Evensen G, Van Leeuwen PJ (2000) An ensemble Kalman smoother for nonlinear dynamics. Mon Weather Rev 128:1852–1867
[10] Pardo-Igúzquiza E, Dowd PA, Baltuille JM, Chica-Olmo M (2013) Geostatistical modelling of a coal seam for resource risk assessment. Int J Coal Geol 112:134–140. doi:10.1016/j.coal.2012.11.004
[11] Srivastava RM (2013) Geostatistics: a toolkit for data analysis, spatial prediction and risk management in the coal industry. Int J Coal Geol 112:2–13. doi:10.1016/j.coal.2013.01.011
[12] Tercan AE, Sohrabian B (2013) Multivariate geostatistical simulation of coal quality data by independent components. Int J Coal Geol 112:53–66. doi:10.1016/j.coal.2012.10.007
[13] Wambeke T, Benndorf J (2015) Data assimilation of sensor measurements to improve production forecasts in resource extraction. Paper presented at the IAMG, Freiberg (Saxony) Germany
[14] Wambeke T, Benndorf J (2016) A simulation-based geostatistical approach to real-time reconciliation of the grade control model. Math Geosci. doi:10.1007/s11004-016-9658-6
[15] Yüksel C, Benndorf J (2016) Performance analysis of continuous resource model updating in lignite production. Paper presented at the Geostats 2016, Valencia
[16] Yüksel C, Thielemann T, Wambeke T, Benndorf J (2016) Real-time resource model updating for improved coal quality control using online data. Int J Coal Geol. doi:10.1016/j.coal.2016.05.014
[17] Zhou H, Gómez-Hernández JJ, Hendricks Franssen H-J, Li L (2011) An approach to handling non-Gaussianity of parameters and state variables in ensemble Kalman filtering. Adv Water Resour 34:844–864

About this article

Cite this article

Yüksel, C., Benndorf, J., Lindig, M. et al. Updating the coal quality parameters in multiple production benches based on combined material measurement: a full case study. Int J Coal Sci Technol 4, 159–171 (2017).
  • Received

    21 October 2016

  • Revised

    21 December 2016

  • Accepted

    13 January 2017

  • Issue Date

    June 2017

  • DOI

    https://doi.org/10.1007/s40789-017-0156-3
