Pulp and Paper Canada

Making use of MIS data

February 1, 2003  By Pulp & Paper Canada


Every day, mills everywhere are generating vast volumes of data. This data contains a tremendous amount of information. The difference between the two is that data contains the raw numbers, the details of the operation, whereas information conveys an understanding of the meaning behind those numbers. Given the huge amount of data generated, there must be a great deal of information to be found in it!

So how does one go about extracting the information from the ever-expanding sea of data? With statistics, of course! Averages, max/min, coefficients of variation and correlation: these tools and many others will help you discover the tremendously valuable relationships hidden within the data. What types of relationships can you find? One of the most valuable is the amount of variation in the individual processes, as measured by the coefficient of variation (COV). Variation in key values such as pH, chemical dosages, brightness and consistency can cause increased chemical usage, decreased pulp quality and a need to run to higher targets to ensure on-grade production. The paper cited below compares many of these factors at mills across Canada, and can act as a reference point for your own mill.
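To make this concrete, here is a minimal sketch in Python with the pandas library, assuming an hourly log has been exported from the MIS to a spreadsheet file. The file name and tag names are hypothetical placeholders; substitute your own.

```python
# Minimal sketch: summary statistics for a few key tags from a hypothetical
# MIS export. File and column names are placeholders, not a real mill's tags.
import pandas as pd

data = pd.read_csv("d1_stage_log.csv")  # hourly log exported from the MIS

for tag in ["pH", "brightness", "consistency", "clo2_kg_per_tonne"]:
    values = data[tag].dropna()
    mean = values.mean()
    cov = 100 * values.std() / mean  # coefficient of variation, in percent
    print(f"{tag}: avg={mean:.2f}  min={values.min():.2f}  "
          f"max={values.max():.2f}  COV={cov:.1f}%")
```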

“What use is all this data when I have a problem in the mill?” One of the greatest benefits of MIS data is that it can tell you how your mill was running. To do this, create a baseline of information for your mill operating under ‘ideal’ conditions. When a problem arises, this baseline can be used to determine what has changed since the mill was running well. This is a more powerful tool than it seems: it lets you compare factors across widely separated areas of the mill. Determining the cause of a problem is more than half the battle, and it can be quite surprising to learn, for example, that the low brightness found in the D1 stage is caused by a factor in the digester.
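One way to put that baseline to work is sketched below, again in Python with pandas and again with hypothetical file names: one export taken while the mill was running well, and one from the problem period.

```python
# Minimal sketch: compare the problem period against the 'ideal' baseline and
# flag the tags that have shifted the most. File names are placeholders.
import pandas as pd

baseline = pd.read_csv("baseline_period.csv")   # mill running well
current = pd.read_csv("problem_period.csv")     # mill misbehaving

# Percentage shift in the average of every numeric tag the two exports share.
shared = baseline.select_dtypes("number").columns.intersection(current.columns)
shift = ((current[shared].mean() - baseline[shared].mean())
         / baseline[shared].mean() * 100)
print(shift.abs().sort_values(ascending=False).head(10))  # biggest movers first
```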

Another tool, the correlation coefficient, measures the degree of relationship between two or more factors. It can be difficult to track the effects of changes in a mill, considering the tremendously complex web of interrelationships throughout the process. Analysis of the MIS data can help to determine the relationships between various processes and their impact on one another. A change in one part of the mill can have an impact in an apparently unrelated part of the process, and when a change is made to one factor, its correlations with other factors can help to determine the impact of that change throughout the mill. This is another situation where baseline data can be of use: it can quickly show you the effects of changes across the entire mill. This helps to avoid snap judgements, where one sees what is expected and perhaps misses subtle effects that become important later.
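A simple correlation screen along these lines might look like the sketch below; the tag name d1_brightness is a hypothetical stand-in for whatever quality variable you are trying to explain.

```python
# Minimal sketch: rank every numeric tag by the strength of its correlation
# with D1 brightness. File and tag names are hypothetical placeholders.
import pandas as pd

data = pd.read_csv("fiberline_log.csv")

corr = data.select_dtypes("number").corr()["d1_brightness"]
strongest = corr.drop("d1_brightness").abs().sort_values(ascending=False)
print(strongest.head(10))  # strongest relationships first, sign ignored
```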

Another benefit of statistical analysis is that it can help pinpoint where the data itself is wrong. This sounds like pulling yourself up by your bootstraps, but it is really quite easy. At one mill, a close analysis of the data was performed to determine the reason for the poor operation of one leg of a split stage. It was thought that operator error was to blame for the discrepancy between the two legs. All instrumentation had been checked and all equipment was in working order, but the statistics still pointed to low temperature in the stage. So we rechecked the probe, resampled the pulp and determined that a quirk of placement had caused the probe to pick up overheated pulp far downstream from any steam addition. The moral of this is never to fully believe all the data that crosses your computer screen. I personally believe that in-line instruments are malicious and try to catch the unwary with false data at any opportunity!
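A basic sanity check of this kind can itself be automated. The sketch below, using a hypothetical temperature tag, simply flags readings that sit far from the rest of the record and therefore deserve a physical recheck before anyone acts on them.

```python
# Minimal sketch: flag readings that fall far outside the normal spread for a
# hypothetical stage-temperature tag. Tag and file names are placeholders.
import pandas as pd

data = pd.read_csv("d1_stage_log.csv")
temp = data["stage_temperature_c"]

mean, std = temp.mean(), temp.std()
suspect = data[(temp - mean).abs() > 3 * std]  # more than 3 std devs from the mean
print(f"{len(suspect)} of {len(data)} readings look suspect; "
      "check the probe before trusting them.")
```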

What if you cannot stand statistics and do not see yourself slogging through the functions and calculations necessary? I do not blame you! I am not a statistician and have no love for statistics, but there are a number of programs available to do this work for you. One of the most common is Excel, although there are several others just as good. Computers do not care how many data points you have, or how many columns of data. Their strength is number crunching, so let the computer do the drudgery of analyzing the numbers.
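As an illustration of how little work the software side really is, a single call in the sketch below summarizes every numeric column in an export, however many rows and columns it holds; the file name is again a placeholder.

```python
# Minimal sketch: let the software do the drudgery. One call summarizes every
# numeric column (count, mean, std, min, max, quartiles) in the export.
import pandas as pd

data = pd.read_csv("mill_mis_export.csv")  # hypothetical MIS export
print(data.describe().round(2))
```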

However, it is not so important to know statistics as it is to know the mill. The numbers don’t do any good unless you have an experienced mind to interpret them and determine which relationships are of interest and which are merely artifacts of the data. With all the data and computers, you still need an experienced eye. The combination of experience and good information is tough to beat.

Reference

G. Pageau and D. Davies, "Analysis of Fiberline Log Sheet Survey", 1999 PAPTAC Annual Meeting preprints.

Dan Davies is the application manager for bleaching and water chemicals at Degussa Canada. He can be reached at dan.davies@degussa.com.

