Pulp and Paper Canada

At last, data at your service

October 1, 2008  By Pulp & Paper Canada

Energy costs are rising, markets are increasingly competitive, and managers must ensure all resources are allocated where they are most needed so they generate added value. All assets, especially process control equipment, must be maintained in an optimal state so they provide the best performance. To reach these objectives, managers should use modern tools and make sure their teams benefit from performance measurements and diagnostics.

It is essential to sustain results and performance improvements, but performance can decline for various reasons: changes to operating procedures, raw material quality variations, equipment wear, completed maintenance work, process changes, cost increases, and personnel turnover.


Asset management and performance improvement are not new concepts. Traditional methods can still be used, but modern computer-based tools should be favoured. In fact, the key is in the approach: because the performance of process control systems declines if they are not optimized regularly, continuous effort is required to keep processes running at maximum performance.

Turning data into knowledge

Plant employees are inundated with data from multiple sources, from every layer of the business. Of critical concern is data collaboration: getting the right data to the people who need it to make decisions, at the right time and in the proper format. By using monitoring and analysis tools to pull together all the different data sources, and by visualizing the results in an intuitive way, personnel can respond to conditions immediately. As a result, operations become more reliable and agile.

The challenge is to eliminate disparate data. At the base of the data structure lies the real-time layer (e.g. instrumentation, DCS/PLC); at the top lies the business layer (e.g. financial applications). The operations layer sits between the two. Custom solutions exist at each level: proprietary real-time applications, a multitude of spreadsheets (generally useful individually but inefficient collectively), and information confusion (e.g. production reports calculated differently based on individual definitions).

Implementing a collaborative plant management layer can result in the elimination of disparate data. Using standard open data communication, such as OPC, this layer brings data from different sources and puts it into context to generate reports and trends.
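As a concrete illustration, the contextualization step can be sketched in a few lines of Python. The source names, tag names and record layout here are purely illustrative, not a real OPC API:

```python
# Minimal sketch of a collaboration layer that merges tag data from
# disparate sources into one contextualized record. Source and tag names
# are hypothetical.

def contextualize(sources):
    """Merge {source: {tag: value}} dicts into flat records keyed by tag,
    remembering where each value came from."""
    merged = {}
    for source, tags in sources.items():
        for tag, value in tags.items():
            merged[tag] = {"value": value, "source": source}
    return merged

# Example: readings from a DCS layer and a lab-data spreadsheet export.
sources = {
    "DCS": {"FIC101.PV": 42.7, "FIC101.SP": 45.0},
    "LAB": {"brightness": 87.2},
}
records = contextualize(sources)
print(records["FIC101.PV"])  # {'value': 42.7, 'source': 'DCS'}
```

Once every value carries its provenance, reports and trends can be generated from one consistent structure instead of scattered spreadsheets.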

Instead of crunching numbers in spreadsheets and spending hours putting the data together, managers can obtain operational information in near-real time. Distribution is done over the web from a central server.

Implementing web-based data collaboration systems can lead to four primary benefits:

• Business agility: real-time response to changing market conditions and opportunities becomes a reality.

• Improved workforce utilization and overall efficiency, thanks to an integrated, enterprise-wide system.

• Easy and complete access to all information, regardless of data source, type, or proprietary format, providing a framework for better comprehension, more thorough analysis and improved decision making. A web-enabled thin-client solution ensures this information is always available.

• Reduced administration costs: the software resides on a central server, eliminating individual client installations.

Seeing is believing

With everything employees at all levels have to do, it is important to be able to display high-level data in a simple and intuitive format so problems and under-performing areas can be identified at a glance.

Treemap visualization technology is an excellent way to quickly obtain a global understanding of the situation. The overall rectangle represents the plant and is subdivided into areas. Treemapping facilitates visual comparisons because it presents a vast amount of information in a single display. Simple controls allow users to change display criteria and filter the data set viewed. Rather than depicting control assets as a text list, treemapping uses the shape, size, colour and grouping of geometric elements to impart key performance information about individual assets.

These data visualization dashboards must be easily customizable, because users have different needs and preferences at different times. The ability to customize high-level views and dashboards without programming skills adds considerable value to a data collaboration system.

The data displayed can be filtered (e.g. all controllers with a service factor of 80% or higher, or all controllers whose valve exhibits stiction). Using these treemap views, managers can detect under-performing areas at a glance by looking for big red boxes. Similarly, production managers can pinpoint problems, planners can determine which valves need repair at the next shutdown, control engineers can detect control problems, and technicians have the right tools one click away.
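The layout behind such a view can be illustrated with a minimal "slice" treemap in Python. Real products use the more sophisticated squarified algorithm, and the area names and weights below are made up:

```python
# Sketch of a one-level "slice" treemap layout: each area gets a rectangle
# whose width is proportional to its weight (e.g. asset count or
# importance); colour would then encode health. Illustrative only.

def slice_layout(weights, width=100.0, height=100.0):
    """Return {name: (left_edge, slice_width)} proportional to weight."""
    total = sum(weights.values())
    x, boxes = 0.0, {}
    for name, w in weights.items():
        span = width * w / total
        boxes[name] = (x, span)
        x += span
    return boxes

areas = {"Digester": 5, "Bleach plant": 3, "Paper machine": 2}
print(slice_layout(areas))
```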

Drilling down from a treemap is intuitive. Hovering over a box displays all data related to the box. Clicking in a box displays a more detailed view, and clicking in elements of this view displays a more detailed report.

Once the data is accessible and contextualized, it becomes possible to implement solutions to manage production, operations and assets more effectively and in a timely manner.

Condition-based maintenance

Today, most maintenance activities are performed based on predefined schedules or when shutdowns occur. But when data is available, condition-based maintenance becomes an interesting alternative: it is performed only when there is an impending fault or failure condition. The objectives are to prevent unplanned downtime and make optimal use of maintenance resources while maximizing the operational life of plant assets.

A condition-based maintenance program uses real-time data on plant performance and compares it with benchmarks to determine the condition of the equipment. By analyzing historized real-time data, it is possible to make an accurate prediction of upcoming failures, and maintenance schedules can be adjusted based on these predictions. This way, limited resources are used on the right equipment at the appropriate time. This approach reduces unplanned shutdowns and the amount of scheduled maintenance. Assets can be run at their maximum performance, optimization efforts can be co-ordinated, and both materials and human resources can be used more effectively.
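That benchmark comparison can be sketched as follows; the pump efficiencies and the 10% tolerance band are illustrative assumptions, not values from any standard:

```python
# Hedged sketch of condition-based flagging: compare a recent performance
# statistic against a benchmark and schedule maintenance only for assets
# drifting outside a tolerance band. Thresholds are illustrative.

def needs_maintenance(history, benchmark, band=0.10):
    """Flag an asset when its recent mean deviates from the benchmark by
    more than `band` (fractional)."""
    recent = sum(history[-10:]) / len(history[-10:])
    return abs(recent - benchmark) / benchmark > band

pumps = {
    "P-101": [98, 97, 99, 98, 97, 96, 98, 97, 99, 98],  # steady efficiency
    "P-102": [98, 95, 92, 90, 86, 84, 80, 78, 75, 72],  # degrading
}
flags = {name: needs_maintenance(h, benchmark=98) for name, h in pumps.items()}
print(flags)  # only the degrading pump should be flagged
```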

Tools used in condition-based maintenance include alarm managers; control performance monitors for PID loops, advanced process control (APC), analyzers, soft sensors, valves and processes; equipment condition monitors for pumps, motors, heat exchangers and the like; and asset managers for valves and transmitters.

PID loops

According to data gathered on PID loops over the last 25 years, there are numerous causes of low loop performance, such as valve performance issues, inappropriate tuning parameters or control strategies, changes in process dynamics, equipment wear, and insufficient resources, to name only a few.

There is a lot to be gained from the optimization of control loops. It has been estimated that 80% of process control loops cause more variability when they are run in automatic mode than in manual mode. About 30% of all loops oscillate due to non-linearities, such as hysteresis, stiction, deadband and non-linear process gain. Another 30% oscillate because of poor controller tuning.
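Oscillation of this kind can be detected by checking how regularly the control error crosses zero; a stictioning or badly tuned loop crosses at nearly constant intervals. The simple index below is an illustration of the idea, not the exact algorithm used by commercial monitors:

```python
import math

# Simple oscillation check: an oscillating loop error crosses zero at
# regular intervals, so a low spread in the crossing intervals relative to
# their mean suggests oscillation. Thresholds are illustrative.

def oscillation_index(error):
    crossings = [i for i in range(1, len(error))
                 if error[i - 1] * error[i] < 0]
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    if len(gaps) < 3:
        return 0.0
    mean = sum(gaps) / len(gaps)
    spread = (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5
    return max(0.0, 1.0 - spread / mean)  # near 1.0 = regular oscillation

t = [i * 0.1 for i in range(200)]
oscillating = [math.sin(2 * math.pi * x / 2.0) for x in t]  # 2 s period
print(round(oscillation_index(oscillating), 2))  # → 1.0
```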

When a loop is poorly optimized, an upset in the direction of inefficiency can cause product losses. Alternatively, a load disturbance can lead to the production of off-spec product. When a control loop runs optimally, variability is minimized. Better tuning keeps the process on spec and reduces the waste of expensive raw materials.

Tuning objectives vary for different types of processes. For example, in a steam header, the pressure has to be maintained at the maximum allowable without large errors so the safety valves do not open. The PID controller must be tuned tightly to ensure the valve that controls the flow from the main header moves quickly to eliminate disturbance effects. In contrast, the PID controllers in a mixing process must all move at the same speed to ensure the ratios remain constant.

The characteristics of good control are difficult to obtain. Loop tuning involves a trade-off between robustness (the ability of the control loop to remain stable when the process, mainly its dead time or gain, changes) and speed of response.
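This trade-off is explicit in the standard IMC (lambda) PI tuning rules for a first-order-plus-dead-time process, where the chosen closed-loop time constant lambda is the robustness knob: larger lambda means a slower but more robust loop. The process values below are illustrative:

```python
# IMC-PI tuning for a first-order-plus-dead-time (FOPDT) process:
#   Kc = tau / (Kp * (lam + theta)),  Ti = tau
# A larger lam (closed-loop time constant) lowers controller gain,
# trading speed for robustness.

def imc_pi(Kp, tau, theta, lam):
    """Return (controller gain Kc, integral time Ti) for an FOPDT process
    with gain Kp, time constant tau and dead time theta."""
    Kc = tau / (Kp * (lam + theta))
    Ti = tau
    return Kc, Ti

# Same process, two tuning choices:
fast = imc_pi(Kp=2.0, tau=10.0, theta=2.0, lam=5.0)     # aggressive
robust = imc_pi(Kp=2.0, tau=10.0, theta=2.0, lam=20.0)  # conservative
print(fast, robust)  # the aggressive tuning has the higher gain
```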

Four aspects must be considered when a control loop is analyzed. The first is utilization: if the controller is not used, there is probably a fundamental problem. Second is performance: a controller may be performing poorly even if it is in use. Diagnostics come third: if performance is not satisfactory, there can be several reasons, such as model-plant mismatch, improper tuning and instrumentation issues. Finally, there is remediation: once the reasons for unsatisfactory performance have been identified, it is important to prioritize actions and make changes to the controller or process to improve performance without excessive cost or effort.

Optimizing operations

Control performance monitors, optimization tools, alarm management, and operations logbooks are four solutions that consolidate real-time systems and business systems to sustain production improvements.

Control performance monitoring consists of analyzing incoming signals (process variables, set points, and state/mode) and outgoing signals (controller outputs) to determine whether the expected performance is reached. All signals are read from the control system via digital communications. The system detects oscillations, equipment that does not behave as benchmarked, control problems, process problems, operation problems, and so on.

The system must detect all problems related to control loops, process equipment, operations, and production. It must also handle special control strategies (cascade, feedforward, override, ratio, etc.) and generate predefined reports.
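Two of the basic statistics such a monitor derives from those signals, the service factor and a variability index, can be sketched as follows. The benchmark value, mode labels and tag data are illustrative:

```python
# Sketch of basic loop health metrics: service factor (fraction of time in
# the intended mode) and a variability index (error spread relative to a
# benchmark standard deviation; > 1 means worse than benchmark).

def loop_metrics(pv, sp, mode, benchmark_std):
    n = len(pv)
    service = sum(1 for m in mode if m == "AUTO") / n
    err = [p - s for p, s in zip(pv, sp)]
    mean = sum(err) / n
    std = (sum((e - mean) ** 2 for e in err) / n) ** 0.5
    return {"service_factor": service,
            "variability_index": std / benchmark_std}

pv = [50.2, 49.8, 50.5, 49.5, 50.1, 49.9, 50.0, 50.3]
sp = [50.0] * 8
mode = ["AUTO"] * 6 + ["MAN"] * 2
print(loop_metrics(pv, sp, mode, benchmark_std=0.2))
```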

The control performance monitor is a condition-based application that monitors, identifies, diagnoses, and remedies control asset issues across all plant layers. This software tool also offers modeling and tuning tools, and not only helps improve control performance, but also sustains it. It continuously monitors all regulatory control assets, detects and prioritizes problems, and notifies the appropriate personnel. The system monitors PID loops, APC, analyzers and soft sensors. Process optimization tools are part of the control performance monitoring software. They are used when problems are detected and actions are needed.

New technologies are now available to tune many loops simultaneously, which allows for area optimization. This eliminates the guesswork in control tuning and loop optimization by analyzing control performance in closed-loop conditions. In the past, it was necessary to put the loop in manual mode to analyze and tune it. This is not the case anymore; everything can be done in automatic or cascade mode. There is no need to disturb operations, as normal operating data suffice.

Process optimization consists of reducing energy costs, managing fuel usage, and improving operations.

There are five steps to optimizing a group of loops with modern tools. First, configure the loops to be optimized (usually in one area). Second, assess the actual performance of the loops. Third, test the loops: in automatic or cascade mode, a small excitation is sent to the set points, and five to 30 minutes of data usually suffice. Fourth, identify the process model and select a desired performance; the software calculates the tuning parameters needed to reach the criteria. Finally, reassess performance and make a before/after comparison.
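The model identification step can be illustrated with the classic 63.2% method on a clean open-loop step response; commercial tools fit models from closed-loop data, which is considerably harder. The simulated process below (gain 5, time constant 4) is an illustrative assumption:

```python
import math

# Identify a first-order process model from a step-test record using the
# 63.2% method: the time constant is the time for the output to reach
# 63.2% of its total change after a step of known size.

def identify_first_order(t, y, step_size, t_step=0.0):
    """Estimate (process gain, time constant) from a step response."""
    y0, y_inf = y[0], y[-1]
    gain = (y_inf - y0) / step_size
    target = y0 + 0.632 * (y_inf - y0)
    tau = next(ti for ti, yi in zip(t, y) if yi >= target) - t_step
    return gain, tau

t = [0.1 * i for i in range(300)]
y = [5.0 * (1 - math.exp(-ti / 4.0)) for ti in t]  # true gain 5, tau 4
gain, tau = identify_first_order(t, y, step_size=1.0)
print(round(gain, 2), round(tau, 1))  # → 5.0 4.0
```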

Process analysis tools can be used for de-bottlenecking, troubleshooting, and modeling. Auto-correlation is used to evaluate the impact of past data on current data, and to obtain quick impulse responses to determine performance. In correlation analysis, cross-correlation is used to evaluate how two or more variables interact. The results are presented using numbers, tables or coloured maps, and analyses are performed in the time or frequency domain.
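A minimal NumPy sketch of these correlation analyses, with illustrative signals (a "temperature" that tracks a "flow"):

```python
import numpy as np

# Normalized autocorrelation of one signal at a given lag, and normalized
# cross-correlation between two signals (1.0 = perfect correlation).

def autocorr(x, lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def crosscorr(x, y):
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

t = np.arange(0, 20, 0.1)
flow = np.sin(t)
temp = 0.8 * np.sin(t) + 0.1  # temperature tracks flow exactly (scaled)
print(round(crosscorr(flow, temp), 2))  # → 1.0
```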

Other tools include spectral analysis, which is used for analyses in the frequency domain, Six Sigma analyses, multivariate statistics, and simple or multivariable modeling.

Alarm management tools allow performance optimization of alarm systems, which helps improve plant safety, productivity and profitability.

Software packages help reduce the workload when an alarm management lifecycle is adopted. In fact, they simply help execute alarm management strategies.

Alarm management problems often stem from poor configuration practices used during control system integration. In the 1960s and '70s, it was difficult and expensive to add alarms. Each alarm needed a physical wire going from the field device to a dedicated light on the control console, so all alarms were configured carefully. With the advent of the distributed control system (DCS), adding alarms became "free." Several alarms were available for each tag, and the general practice was to add alarms rather than exclude them. As a result, there were too many alarms and they tended to be improperly prioritized.

The alarm system is the primary tool used for identifying abnormal operations and it helps plant personnel take timely and appropriate actions to move processes back to operational targets. Effective alarm systems create effective operators, while ineffective alarm systems pose serious risks to safety, environment and plant profitability.

An alarm system collects and stores all alarm and event data and automatically generates web-based, standards-compliant key performance indicator reports that provide an accurate snapshot of the system's performance. Many plants receive too many alarms per day, per console (often 10 to 50 times more than recommended). This has negative impacts not only on health, safety and the environment, but also on productivity and profitability.
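Such a KPI report can be sketched by counting alarms per console per day against the widely cited EEMUA 191 target of roughly one alarm per 10 minutes (about 144 per day). The log entries below are illustrative:

```python
from collections import Counter

# Alarm-rate KPI: alarms per (day, console) and the multiple of the
# EEMUA 191 target rate. Log format is illustrative.

TARGET_PER_DAY = 144  # ~1 alarm per 10 minutes per operator

def alarm_kpi(log):
    """log: list of (day, console) tuples, one per alarm occurrence."""
    counts = Counter(log)
    return {key: {"alarms": n, "x_target": round(n / TARGET_PER_DAY, 1)}
            for key, n in counts.items()}

log = ([("2008-10-01", "Console A")] * 1500
       + [("2008-10-01", "Console B")] * 120)
print(alarm_kpi(log))  # Console A runs at ~10x the recommended rate
```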

Moreover, due to the growing engineering gap and a retiring workforce, managers must improve production with fewer people and adapt to increasingly rigid regulatory guidelines. In this context, using automation and reporting is not only desirable, it is a necessity.

Operations logbooks collect and display operating instructions, actions and incident data. They automatically generate web-based reports that provide insight into the state of operations. These reports include suggestions, targets and limits.

This tool captures data regarding unit shutdowns and slowdowns, and generates fast and accurate incident reports. Also, it allows operators and supervisors to enter notes and general comments.
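A minimal sketch of such a logbook and its incident summary; the field names are illustrative, not a real product's schema:

```python
from dataclasses import dataclass
from typing import List

# Toy operations logbook: entries carry a shift, a unit, an event type and
# a free-text comment; the report counts shutdown/slowdown incidents.

@dataclass
class LogEntry:
    shift: str
    unit: str
    event: str          # "shutdown", "slowdown", "note", ...
    comment: str = ""

def incident_report(entries: List[LogEntry]):
    incidents = [e for e in entries if e.event in ("shutdown", "slowdown")]
    return {"incidents": len(incidents),
            "units": sorted({e.unit for e in incidents})}

log = [LogEntry("day", "PM1", "slowdown", "wet-end break"),
       LogEntry("day", "PM2", "note", "grade change at 14:00"),
       LogEntry("night", "PM1", "shutdown", "felt change")]
print(incident_report(log))  # {'incidents': 2, 'units': ['PM1']}
```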

A question of survival

With modern tools, plants can leverage their existing infrastructure and take advantage of the information hidden in spreadsheets. These tools support benchmarking, measurement, identification of bad actors, and corrective action. Data is turned into actionable information, and managers can make better decisions more quickly. Adding value to existing data provides a considerable competitive advantage. Using the reports, managers and operations personnel can deploy resources more efficiently.

Operating at peak performance is challenging. Sustaining results is possible if employees of all levels embrace the philosophy of “change management,” and if deployment and work organization are managed properly.

Many plants have installed optimization tools. The ones succeeding have implemented processes for using the tools, sustaining results and quantifying improvements.

The return on investment is measured in weeks, not in years. But, more importantly, it is a question of survival.

Michel Ruel is a registered professional engineer, university lecturer, and author of several books and publications on instrumentation and control. He is the president of Top Control Inc. Michel has over 30 years of plant experience and is practiced in solving unusual process control problems. Michel is a fellow member of ISA. He is also a member of OIQ, IEEE, TAPPI, and PEO.


