Fortunately, there are ways to amplify speed and find incremental performance gains from a controller. Shaving 100 milliseconds (ms) off a decision or tool change per unit can yield an additional 1,000 units or parts per year, so these small increases in system speed directly impact the bottom line.
Control engineers can follow these three steps to boost throughput and productivity:
1. Optimize subroutines
2. Exploit interrupt programming
3. Leverage peer-to-peer I/O
Organize Your Bits and Rungs
Data organization at the bit and rung levels can have a big effect on system performance, so optimizing subroutine programming can result in significant speed improvements.
Using the data types native to the application controller can improve both performance and memory use. If your controller uses a 32-bit processor, the programming data type should also be 32-bit. If you use non-32-bit data types with a 32-bit CPU, each instruction can take up unnecessary memory space, and the resulting data type conversions can quickly consume CPU cycles.
For example, using double integer (DINT) data types instead of integer (INT) data types helps reduce execution time and memory usage.
Executing a simple ADD instruction with INT data types (INT + INT = INT) takes 260 bytes of memory and 3.49 microseconds (μsec) to execute. The same ADD instruction executed with DINT data types (DINT + DINT = DINT) takes 28 bytes of memory and 0.26 μsec.
The reason for this significant difference in memory use and execution time is that the controller converts each INT to a DINT before it adds them together, then has to convert the sum back to an INT. It takes additional execution time and memory to store the intermediate values created during these additional conversions.
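To see what those per-instruction figures are worth over time, the quoted numbers can be turned into a rough yearly CPU-time saving. The Python sketch below is a hypothetical back-of-the-envelope model, not controller code; the workload of 10 ADDs per scan at a 20 ms scan is an assumption chosen for illustration.

```python
# Figures quoted above for a simple ADD on a 32-bit controller:
INT_ADD_US = 3.49    # INT + INT = INT, including implicit INT<->DINT conversions
DINT_ADD_US = 0.26   # DINT + DINT = DINT, executed natively

def yearly_cpu_savings_s(adds_per_scan, scans_per_second):
    """CPU-seconds saved per year by switching those ADDs from INT to DINT."""
    per_add_us = INT_ADD_US - DINT_ADD_US
    seconds_per_year = 365 * 24 * 3600
    return per_add_us * adds_per_scan * scans_per_second * seconds_per_year / 1e6

# Hypothetical workload: 10 ADDs per scan, 20 ms scan time (50 scans/s)
saved = yearly_cpu_savings_s(10, 50)
print(f"~{saved:,.0f} CPU-seconds reclaimed per year")
```

On that assumed workload, a handful of rungs rewritten with native data types frees tens of thousands of CPU-seconds a year that the controller can spend on scans instead of conversions.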
Another organizational tip that might seem obvious is that subroutines should only be executed when necessary; otherwise they waste CPU time and processing capacity. However, subroutines are often written in a way that requires the controller to scan through several rungs of code in which every condition is false, only to reach the crucial, activating rung and find the specified action is unnecessary.
One of the most speed-demanding production applications – a disposable diaper line – makes a good example. Today, a line that produces 200 diapers per minute is considered a slower machine; in such manufacturing conditions, every microsecond counts.
If the controller running a machine on a diaper line is programmed to compile and send shift report data at the end of the 3 P.M. shift, the controller should evaluate whether it is actually the end of the shift in the first rung of code, or even before the jump to subroutine is called. If the first precondition reviewed – Is it 3 P.M.? – is false, the controller can skip most or all of the subroutine and focus elsewhere.
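The same gating logic can be sketched in Python (function and routine names are hypothetical). The point is that the cheap precondition is evaluated first, so on almost every scan the expensive body is skipped after a single comparison:

```python
from datetime import datetime

def compile_shift_report():
    """Stand-in for the expensive subroutine body (hypothetical)."""
    print("compiling and sending shift report...")

def scan_cycle(now):
    """One controller scan; returns True only if the subroutine ran."""
    # Guard rung: test the cheap precondition BEFORE jumping to the
    # subroutine, so a false scan costs one comparison instead of a
    # full pass through every rung of the report code.
    if now.hour == 15 and now.minute == 0:   # "Is it 3 P.M.?"
        compile_shift_report()
        return True
    return False

scan_cycle(datetime(2024, 1, 1, 14, 59))  # precondition false: subroutine skipped
scan_cycle(datetime(2024, 1, 1, 15, 0))   # end of shift: subroutine runs
```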
We Interrupt Your Regularly Scheduled Programming…
For further speed boosts, programming strategies like interrupt programming can take an application from speeds in tens of milliseconds to the low single digits. Interrupt programming for task execution can be time-based or event-based.
Time-based tasks interrupt regular system routines on a predetermined schedule. These periodic tasks are generally used for evaluations that must occur more frequently than the controller scan time allows, and for tasks that, if delayed, can quickly hold up the rest of the line and cause a production delay or downtime. So, if the controller scans at 50 ms intervals and four or five outputs need to be evaluated faster than that, those outputs would be written as time-based tasks and prioritized based on speed and how crucial each task is to ideal system operation.
In a diaper line, a high-priority task might verify the speed and position calculations for actions executed on a single piece part, such as the mechanism that folds diapers. Because each diaper is folded at very high speed, these metrics would need to be verified every few milliseconds. If positions and speeds drift outside the predetermined limits, the machine could fault and cause a backup. For a diaper line producing more than 200 diapers a minute, even a few minutes of delay or downtime in folding would mean the loss of hundreds or even thousands of diapers.
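A minimal Python sketch of this kind of time-based scheduling follows, assuming a hypothetical 5 ms fold-position check that outranks a 10 ms lower-urgency check; the task names, periods, and priorities are all illustrative, not taken from any controller:

```python
import heapq

def schedule(duration_ms, tasks):
    """Simulate periodic task firings over duration_ms.

    tasks: list of (name, period_ms, priority); a lower priority value
    wins when two tasks come due at the same instant.
    """
    heap = [(0, priority, period, name) for name, period, priority in tasks]
    heapq.heapify(heap)
    firings = []
    while heap:
        due, priority, period, name = heapq.heappop(heap)
        if due >= duration_ms:
            continue
        firings.append((due, name))
        # Re-arm the task for its next period.
        heapq.heappush(heap, (due + period, priority, period, name))
    return firings

# Fold-position check every 5 ms; housekeeping check every 10 ms.
order = schedule(20, [("fold_position", 5, 0), ("housekeeping", 10, 1)])
# At t=0 and t=10 both tasks are due; the fold check always fires first.
```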
A word of caution: time-based tasks can allow a high-speed application to run efficiently on standard controllers, but if there are too many periodic tasks running too frequently, or if they are not properly prioritized, the CPU can waste resources jumping between tasks. Depending on the controller, skipping out of an interrupt task and back to the regular routine can take anywhere from 75 to 300 μsec. Interrupts need to be properly segmented and prioritized to reliably increase system performance.
In general, no more than five interrupt tasks are recommended. When programs contain more than ten interrupt tasks, problems can arise because so much time is spent jumping from one task to the next. This can be prevented with some upfront analysis of the full run time of a program with interrupts.
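A quick way to do that upfront analysis is to bound the switching overhead using the 75–300 μsec task-exit cost quoted above. In this sketch the task counts and firing periods are hypothetical round numbers:

```python
def switch_overhead_fraction(n_tasks, period_us, exit_cost_us):
    """Fraction of one period spent just exiting interrupt tasks,
    assuming each of n_tasks fires once per period."""
    return n_tasks * exit_cost_us / period_us

# Five tasks firing every 10 ms, worst-case 300 μs exit cost:
print(f"{switch_overhead_fraction(5, 10_000, 300):.0%}")    # 15% of the CPU
# Twelve tasks at the same rate:
print(f"{switch_overhead_fraction(12, 10_000, 300):.0%}")   # 36% -- trouble
```

Even before measuring anything, this kind of estimate shows why a dozen fast periodic tasks can burn over a third of the CPU on task switching alone.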
Tools to help with this analysis may be available from your controller supplier; these can break down how much time your CPU devotes to different tasks into easy-to-read charts, allowing users to benchmark how a system is running and adjust interrupt priorities based on that analysis.
Event-based tasks are triggered when a predetermined event input – a particular bit or combination of bits – is detected. These bits can interrupt the controller scan at a high rate of speed when needed, with essentially no effect on scans when they are not required.
Near the end of a diaper production line, diaper material would flow through a machine where the tabs used to close the diaper are added. At a certain point in the machine cycle, adhesive drops might be applied to the fabric where the tabs will be placed.
If the material has stopped flowing into the machine, it would be necessary to stop the gluing process; otherwise glue ends up all over the machine and could cause production delays or downtime. To remedy this situation, a sensor attached to an I/O module could trigger an event-based task in the controller. The controller would then alert the adhesive output mechanism to stop applying glue. Without interrupt programming, this action would require the controller to continuously scan and make sure the diaper material is present. Since, in a properly functioning system, the diaper material should be almost continuously present, running a scan to confirm that the material is indeed present would waste time and resources. Instead, the system can be set to interrupt the gluing process if triggered by the absence of material.
Event tasks can be made even more efficient by triggering events based on a predetermined pattern of inputs or bits rather than just one input or bit. For example, before gluing tabs to a diaper, a machine would verify several requirements: the product isn't defective, the product is in position, the tab is available and the glue nozzle is not plugged. Traditionally, just one of these requirements might interrupt the control program, and the other inputs would be verified inside the interrupt. This can waste time, as the system might eventually determine that one of the necessary preconditions is not met. Today, some I/O modules are capable of triggering event-based tasks based on a pattern of inputs, ensuring no interrupt occurs unless everything is in its proper state.
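A pattern match like that reduces to a simple bitmask comparison. In the Python sketch below, the four preconditions from the example are packed into one input word; the bit positions are hypothetical:

```python
# Hypothetical bit assignments within the input word:
PRODUCT_OK   = 0b0001  # product is not defective
IN_POSITION  = 0b0010  # product is in position
TAB_READY    = 0b0100  # tab is available
NOZZLE_CLEAR = 0b1000  # glue nozzle is not plugged

PATTERN = PRODUCT_OK | IN_POSITION | TAB_READY | NOZZLE_CLEAR

def should_trigger(input_word):
    """Fire the event task only when every precondition bit is set,
    so the controller never pays interrupt overhead for a doomed cycle."""
    return input_word & PATTERN == PATTERN

should_trigger(0b1111)  # all preconditions met: event fires
should_trigger(0b1011)  # tab missing: no interrupt at all
```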
Think Outside the Controller
To further improve speed, from the low single-digit milliseconds into microseconds, you can circumvent the controller entirely for high-speed decisions using new peer-to-peer capabilities within I/O modules.
I/O modules with native peer-to-peer functionality simply need to have in-chassis connections established to communicate directly with each other. Outputs are then energized based on data received from the input peer, independent of the controller. This method of control can significantly improve total System Response Time (SRT). SRT is a function of the time required for input module response, PAC processing and output module response. Therefore, eliminating or reducing communications between I/O and the controller can improve machine speed and parts production.
Shifting decisions or primitive evaluations to the I/O modules relieves a controller of the overhead required to process and direct I/O modules, helping improve repeatability in program execution and throughput, for higher parts production per shift. In peer-to-peer mode, input to output response time can be less than 100 μsec. The peer-to-peer capabilities are ideal for applications with fast detect-energize sequences, such as high-speed parts rejection.
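The SRT arithmetic can be sketched directly. The sub-100 μsec peer-to-peer figure comes from the text; the conventional-path delays (input filter, controller scan, output turn-on) are hypothetical round numbers for illustration:

```python
def srt_via_controller_us(input_us, scan_us, output_us):
    """Conventional path: input module -> controller scan -> output module."""
    return input_us + scan_us + output_us

def srt_peer_to_peer_us(input_us, output_us):
    """Peer-to-peer path: the input module signals the output module directly,
    removing the controller from the response-time equation."""
    return input_us + output_us

conventional = srt_via_controller_us(200, 10_000, 100)  # ~10.3 ms round trip
direct = srt_peer_to_peer_us(50, 40)                    # 90 μs, under the quoted 100 μs
```

Under these assumed delays, cutting the controller out of the loop shortens the response by roughly two orders of magnitude, which is why peer-to-peer suits fast detect-energize sequences.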
As finished diapers are conveyed from production machinery to packaging machinery, a peer-to-peer input module attached to a sensor might detect a diaper with misapplied tabs. Traditionally, I/O is programmed to send an alert to the controller, and the controller processes that data to determine what message to send to the output module to activate a response on the line. Instead, input modules can directly alert a nearby output module to trigger the mechanism for removal of reject diapers from the line.
Faster quality inspections can mean the entire line can move at higher speeds. This means more diapers per minute, more packages ready to ship per hour and more dollars to the bottom line at the end of each shift.
For most applications, you can find incremental speed improvements from the control system you have via programming changes or from technologies, like peer-to-peer I/O. If your application is running in the tens of milliseconds, consider some of the steps discussed here to shift your control system into high gear and into the realm of microseconds.
Source: Rockwell Automation