Today’s manufacturing operations and maintenance teams generate vast amounts of data in all forms. As a result, finding the right information at the right time, and making it accessible to the right people, is critical to keeping these functions operating at optimum levels. Companies trying to make better use of their data are turning to various types of analytics for answers: how best to manage the data, how to determine which data is truly valuable, and when and how to align technology and people to draw meaningful conclusions.
Finding nuggets in a mountain of data
With digital transformation initiatives increasing the amount of data created and shared within today’s industrial organizations, making meaningful use of it all can be a challenge. It’s not that this data isn’t relevant, but the most meaningful and actionable nuggets are often hidden within a mountain of disparate data, both structured and unstructured. This is particularly true for end users on the shop floor looking to expand their predictive maintenance and predictive analytics capabilities.
Data that was often managed separately in silos simply can’t be managed that way today. The implication is that maintenance and operations will need to have a much more cohesive vision around shared data and analysis. This is why many industrial organizations seek analytics offerings that can be used by operations and maintenance personnel alike.
The emerging democratization of analytics
With all of this data being streamed and stored across a wide variety of locations and systems, making practical use of it can be a challenge, since mining such disparate data is difficult. Until recently, most available software required specialized expertise and investment in traditional, often costly analytics offerings, along with the attendant services costs such as implementation and maintenance. In addition, the skill sets needed to use these offerings have traditionally been limited to trained data scientists and statisticians on organizations’ quantitative staffs.
For years, analytics offerings were deemed suitable only for large organizations with dedicated quant staffs. These teams commonly consisted of people with skills ranging from report writing, business intelligence, and structured query language (SQL) programming to various forms of predictive and quantitative analysis. Consequently, many industrial organizations have been reluctant to fund analytics projects at the operations and maintenance levels.
More recently, however, new analytics offerings have been introduced that are designed for other users within the business, such as operations and maintenance staffs. Although these users typically have limited quantitative skills, the newer offerings can provide value for a broader range of users within an industrial enterprise. As industry undergoes a digital transformation, non-data-science users now have more powerful and accurate tools at their disposal. They can run various operations-specific predictive models and scenarios, if necessary in near-real time, a capability not generally available until recently.
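To make the idea concrete, the kind of near-real-time, operations-specific check such tools run can be as simple as flagging sensor readings that break from their recent trend. The following is a minimal, illustrative sketch in plain Python; the readings, window size, and threshold are hypothetical, and commercial offerings use far more sophisticated models.

```python
from collections import deque

def flag_anomalies(readings, window=5, threshold=1.5):
    """Flag readings that exceed a multiple of the rolling mean.

    A deliberately simple stand-in for a predictive maintenance
    check: compare each new reading against the average of the
    previous `window` readings.
    """
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        # Only judge a reading once a full window of history exists.
        if len(recent) == window and value > threshold * (sum(recent) / window):
            flagged.append(i)
        recent.append(value)
    return flagged

# Hypothetical vibration readings from a pump; the spike at index 6
# stands well above the rolling mean of the preceding readings.
readings = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 2.5, 1.0]
print(flag_anomalies(readings))
```

A maintenance user tuning `window` and `threshold` interactively, rather than writing such code, is essentially what these newer offerings package behind a visual interface.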
In addition to being relatively easy to use, some of these new offerings enable users to construct models intuitively via visual representations of the data. These powerful and intuitive offerings can enable business users to build queries and some models without having to write and sequence SQL statements by hand. Other offerings still require text-based SQL commands.
What makes these new offerings accessible to a broader set of users? With these offerings, the rules and sequences for data evaluation are often set by manipulating visual elements (much like setting joins and formulas in some report writer programs), with the underlying SQL code available for those experts who want, or need, to review in greater detail. The result has been a new class of data visualization analytics products that are powerful, yet intuitive and easy to use.
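The translation from visual elements to reviewable SQL can be sketched as a small code-generation step. The function below is an illustrative assumption about how such a tool might work internally (the table and column names are hypothetical), not the implementation of any particular product: a declarative description of joins and filters is rendered into a SQL string an expert could inspect.

```python
def build_query(table, joins=(), columns=("*",), where=None):
    """Render a SQL SELECT from a declarative spec, roughly the way
    a visual analytics tool might translate drag-and-drop joins and
    filters into underlying code available for expert review."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    for other, left_col, right_col in joins:
        # Each visual "join" element becomes an explicit JOIN clause.
        sql += f" JOIN {other} ON {table}.{left_col} = {other}.{right_col}"
    if where:
        sql += f" WHERE {where}"
    return sql

# Hypothetical example: joining work orders to assets, filtered to
# open orders, as a user might configure visually.
print(build_query(
    "work_orders",
    joins=[("assets", "asset_id", "id")],
    columns=("work_orders.id", "assets.name"),
    where="work_orders.status = 'OPEN'",
))
```

The point of keeping the generated SQL visible, as the article notes, is that non-experts manipulate the visual layer while experts can still audit the code underneath.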
While sometimes derided by analytics experts as “black box” offerings (because the underlying code used to construct and evaluate data models is largely hidden), these tools can nonetheless guide users with pre-configured code for common analyses. They do not necessarily replace highly trained and experienced quant personnel, but they allow operations and maintenance users to conduct “what-if” modeling and analyses, freeing analytics experts’ time to validate the underlying methodologies and models.
Many of these offerings also provide open application programming interfaces (APIs) that allow connectivity to a wide range of data sources. In many cases, software-as-a-service (SaaS) versions are available, offering rapid implementation and a lower total cost of ownership compared to on-premises variants that require the purchase of perpetual licenses and associated hardware.
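Connectivity through such open APIs typically means issuing parameterized HTTP requests against a vendor’s endpoints. The sketch below only composes a request URL with Python’s standard library; the base URL, resource name, and parameter names are entirely hypothetical, and a real integration would follow the vendor’s API reference.

```python
from urllib.parse import urlencode, urljoin

def build_request_url(base_url, resource, **params):
    """Compose a REST request URL for pulling records from an
    analytics platform's open API. Endpoint and parameter names
    here are assumptions for illustration only."""
    # Sort parameters so the resulting URL is deterministic.
    query = urlencode(sorted(params.items()))
    return urljoin(base_url, resource) + (f"?{query}" if query else "")

# Hypothetical request for recent readings from one asset.
url = build_request_url(
    "https://analytics.example.com/api/v1/",
    "sensor-readings",
    asset="pump-07",
)
print(url)
```

In a SaaS deployment, this kind of request replaces the site-local database connections and hardware that on-premises variants require.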
Ed O’Brian is research director of ARC Advisory Group (www.arcweb.com).