Digital Transformation in Process Industry

Background & Industry Challenges

The new normal of “lower for longer” in commodity pricing places significant pressure on process industry producers, driving a need to reduce OPEX at operating facilities. Producers must optimize the efficiency of their existing assets, reduce headcount, update maintenance and reliability strategies and innovate to do more with less. Executives in this environment demand rapid-fire execution of asset reliability, optimization, and risk management on the existing fleet of assets. This challenge is made harder by the overhang of under-investment in Operations Technology and enterprise tools: existing analysis processes age while the rate of change in the technology space accelerates. It is easy to be left behind.

Through effective asset planning, successful organizations can build better decision-making processes to balance costs, risks, opportunities, and performance. Producers need scalable capabilities that allow the organization to stay a step ahead of rapid growth in capacity, data magnitude, problem complexity, and technology evolution. We need to enable superior Enterprise Asset Management, leading to improved workforce utilization and engagement, innovative asset optimization and condition-based maintenance.

Historically there has been a distinct division between the information technology (IT) systems used for large-scale data-centric computing and the operational technologies (OT), with long lifecycles, uninterrupted uptime, and near-perfect reliability, that monitor events, manage processes, and tune industrial operations. The infrastructure for data management and analytics, including computers, storage media, networking systems, and other physical devices, comprises the IT components used to create, process, secure, and exchange electronic data. IT solutions are typically connected to the Internet on a 24/7 basis, protected from the open Internet by firewall security, and transfer data where confidentiality is the priority. Conversely, OT solutions are deterministic, offer fine manipulation for control of mission-critical operating assets, are protected from the Internet by isolation, and transfer signals where control is the priority. Isolated from IT, the OT associated with upstream oil & gas, midstream oil & gas, power, utilities, and renewable industries includes industrial control systems (ICS) such as supervisory control and data acquisition (SCADA) solutions and distributed control systems (DCS).

Technology Vendors in the IT space, such as Microsoft, IBM, TIBCO, and Amazon, are marketing increasingly capable offerings around dashboarding & visualization, analytics and machine learning. Process industry executive-level decision makers are interested in leveraging these new tools against both their existing Operations Technology datasets and newer Internet of Things (IoT) devices streaming enormous volumes of data in real time. Working with Technology Vendors on procurement, design and installation to ensure integration with existing OT devices, and fit within the IT infrastructure, is not trivial. While replicating staggering amounts of real-time and historical sensor data to the Cloud is integral to transformation, the process is by nature disruptive because of its impacts on business process, budgeting and security.

While IT has traditionally accounted for tightly integrated communications strategies as a critical component of its scope, OT has relied on a separate network infrastructure stack for communicating with plant instrumentation & field devices. OT networks are characterized by proprietary protocols between instrumentation, DCS & PLCs, often with nothing more than air-gapping as a means of isolating the equipment from any exposed network. OT solutions are optimized for uptime through fault tolerance at both the software & hardware layers. OT networks running in dull, dangerous, distant & dirty (the 4 D’s of Operations) scenarios normally include on-site spares pre-programmed to that operation’s requirements to further reduce downtime – a contrast with IT’s trend toward just-in-time provisioning.
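
To make the contrast concrete, the sketch below polls a single holding register over Modbus/TCP, one of the simpler protocols found on OT networks, using only the Python standard library. The host, port, unit id and register address are hypothetical placeholders; a production integration would use a maintained client library and respect the network isolation described above.

    # Minimal sketch: reading one holding register over Modbus/TCP.
    # Host, unit id and register address are hypothetical.
    import socket
    import struct

    def read_holding_register(host: str, address: int, unit_id: int = 1) -> int:
        """Read a single 16-bit holding register (function code 0x03)."""
        pdu = struct.pack(">BHH", 0x03, address, 1)           # func, start addr, qty
        # MBAP header: transaction id, protocol id (0), remaining length, unit id
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, 502), timeout=2.0) as sock:
            sock.sendall(mbap + pdu)
            resp = sock.recv(256)
        # Response layout: MBAP (7 bytes), function code, byte count, data
        if resp[7] & 0x80:
            raise IOError(f"Modbus exception code {resp[8]}")
        (value,) = struct.unpack(">H", resp[9:11])
        return value

    # Example (hypothetical PLC): pressure_raw = read_holding_register("10.0.0.15", 0)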

With network & server infrastructure, OT vendors are beginning to leverage IT-derived capabilities in their stacks, but the degree of standardization isn’t as mature as with IT systems. The need for uncompromising reliability and near-100% uptime in OT solutions remains imperative. Where customers of IT departments readily accept downtime, both scheduled and unscheduled, without the threat of crippling financial loss, OT personnel not only expect, but go to great lengths to ensure, near-zero downtime for sensors, controllers and other assets deployed in the field. Consequently, enhancements, changes, updates, and patches in a traditional IT system can be scheduled on a frequent and recurrent basis to correspond with expected downtime, while patching & upgrade cycles for OT solutions must adhere to plant maintenance & turnaround activities, often planned annually.

Complication: Technology Shift

Despite the hurdles that IT and OT must overcome in pursuit of collaboration, we see a rapid integration of IT and OT, with increasing numbers of sensors and connected systems such as wireless sensor and actuator networks (WSANs) enhancing the management of industrial environments. This IT/OT convergence is characterized by the evolving cooperation and integration of networking, data security, interoperability and communications in industrial environments as essential and integral components of a developing Internet of Things (IoT).

A growing collaboration between Information Technology teams & Operations Technology teams is providing an opportunity for a shared strategic vision and a better alignment of all systems (including IT and OT) with enterprise business goals. Convergence means that engineers, operations staff and OT practitioners must work closely with IT professionals to break down organizational & systems/data silos. Driven by digital transformation & convergence, security, traditionally the purview of IT, is being applied to OT as systems evolve to become more communicative and capable of exchanging huge volumes of sensor data and control instructions over standardized network topologies. Enterprise architecture and the design of IT solutions comprising computers to process, transmit, and store data is being driven down to the level of industrial networks & operating machinery.

The convergence of IT with OT can disrupt OT point solutions that are designed to execute focused, equipment- and process-train-critical tasks within very complex system monitoring and control operations. It is also true that existing OT solutions may not be up to the challenge of meeting stakeholder requirements. This is becoming apparent as the growth of web technologies outpaces the upgrade and replacement of one-off proprietary capabilities by niche vendors who can no longer compete with the massive R&D budgets available to Technology Vendors like Microsoft, IBM, TIBCO, and Amazon. OT hardware and systems must become accessible via frameworks for cross-platform web development supporting HTML5, CSS3, and JavaScript on agents and devices as diverse as smartphones, tablets, and similar intelligent hand-held mobile hardware. During the transformation process, the systems that integrate OT with IT infrastructure may migrate from isolation to central hosting on virtualized servers, which means that not only data but also operational control of OT devices is shared across the Internet of Things. An integrated solution must also handle safety, traceability, cybersecurity, and access control.

Successfully implemented, IT/OT convergence delivers many benefits, such as improvements in the computerized automation of sensing, along with increased visibility of sensor data. Convergence also leads to solutions which enhance the control an enterprise has over distributed operations, unlocking step-change improvements in maintenance programs and enabling integrated operations (iOps) to function.

One critical characteristic that sets this evolution apart from the convergence of IT with other business domains is the challenge presented by the enormous and ever-increasing volume of historical data produced, and real-time data streamed, by OT devices: there is now a legitimate need for near-infinite scalability in computational analytics and data storage.

Not surprisingly, this is driving the adoption of cloud storage and “Big Data” approaches to advanced analytics. The best-known platform for storing and analyzing Big Data is Apache Hadoop, the leading open-source framework for distributed storage and processing. Before Hadoop, storage of data at this scale was expensive and consumed a majority share of even the most generous IT budgets. In the closed-source world, Microsoft has implemented significant capabilities on Azure, including IoT Event Hubs, Stream Analytics, and Time Series Insights for sensor data storage & visualization. With Hadoop now available on Microsoft Azure, the opportunities to deliver an analytics strategy for plant operations on a hybrid open/closed solution become interesting.
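
As an illustration of that hybrid approach, the sketch below shows the kind of batch rollup an engine layered on Hadoop storage, such as Apache Spark, might run over raw historian exports. The HDFS paths and column names are assumptions for the example, not a prescribed layout.

    # A minimal Spark sketch: roll second-level sensor readings up to hourly
    # statistics per tag. Paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("SensorRollup").getOrCreate()

    # Raw historian export: one row per reading (tag, timestamp, value)
    readings = spark.read.csv("hdfs:///plant/historian/2018/*.csv",
                              header=True, inferSchema=True)

    hourly = (readings
              .withColumn("ts", F.to_timestamp("timestamp"))
              .groupBy("tag", F.window("ts", "1 hour"))
              .agg(F.avg("value").alias("avg_value"),
                   F.min("value").alias("min_value"),
                   F.max("value").alias("max_value")))

    hourly.write.mode("overwrite").parquet("hdfs:///plant/rollups/hourly")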

For some organizations, this forces the question of whether OT engineers and operators are trained to use the new solutions, or IT professionals are educated in the OT domain. By leveraging IT team capabilities for infrastructure management, OT personnel can be freed up to engage in the use of new tools and approaches for managing the suite of OT devices, develop additional experimental hypotheses, and test solutions against real-world operating scenarios. Technical skillsets in both the OT and IT departments need to grow significantly to access these opportunities.

Organizational Considerations

Hybrid data streaming, storage and analytics solutions lead to better Business Intelligence (BI) that can immediately inform executive decisions in an increasingly complex and financially constrained market. Executives and OT professionals need secure data access and tools capable of presenting clear visualizations & insights covering a wide array of asset parameters, data touch-points, and performance metrics. Successful solutions provide real-time governed data and targeted analytics to create dashboards and ad hoc reports, ensuring that OT requests and data no longer overwhelm the enterprise. In achieving these goals, the organization must consider many factors, ranging from technical and environmental to those that centre on security and system compatibility. These factors combine to further define the divide between IT and OT.

Environmental

Consider the difference in time horizons between IT and OT. Inherent in IT solutions is a culture of support and maintenance focused on delivering fast and comprehensive support for issues that arise. The IT space is defined by quicker and more reliable computing, with increases in processing speed, capacity, data transfer rates and fault tolerance, streamlined protocols, adoption of standards, and advanced architectures. Processing power has grown exponentially for nearly 50 years without fail. A modern smartphone has more power in a hand-held package than all the buildings full of computers used to put men on the moon in the 1960’s.

OT has evolved around much less time-bound priorities. Rather than increases in processing power, OT devices have improved in their ability to withstand hostile environments for very long periods, where extremes of temperature, pressure, humidity, and access are the norm. In OT, change management requirements are often addressed through device discovery & self-configuration capabilities.

OT equipment is designed to deliver consistent reliability, accuracy, safety, and non-stop performance. While IT systems can be incrementally built and re-built under an Agile development framework and rapidly deployed, with associated downtime, OT solutions are built to be resilient, reliable, and to deliver near-100% uptime. Most OT devices do one thing and do it well. For instance, gas pressure sensors may be expected to monitor pressures in an industrial process within an oil and gas application and raise alerts to prevent an over-pressure situation that could lead to an explosion. Safety is paramount. Sensors of this kind have been on the market, operating against engineering first principles, for over 50 years.
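
For illustration only, the deadband logic behind such an over-pressure alert might look like the sketch below. The setpoints are hypothetical, and in practice this logic lives in the DCS/PLC safety layer rather than in application code.

    # Alarm-with-deadband sketch; thresholds are hypothetical.
    HIGH_ALARM_KPA = 950.0   # trip threshold
    DEADBAND_KPA = 25.0      # hysteresis to avoid alarm chatter near the threshold

    def alarm_state(pressure_kpa: float, in_alarm: bool) -> bool:
        """Return the new alarm state given the latest reading."""
        if not in_alarm and pressure_kpa >= HIGH_ALARM_KPA:
            return True                                  # rising edge: raise alarm
        if in_alarm and pressure_kpa <= HIGH_ALARM_KPA - DEADBAND_KPA:
            return False                                 # clear only below deadband
        return in_alarm

    # Example: a reading of 960 kPa raises the alarm; it clears only below 925 kPa.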

Competing Standards

IT cyber security standards include ISO/IEC 27001 and 27002, amongst others. In OT, the ISA99/IEC 62443 standard for security of Industrial Control Systems prescribes an operating framework for practitioners. Issues may arise when IT and OT standards and their corresponding execution approaches collide. Traditionally, organizations maintained separate accountability and governance structures for IT and OT, but as the two morph towards alignment, the demarcation between them dissolves and management approaches become normalized across the enterprise. It is important to ensure that new governance processes meet OT systems’ needs and that duplication of effort is avoided.

Risk

IT and OT teams often view risk differently. Consider uptime, for example. In financial industry automation, IT is accountable for system uptime, particularly when downtime affects the bottom line. In less critical applications, uptime may not be considered an essential requirement to the same degree. The degree of risk associated with uptime can be thought of as lying on a continuum, pegged to the ability of the enterprise to succeed even if the flow of accurate information is temporarily interrupted. The same is not true in OT, where downtime not only affects the bottom line through offline or de-rated processing facilities & loss of product quality, but may lead to significant impacts that go beyond financial implications, such as life-threatening catastrophic failure & environmental damage.

Security

There are often lingering security concerns around migrating mission-critical data sets to the cloud or making Operations Technology data sets available on corporate networks. Whilst modern IT solutions are built with security as a cornerstone of their design, in the case of OT systems, many of which have been in place for 20+ years, the going-in assumption was for an air gap to exist between the industrial control network and the outside world. No organization should seek to implement a convergence strategy without first taking stock of current weak points in the OT network and systems stack. Expect security considerations to feature prominently in I/IoT and digital transformation initiatives for plant systems.

Aspirations

Industry players want to empower their most senior technical resources with a data visualization and analytics platform that provides the tools to evaluate hundreds of assets quickly and efficiently. Optimizing the view of Big Data stores against time-synchronized measurements provides better situational awareness, improved decision-making support, and new automated control strategies. This moves industry focus from current “state estimation” into the realm of real-time “state measurement,” which not only results in increased reliability, accuracy, and control but also offers supplemental support to enhance static state-estimation tools.

It follows that new high-performance analytic computing techniques are now required to process the rapidly growing historical and real-time volume of Big Data. In fact, the magnitude of Big Data in typical applications is so large that it cannot be analyzed efficiently without analytics and, at some point, machine learning algorithms. Analytic platforms such as Hadoop offer extensible configuration parameters that maximize performance. Optimized algorithms, such as De-trended Fluctuation Analysis (DFA) or Enhanced Parallel De-trended Fluctuation Analysis (EPDFA), can be used to deliver scalable and highly performant analytics on massive stores of data. The Hadoop MapReduce framework, which has become a de facto standard for Big Data analytics, has been proven in domains as demanding as gene expression analysis over massive volumes of DNA data.
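
A compact NumPy rendering of the DFA calculation named above is sketched here; the window sizes are illustrative, and the parallel (EPDFA) variant would distribute the per-window detrending across a cluster.

    # De-trended Fluctuation Analysis of a single sensor series.
    import numpy as np

    def dfa(series: np.ndarray, scales=(16, 32, 64, 128, 256)) -> float:
        """Return the DFA scaling exponent alpha of a 1-D signal."""
        profile = np.cumsum(series - np.mean(series))    # integrated, mean-removed
        fluctuations = []
        for n in scales:
            n_windows = len(profile) // n
            windows = profile[:n_windows * n].reshape(n_windows, n)
            x = np.arange(n)
            rms = []
            for w in windows:
                trend = np.polyval(np.polyfit(x, w, 1), x)   # detrend each window
                rms.append(np.sqrt(np.mean((w - trend) ** 2)))
            fluctuations.append(np.mean(rms))
        # Slope of log F(n) vs log n is the scaling exponent alpha
        alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
        return alpha

    # Example: uncorrelated white noise yields alpha ~ 0.5
    print(dfa(np.random.randn(10_000)))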

The industry requires clear and rich visualizations of data patterns and trends using easy-to-understand and familiar tools, preferably those already used by operations and engineering personnel, such as Microsoft Power BI, TIBCO Spotfire, and Tableau. The need is for real-time dashboards, ad hoc analyses and reports, connectivity to multiple data sources, unbounded scalability, managed governance, and uncompromising security. Operations personnel want to be able to predict near-to-midterm production and potential upsets with confidence. Tracking equipment utilization, process efficiency, losses and cycle times delivers predictive capabilities to match production to delivery nominations and obligations. This leads naturally to more budgeting certainty for maintenance departments. Furthermore, operators are interested in squeezing end-to-end incremental production from existing facilities, where a 2-3% production lift can mean the difference between profitability and continued losses.

Industry Themes

Data sources typically available only through OT tools, such as time series data, alarms and events, maintenance system records and operator logs, are becoming increasingly accessible to IT-centric analytics. Creating dashboards and analytics applications that allow effortless mashup of data from multiple sources is growing more common in the industry. These tools enable enterprise leaders to better plan and execute decisions, not only to manage the efficiency and profitability of assets but also to achieve a competitive advantage in the marketplace. In the oil and gas industry, executives need answers to difficult questions, such as:

  • Which assets are producing sufficient quantities to meet current and predicted near-term demands?
  • What is my likely equipment maintenance spend next quarter given current health indices of equipment at my facility and upcoming scheduled maintenance? Are any high-expense unplanned maintenance events likely to happen in the quarter?
  • Using advanced oil recovery and gas lifting approaches, how can we maximize returns from existing wells?
  • How does nitrogen lifting compare with natural gas lifting measured by total recovery?
  • What are the operational impacts of maintaining a well that is under-performing by 1%, 3%, 5% relative to other wells in the same geographic area?
  • What are the trends specific to certain types of equipment for mean-time-to-failure? (See the sketch following this list.)
  • The SAGD plant is undergoing a refit of one of the OTSG’s this week. What wells should we shut that will least impact overall production given constrained steam availability?
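
The mean-time-to-failure question flagged above is the kind of query that becomes a few lines of code once OT maintenance logs are exposed to IT-centric tooling. The sketch below is a hedged pandas example; the CSV layout (equipment_id, equipment_type, failure_time) is a hypothetical stand-in for a real maintenance-system export.

    # Mean time to failure per equipment type from a failure-event log.
    import pandas as pd

    events = pd.read_csv("failure_events.csv", parse_dates=["failure_time"])

    # Time between consecutive failures for each individual piece of equipment
    events = events.sort_values(["equipment_id", "failure_time"])
    events["tbf_days"] = (events.groupby("equipment_id")["failure_time"]
                                .diff().dt.days)

    # Aggregate per equipment type
    mttf = events.groupby("equipment_type")["tbf_days"].mean().sort_values()
    print(mttf)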

New data management strategies & visualization tools developed by Technology Vendors, and proven out by IT teams at many companies, allow more substantial collections of assets to be optimized using richer data sets than was previously possible. This enhances overall decision-making by bringing the rigor and precision of data analytics to every stage of delivered service. From technical use cases to cross-functional areas such as HR, health, safety, and the environment, organizations continually need to improve performance and efficiency to remain competitive and innovative.

Solution

Achieving the goals outlined above involves applying Data Mining and Machine Learning to industry problems. Machine Learning consists of the application of sophisticated algorithms that use data and data patterns to “learn,” generalize and predict outcomes. The huge benefit of employing machine learning to handle big data is that the algorithm improves its predictive and analytical capabilities with access to more data. Machine Learning is not a new concept, the term having been coined in 1959 by Arthur Samuel, a computer scientist at IBM. Samuel said, “Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.”

Machine Learning, sometimes in very rudimentary forms, has been used in software since the early 1980’s. In the 1990’s, researchers and computer scientists began developing neural networks and sophisticated artificial intelligence systems to mimic the human brain. Unfortunately, outside the sterility and control of a lab environment, there were few real commercial applications to drive further research and development. What was missing to make a machine learning solution viable at the time was access to vast amounts of data. In the era of the Internet of Things, organizations suddenly have ready access to staggering amounts of historian and real-time data. Rather than an academic novelty, machine learning has become an industrial necessity, providing solutions to harness, clean, package, and analyze massive datasets for use by other applications. Tools like Hadoop offer cost-effective means of managing unbounded collections of Big Data.

At an abstract level, Machine Learning can be described as comprising three interconnected elements: inputs, algorithms, and outputs. The inputs are the Big Data repositories used to train a Machine Learning algorithm. Algorithms analyze the data and turn it into insights by performing specific tasks that map to logical learning processes in one of several characteristic flavors: Supervised Learning, Unsupervised Learning, or Reinforcement Learning.

Supervised Learning uses training data that has already been labelled and structured by engineers and operations subject matter experts. By specifying a known set of inputs along with desired predictive outputs, a machine learns how to map one to the other successfully. The process of Unsupervised Learning is typically slower and relies on the machine’s ability to discern patterns that occur naturally in otherwise unstructured data. An engineer might provide structured data to a supervised learning algorithm through a well-organized spreadsheet, while an unsupervised scenario might involve an unstructured raw data dump from an array of sensors over an extended period. Familiar examples of unsupervised machine learning are the natural language processing (NLP) algorithms used in smart devices such as Google Home, Alexa, Android phones, or Apple iPhones. The devices become “better” at NLP over time, and “learn” to recognize variances in intonation, volume, phrasing, accent, and vocabulary with exposure to more unstructured voice data.
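
The contrast between the two modes can be shown in a few lines of scikit-learn; the feature names and synthetic data below are stand-ins for real historian tags, not a recommended model.

    # Supervised vs unsupervised learning on synthetic sensor-like data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))            # e.g. pressure, temperature, flow
    y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)   # labelled output

    # Supervised: learn a mapping from labelled inputs to a known output
    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    # Unsupervised: find structure (operating regimes) with no labels at all
    regimes = KMeans(n_clusters=3, n_init=10).fit_predict(X)
    print(model.predict(X[:2]), regimes[:10])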

Reinforcement Learning is the digital equivalent of operant conditioning, requiring that the algorithm achieve a predetermined and measurable goal through the post-behavioural application of rewards and punishments. The goal for this form of machine learning is the same as it is in humans and other animals – to strengthen or weaken voluntary behaviours. Much of the early programming of self-driving cars has relied on Reinforcement Learning. Learning by industrial robots in a manufacturing environment offers another use case.
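
A toy tabular Q-learning loop illustrates the reward-and-punishment cycle. The discretized process variable and setpoint below are hypothetical, and real control applications demand far more rigour than this sketch.

    # Toy Q-learning: a controller learns to nudge a process variable to a setpoint.
    import numpy as np

    N_STATES, ACTIONS = 11, (-1, 0, +1)       # discretized PV; nudge down/hold/up
    SETPOINT = 5
    Q = np.zeros((N_STATES, len(ACTIONS)))
    alpha, gamma, eps = 0.1, 0.9, 0.1
    rng = np.random.default_rng(0)

    state = 0
    for _ in range(5000):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt = int(np.clip(state + ACTIONS[a], 0, N_STATES - 1))
        reward = -abs(nxt - SETPOINT)         # punished for distance from setpoint
        Q[state, a] += alpha * (reward + gamma * Q[nxt].max() - Q[state, a])
        state = nxt

    print(np.argmax(Q, axis=1))               # learned action for each state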

In the oil, gas, energy, power, utilities, and renewable industries, we see Machine Learning playing a vital role in the application of advanced trend visualizations, statistical modelling, and the development of highly accurate predictive models to handle formerly stochastic, and often overlooked, events. Through its implementation, Machine Learning will improve safe operations, asset reliability, and the production volume and quality gains that result from optimization initiatives. Additionally, Machine Learning will cultivate standardization of design for expansions and retrofits following zero-based design principles. In other words, rather than adopting the status quo and modelling all new development on old implementations, OT engineers and operations experts will be able to quickly and efficiently examine all factors and influences in the design, starting from scratch, free of assumptions and default parameters. In this way, each new expansion or retrofit will be designed from the ground up to be optimal.

Opportunity

Significant opportunities exist in the areas of near-to-mid-term production optimization at a time when price volatility, environmental oversight, and asset maturity add challenges to the industry.

iSolutions sees an opportunity to use IT-OT convergence and digital transformation to drive the design and implementation of standardized OT data models that accurately represent equipment health and real-time behaviours. By design, these models are free of design constraints and are optimized to deliver near-limitless scalability while supporting seamless interfaces with analytics tools within machine learning frameworks. The data models, if set up correctly, provide a domain-specific, discoverable ontology so that Subject Matter Experts (SMEs) and data scientists can perform asset optimization and asset performance monitoring use cases, both on single assets and from a fleet-wide perspective. iSolutions’ data models also support Enterprise Metadata Management (EMM) best practices to ensure comprehensive and consistent handling, efficacy, and usability of all information facets. All data constructs in the models are, by design, amenable and responsive to change management governance. We believe it is critical that models satisfy all of the needs noted above without compromising the security or performance of the underlying OT network and instrumentation. The overall solution provides engineering and Operations staff with a unified, integrated analytics platform comprising data, visualization and machine learning components combined to yield outsized results.
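
Purely as an illustration of what “discoverable” means in practice (and not a rendering of iSolutions’ actual models), an asset model with typed, unit-aware attributes might be shaped as follows; all names are hypothetical.

    # Illustrative shape for a discoverable OT asset model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Attribute:
        name: str            # e.g. "DischargePressure"
        tag: str             # historian tag the attribute maps to
        unit: str            # engineering unit, e.g. "kPa"
        description: str = ""

    @dataclass
    class Asset:
        name: str                      # e.g. "Pump-101"
        asset_type: str                # template name, e.g. "CentrifugalPump"
        attributes: List[Attribute] = field(default_factory=list)
        children: List["Asset"] = field(default_factory=list)

    pump = Asset("Pump-101", "CentrifugalPump", [
        Attribute("DischargePressure", "P101.PV", "kPa"),
        Attribute("MotorCurrent", "P101.AMPS", "A"),
    ])
    site = Asset("Plant-A", "Site", children=[pump])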

IT-OT convergence and digital transformation should further allow the application of advanced visualization. In our implementations, notification and reporting tools interfacing with the OT data models reduce cycle times for root cause analysis, streamline stewardship activities and unlock management-by-exception of core assets. Visualization can be delivered through the design and implementation of highly targeted, precise data mining and machine learning solutions. A significant benefit of this approach is the opportunity to unlock the capabilities of existing OT asset optimization and advanced process control solutions by repackaging proven solutions. This approach can ensure the application of optimization and control algorithms both on premises, through edge computing devices, and in the cloud, through stream analytics, and can effectively lead to the dissolution of organizational silos. It follows that best-of-breed functionality will touch the entire portfolio of assets and equipment without having to displace or disrupt existing core control strategies in DCS/SCADA.

Real-time tools can provide monitoring for individual pieces of equipment thanks to the availability of high-resolution data and a wealth of engineering knowledge built up over many years. However, these tools struggle when trying to evaluate equipment performance across a fleet containing hundreds or thousands of components, such as oil wells, where increasingly sizable multi-year data sets are needed to generate the most interesting insights. With oil wells, for instance, the opportunity exists to improve on these capabilities and optimize production factors such as rod lift, gas lift, and progressive cavity pumping lifting costs by employing predictive modelling to maximize the lifecycle parametric control of assets.

Statistical modelling can be used to identify the parameters that most affect the operation of equipment. Regression analysis, for instance, can efficiently estimate the relationships between controllable parameters and known variables in complicated setups. A dependent variable, such as product output, can be determined relative to known independent variables, or predictors, that govern the quality of the result. For instance, to maximize total oil production, the constrained supply of injection gas (such as natural gas or nitrogen) must be distributed optimally across numerous wells to maximize gas lift & meet target bottom-hole pressures. As reservoir pressure declines, the efficacy of gas lifting changes in a non-linear fashion that can be represented with a multi-variable regression curve.
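
A hedged sketch of that regression idea: fitting oil rate as a non-linear function of gas injection rate and reservoir pressure. The data here is synthetic; a real study would use historian and well-test data, and the diminishing-returns relationship below is only assumed for illustration.

    # Multi-variable polynomial regression for gas-lift response.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    gas_inj = rng.uniform(0.2, 2.0, 300)                 # MMscf/d, hypothetical
    res_press = rng.uniform(800, 1500, 300)              # psi, hypothetical
    # Assumed diminishing returns from extra injection gas, scaled by pressure
    oil_rate = (res_press / 10) * np.log1p(gas_inj) + rng.normal(0, 5, 300)

    X = np.column_stack([gas_inj, res_press])
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, oil_rate)

    # Predicted lift response for one well at a candidate injection rate
    print(model.predict([[1.2, 1100.0]]))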

Neural networks and genetic algorithms can be used to predict production rates given varying inputs and to optimize the costs and gains in a system. This allows operations and engineering staff to perform ‘what-if’ experiments, operating, in essence, a digital twin of the asset without affecting real-world production parameters until outcomes are reasonably well understood. For instance, consider an oil well assisted by a gas lift extraction method. Using neural networks, it is possible to simulate and study the impact and interrelationships of a variety of inputs and variables: static pressure in the reservoir; wellhead pressure; lengths and diameters of tubing and lines; chokes; and other piping components may all affect the outcome in a non-linear fashion. With neural networks, production gains can be maximized, costs reduced, resources conserved, and the life of wells extended.
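
A minimal sketch of that ‘what-if’ surrogate: a small neural network trained on historical well behaviour, then queried with candidate operating points. The inputs and the synthetic response function are placeholders, not a validated well model.

    # Neural-network surrogate ("digital twin") for what-if sweeps.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    # Features: reservoir static pressure, wellhead pressure, gas injection rate
    X = rng.uniform([800, 100, 0.2], [1500, 400, 2.0], size=(2000, 3))
    y = 0.05 * X[:, 0] - 0.1 * X[:, 1] + 60 * np.log1p(X[:, 2]) + rng.normal(0, 2, 2000)

    twin = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

    # What-if: sweep injection rates at fixed pressures without touching the well
    candidates = np.column_stack([np.full(10, 1200.0), np.full(10, 250.0),
                                  np.linspace(0.2, 2.0, 10)])
    print(twin.predict(candidates).round(1))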

Decision trees and related classification algorithms can also be used to identify particular pieces of equipment that are operating outside their normal envelopes. This can lead to early identification of high-risk assets to mitigate production upsets and potential safety incidents.
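
One way to sketch this is with an Isolation Forest, a tree-ensemble relative of the decision trees mentioned above, flagging equipment that drifts outside its normal envelope. The per-asset features and fault pattern below are hypothetical.

    # Tree-based anomaly flagging across a fleet of assets.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(3)
    normal = rng.normal([70, 3.1], [2, 0.1], size=(480, 2))    # temp, vibration
    faulty = rng.normal([85, 4.0], [3, 0.3], size=(20, 2))     # drifting outliers
    X = np.vstack([normal, faulty])

    clf = IsolationForest(contamination=0.04, random_state=0).fit(X)
    flags = clf.predict(X)                    # -1 = outside normal envelope
    print(f"{(flags == -1).sum()} assets flagged for inspection")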

The Cost of Inaction

With so many process industry players currently implementing or considering an IT-OT convergence/digital transformation strategy, and given the potential benefits available, the cost of inaction is staggering. Organizations that resist or ignore digital transformation & convergence will become increasingly brittle, less able to respond to the threat of further erosion in commodity prices or to unlock near-term opportunities in niche commodities. Enterprises that fail to act will experience a continuation of existing threats and issues. In the near term, there will be a continued loss of operational and engineering knowledge as a result of demographics, along with the regular resource churn that accompanies the departure of personnel from the organization. This is exacerbated when the knowledge and expertise of personnel aren’t captured prior to their departure. Passive or reactive companies will have a reduced ability to stay ahead of the curve in identifying and planning for threats & opportunities in the operation of assets & facilities. This will lead to frustration for young engineers and operations staff who expect to apply newer toolsets to operations problems, consistent with their education in academic settings. Smart people may ultimately leave the organization. Companies that stagnate now will lose their competitive capability in the marketplace and miss getting the right information at the right time, in the right form, to the right person, so that strategic decisions can be taken. Additionally, a failure to grow and adapt to the fusion of IT and OT will reduce an organization’s ability to measure and mitigate operational, environmental & safety risks relative to companies in their peer group. Resistance to change is not an option in an era when the status quo is inadequate.

The New Reality

Organizations that successfully leverage advanced analytics & IT-OT convergence will thrive as better companies where employees enjoy working, and will be able to:

  • Empower engineering operations through the introduction of tools that enable analysis at the speed of thought.
  • Extract maximum value from the engineering and operations expertise of senior staff while they’re still with the organization.
  • Enable a continuous approach to performing machine learning experiments, creating an information factory that can be scaled to tackle threats and opportunities.
  • More quickly build incremental production from existing assets.
  • Construct highly-accurate predictive models to handle formerly stochastic, and often overlooked, events.
  • Improve safety and environmental outcomes.

About iSolutions

iSolutions are the recognized experts in industrial production data management, with a 10+ year commitment to helping our clients unlock value from their OT data sets through data historians, field data capture and alarm & event journaling tools. Existing partnerships with OSIsoft, Capstone, Seeq, Spartan Controls and Emerson demonstrate our capacity and commitment to Operations & Engineering teams.

More recently, iSolutions has been investing heavily in the areas of advanced visualization & real-time dashboards and data mining & machine learning, using both on-prem and cloud-hosted solutions. We have built expertise in the areas of well optimization for SAGD, input optimization algorithms (field-, pad- & well-level steam allocations, for instance), equipment health index prediction for transformers in the utilities space, and ROP/completion stage optimization in drilling & completions. Our dashboarding solutions are helping clients better manage compliance risk in the areas of volumetric reporting & measurement integrity. This work is borne out through our partnerships with Microsoft and TIBCO.

iSolutions combines deep domain knowledge in process industry with an ability to inter-operate with IT teams to bridge the gap between IT and OT in the interest of maximizing returns. Our OT data models for physical plant equipment are designed to inter-operate with IT-centric analytics. We will help your organization quickly zero in on high-value blue-ocean opportunities opened up by IT-OT convergence & digital transformation. We have solutions for asset optimization and failure prediction that highlight operational Key Performance Indicators (KPIs) for facilities and assets, made visible through responsive analytics and highly configurable dashboards.

iSolutions is actively looking for opportunities to work with existing and new clients to identify, quantify & execute on asset optimization/asset reliability use cases through low-cost proof-of-concept initiatives that can lead to broader implementations across a portfolio of facilities & assets.
