7 Biggest Lessons For A Better Oil and Gas Analytics Engine

We encounter a wide spectrum of perceptions from companies about oil and gas analytics, especially machine learning and AI. Here we share some of our biggest lessons about building a better analytics engine.

Samantha McPheter
February 9, 2023

The hype is strong around machine learning and artificial intelligence in many industries, including oil and gas. We encounter a wide spectrum of perceptions from companies about how they see oil and gas analytics, especially machine learning and AI.

On one end, some believe that their trove of SCADA data should be easy for machines and data scientists to ingest and scan for the telltale multivariate signatures of future problems and failures. They think -- so long, alarm fatigue and fighting fires. On the other end, we encounter companies who have invested a lot of time and money in such efforts and have run into big challenges, never reaching their advanced analytics ambitions. They wonder -- what do we do now?

At eLynx, we have spent twenty years continually improving the way oil and gas companies gather SCADA data and use analytical tools to make better and faster decisions that save costs and increase production. In working to turn the theoretical potential of machine learning and AI into a useful, practical reality for companies, we are learning the fundamental lessons and important steps that every company will have to navigate.

In this article, we want to share some of our biggest lessons about building a better analytics engine. It will awaken many to the more complicated reality of what has to be done and give companies who have hit the same challenges a tangible path past them. Ultimately, we share our biggest lesson of all: making progress on the fundamentals pays off now, not just in some idealized and distant future.

Lesson 1: High Quality, High Granularity Time-Series Data Is The Foundation

We’ve written about this before, but it is one of the first and most fundamental challenges. Many oil and gas companies have lots of data, but it remains scattered, is not standardized or normalized to make comparison possible, is not collected or reported frequently enough to capture important variations, and has often not been checked for accuracy over time. The vast majority of oil and gas companies do not have the data that makes any sort of better analytics possible.

You can feed data to the world's greatest supercomputer and scientists, but if it is Swiss cheese riddled with gaps and accuracy issues, the result will be wildly inaccurate alarms and guidance about what is wrong. The data can be cleaned up in various ways, but that relies on estimation and ignoring omissions, and it is only good for a one-time, static analysis of a data set. As new data pours in by the second or minute in the oil field, useful analysis depends on having high quality, clean data in real time.
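To make this concrete, here is a minimal sketch in Python of the kind of data audit that surfaces gaps and suspect readings before anything more ambitious is attempted. The column names, reporting interval, and valid range are hypothetical placeholders for illustration, not a prescription:

```python
import pandas as pd

def audit_readings(df, expected_interval="1min", valid_range=(0.0, 5000.0)):
    """Audit a single SCADA tag; df has datetime 'timestamp' and numeric 'value' columns."""
    df = df.sort_values("timestamp").set_index("timestamp")

    # Resample to the expected reporting interval; intervals with no reading become NaN.
    resampled = df["value"].resample(expected_interval).mean()
    gap_count = int(resampled.isna().sum())

    # Flag readings outside a plausible physical range for the sensor.
    low, high = valid_range
    bad_count = int(((df["value"] < low) | (df["value"] > high)).sum())

    coverage = 1.0 - gap_count / max(len(resampled), 1)
    return {"coverage": coverage, "gaps": gap_count, "out_of_range": bad_count}
```

A report like this, run continuously, is what tells you whether the data is even fit to analyze.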

Lesson 2: Data Needs Labels

Let’s assume a company has lots of high quality, granular data. That still doesn’t mean the data carries enough meaning to be useful. This is a huge gap we have found in most oil and gas companies’ SCADA applications and SCADA analytics: in the rush to collect more data, they overlook the importance of documenting what events happen and where in the data series they occur.

They don’t realize how important it is to add labels or explanatory tags to the data, in which cases to do so, or how often. Some companies do catalog events and failures but have no consistent way to attach them to their SCADA data inside their SCADA software.

Analysis and learning of any sort can only be as good as the information available. Without events, actions, failures, and explanations adding meaning and context, any sort of analysis by human or machine will be seriously hampered. Significant connections and correlations will never be discovered.
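As an illustration of what labeling adds, here is a minimal sketch, with made-up tag names and event labels, of attaching a labeled event window to raw readings so that downstream analysis sees the context and not just the numbers:

```python
import pandas as pd

# Readings and labeled event windows; the tag names and labels are illustrative.
readings = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-02-01 08:00", "2023-02-01 08:01", "2023-02-01 09:00"]),
    "tubing_pressure": [412.0, 395.5, 401.2],
})
events = pd.DataFrame({
    "start": pd.to_datetime(["2023-02-01 07:55"]),
    "end": pd.to_datetime(["2023-02-01 08:30"]),
    "label": ["plunger_stuck"],
})

# Tag each reading that falls inside a labeled event window.
def label_for(ts):
    hit = events[(events["start"] <= ts) & (ts <= events["end"])]
    return hit["label"].iloc[0] if not hit.empty else None

readings["event_label"] = readings["timestamp"].apply(label_for)
print(readings)
```

Without rows like that last column, a model only ever sees pressures going up and down; it never learns what the excursions meant.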

Lesson 3: Failures Need Granular Reporting

It’s customary at many oil and gas companies to log well and device failures on a daily basis. That level of detail is not sufficient to bring context and meaning to data. Since failures are the causes of inefficiencies, each failure needs to be logged with a specific time and failure type so that patterns and correlations can be discovered; no analytics engine can learn without precise inputs like these. This requires that companies build discipline into how all their people do field data capture. We have learned that the right digital interface, available at all times, builds this habit, which is one of the reasons we pay so much attention to user experience.
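One way to picture the difference is the shape of the record itself. The sketch below uses hypothetical field names; the point is that a failure is captured with a precise timestamp and type at the moment it is observed, not rolled up into a daily note:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FailureRecord:
    well_id: str
    failure_time: datetime   # the precise time, not just the day it happened
    failure_type: str        # e.g. "compressor_trip", "rod_pump_stuck"
    component: str           # the equipment involved
    reported_by: str         # who captured it in the field
    notes: str = ""

# Captured at the moment the failure is observed, not summarized at day's end.
record = FailureRecord(
    well_id="WELL-042",
    failure_time=datetime(2023, 2, 1, 8, 17),
    failure_type="compressor_trip",
    component="compressor",
    reported_by="lease_operator_07",
    notes="unit tripped on high discharge temperature",
)
```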

Lesson 4: Constant Field Validation Is Required

When alarms sound or production engineers spot problems based on analysis, the analytics engine can only get smarter and more refined by getting feedback from operators and other field personnel. Without feedback in the loop, no model -- simple or highly sophisticated -- and no person can know what worked and what didn’t and adjust accordingly.
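In data terms, the loop can be as simple as the sketch below, using made-up records: each field response to an alert is both a score for how well the alerting is working and a labeled example for the next round of tuning.

```python
# Field feedback on recent alerts (illustrative records, not real data).
feedback = [
    {"alert_id": "A-101", "confirmed": True,  "observed": "stuck plunger"},
    {"alert_id": "A-102", "confirmed": False, "observed": "sensor drift, no failure"},
    {"alert_id": "A-103", "confirmed": True,  "observed": "compressor trip"},
]

# The simplest training signal there is: how often were the alerts right?
confirmed_rate = sum(f["confirmed"] for f in feedback) / len(feedback)
print(f"Field-confirmed alert rate: {confirmed_rate:.0%}")

# Each record also becomes a labeled example for tuning alarms or retraining a model.
```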

Instead of machine learning and AI eliminating the human component, we have learned that they require greater integration with human activity and rely on human perspective to build and refine intelligence. Enabling better field data capture through mobile devices with intuitive human machine interfaces (HMI) is crucial for capturing this. eLynx makes it incredibly simple to gather feedback from the field quickly on any device.

We learned the human has to be the center of any analytics engine. While some dream of an omniscient digital production engineer, the reality is that data labeling and field validation will always be crucial to building a more effective analytics engine.

Lesson 5: There Is No Big Leap, Only Getting The Smaller Steps Done Right

The hype around machine learning and AI, combined with the common misperception that a large quantity of data is all that is needed to leapfrog from rudimentary SCADA systems to advanced, automated analytics, is the bane of many oil and gas companies’ initiatives. Much time and money will be wasted with little to show for it.

What makes any sort of analytics possible is quality data, greater context, and continuous feedback on analytical insights and suggestions. These foundational capabilities and processes must be in place and tightly integrated into how an entire oil and gas company works to enable more powerful analysis and decision making. There is no skipping any of these steps; they have to be properly executed to make a difference.

Lesson 6: It’s Not About A One-Time Experiment, But Making Analytics Operational Across the Organization

In analytics, there is no time except real-time. The window of opportunity to get ahead by discovering problems and taking the right actions is limited, so an analytics engine has to be built to find these as they occur.

Machine learning and AI are often used initially to examine historical, static data sets that have been carefully pruned and cleaned. While this showcases how computers can find more needles in a haystack, it does not create the conditions needed to do so consistently and accurately in real time, as new data floods in from the oil field every moment.

All of our learnings from the lessons above are needed to create the analytics engine that can provide more useful and meaningful outputs as near to real-time as possible. This requires gathering quality time-series data that is labeled well, documenting failures at the proper granularity, and incorporating validation from the field. These are the core ingredients for doing better analytics now and in the future.  

Doing more sophisticated analysis is not the biggest challenge. It’s being able to do it consistently and constantly based on the variations in all your real-time data.  
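To show the shift in mindset, here is a minimal sketch of a check that runs on every reading as it arrives rather than once over a cleaned historical file. The window size and deviation threshold are assumptions for illustration, not recommended values:

```python
from collections import deque

# A rolling window of the last hour of one-minute readings (sizes are illustrative).
window = deque(maxlen=60)

def on_new_reading(value):
    """Called for every reading as it arrives, not once against a historical file."""
    window.append(value)
    if len(window) < window.maxlen:
        return None  # not enough recent history yet
    mean = sum(window) / len(window)
    # Flag readings that deviate sharply from the recent rolling average.
    if abs(value - mean) > 0.25 * mean:
        return f"deviation: reading {value:.1f} vs rolling mean {mean:.1f}"
    return None
```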

Lesson 7: The Payoff Is All Along The Way, Not Just At The End  

One of the most encouraging things we have learned is that value and better decision making arrive as oil and gas companies put these lessons to work. While everything we have discussed will be needed for advanced machine computation, doing it well provides immediate value today.

Better data means more accurate alarms and data visualizations. Better labeling will provide greater insights to engineers and operators who are using tools to map correlations. Feedback and validation from the field will help your people adjust alarms and learn to spot specific problems. Aided by the right tools, decision making in the oil field progresses.

As techniques for machine learning and AI that scale are developed, the companies with the proper fundamentals and inputs in place will be set up to use them more quickly and effectively than anyone else. But there is no waiting around for that day: putting these lessons to work brings immediate returns. They are hard to implement, however, without the right knowledge, tools, and processes in place. It is not an easy undertaking.

We’ve Gotten Ahead By Discovering and Working On The Real Challenges

All of these lessons come from being on the ground and finding the problems that get in the way of doing more advanced analytics in the cloud. While we are looking ahead at the future and how to get there, we are obsessed with how we bring oil and gas companies more value today.

Many companies do not have the fundamentals in place to progress, so they need to walk before they can hope to run. But they should realize that walking will actually get them places and make much bigger steps forward possible.

At eLynx, we are always trying new things, discovering what works and what doesn’t, and making sure that what we do can be implemented in the oil field. This is how we help companies make better decisions faster to improve the bottom line.
