“Automation applied to an inefficient operation will magnify the inefficiency.”
Bad inputs create bad outputs. This fact applies to all automation systems. With the amount of hype around AI right now, it seems everyone is asking, “How can I get AI?” Yet no one is asking whether their data is good enough for AI in the first place.
The complexity and variety of manufacturing, which involves coordinating many different systems and mapping a physical reality onto a digital model, make data quality particularly hard. The challenges are twofold:
- Many specialized systems exist to solve particular manufacturing problems, and coordinating data exchange between different systems and functions is hard.
- To capture these data flows in a way that is useful later, you need a standard model, and coming up with a standard model that can accommodate such heterogeneous data is hard. Even harder is coming up with a model that grows richer over time rather than more chaotic.
Fortunately, a single standard provides a solution to both problems: ISA-95. The activity model in Part 3 defines how to coordinate data between every activity of the manufacturing operation. And the entities for resources and work performance in Parts 2 and 4 provide the attributes and relationships you need to build a flexible graph model that extends to any operation.
The activity model: Make way for high-quality data integration
Take a look at this diagram, a stylized version of the activity model from Part 3 of the ISA-95 standard.
In our early conversations with customers and prospects, people are often most interested in the tracking and analysis activities. They want to be able to get genealogies, generate detailed tracking information for batch reports, use algorithms to analyze past runs, and calculate KPIs like (of course) OEE.
But notice where these activities sit in the execution cycle: post-execution. In other words, how well you track and analyze data depends entirely on how well you manage data in the planning, scheduling, and execution phases.
Each activity produces data streams that constantly serve as inputs to other activities. Let’s look at how this data-quality pipeline works in more detail.
Resource management and definitions provide the foundations
The “reference” phase is all about defining resources, processes, and states.
In this phase, you:
- Define resources and relationships needed to make things
- Define the processes to use these resources to make things
- Manage the state of resource capabilities to determine whether they can be used
While building definitions and resource capabilities might not seem as exciting as other phases, these activities provide the foundation for everything else. When you properly define resource classes, instances, hierarchies, and properties, you create a model that your systems can use to schedule, execute, and collect data on production runs.
How can you schedule a production process if you don’t have a definition of what the process does or what resources it uses? How can you know if resources are available without an active record of capabilities?
Definitions also provide templates to be reused over and over. Why waste time modeling parameters for each identical machine when a single class can serve as a template for all similar equipment? And how are you going to calculate KPIs if you don’t first define what those KPIs are?
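To make the template idea concrete, here is a minimal sketch in Python. It loosely follows the ISA-95 equipment class / equipment pattern, but the class names, properties, and values are hypothetical, not a reference implementation:

```python
from dataclasses import dataclass, field

@dataclass
class EquipmentClass:
    """A reusable template: define the expected properties once for all similar equipment."""
    class_id: str
    properties: dict[str, str]  # property name -> unit of measure

@dataclass
class Equipment:
    """A physical instance that inherits its expected properties from its class."""
    equipment_id: str
    equipment_class: EquipmentClass
    property_values: dict[str, float] = field(default_factory=dict)

# Model the parameters once, on the class...
filler_class = EquipmentClass(
    class_id="RotaryFiller",
    properties={"MaxFillRate": "bottles/min", "NominalPressure": "bar"},
)

# ...then every identical machine is just an instance of that template.
filler_1 = Equipment("FILLER-01", filler_class, {"MaxFillRate": 600, "NominalPressure": 2.1})
filler_2 = Equipment("FILLER-02", filler_class, {"MaxFillRate": 580, "NominalPressure": 2.0})

for filler in (filler_1, filler_2):
    for prop, unit in filler.equipment_class.properties.items():
        print(f"{filler.equipment_id} {prop} = {filler.property_values[prop]} {unit}")
```

Each machine carries only its own values; the expected properties and their units live once, on the class.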
The definitions of production are also the slowest-moving data models. They only change when you change processes, discover more efficient ways to model existing processes, or introduce new products from R&D. So early investment in getting definitions correct pays off more and more over time.
Scheduling and dispatching ensure efficient execution is possible
Scheduling is the bedrock of efficient production: schedule too much and expectations aren’t met; schedule too little and resources sit idle and go to waste.
So digitizing and automating the scheduling and dispatching activities can pay huge dividends.
But take a look at the arrows that flow to these activities. Scheduling and dispatching require data from all systems. As we mentioned in the last section, you need resource definitions, work definitions, and capabilities to schedule production correctly.
Information from the execution and post-execution phases also flows back to the scheduling system. As production starts, it’s often necessary to make adjustments to assign resources correctly and efficiently. To enable this dynamism, the scheduling and dispatching activities must also actively receive information from the data collection and tracking activities.
Scheduling is also important for KPIs and tracking. How can we evaluate how well an execution run went without comparing it to how we wanted execution to go?
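As a rough illustration of that comparison, here is a small sketch that evaluates a run against its schedule. The shapes loosely mirror the ISA-95 job order and job response, but the field names and KPI formulas are simplified assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class JobOrder:
    """What we wanted execution to be (a single scheduled production step)."""
    job_order_id: str
    scheduled_quantity: float
    scheduled_start: datetime
    scheduled_end: datetime

@dataclass
class JobResponse:
    """What execution actually was, reported against the same job order."""
    job_order_id: str
    actual_quantity: float
    actual_start: datetime
    actual_end: datetime

def compare(order: JobOrder, response: JobResponse) -> dict[str, float]:
    """KPIs only make sense relative to the schedule they are measured against."""
    planned_minutes = (order.scheduled_end - order.scheduled_start).total_seconds() / 60
    actual_minutes = (response.actual_end - response.actual_start).total_seconds() / 60
    return {
        "yield_vs_plan": response.actual_quantity / order.scheduled_quantity,
        "schedule_adherence": planned_minutes / actual_minutes,
    }

order = JobOrder("JO-1001", 5000, datetime(2024, 5, 1, 6), datetime(2024, 5, 1, 14))
response = JobResponse("JO-1001", 4650, datetime(2024, 5, 1, 6, 20), datetime(2024, 5, 1, 14, 45))
print(compare(order, response))  # yield_vs_plan = 0.93, schedule_adherence ≈ 0.95
```

Without the scheduled quantities and times, neither of those numbers is computable.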
Execution and data collection: capturing what we all care about
Let’s not spend too much time on the obvious: execution is the heart of an operation. Once the system receives the dispatch and operators start running equipment, things get made.
But definitions are still important even in the execution phase. The better defined your workflows and resources are, the more clarity you have into your processes. The better defined your work master parameters are, the less chance of human error from someone fiddling with a PLC setting. All this clarity also frees time and opportunity to discover further optimizations.
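As a sketch of how a well-defined work master parameter can catch that kind of error, consider the following. The parameter name, unit, and bounds are hypothetical; the point is only that a requested setpoint gets checked against the definition before it ever reaches the equipment:

```python
from dataclasses import dataclass

@dataclass
class WorkMasterParameter:
    """A parameter defined once in the work master, with its allowed range."""
    name: str
    unit: str
    min_value: float
    max_value: float

def validate_setpoint(param: WorkMasterParameter, requested: float) -> float:
    """Reject out-of-range values before they are sent to the PLC."""
    if not (param.min_value <= requested <= param.max_value):
        raise ValueError(
            f"{param.name}={requested} {param.unit} is outside the work master "
            f"range [{param.min_value}, {param.max_value}]"
        )
    return requested

oven_temp = WorkMasterParameter("OvenTemperature", "degC", 175.0, 185.0)
validate_setpoint(oven_temp, 180.0)      # OK: within the defined range
try:
    validate_setpoint(oven_temp, 210.0)  # a "fiddled" value is caught here
except ValueError as err:
    print(err)
```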
We all care about downstream analytics, but analytical insights are impossible without a way to capture the data in the first place. MQTT, OPC UA, and time-series data all fill valuable roles as data interfaces, but this data also needs to be collected and stored in a way that is visible and queryable. Furthermore, each system in the automation landscape (your MES, ERP, LIMS, CMMS, WMS, and so on) has its own database. In practice, without a data hub, data is far less visible than it seems.
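For example, a minimal collection pipeline might subscribe to an MQTT topic and land every reading in a queryable store. This sketch assumes the paho-mqtt client (version 2.x) and SQLite; the broker address, topic layout, and payload shape are all hypothetical:

```python
import json
import sqlite3

import paho.mqtt.client as mqtt

# A local, queryable store for collected process values (hypothetical schema).
db = sqlite3.connect("collected_data.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS process_values (
           ts TEXT, equipment_id TEXT, tag TEXT, value REAL, job_response_id TEXT)"""
)

def on_message(client, userdata, msg):
    # Assumed topic layout: site/area/line/equipment/tag
    _, _, _, equipment_id, tag = msg.topic.split("/")
    reading = json.loads(msg.payload)
    db.execute(
        "INSERT INTO process_values VALUES (?, ?, ?, ?, ?)",
        (reading["timestamp"], equipment_id, tag, reading["value"],
         reading.get("jobResponseId")),  # context that links the value to its job response
    )
    db.commit()

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("broker.example.local", 1883)
client.subscribe("plant1/packaging/line1/+/+")
client.loop_forever()
```

The detail that matters is the context column: a value without a link back to its job response is much harder to use later.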
Tracking and analysis
Finally, we get to post-execution. It’s true: machine-learning algorithms can uncover new patterns and places to optimize, and detailed tracking systems can help you replay and discover every manufacturing event. But if you don’t collect the data, you can’t track it.
Again, this flow of information runs in both directions. The outputs of your tracking activities can become inputs that influence future definitions, scheduling, and execution.
Analysis is much easier when data is well-defined, categorized, and orderly. The categories and schemas come from definitions, and the correctness of execution comes from proper scheduling and proper application of pre-modeled operational workflows.
The ISA-95 data model brings these activities together in one standard form
If you’re a manufacturing veteran, you have probably participated in all of these activities in some form or another. And if you have, you know the truth about integrating the data from these activities: it’s often a hard, ugly, and manual business. This is where ISA-95 as an ontology comes into play.
ISA-95 already provides a data model for all these activities:
- Definitions? Those are modeled in work masters.
- Resource management? That’s covered by the models for classes, properties, definitions, and capabilities.
- Scheduling and dispatch? ISA-95 provides models at every level of granularity, from the operational schedule that covers a week of work to the individual job order that covers a single production step.
- Execution and performance? That’s all the information captured in the job response. Or, at the macro level, performance is the information captured by collections of individual job responses.
ISA-95 also provides relationships for all these entities. This means it can serve as an ontology to contextualize these systems in one hub. And its graph-like model provides a natural architecture for integration and coordination.
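To see why a graph-like model is a natural fit, consider this toy traversal. It is not the ISA-95 schema or any particular product’s API; the node and relationship names are made up to show how a single job response can be walked back to its order, its work master, and the equipment that ran it:

```python
# A toy graph: nodes are ISA-95-style entities, edges are their relationships.
nodes = {
    "WM-BAKE-10": {"type": "WorkMaster",     "desc": "Bake cake batter"},
    "JO-1001":    {"type": "JobOrder",       "desc": "Bake 5000 units"},
    "JR-1001-A":  {"type": "JobResponse",    "desc": "Actuals for JO-1001"},
    "OVEN-03":    {"type": "Equipment",      "desc": "Tunnel oven 3"},
    "EC-OVEN":    {"type": "EquipmentClass", "desc": "Tunnel ovens"},
}

edges = [
    ("JR-1001-A", "reportsAgainst", "JO-1001"),
    ("JO-1001",   "definedBy",      "WM-BAKE-10"),
    ("JR-1001-A", "usedEquipment",  "OVEN-03"),
    ("OVEN-03",   "memberOf",       "EC-OVEN"),
]

def neighbors(node_id: str):
    """Follow outgoing relationships from one entity to its context."""
    return [(rel, dst) for src, rel, dst in edges if src == node_id]

# Starting from a single job response, walk the graph to recover its full context.
to_visit, seen = ["JR-1001-A"], set()
while to_visit:
    current = to_visit.pop()
    if current in seen:
        continue
    seen.add(current)
    for rel, dst in neighbors(current):
        print(f"{current} --{rel}--> {dst} ({nodes[dst]['type']})")
        to_visit.append(dst)
```

A real hub does the same walk over far more entities, which is what contextualization looks like in practice.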
ISA-95 enriches your data
On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?”
— Charles Babbage, Computing visionary
Perhaps you’re worried that the data from your manufacturing operation isn’t ready for useful applications in AI. Honestly, it might not be. But don’t worry: you’re not alone. The truth is that most manufacturers have data that is completely fractured and inaccessible.
Don’t despair, either. ISA-95 provides a path to real, gold-standard data quality. And ISA-95-based application architectures ensure that your systems integrate, orchestrate, and store data in a coherent model that grows richer with every subsequent manufacturing order and execution.
As a full ontology, ISA-95 provides the data model. As an activity model, ISA-95 provides the path to implementing manufacturing use cases. In practice, this creates a manufacturing knowledge graph of your entire operation. And knowledge graphs are among the best inputs for machine-learning systems, as a growing body of research shows.
And yes, Rhize can help you with all of this.