A Dynamic, Configurable Pipeline for PLC Data

More and more data is being collected today to enable AI-related applications. When it comes to industrial data, an important data source is the PLC (programmable logic controller), which is used everywhere from manufacturing plants to oil production wells. Getting the data from PLCs to the cloud and AI applications requires a data pipeline.

Organizations collecting massive amounts of industrial data from the edge should choose a PLC data pipeline that suits their operational needs. Key requirements include:

  • High-speed. AI requires high-resolution data, so data may need to be collected at sub-second or even millisecond intervals from up to hundreds of data tags per PLC.

  • High-volume. Some applications require data collection from thousands of PLCs, a scale the data pipeline must be designed to handle.

  • Custom configurations. In almost every project, groups of PLCs require different configurations because of application requirements, site variations, or end-customer requirements. Configurations can include tag definitions, tag names, tag properties, polling rates, and more.

  • Data volume optimization. Data volume typically needs to be optimized to reduce transmission cost or cloud computing and storage cost. Settings such as hot (real-time), warm (downsampled), or cold (send on demand) need to be configured per group of PLCs, adding further configuration.

  • Batch transmission. Due to data volume, often not all data should be sent in real time. Instead, batch data should be sent based on events or on demand. This means data needs to be buffered locally, with the option to send it on demand.

  • Data reliability. Whether the transmission medium is cellular or Ethernet, the network can go down, so data needs to be buffered locally and sent when the network recovers.

  • Edge data processing. A lot of data processing can be done directly at the edge to improve data quality and reduce data volume. For example, data should be cleaned, validated, and unified directly at the edge, and event detection algorithms can run to send data only when event criteria are met.
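To make the configuration requirements above concrete, here is a minimal sketch of what a per-group pipeline configuration could look like. All names (groups, tags, addresses, fields) are hypothetical illustrations, not an actual product schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    HOT = "hot"    # stream in real time
    WARM = "warm"  # downsample before sending
    COLD = "cold"  # buffer locally, send on demand

@dataclass
class TagConfig:
    name: str
    address: str           # PLC register/tag address
    poll_interval_ms: int  # polling rate, individualized per tag

@dataclass
class GroupConfig:
    group_id: str
    tier: Tier
    downsample_factor: int = 1  # only used when tier is WARM
    tags: list[TagConfig] = field(default_factory=list)

# Two hypothetical groups of PLCs with different tiers and polling rates
groups = [
    GroupConfig(
        group_id="press-line-a",
        tier=Tier.HOT,
        tags=[
            TagConfig("spindle_speed", "DB1.DBD0", poll_interval_ms=100),
            TagConfig("motor_temp", "DB1.DBD4", poll_interval_ms=1000),
        ],
    ),
    GroupConfig(
        group_id="wellpad-7",
        tier=Tier.WARM,
        downsample_factor=10,
        tags=[TagConfig("casing_pressure", "N7:0", poll_interval_ms=500)],
    ),
]
```

A structure like this lets each group of PLCs carry its own tag definitions, per-tag polling rates, and hot/warm/cold setting, which is exactly where the configuration count grows in real projects.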

Data rate, data volume, and data cost need to be optimized iteratively, and they will change over time. A dynamically configurable pipeline is one whose configuration parameters can be updated in real time. This enables engineers to optimize the solution easily and quickly.
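The idea of updating configuration in real time can be sketched as a poller whose interval is re-read on every cycle, so a new setting takes effect without a restart. This is an illustrative sketch, not a specific product implementation; the class and method names are assumptions:

```python
import threading

class DynamicPoller:
    """Polls a tag at an interval that can be changed while running."""

    def __init__(self, read_tag, interval_ms):
        self.read_tag = read_tag       # callable returning one sample
        self.interval_ms = interval_ms # re-read each cycle, so updates apply live
        self._stop = threading.Event()
        self.samples = []

    def update(self, interval_ms):
        # Called when a new configuration arrives (e.g. pushed from the cloud)
        self.interval_ms = interval_ms

    def run(self, max_samples=None):
        while not self._stop.is_set():
            self.samples.append(self.read_tag())
            if max_samples and len(self.samples) >= max_samples:
                break
            self._stop.wait(self.interval_ms / 1000.0)

    def stop(self):
        self._stop.set()
```

Because `interval_ms` is consulted on every loop iteration, calling `update()` from a configuration listener changes the polling rate immediately, which is the behavior a dynamically configurable pipeline needs.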

For example, tag definitions can be created for groups of PLCs, polling rate can be individualized per tag, cellular data usage can be monitored, and batch data can be pulled on-demand. This flexibility enables the pipeline to meet advanced requirements for today and for the future.
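Pulling batch data on demand, and surviving network outages, both come down to a local store-and-forward buffer. The sketch below shows one simple way to do it (names and batch sizes are illustrative assumptions):

```python
from collections import deque

class StoreAndForwardBuffer:
    """Buffers samples locally; flushes in batches on demand or on recovery."""

    def __init__(self, maxlen=100_000):
        self.buffer = deque(maxlen=maxlen)  # oldest samples drop when full

    def append(self, sample):
        self.buffer.append(sample)

    def flush(self, send, batch_size=500):
        """Send buffered samples in batches via `send(batch) -> bool`.
        Stops on the first failure so remaining data is kept for retry."""
        sent = 0
        while self.buffer:
            n = min(batch_size, len(self.buffer))
            batch = [self.buffer.popleft() for _ in range(n)]
            if not send(batch):
                # Network down: restore the batch in order, retry later
                self.buffer.extendleft(reversed(batch))
                break
            sent += len(batch)
        return sent
```

The same `flush()` serves both cases: it can be triggered on demand to pull batch data, or automatically once the cellular or Ethernet link recovers.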

Building the right data pipeline for your industrial data needs

Prescient provides dynamically configurable data pipelines for PLCs, sensors, and other edge data sources. We support popular PLCs from Allen-Bradley to Siemens. If you are interested in learning more about the benefits, please contact us for a demo.
