In this webinar, learn how a long-time industrial IT consultant helped his customer make the leap into providing visibility of their processes to everyone in the plant. This journey led to the discovery of untapped opportunities to improve operations, reduce energy consumption, and minimize plant downtime. The collection of data from individual sensors has led to powerful Grafana dashboards shared across the organization.
3. Customer
●Fr. Ant Niedermayr GmbH & Co. KG (https://niedermayr.net)
●218 years of company history, est. 1801 by Franz Anton Niedermayr
●Situated in the city of Regensburg, Bavaria, southern Germany
●Owner-operated by Johannes Helmberger, a Niedermayr descendant in the 6th generation
●205 employees
●Press company, creative department, IT + data center services
●Approved the presentation of major parts of the project
4. Technical origin
●Prior experience with various time-series databases through personal open-source project involvement
●IT monitoring: done plenty of times, everything well documented
●Had nice IT dashboards, so why not apply this visibility approach to the industrial process?
●Aimed for four targets: controlling, prediction, saving money, escaping vendor lock-in
●Quick first results: the initial implementation took just a few days
●Most of the work: interpreting and validating the numbers
5. Print?
●1 Lithoman IV 80p and 2 Lithoman-S 96p web offset printing machines, plus smaller machines
●Output: up to 4.8M A4 pages per hour per unit
●24/7 production
●About 15 different suppliers of the main subunits: simplified interfaces to each other, very proprietary, high complexity. (Front to back: splicer, 4x inking units, dryer, remoistener, web cutter, folder, stitcher, conveyor, trimmer, stacker, strapper, palletizer/robot, foliator)
8. The Dilemma
●Industrial plants suffer from a notoriously high heterogeneity of data sources and access protocols throughout the subunits.
●Manual or semi-automated reporting/aggregation of the different data sources doesn't scale, is paid for with large amounts of manual labor, and is prone to errors.
●Existing reporting is job-bound and only available after job completion; an exact time reference for metrics/events is impossible to achieve.
9. Data sources that matter
●Plant control Pecom PMI: Postgres (ODBC) RDBMS
●MAN Roland IDC: MQTT
●QuadTech QTI IDC on the 80p
●Energy: Janitza/GridVis REST API
●ERP system "Chroma": Oracle 12-based client/server application without any usable API
●Robot cell: MSSQL without client access, but access to an OPC UA server
●Baldwin fluid management: MQTT
●Technotrans ink supply system
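Whatever the collector, these heterogeneous sources typically get normalized into one common write format. Below is a minimal sketch of that normalization step, assuming InfluxDB line protocol as the target and a hypothetical JSON payload from an MQTT source; the measurement, tag, and field names are illustrative, not the project's actual schema:

```python
import json
import time

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one reading as InfluxDB line protocol:
    measurement,tag=... field=... timestamp
    (Sketch only: omits escaping of spaces/commas and integer 'i' suffixes.)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical MQTT payload from an ink-supply unit
payload = json.loads('{"unit": "inker3", "temp_c": 28.4, "level_pct": 71}')
line = to_line_protocol(
    "ink_supply",
    tags={"machine": "lithoman_s_1", "unit": payload["unit"]},
    fields={"temp_c": payload["temp_c"], "level_pct": payload["level_pct"]},
    ts_ns=time.time_ns(),
)
```

In practice a collector such as Telegraf handles this translation for sources with ready-made plugins; custom code of this kind is only needed for the awkward sources.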
10. Possible Approaches
●Excel (really?)
●RRD collector
●Collection of data in a relational table structure (Postgres, MySQL, etc.) with attached visualization (Cacti, Zabbix, etc.)
●Elastic Stack (Elasticsearch, Beats, Logstash, Kibana)
●Graphite (Carbon/Whisper + Graphite + Grafana)
●TICK/G/L (Telegraf + InfluxDB + Chronograf + Kapacitor + Grafana) + LoudML (disruptive machine-learning API)
11. Decision for TICK
●Scales well at high ingest rates: good for upwards of 500k data points per second on a single instance (we are at about 800 data points/machine/second)
●Compelling storage engine (in terms of speed, space efficiency, space reclaim, retention)
●Extensive ecosystem of plugins on the input and output side (Telegraf)
●Proven production-ready: many big names in IT rely on it
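The headroom claim is easy to sanity-check with back-of-envelope arithmetic; the machine count of three used here is an assumption based on the large presses listed earlier:

```python
points_per_machine_per_s = 800   # rate stated on the slide
machines = 3                     # assumption: the three large web presses
single_node_capacity = 500_000   # points/s a single instance can ingest

total_rate = points_per_machine_per_s * machines   # 2,400 points/s
daily_points = total_rate * 24 * 60 * 60           # 207,360,000 points/day

# Two orders of magnitude below the single-node limit
assert total_rate < single_node_capacity
```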
14. Steps
1. Identification of the data sources that matter
2. Deploy instrumentation and extend where required
3. Technical interface design: some sources work with plain Telegraf, some require moderate coding
4. Dashboard design (Grafana, Chronograf)
5. Derive KPIs
6. Define criteria for alerts
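Step 5 can be as simple as a pure function over the collected events. A hypothetical availability KPI as one example; the metric and the stop list are illustrative, not taken from the project:

```python
from datetime import timedelta

def availability(planned_hours, stops):
    """Availability KPI: (planned time - downtime) / planned time."""
    planned = timedelta(hours=planned_hours)
    lost = sum(stops, timedelta())
    return (planned - lost) / planned

# Hypothetical downtime events recorded during one 8-hour shift
stops = [timedelta(minutes=25), timedelta(minutes=12)]
kpi = availability(8.0, stops)   # 443/480 ≈ 0.923
```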
15. Difficulties
1. Reverse engineering might be required
2. Dealing with outdated hardware and software is not uncommon
3. Negotiations with machine suppliers can be challenging
4. Data validation
16. Good habits
●Implement security right away (at least a reasonable password for MQTT brokers, better yet TLS client certificates)
●Separate VLAN
●Collecting everything that is available isn't a good idea either
●Avoid redundancy of values
●Write interpretation documentation (at which physical points do measurements originate, and are they raw or already calculated)
●Don't end up with a directory full of custom scripts; we developed a standard in Node-RED instead
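One way to implement the "avoid redundancy" habit is a change-only (deadband) filter between source and database, of the kind one might build inside a Node-RED function node. Sketched here in Python with illustrative sensor names:

```python
class ChangeFilter:
    """Forward a reading only when it changed beyond a deadband,
    so unchanged sensor values are not stored redundantly."""

    def __init__(self, deadband=0.0):
        self.deadband = deadband
        self.last = {}  # last forwarded value per series key

    def accept(self, key, value):
        prev = self.last.get(key)
        if prev is not None and abs(value - prev) <= self.deadband:
            return False  # within deadband: drop the reading
        self.last[key] = value
        return True       # changed enough: forward and remember it

# Hypothetical dryer temperature stream with a 0.1 °C deadband
f = ChangeFilter(deadband=0.1)
results = [f.accept("dryer.temp", v) for v in (180.0, 180.05, 180.3, 180.3)]
```

The deadband would be tuned per sensor; too wide and real changes are lost, too narrow and the filter does nothing.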
27. Achievements so far
●Real-time production data (some near real-time (10-30 s at most), some truly streaming metrics <3)
●Significant energy savings (upper six-figure sum per year)
●Fine-grained values
●LoudML / TensorFlow in place, ML models applied and continuously developed
●Anomaly detection across raw data sources
●Close-interval validation of business numbers against actual real measurements
●Successfully escaped vendor lock-in
28. Future
●Deploy more instrumentation: vibration and waveform analysis (e.g. precursor identification of bearing failures, conveyor drives, fans) with specialized hardware
●Even more metrics
●Continue ongoing talks with vendors: deliver all metrics on an MQTT broker
●Signalling to production: reduce washing waste by using IDC signalling derived from dot-gain values in InfluxDB (beta run ongoing)
29. Thank you for your attention
Questions?
Bastian Mäuser
<bma@netz.org>