
How Sensor Data Can Help Manufacturers Gain Insight to Reduce Waste, Energy Consumption, and Get Rid of Pesky Spreadsheets

In this webinar, learn how a long-time industrial IT consultant helped his customer make the leap to making their processes visible to everyone in the plant. This journey led to the discovery of untapped opportunities to improve operations, reduce energy consumption, and minimize plant downtime. Collecting data from the individual sensors has led to powerful Grafana dashboards shared across the organization.


  1. Paradigm Shift in Industry: Telemetry-Driven Production - All-Seeing Eyes - (c) 2018/2019 Bastian Mäuser / NETZConsult (Germany)
  2. Situation (not so uncommon)
  3. Customer ●Fr. Ant. Niedermayr GmbH & Co. KG (https://niedermayr.net) ●218 years of company history, est. 1801 by Franz Anton Niedermayr ●Situated in the city of Regensburg, Bavaria, southern Germany ●Owner-operated by Johannes Helmberger, a Niedermayr descendant in the sixth generation ●205 employees ●Printing company, creative department, IT + datacenter services ●Approved presentation of major parts of the project
  4. Technical origin ●Prior experience with various time-series DBs through personal open-source project involvement ●IT monitoring: done plenty of times, everything well documented ●Had nice IT dashboards, so why not apply the same visibility approach to an industrial process? ●Aimed for four targets: controlling, prediction, saving $$$, escaping vendor lock-in ●Quick first results: the initial implementation took just a few days ●Most of the work: interpreting and validating the numbers
  5. Print? ●1 Lithoman IV 80p and 2 Lithoman-S 96p web offset printing machines + smaller machines ●Output: up to 4.8M A4 pages per hour per unit ●24/7 production ●About 15 different suppliers of the main subunits: simplified interfaces to each other, very proprietary, high complexity (front-to-back: splicer, 4x inking, dryer, remoistener, web cutter, folder, stitcher, conveyor, trimmer, stacker, strapper, palletizer/robot, foliator)
  6. Plant
  7. Some Pictures
  8. The Dilemma ●Industrial plants suffer from a notoriously high heterogeneity of datasources and access protocols across their subunits. ●Manual or semi-automated reporting/aggregation of the different data sources doesn't scale; it costs large amounts of manual labor and is prone to errors. ●Existing reports are job-bound and only available after job completion; an exact time reference for metrics/events is impossible to achieve.
  9. Datasources that matter ●Plant control Pecom PMI: Postgres (ODBC) RDBMS ●MAN Roland IDC: MQTT ●QuadTech QTI IDC on the 80p ●Energy: Janitza/GridVis REST API ●ERP system "Chroma": Oracle 12-based client/server application without any usable API ●Robot cell: MSSQL without client access, but access to an OPC UA server ●Baldwin fluid management: MQTT ●Technotrans ink supply system (a bridge sketch for the MQTT sources follows below)
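Most of these sources eventually land in InfluxDB via Node-RED or Telegraf. As a flavor of what the MQTT path involves, here is a minimal Python sketch bridging one broker into InfluxDB; the broker address, topic tree, payload shape, and measurement name are assumptions, not the deck's actual configuration:

```python
# Minimal MQTT -> InfluxDB bridge sketch (hypothetical topics/payloads).
import json

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

# Assumes an InfluxDB 1.x instance with a pre-created "plant" database.
influx = InfluxDBClient(host="localhost", port=8086, database="plant")

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)           # e.g. {"flow_lpm": 12.4}
    influx.write_points([{
        "measurement": "fluid_management",      # hypothetical name
        "tags": {"topic": msg.topic},
        "fields": {k: float(v) for k, v in payload.items()},
    }])

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.plant.local", 1883)      # hypothetical broker
client.subscribe("baldwin/#")                   # hypothetical topic tree
client.loop_forever()
```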
  10. Possible Approaches ●Excel (rly?) ●RRD collector ●Collection of data in a relational structure (Postgres, MySQL, etc.) with attached visualisation (Cacti, Zabbix, etc.) ●Elastic Stack (Elasticsearch, Beats, Logstash, Kibana) ●Graphite (Carbon/Whisper + Graphite + Grafana) ●TICK/G/L (Telegraf + InfluxDB + Chronograf + Kapacitor + Grafana) + LoudML (disruptive machine learning API)
  11. Decision for TICK ●Scales well at high ingest rates: a single instance comfortably ingests >500k data points per second (we are at about 800 data points/machine/second; see the batching sketch below) ●Compelling storage engine (in terms of speed, space efficiency, space reclaim, retention) ●Extensive ecosystem of input and output plugins (Telegraf) ●Proven production-ready: many big names in IT rely on it
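The quoted ingest headroom comes down to batching: Telegraf (and any custom collector) submits line protocol in batches, one HTTP request per few thousand points rather than one per point. A hedged sketch with made-up measurement and field names:

```python
# Batched line-protocol writes -- how high ingest rates stay cheap.
from influxdb import InfluxDBClient

influx = InfluxDBClient(host="localhost", port=8086, database="plant")

batch = [
    # line protocol: measurement,tag=value field=value [timestamp]
    "press,unit=lithoman4 web_speed=14.2,dryer_temp=121.5",
    "press,unit=lithoman4 web_speed=14.3,dryer_temp=121.4",
]
influx.write_points(batch, protocol="line", batch_size=5000)
```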
  12. Chosen Approach: Node-RED + TICK/G/L
  13. Example view of a Node-RED flow for IDC
  14. Steps 1. Identification of datasources that matter 2. Deploy instrumentation and extend where required 3. Technical interface design: some sources work with plain Telegraf, some require moderate coding (see the sketch below) 4. Dashboard design (Grafana, Chronograf) 5. Derive KPIs 6. Define criteria for alerts
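For step 3, the "moderate coding" cases can often still ride on Telegraf via its exec input, which runs a script on a schedule and parses whatever line protocol it prints (data_format = "influx" in the Telegraf config). A sketch against a REST energy source; the endpoint and response shape are hypothetical, and the real Janitza/GridVis API will differ:

```python
#!/usr/bin/env python3
# Hypothetical exec-input collector: fetch meter readings over REST and
# emit InfluxDB line protocol on stdout for Telegraf to pick up.
import requests

resp = requests.get("http://gridvis.plant.local/rest/values/power",
                    timeout=5)
for meter in resp.json()["values"]:             # assumed response shape
    print(f"energy,meter={meter['id']} active_power={meter['power']}")
```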
  15. Difficulties 1. Reverse engineering might be required 2. Dealing with outdated hard- and software is not uncommon 3. Negotiations with machine suppliers can be challenging 4. Data validation
  16. Good habits ●Implement security right away (at least a reasonable password for MQTT brokers, better TLS client certificates; see the sketch below) ●Separate VLAN ●Collecting everything that is available isn't a good idea either ●Avoid redundant values ●Write interpretation documentation (at what physical points do measurements originate, and are they raw or already calculated?) ●Don't end up with a directory full of custom scripts – we developed a standard in Node-RED
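The first habit is cheap to adopt. A sketch of a collector connecting with TLS client certificates instead of hitting an open broker; paths and hostname are placeholders:

```python
# MQTT with TLS client certificates (placeholder paths/hostnames).
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.tls_set(ca_certs="/etc/ssl/plant-ca.pem",
               certfile="/etc/ssl/collector.crt",
               keyfile="/etc/ssl/collector.key")
client.username_pw_set("collector", "use-a-real-password")
client.connect("broker.plant.local", 8883)      # TLS port, not 1883
```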
  17. Electrical Power ●Consumption up to 4 MW (electrical) ●Biggest savings
  18. Paper ●100,000 metric tons/yr ●Quantify waste ●Identify waste causes ●Reduce waste by reducing wash cycles ●Predict situations to avoid unplanned downtime
  19. Central Ink Supply ●2,700 metric tons/yr ●Validate consumption ●Forecast required deliveries (see the sketch below)
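A delivery forecast can start out very simple once consumption is in InfluxDB: average the recent draw rate and divide the remaining stock by it. The measurement, field, and stock figure below are assumptions:

```python
# Naive ink delivery forecast from averaged consumption (assumed names).
from influxdb import InfluxDBClient

influx = InfluxDBClient(host="localhost", port=8086, database="plant")
rs = influx.query(
    "SELECT MEAN(kg_per_hour) FROM ink_supply WHERE time > now() - 7d")
rate = next(rs.get_points())["mean"]            # kg/h, assumes data exists

tank_level_kg = 5400.0                          # hypothetical current stock
print(f"~{tank_level_kg / (rate * 24):.1f} days until a delivery is due")
```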
  20. Result: Tactical Overview
  21. QA KPI (Dot Gain)
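The deck only shows the dashboard, but densitometric dot gain is conventionally computed with the Murray-Davies equation: apparent dot area derived from paper, tint, and solid densities, minus the nominal screen percentage. A worked sketch:

```python
# Murray-Davies dot gain (densities below are illustrative, not measured).
def dot_gain(d_paper: float, d_tint: float, d_solid: float,
             nominal_pct: float) -> float:
    """Return dot gain in percentage points for one halftone patch."""
    apparent = 100.0 * (1 - 10 ** -(d_tint - d_paper)) \
                     / (1 - 10 ** -(d_solid - d_paper))
    return apparent - nominal_pct

# A 50% patch measuring ~69% apparent dot area -> roughly 19 points gain.
print(round(dot_gain(0.05, 0.52, 1.40, 50.0), 1))
```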
  22. Instrumentation: ΔE Deviation (densitometric)
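Assuming ΔE here means the usual CIELAB color difference, its simplest form is CIE76: the Euclidean distance between a measured and a target L*a*b* value (print QA often uses newer variants such as ΔE2000):

```python
# CIE76 color difference between a measured and a reference Lab value.
import math

def delta_e76(lab_meas, lab_ref):
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(lab_meas, lab_ref)))

print(round(delta_e76((52.1, -3.9, 1.2), (50.0, -4.2, 0.8)), 2))  # ~2.16
```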
  23. Waste quantification and causes
  24. More interesting metrics in print ●Overall waste ●Washing waste ●Reel numbers ●Web width
  25. Deep Analysis: consumption vs. efficiency vs. staff vs. jobs/customer vs. consumables vs. quality KPIs
  26. More Deep Analysis: consumables in $$$, incidents in time and $$$
  27. Achievements so far ●Production real-time data (some near real-time (10–30 s at most), some true streaming metrics <3) ●Significant energy savings (upper six-digit number/yr) ●Fine-grained values ●LoudML/TensorFlow in place, ML models applied and constantly developed ●Anomaly detection across raw datasources ●Close-interval validation of business numbers against actual measurements ●Successfully escaped vendor lock-in
  28. Future ●Deploy more instrumentation: vibration and waveform analysis (e.g. precursor identification for bearing failures, conveyor drives, fans) with specialized hardware ●Even more metrics ●Continue ongoing talks with vendors: deliver all metrics on an MQTT broker ●Signalling to production: reduce washing waste using IDC signalling derived from dot gain values in InfluxDB (beta run ongoing; see the sketch below)
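The signalling item could be as small as a periodic job that reads recent dot gain from InfluxDB and publishes a wash hint back to the press over MQTT. Everything below (measurement, threshold, topic) is a hypothetical illustration of the idea, not the beta implementation:

```python
# Hypothetical dot-gain-to-IDC signalling job (names/threshold assumed).
import paho.mqtt.publish as publish
from influxdb import InfluxDBClient

influx = InfluxDBClient(host="localhost", port=8086, database="plant")
rs = influx.query(
    "SELECT MEAN(dot_gain) FROM qa_kpi WHERE time > now() - 5m")
mean_gain = next(rs.get_points(), {}).get("mean")

if mean_gain is not None and mean_gain > 22.0:  # assumed action limit
    publish.single("press/idc/wash_request", "1",
                   hostname="broker.plant.local")
```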
  29. Thank you for your attention. Questions? Bastian Mäuser <bma@netz.org>
