media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or a subset of the events. How can we make sure that all these events are accepted and forwarded in an efficient and reliable way? This is where Apache Kafka comes into play: a distributed, highly scalable message broker, built for exchanging huge volumes of messages between a source and a target.
This session starts with an introduction to Apache Kafka, presents the role of Apache Kafka in a modern data / information architecture and the advantages it brings to the table. Additionally, the Kafka ecosystem will be covered, as well as the integration of Kafka into the Oracle stack, with products such as GoldenGate, Service Bus and Oracle Stream Analytics all being able to act as a Kafka consumer or producer.
2. Guido Schmutz
Working at Trivadis for more than 20 years
Oracle ACE Director for Fusion Middleware and SOA
Consultant, Trainer and Software Architect for Java, Oracle, SOA and
Big Data / Fast Data
Head of Trivadis Architecture Board
Technology Manager @ Trivadis
More than 30 years of software development experience
Contact: guido.schmutz@trivadis.com
Blog: http://guidoschmutz.wordpress.com
Slideshare: http://www.slideshare.net/gschmutz
Twitter: gschmutz
8. Kafka High Level Architecture
The who's who
• Producers write data to brokers.
• Consumers read data from
brokers.
• All this is distributed.
The data
• Data is stored in topics.
• Topics are split into partitions,
which are replicated.
[Diagram: Producers write to a Kafka cluster of Broker 1–3, coordinated by a ZooKeeper ensemble; Consumers read from the brokers]
10. Kafka Producer
Write Ahead Log / Commit Log
Producers always append to the tail of the log (append to a file, i.e. a segment)
Order is preserved for messages within the same partition
[Diagram: a truck producer appending message 6 to the tail of a "Movement" topic partition holding messages 1–5 on the Kafka broker]
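Per-partition ordering follows directly from how records are assigned: all records with the same key always land in the same partition, so a single truck's events stay in order. A minimal sketch of keyed partition selection (simplified for illustration; the real Kafka client hashes the serialized key with murmur2, not `hashCode`):

```java
import java.util.*;

public class PartitionSketch {
    // Simplified keyed partitioner: the same key always maps to the same partition.
    // (The real client uses murmur2(keyBytes) % numPartitions.)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions; // mask sign bit, then modulo
    }

    public static void main(String[] args) {
        int partitions = 3;
        // Events keyed by truck id: truck-11's events end up in one partition, in order.
        List<String> events = Arrays.asList("truck-11:pos1", "truck-11:pos2", "truck-42:pos1");
        Map<Integer, List<String>> log = new HashMap<>();
        for (String e : events) {
            String key = e.split(":")[0];
            log.computeIfAbsent(partitionFor(key, partitions), p -> new ArrayList<>()).add(e);
        }
        System.out.println(log); // truck-11's two events share a partition list, in send order
    }
}
```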
11. Kafka Consumer - Partition offsets
Offset – A sequential id number assigned to messages in the partitions. Uniquely
identifies a message within a partition.
• Consumers track their pointers via (offset, partition, topic) tuples
• Kafka 0.10: seek to offset by given timestamp using method KafkaConsumer#offsetsForTimes
[Diagram: a partition with messages at offsets 1–6 and new data arriving from the producer; consumers in Consumer Group A and Consumer Group B reading at the "earliest" offset, the "latest" offset, and a specific offset]
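Conceptually, `offsetsForTimes` returns, per partition, the earliest offset whose record timestamp is at or after the requested timestamp (you would then pass that offset to `KafkaConsumer#seek`). A minimal sketch of that lookup semantics over an in-memory log, not the real client API:

```java
public class OffsetLookupSketch {
    // What offsetsForTimes computes per partition: the earliest offset whose
    // record timestamp is >= the requested timestamp, or -1 if none exists.
    static long offsetForTime(long[] timestamps, long target) {
        for (int offset = 0; offset < timestamps.length; offset++) {
            if (timestamps[offset] >= target) return offset;
        }
        return -1; // no record at or after the target time
    }

    public static void main(String[] args) {
        long[] ts = {100, 150, 150, 200, 300}; // timestamp per offset 0..4
        System.out.println(offsetForTime(ts, 150)); // → 1
    }
}
```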
12. Data Retention – 4 options
1. Never
2. Time based (TTL): log.retention.{ms | minutes | hours}
3. Size based: log.retention.bytes
4. Log compaction based (entries with same key are removed):
kafka-topics.sh --zookeeper zk:2181
--create --topic customers
--replication-factor 1
--partitions 1
--config cleanup.policy=compact
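Retention is also a per-topic setting. As an illustrative config fragment (topic name and value are placeholders), the time-based retention of an existing topic can be changed with kafka-configs.sh, overriding the broker-wide log.retention.* defaults:

```shell
# Set time-based retention for topic "customers" to 7 days (in milliseconds)
kafka-configs.sh --zookeeper zk:2181 \
  --entity-type topics --entity-name customers \
  --alter --add-config retention.ms=604800000
```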
13. Data Retention - Log Compaction
ensures that Kafka always retains at least the last known value for each message key
within a single topic partition
compaction is done in the background by periodically recopying log segments
Before compaction:
Offset:  0   1   2   3   4   5   6   7   8   9   10
Key:     K1  K2  K1  K1  K3  K2  K4  K5  K5  K2  K6
Value:   V1  V2  V3  V4  V5  V6  V7  V8  V9  V10 V11

After compaction:
Offset:  3   4   6   8   9   10
Key:     K1  K3  K4  K5  K2  K6
Value:   V4  V5  V7  V9  V10 V11
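The compaction rule above can be simulated in a few lines: keep only the last occurrence of each key, and the surviving records retain their original offsets and relative order. A sketch (in-memory, not the broker's segment-recopying implementation), using the key sequence from the slide:

```java
import java.util.*;

public class CompactionSketch {
    // Retain only the last occurrence of each key; survivors keep their offsets and order.
    static List<Integer> compact(List<String> keys) {
        Map<String, Integer> lastOffset = new LinkedHashMap<>();
        for (int offset = 0; offset < keys.size(); offset++) {
            lastOffset.put(keys.get(offset), offset); // a later write for a key wins
        }
        List<Integer> kept = new ArrayList<>(lastOffset.values());
        Collections.sort(kept); // restore log order
        return kept;
    }

    public static void main(String[] args) {
        // Key sequence from the slide, offsets 0..10
        List<String> keys = Arrays.asList("K1","K2","K1","K1","K3","K2","K4","K5","K5","K2","K6");
        System.out.println(compact(keys)); // → [3, 4, 6, 8, 9, 10]
    }
}
```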
17. Demo (I) – Run Producer and Kafka-Console-Consumer
18. Demo (I) – Java Producer to “truck_position”
Constructing a Kafka Producer
private Properties kafkaProps = new Properties();
kafkaProps.put("bootstrap.servers", "broker-1:9092");
kafkaProps.put("key.serializer", "...StringSerializer");
kafkaProps.put("value.serializer", "...StringSerializer");
producer = new KafkaProducer<String, String>(kafkaProps);

ProducerRecord<String, String> record =
    new ProducerRecord<>("truck_position", driverId, eventData);
try {
    metadata = producer.send(record).get();
} catch (Exception e) {
    // don't swallow send failures – log or rethrow
    e.printStackTrace();
}
19. Demo (II) – devices send to MQTT instead of Kafka
[Diagram: Truck-1, Truck-2 and Truck-3 publishing to the MQTT topic truck/nn/position]
2016-06-02 14:39:56.605|98|27|803014426|
Wichita to Little Rock Route2|
Normal|38.65|90.21|5187297736652502631
20. Demo (II) – devices send to MQTT instead of Kafka
21. Demo (II) - devices send to MQTT instead of Kafka – how to get the data into Kafka?
[Diagram: Truck-1, Truck-2 and Truck-3 publishing to the MQTT topic truck/nn/position; an open question mark on how to forward the messages into the Kafka topic truck_position_raw]
2016-06-02 14:39:56.605|98|27|803014426|
Wichita to Little Rock Route2|
Normal|38.65|90.21|5187297736652502631
24. Kafka Connect – Single Message Transforms (SMT)
Simple Transformations for a single message
Defined as part of Kafka Connect
• some useful transforms provided out-of-the-box
• Easily implement your own
Optionally deploy 1+ transforms with each
connector
• Modify messages produced by source
connector
• Modify messages sent to sink connectors
Makes it much easier to mix and match connectors
Some of the currently available
transforms:
• InsertField
• ReplaceField
• MaskField
• ValueToKey
• ExtractField
• TimestampRouter
• RegexRouter
• SetSchemaMetadata
• Flatten
• TimestampConverter
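For illustration, a transform is configured purely declaratively on the connector itself. A hypothetical sink-connector config (connector name and topic are made up for this sketch) that uses InsertField to stamp every record value with a static field might look like this:

```json
{
  "name": "truck-position-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "truck_position",
    "transforms": "addSource",
    "transforms.addSource.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.addSource.static.field": "data_source",
    "transforms.addSource.static.value": "mqtt"
  }
}
```

The `$Value` suffix applies the transform to the record value; `$Key` would target the key instead.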
25. Kafka Connect – Many Connectors
60+ since first release (0.9+)
20+ from Confluent and Partners
Source: http://www.confluent.io/product/connectors
[Overview graphic: Confluent-supported, certified and community connectors]
31. Kafka Streams - Overview
• Designed as a simple and lightweight library in Apache
Kafka
• no external dependencies on systems other than Apache
Kafka
• Part of open source Apache Kafka, introduced in 0.10+
• Leverages Kafka as its internal messaging layer
• Supports fault-tolerant local state
• Event-at-a-time processing (not microbatch) with millisecond
latency
• Windowing with out-of-order data using a Google DataFlow-like
model
32. Kafka Stream DSL and Processor Topology
KStream<Integer, String> stream1 =
    builder.stream("in-1");
KStream<Integer, String> stream2 =
    builder.stream("in-2");
KStream<Integer, String> joined =
    stream1.leftJoin(stream2, …);
KTable<…, Long> aggregated =
    joined.groupBy(…).count("store");
aggregated.to("out-1");

[Topology diagram: source nodes for in-1 and in-2 feeding a leftJoin node, then an aggregation node backed by local state, then a sink node]
38. KSQL: a Streaming SQL Engine for Apache Kafka
• Enables stream processing with zero coding required
• The simplest way to process streams of data in real time
• Powered by Kafka and Kafka Streams: scalable, distributed, mature
• All you need is Kafka – no complex deployments
• Available as a developer preview!
• STREAM and TABLE as first-class citizens
• STREAM = data in motion
• TABLE = collected state of a stream
• join STREAM and TABLE
50. Demo (V) – Create JDBC Connect through REST API
51. Demo (V) - Create Table with Driver State
ksql> CREATE TABLE driver_t
(id BIGINT,
name VARCHAR)
WITH (kafka_topic='trucking_driver',
value_format='JSON');
Message
----------------
Table created
52. Demo (V) - Join Stream with Driver Table
ksql> CREATE STREAM truck_position_and_driver_s
WITH (kafka_topic='truck_position_and_driver_s',
value_format='JSON')
AS SELECT driverid, name, truckid, routeid,routename, eventtype
FROM truck_position_s
LEFT JOIN driver_t
ON truck_position_s.driverid = driver_t.id;
Message
----------------------------
Stream created and running
ksql> select * from truck_position_and_driver_s;
1506922849375 | truck/11/position | 2017-10-02T07:40:49 | 90 | 11 | 160779139
| Des Moines to Chicago Route 2 | Overspeed | 41.48 | -88.07 |
3569183071347898366
1506922866488 | truck/11/position | 2017-10-02T07:41:06 | 90 | 11 | 160779139
| Des Moines to Chicago Route 2 | Overspeed | 40.38 | -89.17 |
3569183071347898366
53. Demo (V) - Join Stream with Driver Table
ksql> CREATE STREAM truck_position_and_driver_s
WITH (kafka_topic='truck_position_and_driver_s',
value_format='JSON')
AS SELECT driverid, name, truckid, routeid,routename, eventtype
FROM truck_position_s
LEFT JOIN driver_t
ON truck_position_s.driverid = driver_t.id;
Message
----------------------------
Stream created and running
ksql> select * from truck_position_and_driver_s;
1506976928603 | 11 | 11 | Jamie Engesser | 14 | 1961634315 | Saint Louis to
Memphis | Normal
1506976930143 | 11 | 11 | Jamie Engesser | 14 | 1961634315 | Saint Louis to
Memphis | Normal
1506976931824 | 11 | 11 | Jamie Engesser | 14 | 1961634315 | Saint Louis to
Memphis | Overspeed