Hi, I'm Sanjay Nagchowdhury, Development Lead for the IBM App Connect Enterprise
product. In this video I will show some
new capability that has been added for
integration with a Kafka server. I
will demonstrate the new KafkaRead
node and the new Kafka policy type.
I'll be using a local integration server
that is started in the toolkit.
For details on how to use a local
integration server please watch the
previous video.
I have an Apache Kafka server running
without security enabled.
I will create a topic called BondFilms.
I have a message flow which reads
lines from a file which has a list of
James Bond films. The message flow
publishes each line as a message to the
Kafka topic BondFilms.
The message flow uses a KafkaProducer node to publish the message.
The details of the machine where the
Apache Kafka server is running are held
in a policy.
I can check that the messages have been
published by running a script to consume
the messages from the start of the topic.
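The publish-and-verify steps above can be sketched in plain Python. This is a minimal in-memory stand-in for a single topic partition, assuming offsets are simply positions in an append-only log; the names `TopicPartition`, `publish`, and `consume_from_beginning` are hypothetical and are not the actual Kafka client or ACE APIs.

```python
# Minimal in-memory stand-in for a Kafka topic partition.
# All names here are illustrative, not real Kafka APIs.

class TopicPartition:
    def __init__(self):
        self.log = []  # append-only message log

    def publish(self, message):
        """Append a message; its offset is its position in the log."""
        self.log.append(message)
        return len(self.log) - 1  # offset assigned to this message

    def consume_from_beginning(self):
        """Replay every message from offset 0, like a consumer
        script started from the beginning of the topic."""
        return list(self.log)

# Publish one message per line of the films file, as the flow does.
bond_films = TopicPartition()
for line in ["Dr. No", "From Russia with Love", "Goldfinger"]:
    bond_films.publish(line)

print(bond_films.consume_from_beginning())
```

Replaying the log from offset 0 confirms that every line of the file arrived, in order, which is what the consumer script in the demo shows.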
Using the new KafkaRead node, I can read a message
from any offset in the topic. I have a
flow using the KafkaRead node in which
the topic, partition and offset are set
using a Local Environment override.
The server details come from the same policy
that was used by the KafkaProducer node
in the previous message flow.
When I send a
message in to drive the flow, I can specify
which offset in the Kafka topic to read
the message from.
I can see fields that are set in the
Local Environment tree before and after
the KafkaRead node.
At offset 23 I can see that the
James Bond film was Skyfall.
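The read-at-an-offset behaviour can be sketched as follows, with the topic, partition, and offset passed in as a small dictionary standing in for the Local Environment override. The function name `kafka_read` and the dictionary keys are hypothetical; this models the semantics, not the node's API.

```python
# Sketch of reading a single message at a given offset.
# `override` stands in for the Local Environment fields that the
# KafkaRead node reads (topic, partition, offset) - names are illustrative.

def kafka_read(topics, override):
    partition_log = topics[override["topic"]][override["partition"]]
    offset = override["offset"]
    if 0 <= offset < len(partition_log):
        return partition_log[offset]
    return None  # the requested offset does not exist

# Topic -> partition -> ordered log of messages.
topics = {"BondFilms": {0: ["Dr. No", "From Russia with Love"]}}

override = {"topic": "BondFilms", "partition": 0, "offset": 1}
print(kafka_read(topics, override))  # the single message at offset 1
```

One read returns exactly one message, which is the key difference from a consumer that streams every message from its current position onwards.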
If an offset is specified that does not
exist, you can specify what action
to take.
By default the input message tree is
propagated to the No Match terminal.
You can alternatively specify the
earliest message or the latest message in the topic.
If I change the action to 'earliest', I
see the first message on the topic, as that message has not expired yet. If
'latest' is specified, then the message
flow will block, waiting for the next
message to be published to the topic.
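The three actions for a missing offset can be modelled in a few lines. This is a sketch under the assumption that the partition is a simple list; the `action` values and the `TimeoutError` used to represent blocking are illustrative, not the node's actual configuration values.

```python
def kafka_read(log, offset, action="noMatch"):
    """Read one message at `offset`; if it does not exist, apply the
    configured action. Names are illustrative, not the node's API."""
    if 0 <= offset < len(log):
        return log[offset]
    if action == "earliest":
        return log[0] if log else None   # first message still on the topic
    if action == "latest":
        # The real flow would block until the next message is published;
        # modelled here as a timeout.
        raise TimeoutError("blocking until the next message is published")
    return None  # default: propagate the input tree to the No Match terminal
```

The default ('noMatch') simply passes the unchanged input message to the No Match terminal, so downstream logic can decide what to do.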
In a separate message flow, I have
another KafkaRead node.
In this flow, I am using a Mapper node to
map values from the input message to use
in the Local Environment override. The
input message has a top-level element
named 'original' with a child element named
'film_number'.
Each message in the Kafka topic has a top-level
element named 'film'.
When I put a message through the flow,
the data that is read from the Kafka
topic replaces the Input message tree, so
the Output message tree has a
top-level element named 'film'.
If I wish to merge the Input message
tree with the data read from the Kafka
topic so they are both present in the Output
message tree, I can configure the
Result tab.
In the Result data location, I can specify which
part of the message data to read from the Kafka topic.
I can also specify where in the output message tree
to place the data that was read from the Kafka topic.
After redeploying and putting a message
through I can now see that the original
message has been augmented with the data
that was read from the Kafka topic.
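The two Result-tab behaviours can be sketched as a small merge function: by default the data read from the topic replaces the input tree, and with an output location configured the input tree is kept and augmented. The trees are modelled as dictionaries; `build_output` and the `"result"` location are hypothetical names, while the `original`/`film_number`/`film` element names follow the demo.

```python
# Sketch of the Result-tab merge, with message trees as dictionaries.
# `build_output` and the output location name are illustrative.

def build_output(input_tree, read_data, output_location=None):
    if output_location is None:
        return read_data                 # read data replaces the input tree
    merged = dict(input_tree)            # keep the original message...
    merged[output_location] = read_data  # ...and attach the read data
    return merged

input_tree = {"original": {"film_number": 23}}
read_data = {"film": "Skyfall"}

print(build_output(input_tree, read_data))            # replaced
print(build_output(input_tree, read_data, "result"))  # augmented
```

In the augmented case both the original element and the film read from the topic are present in the output tree, which is what the redeployed flow shows.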
I have demonstrated how to use the new
KafkaRead node to read a message from a
specified offset in a Kafka topic. A
new Kafka policy type has also been added,
which can be used by all Kafka nodes to
specify the details of the Kafka server
that is being used.
