>> Before that, a quick overview: ours is a connected car data and analytics software platform. We help enterprise customers across use cases access data and insights from a wide range of data sources. We work with new and emerging data sets, including automotive data directly.
This is a high-level, simplified architecture diagram for one of our workflows. We receive data from multiple sources and ingest it into Event Hubs. We have multiple consumers processing the data, transforming it, and storing it in Cosmos DB. We use App Services to expose some of the stored data via APIs, and we also leverage the Cosmos DB change feed to move data to Blob Storage and our data warehouses. For those of you new to Event Hubs, it's a streaming platform. It can ingest potentially millions of messages per second, and multiple consumers can concurrently process the data from one place. The reason we chose Event Hubs is that it's a fully managed service; we don't want to spend any time managing the infrastructure ourselves. That's one of the main reasons, and it also matters to some of our enterprise customers. It provides a 99.9% SLA, and the data is encrypted both at rest and in transit, out of the box.
The total cost of ownership is very low: for 1 megabyte per second, it costs around $20 per month. Another reason is that it provides Kafka support. Kafka has a rich ecosystem, so we can take advantage of that ecosystem.
Let's discuss in detail how we can ingest data. In this diagram, you can see multiple partitions in an event hub. You want to maximize the write throughput and concurrently process the data. So I'll show you a demo.
I'll show you a demo of setting up Event Hubs in the Azure portal. I'm going to click Create a resource. You need to give a unique name to the Event Hubs namespace; I'm going to use demo-eh1. For production use cases you want to select the Standard tier, and if you want Kafka support, you can enable Kafka. I'm going to skip that. If you want, you can make the namespace zone redundant. For the resource group, I'm going to select an existing resource group. We allocate capacity to Event Hubs in terms of throughput units; one throughput unit supports 1 MB per second (or 1,000 events per second) of ingress. In case you are expecting a sudden burst of volume and you want Event Hubs to automatically scale, you can enable auto-inflate. This will take a few seconds, but when it is done, this is how it will look. We have created an Event Hubs namespace.
Under this namespace, we can create multiple event hubs. I'm going to create one event hub and call it source-data-2. We can select the partition count. If you want the data to be retained for more than one day, we can change the retention. In case you want to archive the data into Blob Storage, we can make use of the Capture feature; I'm going to skip that. Once you create an event hub, you can get access keys for it. I have already created a producer access key and a consumer access key.
The producer access key can only send data to the event hub, and the consumer access key can only consume data from the event hub. I'll show you the code. I have two programs: an event processor and an event producer. I'm using anonymized car data.
Let me show you the packages I'm using. I'm using the Azure Event Hubs SDK, and I'm using the dotenv package. We'll be providing the Event Hubs connection string and other values; if you use the dotenv package, you can keep them in a .env file. I'm importing the EventHubClient module, and then I'm calling EventHubClient.createFromConnectionString. If you provide the Event Hubs namespace endpoint and the event hub name, it will establish a connection to the event hub, using the AMQP protocol to connect to Event Hubs. I'm using the runtime information call specifically to get the partition count. Let's switch back to the slide. At a high level, when ingesting into an event hub, you receive data from multiple sources, and you can group it by the target partition ID you want to ingest to. In my example code, I'm computing the partition from the device serial number.
The device serial number is one of the fields in my events, and I'm computing the modulo based on the event hub's partition count. To maximize write throughput, you want to send the messages in batches. The maximum batch size is 256 KB. If your message size is one kilobyte, at maximum you can send around 200 messages; the rest of the space is used for metadata. So back to the code.
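The partition selection just described can be sketched as follows. This is a minimal illustration, not the speaker's actual code: the `deviceSerialNumber` field name comes from the talk, but the hash function is an assumption (any stable string hash works).

```javascript
// Pick a stable target partition for an event by hashing its device
// serial number and taking it modulo the event hub's partition count.
// The djb2-style hash below is illustrative, not the speaker's code.
function hashString(s) {
  let h = 5381;
  for (let i = 0; i < s.length; i++) {
    h = (h * 33 + s.charCodeAt(i)) >>> 0; // keep it an unsigned 32-bit int
  }
  return h;
}

function targetPartitionId(event, partitionCount) {
  return hashString(event.deviceSerialNumber) % partitionCount;
}

// All events from the same device land on the same partition.
const a = targetPartitionId({ deviceSerialNumber: "SN-001" }, 4);
const b = targetPartitionId({ deviceSerialNumber: "SN-001" }, 4);
console.log(a === b); // true
```

Because the mapping is deterministic, all events for a device end up in one partition, which preserves per-device ordering downstream.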
I'm importing the sample data from a JSON file. In the main function, I'm grouping the events based on the target partition ID, computing the target partition using this lambda expression. Then I'm iterating over all the partitions, getting the events for each target partition, and splitting them into multiple batches. You need to pack each message in a field called body, and then I'm using the EventHubClient sendBatch method. This will send all the messages to the target partition.
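The batching step can be sketched like this: split a partition's events into batches whose serialized size stays under the 256 KB batch cap mentioned in the talk. The 16 KB headroom reserved for metadata is an assumption for illustration, as are the field names.

```javascript
// Split events into batches whose total serialized size stays under the
// Event Hubs batch limit. 256 KB is the cap from the talk; the 16 KB
// headroom for per-message/batch metadata is an assumed safety margin.
const MAX_BATCH_BYTES = 256 * 1024 - 16 * 1024;

function toBatches(events) {
  const batches = [];
  let current = [];
  let currentBytes = 0;
  for (const event of events) {
    // Event Hubs expects the payload in a `body` field.
    const message = { body: event };
    const size = Buffer.byteLength(JSON.stringify(message), "utf8");
    if (currentBytes + size > MAX_BATCH_BYTES && current.length > 0) {
      batches.push(current); // current batch is full, start a new one
      current = [];
      currentBytes = 0;
    }
    current.push(message);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// ~1 KB messages: roughly 200+ messages per batch, never over the cap.
const sampleEvents = Array.from({ length: 500 }, (_, i) => ({
  deviceSerialNumber: `SN-${i}`,
  payload: "x".repeat(1024),
}));
const batches = toBatches(sampleEvents);
console.log(batches.length > 1); // true
```

Each batch would then go to `sendBatch` for its target partition.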
If you don't specify a partition ID, it will distribute the messages across all the available partitions. Depending on your processing requirements, that may or may not be appropriate. In my case, I want to send all messages for a given device to the same partition, so I'm explicitly passing the partition ID. I'm going to run this. It has started sending to the different partitions. Now let's look at how we can process the data from Event Hubs. One way is to use the Azure Event Hubs library directly, but then you receive all the messages from a partition yourself, and you need to keep track of managing the partitions; you don't want multiple instances working on the same partition. Instead, you can use the event processor host. There's an npm package for the event processor host, and if you use that, it will take care of managing the partitions. I'm running a processor. The event hub has four partitions.
If I run two instances of the event processor, the first will consume two partitions, and the second host will process data from the other two partitions. In case one of the instances dies, the other will take over processing data from all the partitions. If you think you need more instances to support the throughput, you can run another instance. These are the steps to process data from Event Hubs and run multiple instances to process it concurrently. In the event processor host, you checkpoint periodically; in case the process restarts after a crash, it will start processing data from the last checkpointed message. This results in an at-least-once delivery guarantee. Now I'll show the code for the event processor host.
I'm importing EventProcessorHost from the host library. I'm going to store the processed data in Cosmos DB, so I've wrapped that in a datastore module. This is the main host. I'm passing the Event Hubs connection details, and I'm also passing an Azure Storage connection string; the event processor host uses storage to manage the partition leases and to checkpoint the messages. When you call start, it takes two callbacks: onMessage and onError. In the onMessage callback, you receive each message and the partition it belongs to.
I'm unpacking the data, and then I've implemented a sample transformation: if the speed is greater than a threshold, I create a record and store it in Cosmos DB. Here I'm checkpointing every message. In practice you don't want to checkpoint every message; that will limit your processing throughput. The host connects to storage and tries to acquire a lease for each partition.
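One common alternative to checkpointing every message is to checkpoint every N messages per partition. A minimal sketch of that idea, where `checkpointFn` is a stand-in for the host's checkpoint call (the real API differs):

```javascript
// Checkpoint only every N messages per partition instead of on every
// message, so checkpoint writes don't throttle processing throughput.
// `checkpointFn` stands in for the event processor host's checkpoint call.
function makeCheckpointer(checkpointFn, every = 100) {
  const counts = new Map(); // partitionId -> messages since last checkpoint
  return function onProcessed(partitionId, message) {
    const n = (counts.get(partitionId) || 0) + 1;
    if (n >= every) {
      checkpointFn(partitionId, message);
      counts.set(partitionId, 0);
    } else {
      counts.set(partitionId, n);
    }
  };
}

// Example: with every=3, partition "0" checkpoints on the 3rd and 6th message.
const checkpoints = [];
const onProcessed = makeCheckpointer((p, m) => checkpoints.push(m.offset), 3);
for (let i = 1; i <= 7; i++) onProcessed("0", { offset: i });
console.log(checkpoints); // [3, 6]
```

A timer-based variant (checkpoint every few seconds) is equally common; the trade-off is how many messages get replayed after a crash.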
It will take a while. I'll also start sending messages. I think it's taking some time processing the data. This is the storage account I have configured; I think this one is not connecting. I have created a container called ep-checkpoints. The event processor host will use this container to keep track of the leases it acquires for the partitions. For each partition, it has created a blob. If you look at a blob, you'll see the details of the checkpointing. Here you can see the last checkpoint for a partition; this one hasn't processed any data from its partition yet.
You'll see the last message checkpointed here. I'm also going to create a Cosmos DB collection to store this data; I'll call it sample. I'm going to set the storage capacity to unlimited and set a partition key for the Cosmos DB collection. Now the event processor host should be storing data in this collection. We can run multiple instances of this and process the data concurrently.
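The at-least-once behavior described earlier can be simulated without any Azure dependencies: if a consumer crashes after processing some messages but before checkpointing them, a restart resumes from the last checkpoint and reprocesses those messages. A toy simulation:

```javascript
// Simulate a consumer that checkpoints every `every` messages and can
// crash mid-stream: on restart it resumes from the last checkpoint, so
// messages processed after that checkpoint are seen again (at-least-once).
function runConsumer(messages, startIndex, crashAfter, every, state) {
  let processed = 0;
  for (let i = startIndex; i < messages.length; i++) {
    state.processed.push(messages[i]);
    processed++;
    if (processed % every === 0) state.checkpoint = i + 1; // next index to read
    if (processed === crashAfter) return; // simulated crash, no checkpoint
  }
}

const messages = ["m1", "m2", "m3", "m4", "m5"];
const state = { checkpoint: 0, processed: [] };

runConsumer(messages, state.checkpoint, 3, 2, state); // crash after m3
runConsumer(messages, state.checkpoint, Infinity, 2, state); // restart

// m3 was processed before the crash but after the last checkpoint (at m2),
// so it is processed again on restart: a duplicate, but nothing is lost.
console.log(state.processed); // ["m1","m2","m3","m3","m4","m5"]
```

This is why downstream processing should be idempotent when you rely on at-least-once delivery.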
I'd like to share some of our experiences with Cosmos DB. The reasons we chose Cosmos DB: elastic scalability of throughput and storage, guaranteed low read latency, and it's also a fully managed service. When creating collections in Cosmos DB, you need to create separate collections for different kinds of data, because their RU requirements and access patterns are going to be different.
I have created a telematics collection; if I wanted to store devices, I would have created a different collection. If you have a UI, you typically want to get all the devices and show them in the UI. In the telematics collection, I have partitioned by the device serial number. If I had stored devices or vehicles in the same collection, then querying the devices would mean cross-partition queries, and that would consume RUs. You have to choose the right partition key to distribute the load uniformly across the Cosmos DB partitions.
If you want to achieve the maximum write throughput for your Cosmos DB, you have to use stored procedures. There are some limitations with stored procedures: a stored procedure executes against a single partition, so it can only store documents with the same partition key. If you have multiple records with different partition keys, you need to make parallel calls to the stored procedure, passing each call a different set of records. In case concurrent writes are possible for the same record and you want to protect the data, you can use optimistic concurrency by leveraging the ETag available on the documents.
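ETag-based optimistic concurrency works like compare-and-swap: a replace succeeds only if the document's current ETag matches the one you read. A toy in-memory model of that behavior (this is not the Cosmos DB SDK, just the concept):

```javascript
// Toy in-memory model of ETag-based optimistic concurrency: a replace
// succeeds only if the caller's ETag matches the stored document's ETag,
// mirroring Cosmos DB's If-Match behavior. Not the real SDK.
class EtagStore {
  constructor() {
    this.docs = new Map();
    this.version = 0;
  }
  upsert(id, body) {
    const doc = { id, body, _etag: `"${++this.version}"` };
    this.docs.set(id, doc);
    return doc;
  }
  replace(id, body, ifMatchEtag) {
    const current = this.docs.get(id);
    if (!current || current._etag !== ifMatchEtag) {
      throw new Error("412 Precondition Failed"); // someone else wrote first
    }
    return this.upsert(id, body);
  }
}

const store = new EtagStore();
const doc = store.upsert("device-1", { speed: 50 });

// Writer B updates first; writer A's write with the stale ETag then fails.
store.replace("device-1", { speed: 60 }, doc._etag); // B wins
let failed = false;
try {
  store.replace("device-1", { speed: 70 }, doc._etag); // A uses the old ETag
} catch (e) {
  failed = true;
}
console.log(failed); // true
```

On a 412 the losing writer re-reads the document and retries with the fresh ETag, so no update is silently overwritten.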
We use the Cosmos DB change feed a lot. We use the change feed to move data out of Cosmos DB for data analysis, and also to move data to cold storage. We use the change feed to update our cache of processing configuration. For example, say I'm storing configuration used by the event hub processor host in Cosmos DB, and someone updates it using an API; I can update the cache by continuously polling the change feed and updating the in-memory cache. Similar to the event processor host, there is a change feed processor library available; it works much like the event processor host and takes care of distributing the change feed partitions. You can also use Azure Functions. This is how the change feed library works: when you create a Cosmos DB collection, it is spread across multiple partitions. With the change feed processor library, here we are running two instances; one of them is consuming data from two partitions, and the other one is consuming data from the other two. It works exactly like the event processor host.
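The cache-refresh pattern just described can be sketched as: repeatedly read the change feed from a continuation point and apply each changed document to an in-memory cache, so the latest write wins. Everything below is a toy model under that assumption, not the change feed processor library:

```javascript
// Toy model of polling a change feed to keep an in-memory cache fresh:
// readChangeFeed returns documents changed since `continuation`, and the
// poller applies them to the cache (latest update wins). Not the real SDK.
function readChangeFeed(log, continuation) {
  return { changes: log.slice(continuation), continuation: log.length };
}

function pollOnce(log, cacheState) {
  const { changes, continuation } = readChangeFeed(log, cacheState.continuation);
  for (const doc of changes) cacheState.cache.set(doc.id, doc);
  cacheState.continuation = continuation;
}

// Writes land in the collection's change log (a stand-in for Cosmos DB).
const changeLog = [];
const cacheState = { cache: new Map(), continuation: 0 };

changeLog.push({ id: "config", maxSpeed: 100 });
pollOnce(changeLog, cacheState);

// Someone updates the config through an API; the next poll refreshes the cache.
changeLog.push({ id: "config", maxSpeed: 120 });
pollOnce(changeLog, cacheState);
console.log(cacheState.cache.get("config").maxSpeed); // 120
```

In production you would run `pollOnce` on a timer (or let the change feed processor library or an Azure Function invoke your handler).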
And these are some of the common gotchas with Cosmos DB. The default consistency level is session consistency, so if you create a record as part of one HTTP request and attempt to read it as part of another request, it may not be immediately available. You can update the indexing policy on Cosmos DB, but if you already have a lot of data and you update the indexing policy, it will have some impact on query consistency while reindexing. There are some gotchas with the change feed: if there are many updates to the same record, only the most recent update will be available in the change feed. If you order by a field and some of the records don't have that field, those records won't show up in the query result. Also, a query might return fewer results than requested; if you get a continuation token, you need to call again, passing the continuation token, to get the rest of the records.
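Draining a query that returns continuation tokens looks like this in outline; `queryPage` here is a stand-in for an SDK query call that may return fewer items than the page size plus a token, not a real Cosmos DB API:

```javascript
// Drain a paged query: keep calling with the returned continuation token
// until it is undefined. `queryPage` is a stand-in for an SDK call that
// may return fewer items than requested plus a continuation token.
function queryPage(allItems, pageSize, token = 0) {
  const items = allItems.slice(token, token + pageSize);
  const next = token + pageSize < allItems.length ? token + pageSize : undefined;
  return { items, continuationToken: next };
}

function queryAll(allItems, pageSize) {
  const results = [];
  let token = undefined;
  do {
    const page = queryPage(allItems, pageSize, token ?? 0);
    results.push(...page.items);
    token = page.continuationToken;
  } while (token !== undefined);
  return results;
}

const rows = Array.from({ length: 7 }, (_, i) => i);
console.log(queryAll(rows, 3)); // [0, 1, 2, 3, 4, 5, 6]
```

The loop shape is the important part: treat "fewer results than requested" as normal and keep following the token until it disappears.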
And the size of the continuation token can be in kilobytes. If you pass the token to the UI or end user and expect it back as a query parameter, it may break. Index only the required fields; that reduces the write RU consumption. You can also schedule heavy jobs during non-peak hours. And you can expire data by leveraging the TTL on the collection.
If you don't need data older than three days, you can set the TTL to three days. For some use cases you can provision the throughput at the collection level and allocate the minimum RUs, which reduces the cost. And in your non-production environments, you can disable geo-replication. We also use Azure App Service. We use staging slots to do zero-downtime deployments, and we use Azure DevOps for CI/CD. With these various Azure technologies available, we built a scalable platform very quickly. These are some of the stream processing problems we solve: we get out-of-order data, we get duplicate data, and some sources spread data across multiple feeds, so we have to fetch data from multiple feeds and then join them. Some of the processing requires stateful processing, so we checkpoint the state while processing. One option is to have a background thread that checkpoints the state to your Blob Storage; after a restart, you can recover the state from the blob. That's all for the demo. If you are interested to know more about our business, you can reach out to us. We are also hiring for these positions.
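The stateful-processing approach mentioned above (periodically checkpoint in-memory state to Blob Storage, restore it on restart) can be sketched with a pluggable store. Here `store` is an in-memory stand-in for a blob container; the names and the every-2-events cadence are illustrative assumptions:

```javascript
// Periodically checkpoint processing state to a blob-like store and
// restore it on restart. `store` is an in-memory stand-in for Blob
// Storage; in production saveCheckpoint/loadCheckpoint would hit a blob.
function saveCheckpoint(store, name, state) {
  store.set(name, JSON.stringify(state)); // serialize, like a blob upload
}

function loadCheckpoint(store, name) {
  const raw = store.get(name);
  return raw ? JSON.parse(raw) : { counts: {} }; // fresh state if none yet
}

const store = new Map();

// Process some events, checkpointing every 2 events (a timer works too).
let state = loadCheckpoint(store, "processor-state");
const incoming = ["dev1", "dev1", "dev2"];
incoming.forEach((device, i) => {
  state.counts[device] = (state.counts[device] || 0) + 1;
  if ((i + 1) % 2 === 0) saveCheckpoint(store, "processor-state", state);
});

// Simulated restart: in-memory state is lost, so reload from the store.
state = loadCheckpoint(store, "processor-state");
console.log(state.counts); // { dev1: 2 } -- dev2 arrived after the last checkpoint
```

As with message checkpointing, anything processed after the last state checkpoint is redone on restart, so the state updates should tolerate replay.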
Thanks for joining me. I hope you found this helpful. If you have any questions, you can ask. >> For Cosmos DB, to balance out the RUs across partitions, what do you use as a partition key? >> We use the device ID. We receive data from multiple devices,
so we use the device ID. >> For Event Hubs, is there a C++ library? We use C++; I know you have Node.js, but do you know if there's a C++ library? >> I'm not sure. I know there's C#; I'm not aware of any C++ library. Any other questions? >> How do you handle out-of-order data? >> It depends on the requirements. We use an in-memory buffer: we keep the data and then sort it before processing. It won't work for all the use cases. >> I work on products very similar to this.
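The in-memory buffering answer above can be sketched as: hold incoming events briefly, then emit them sorted by timestamp once the buffer fills. A minimal sketch with illustrative field names and a size-based flush (a time window is another option):

```javascript
// Buffer incoming events and release them sorted by timestamp once the
// buffer reaches a size threshold -- a simple way to smooth out mildly
// out-of-order data. Field names are illustrative; arrivals later than
// the buffer window would still be missed, as the speaker notes.
function makeReorderBuffer(flushSize, onFlush) {
  const buffer = [];
  return function receive(event) {
    buffer.push(event);
    if (buffer.length >= flushSize) {
      buffer.sort((a, b) => a.timestamp - b.timestamp);
      onFlush(buffer.splice(0, buffer.length)); // emit and empty the buffer
    }
  };
}

const emitted = [];
const receive = makeReorderBuffer(3, (batch) => emitted.push(...batch));

// Events arrive out of order...
receive({ timestamp: 3, speed: 40 });
receive({ timestamp: 1, speed: 20 });
receive({ timestamp: 2, speed: 30 });

// ...but are emitted sorted by timestamp.
console.log(emitted.map((e) => e.timestamp)); // [1, 2, 3]
```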
>> Any other questions? >> Before creating the event hub, you had the Kafka option there. I don't know what that is. >> Event Hubs supports the Kafka protocol. It supports AMQP and HTTPS, and also Kafka. If you enable Kafka, you can have all your Kafka clients work with Event Hubs; it acts as if it's Kafka. >> So the default is -- >> [No microphone] >> Yeah. By default, Event Hubs will distribute the messages across the partitions. If there are no other questions, thank you.
