Welcome, everyone. 
And thank you for joining us today on this live webinar
 about digital twin technology. 
Now, it seems that this is a popular buzzword right now. 
But is it just another marketing label?
 Is there real value in what the technology has to offer process industries? 
Well, you can be the judge after listening to this webinar. 
This webinar is a collaboration between Yokogawa and KBC, a Yokogawa company. 
Now I'm not sure if you're aware, but KBC was founded in 1979 
by Krikor Krikorian, John Brice, and Peter Close. 
And the name KBC was formed from the initial letters of their surnames. 
In 2016, KBC was acquired by Yokogawa, 
and the integration of KBC's distinct consulting and software capabilities 
with Yokogawa's excellence in the industrial automation field 
will ensure that we continue to deliver superior results for you, our customer. 
I would like to introduce our presenters, Duncan Micklem and Kevin Finnan to you today. 
Duncan is responsible for the business strategy and marketing functions of KBC, 
a Yokogawa company, as well as Yokogawa’s North America business. 
He is passionate about technology and the value it can unlock. 
Duncan started his career in the engineering industry with AMEC, 
where he provided health, safety, and environmental liability management advice 
on mergers and acquisitions, covering upstream production assets, 
refining, and petrochemicals. 
After moving to KBC, he has held business management roles 
focused on business development, strategy restructuring, and planning.
 Kevin Finnan is a system consultant at Yokogawa 
and has over 30 years of experience with oil and gas measurement, 
process automation and SCADA systems. 
He was previously an independent consultant serving the automation and measurement industries, 
Vice President of Marketing for CSE-Semaphore, and Director of Marketing at Bristol Babcock.
 Again, welcome and thank you for joining us today. 
Please remember, you can send in your questions and type them into the Q&A box at any time.
 All right, let's get started. 
Over to you, Duncan. Thank you.
Thank you, Christy. 
Well, good morning, good afternoon, good evening, everyone.
Thank you for joining Kevin, Christy, and me on this webinar about the digital twin.
Now, digital twins shouldn't just be thought of as a nebulous concept 
or an abstract technology. 
They're an enabling technology in the journey towards autonomous operations. 
So, in this webinar, we'll look at the journey towards autonomous operations. 
And with that concept in mind, we'll focus on the digital twin in terms of 
what it is, where it exists, 
the scope of insight it derives, the scalability of the digital twin, 
what it's not, 
how it can be hosted in the Cloud or on-premise, 
and the potential business model impacts that the digital twin has. 
And finally, the value. 
Digital twins aren't all made the same so we'll focus on two case studies.
 I'd like to open by asking everyone on this webinar one question.
 As you seek to squeeze incremental value from your people, processes, technologies, and physical assets; 
are you guilty of just pursuing a faster horse?
 Or is there another perhaps better way? 
Your horse may have been serving you well over the years,
 and we're pleased for that. 
But will your horse carry you forward at the pace that this new, 
much more digital world we live in affords? 
We look forward to sharing our passion about the other, perhaps better, way: the digital twin. 
Before we dive into the nuts and bolts of digital twins, 
it's worth framing the pursuit and adoption of these technologies in the context of a digitalization journey, 
driven by both internal and external factors. 
The process industries' drive towards semi- and fully-autonomous operations, 
coupled with energy transition realities and pressures, 
is forcing operating companies to seek digital platforms 
and fully adopt disruptive technologies as their normal way of doing business 
in their quest for superior results, sustained. 
As Christy mentioned, Yokogawa, along with its subsidiary KBC, a Yokogawa company, 
is digitalizing the energy and chemical industry through a unique combination of 
domain knowledge and experience, be it technical, commercial, or in operations; 
technology, be it AI, IT, digital twin, or automation; 
and thirdly, digitally wise and digitally savvy methodologies, 
roadmaps, and change management techniques.
But delivering superior results sustained does involve some subjectivity. 
For a business that is at the low end of the operational excellence maturity spectrum, as you see here, 
success may be continued loss minimization. 
Whereas for a more operationally mature business towards the right, 
success may be positive cash flow or repeatable results. 
You can see these at the base of each of the vulnerable, accepting, and structured pillars, 
but irrespective of maturity, effective progression on the journey from left to right involves 
accepting your current maturity, deciding what the improvement ambition is, 
i.e., the desired end goal. 
For example, is it the structured maturity pillar? 
And then agreeing on a realistic timeframe over which the improvement needs to happen. 
What we found is things always take longer than you'd like. 
It's often a multi-year process for coherent progression of all aspects of the operating model; 
people, processes, technologies, and physical assets. 
The starting point of the evolution is always a set of high fidelity 
first principles-based models of the business incorporated within a multi-purpose
 lifecycle simulation platform.
 These are really the heartbeat of the business and provide situational awareness. 
Without them, you're just guessing. 
At the beginning of the journey, towards the left-hand side, there are low levels of automation.
 The plant is manually operated by a large frontline workforce, 
executing a collection of best practices supported by first principles models. 
As the plant implements more and more manufacturing execution systems, 
it becomes more automated, and in doing so, 
achieves a smaller frontline workforce footprint 
who are primarily executing advice based actions in open loop. 
All the while, the stability, controllability, and predictability 
of the plant are being enhanced for widespread adoption of closed-loop control and procedure automation, 
and ultimately closed-loop optimization. 
Different parts of the energy and chemical industry are further along in this journey than others. 
For example, discrete batch processes in specialty and fine chemicals
 have been able to advance faster than say, 
refining, liquefaction or other complex continuous processes. 
Key to the progression is the implementation of MES, analytics, AI, and machine learning technologies. 
Through this process, the plant becomes more empowered to run, learn, adapt, 
and thrive in an increasingly dynamic business environment.
 So, you're probably wondering, where does the digital twin fit in? 
What is interesting about the previous illustration is that each progression 
towards autonomous operation involves evolution in how decisions are made. 
A digital twin is a decision support tool 
that enables improved safety, reliability, and profitability 
in design and operations. 
It is a virtual digital copy of a device, a system, a human, or a process 
that accurately mimics actual performance in real-time, 
and that is executable and can be manipulated, allowing a better future to be developed. 
So, to be a virtual digital copy of a device, system, human or process 
means that the digital twin can exist at any level within the traditional ISA 95 architecture,
 and be scalable to integrate with other components.
 In subsequent slides, I'll make further reference to the scalability of the digital twin 
as will my colleague Kevin. 
As far as scope of insight is concerned, 
digital twins work in the present, mirroring the actual human, device, system, or process 
in simulated mode, but with full knowledge of its historical performance 
and an accurate understanding of its future potential. 
In this way, the digital twin allows the full scope of hindsight, insight, 
foresight, and oversight to be delivered. 
But why is this important? 
Well, it allows understanding of what is happening or has happened.
 It allows an understanding of why it has happened. 
And then switching to the future, what will or might or can happen,
 and lastly, what should happen. 
And this is not just about a bunch of graphical dashboards,
 it goes well beyond that to incorporate first principles models. 
So, let's move away from conceptual and theoretical to real delivered value.
 We'll do this through two case studies,
 one involving a subsea production system feeding an FPSO, 
the other involving a 55,000 barrel a day FCC in refining. 
The concepts and realities are equally applicable in petrochemicals too. 
 Let's start with the upstream example. 
So, reservoir fluid characterization on the seabed. 
KBC's Multiflash software can be used to provide a digital twin 
of the reservoir fluid phase behavior under different pressures, volumes, and temperatures. 
This is vital for assuring the integrity of pipeline fluid flow. 
Building on the Multiflash-based digital twin 
 is a twin representation of the entire subsea production network 
of whatever complexity, including wells, chokes, flow lines, 
and a wide range of processing equipment. 
This is done using Maximus. 
With the Multiflash twin capabilities natively integrated 
into the more expansive Maximus model of the production system, 
flow across the whole network can be optimized. 
Building yet again on the Multiflash and Maximus models 
 is a representation of the entire production system
 including topside facilities and power generation 
be it an FPSO or offshore platform. 
The incorporation of power generation is absolutely key 
because power generation systems, apart from constituting a major variable cost, 
 drive a number of critical production processes such as 
compression systems, oil export pumps, and utility systems. 
Offshore power generation systems of 50 to 100 megawatts are not uncommon. 
The challenge with offshore power generation is that the power generation feedstock
 is also the product for export. 
In order to ensure power generation resilience, 
there is a need to better understand production dynamics in conjunction with power generation. 
The Petro-SIM model, incorporating Multiflash and Maximus, 
allows matching of power generation to well deliverability, 
 which is worth a lot of money, as we'll see on the next slide. 
How much money? 
In short, 180 million dollars per annum. For this particular case, 
the asset comprised just over 20 wells feeding a self-powered FPSO. 
The FPSO had a capacity of 90,000 barrels a day of oil, 
10 to 20 million cubic feet per day of fuel gas handling, 
and treated water injection rates of up to 30,000 barrels a day. 
Matching well deliverability to topside power generation, 
and compressor availability in a single model environment 
was able to boost FPSO production by 9,000 barrels a day. 
These results involved no CAPEX investment, 
used onboard equipment only, and matched subsurface to surface pressure and flow. 
For the first time, power production balancing was implemented, resulting in a new production regime 
with altered production rates, driving significant incremental value as you see here. 
As I move on to the next slide, I won't spend a lot of time on these.  
But I would like to point out how, in this first screenshot, you'll see 
how well models and subsurface templates along with FPSO trains, power generation, and onboard compressors 
were represented in an integrated manner. 
Cumulative and system-wide production dynamics were able to be rigorously represented.
 This next slide shows a little bit more detail when you drill down 
into the individual system components 
with a lot more detail available, which obviously you can't see through the screen, 
but each of these components can be drilled into a lot further. 
Lastly, the chart on the next slide shows the nonlinear relationship 
between pressure and flow at different turbine intensities.
 Without the model 
the nonlinear relationship would not have been apparent, 
and the optimum operating conditions would have been a lot more difficult to observe. 
That's all I'm going to cover on this case study.
 But if you'd be interested in receiving a more detailed write up on this,
 please email me at Duncan.Micklem@us.Yokogawa.com. 
So, for this next case study, I realize not everyone plays 
in the upstream world, so we're switching to downstream, and specifically a 55,000 barrel a day FCC unit. 
This is a key unit, as many of you know, found in around 50% of refineries worldwide, 
for converting lower value heavy gas oil, vacuum gas oil, or residue feedstock components 
into significantly higher value products. 
The problem today is that the tools used in the industry to forecast the economic performance 
of a refinery unit, in this case the FCC, 
are not necessarily conducive for use by the engineers actually operating the plant 
and who are responsible for delivering the operating plan. 
This case study’s about how 
over a million dollars per year can be made through much tighter monitoring of unit performance 
by comparing actual unit performance with the high fidelity model simulation and the LP. 
On this chart on the next slide, the actual performance is in red. 
The simulated plant performance using high fidelity models is in blue,
 and the yellow line is the LP model of the plant. 
It's clear the best representation of the plant comes from the first principles models. 
So, it makes sense to use these for more of the operating decisions on the front line 
for identifying profit opportunities and re-optimizing for gap closure. 
Whilst the overarching benefit of the LP is its speed, 
enhancements in compute power and multi-threading of first principles technologies 
are making them increasingly ripe to be used for more of the optimization decisions. 
This particular example involves an FCC.
 The same can be scaled for other units around the refinery. 
Of particular interest in today's business environment are, say, 
the hydrocracker and the coker for bottoms conversion. 
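As an aside, to make that gap-monitoring idea concrete, here is a minimal Python sketch of comparing actual unit yields against rigorous-model and LP predictions and putting a dollar value on the gap. All yields, margins, and names are hypothetical illustrations, not figures from the case study, and this is not KBC's implementation.

```python
# Minimal sketch of yield-gap monitoring. Every number and name below is
# hypothetical and for illustration only.

# Daily average yields (vol% of feed) for one FCC operating day
actual   = {"gasoline": 52.1, "LCO": 17.8, "LPG": 16.5}  # from the plant historian
rigorous = {"gasoline": 53.4, "LCO": 17.2, "LPG": 16.9}  # first-principles model prediction
lp       = {"gasoline": 55.0, "LCO": 16.5, "LPG": 17.5}  # planning LP yield vector

# Illustrative product margins relative to feed, $/bbl
margin = {"gasoline": 18.0, "LCO": 9.0, "LPG": 6.0}
feed_bpd = 55_000

def daily_gap(reference, observed):
    """Dollar value per day of the yield gap between a reference case and actuals."""
    return sum(
        (reference[p] - observed[p]) / 100.0 * feed_bpd * margin[p]
        for p in observed
    )

print(f"Gap vs rigorous model: {daily_gap(rigorous, actual):>10,.0f} $/day")
print(f"Gap vs LP plan:        {daily_gap(lp, actual):>10,.0f} $/day")
```

The point of the sketch is simply that a persistent, automated comparison like this is what turns the model from a one-off study tool into a day-to-day gap-closure tool. 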
So far, I've mentioned simulators, models, and digital twins. 
Are these the same? 
No. 
Models exist in the context of traditional simulators as well as digital twins. 
But traditional simulators are different to digital twins. 
The best analogy of a traditional simulator is a traditional calculator. 
Inputs are punched in manually for a particular calculation. 
The result is a static snapshot in time, 
and the result is available only to the few people standing around the calculator who can see it. 
On the other hand, a digital twin is an accurate representation over its full range of operation
 all the time with the history captured for say data mining, 
as well as the future for what if, what's best, and what's next analyses.
 Instead of being manual, it's automated. 
And the outputs are democratized or can be democratized 
much more easily across the organization for joined up thinking and action across silos. 
But what does the digital twin look like in practice? 
What's an example? 
And I'll show you on the next slide. 
It simply involves a high fidelity model of the asset built say using Petro-SIM. 
So, that can include upstream production facilities, LNG, gas processing plants, refineries, olefins, and aromatics units. 
Models built are then fed real-time production data from the asset's DCS, 
historian, and lab system. 
And by the way, this is not difficult to set up. 
The application of complex first principles physics and chemistry-based algorithms
 to DCS, historian, and lab system data in real-time opens up phenomenal new insights
 for supply chain optimization and production management. 
But instead of the gems of insight remaining siloed with the engineers running the model, 
what's key is that they are liberated to the rest of the organization, with both the PI System and the Petro-SIM-based twin 
always remaining aligned. 
Now this is a game-changer for driving convergence in decisions and actions 
across engineering, operations, and other consumers of data from the OSIsoft system and the other plant technologies 
that feed in and feed off it. 
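To give a feel for what feeding the model real-time data involves, here is a minimal sketch of one update cycle. The historian and twin interfaces shown (read_tag, set_inputs, solve, write_tag) and the tag names are hypothetical placeholders, not the actual PI System or Petro-SIM APIs.

```python
# Illustrative sketch only: interfaces and tag names are hypothetical, not the
# real PI System or Petro-SIM APIs.

TAGS = ["FEED_FLOW", "RX_TEMP", "REGEN_O2"]  # hypothetical plant tags

def read_snapshot(historian, tags):
    """Pull the latest value for each tag from the historian."""
    return {t: historian.read_tag(t) for t in tags}

def run_twin_cycle(historian, twin):
    """One cycle: condition the model on live data, solve, publish results back."""
    snapshot = read_snapshot(historian, TAGS)
    twin.set_inputs(snapshot)            # align the model with the current plant state
    results = twin.solve()               # run the first-principles simulation
    for name, value in results.items():
        historian.write_tag(f"TWIN.{name}", value)  # publish model-derived insight for everyone

# A scheduler would call run_twin_cycle(historian, twin) every few minutes so the
# historian and the model-based twin always remain aligned.
```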
So, with that in mind, that's just one example. 
I'm going to hand over to my colleague, Kevin, who's going to talk to you about a few others. 
Thank you, Duncan.
 We do offer a broad portfolio of digital twins with widely varying purposes. 
I won’t describe all of them but should mention those that perhaps you might not expect. 
For instance, enterprise insight is a digital boardroom that comprises a series of business and financial KPIs, 
which are updated in real-time as part of an enterprise-wide balanced scorecard. 
The underlying KPI calculations go beyond simple dashboards, combining measured parameters with integrated logic. 
They can be linked to other digital twins that optimize the process, 
energy consumption, and the supply chain. 
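For a sense of what combining measured parameters with integrated logic might look like, here is a hypothetical sketch of a single KPI. The tag values, conversion factor usage, and traffic-light thresholds are all made up for illustration; they are not Yokogawa's actual KPI definitions.

```python
# Hypothetical KPI sketch: combines measured parameters with simple logic
# rather than just displaying raw values. All figures are illustrative.

def energy_intensity_kpi(fuel_gas_mmbtu_h, power_mw, feed_bbl_h):
    """Energy consumed per barrel of feed, with a traffic-light status."""
    total_mmbtu_h = fuel_gas_mmbtu_h + power_mw * 3.412   # 1 MW = 3.412 MMBtu/h
    intensity = total_mmbtu_h / feed_bbl_h                # MMBtu per barrel
    status = "green" if intensity < 0.55 else "amber" if intensity < 0.65 else "red"
    return intensity, status

intensity, status = energy_intensity_kpi(fuel_gas_mmbtu_h=900.0,
                                          power_mw=45.0,
                                          feed_bbl_h=2300.0)
print(f"Energy intensity: {intensity:.2f} MMBtu/bbl ({status})")
```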
You might see where we're going with this, but I'll explain some of the other digital twins first.
 Capability assurance is a suite of digital twins that includes a human knowledge twin,
 which captures work processes. 
Those can be tracked and manipulated in real time. 
They comply with ISA-106 modular procedure automation. 
For example, one of our customers implemented modular procedural automation 
by putting their best operator practices on paper, 
not exactly digitalized, but still a quantum leap over their prior standard operating procedures. 
But then they did digitalize them to take advantage of change management. 
Further, that deployment of digital twins for all of the work processes 
has minimized the learning curve for incoming users 
and enabled improved validation of operating scenarios. 
There's also a safety twin that I'll describe in a minute.
The operator training simulation, or OTS, digital twin substantially enhances earlier 
model-based and inferential OTS implementations through artificial intelligence and real-time enablement. 
OTS digital twins work seamlessly with human knowledge digital twins, 
to completely optimize all human aspects in the enterprise. 
Now, the automation and control integrity digital twins use digital copies 
of the live plant, including all processes and all automation algorithms. 
They allow engineers to conduct fundamental process control tests at their workstations. 
They can evaluate any proposed adjustments before they apply them to the live process. 
As an example, a customer wanted to see the interactions not only within the controller 
but exactly how the process responded to changes in the application code. 
A safety instrumented system digital twin also uses a digital copy of the live plant and processes, 
but incorporates safety logic in place of the process control logic. 
Now, keeping Functional Safety Management Policies up to date and certified is a big issue. 
Safety systems change over time. 
Duncan earlier mentioned the hindsight, insight, foresight, and oversight a digital twin provides. 
Those have turned out to be key to maintaining that Functional Safety Certification 
as the safety system evolves. 
Now, in addition, all these digital twins can be combined; for example, the safety instrumented system 
and the automation and control integrity digital twins can be combined for 
integrated process control and safety. 
Also, a combination of the safety digital twin and the human knowledge digital twin 
mentioned earlier provides full safety visibility, including future scenarios, 
to operators and the management team. 
Now we can describe the other digital twins further in Q&A
 or even afterward if you want to contact us via email.
 I think they're probably more in line with solutions you'd expect from Yokogawa and KBC. 
That is evolutions of the simulation technologies that Duncan described earlier. 
So, again, this portfolio of digital twins can operate independently or together, 
providing a single version of the truth. 
But now the question is, where does all that lead?
 Our view of it is digital Nirvana.
 Contemporary digital twin technology is evolving in a way that is leading to a single multi-purpose digital twin 
instead of multiple digital twins, each of which serves a different purpose. 
The single digital twin encompasses today's multiple digital twins in a supplier agnostic manner, 
and it aligns all assets in the value chain. 
Ubiquitous data sources replace the ad hoc siloed data to provide a completely coordinated and connected environment. 
To illustrate a typical situation, subject matter experts use their domain knowledge to operate the plant, 
and then they combine that with facilities simulation technology for process and production optimization; 
the models built are placed online, and they're fed with real-time operations data. 
Instead of a simulator, they have created a digital twin delivering real-time insight, 
augmented where needed with insight from beyond the plant, 
rather than simply data as a service, which in turn enables outcomes as a service. 
So, we've only just mentioned the Cloud so far, but running the digital twin in the Cloud does 
shift the business model, sometimes substantially, by the way. 
 In the Cloud, the digital twin not only serves the entire enterprise, 
but can also engage subject matter expertise and technology from outside the corporate boundaries.
 That could be vast. 
External data feeds and analytics expand your agility and radically reduce infrastructure costs. 
This enables new business models that best exploit subject matter experts' domain knowledge 
and simulation technologies to provide bi-directional added value. 
Now, that's between corporate assets, central or remote operations centers, support teams, and third parties. 
It also enables third party suppliers to offer outcomes as a service. Again, for example, 
instead of a catalyst, a third party supplier could offer catalyst performance as a service. 
So, now I'll hand it back over to Duncan to conclude with where the digital twin fits in, 
and how it is delivered through a disciplined roadmap. 
Thanks, Kevin. 
So, on this next slide, if these are the four main pillars of your operating model, 
in terms of your people, physical assets, 
systems, and practices, where does the digital twin fit in?
 Well, it sits at the interface of people and the physical assets being or to be operated. 
As I mentioned earlier, it is a decision support tool. 
It’s where each of the attributes meet where significant value can be unlocked. 
Where people interact with systems, it's important that the right data is available. 
that it's clean, and that it's supported by the technology infrastructure. 
It needs to be made available in a way that is actionable, and the people who act on the data need to be competent. 
Where systems interact with practices, the technologies in place need to support 
execution of business processes to deliver the hindsight, insight, foresight, and oversight. 
Often this information is sufficient to guide day to day frontline actions of operators. 
However, to really understand the implications of these possible actions, 
more rigorous tools are needed for decision making to be able to accommodate what if,
what's next, and what's best analyses, particularly where, as I mentioned previously, 
nonlinear relationships may exist. 
So, it's here that digital twins and first principles models are of the highest value. 
Lastly, to actually move the plant's operations once decisions have been made, 
the actions need to be taken on the front line at the interface of practices and systems. 
Initially, operational execution or moving the plant is done manually based on best practices, 
and then moves through open loop and closed-loop control to procedure automation, and eventually closed-loop optimization. 
As I mentioned before, this doesn't happen overnight, and it does take a phased progression, 
but a deliberate progression. 
All of this underpins the KBC digitalization roadmap, 
which involves initially ensuring readiness, 
moving on to situational awareness, 
and only once these two steps have been accomplished,
 should the digital twin be considered for implementation. 
Because, as I mentioned previously when we discussed 
the difference between a traditional simulator and a digital twin, 
if you are going to be automating this, 
then readiness is absolutely vital before you start to try and automate. 
And lastly, 
the final two steps beyond 
this involves execution and sustainment. 
The twin makes this significantly easier.
 With that said, I'm going to hand back to Christy but before doing that, I hope that
 if you are or have been guilty of pursuing a faster horse, 
please consider the digital twin as perhaps a better, more valuable way 
 of achieving your business goals. 
If you'd like a more detailed write-up of the thinking behind the digitalization roadmap, 
again, please do email me at Duncan.Micklem@us.Yokogawa.com. 
And lastly, Kevin did make reference to the different types of digital twin. 
Please do look out for other digital twin webinars that we'll be running as part of this digital twin series. 
These will be focused on the more specific twin technologies 
and their application and value.
 And so with that, back to you, Christy. Thank you. 
Thank you so much, Duncan and Kevin for that fascinating presentation. 
It is really great to hear about the two case studies and real life examples 
of where digital twin technology is bringing value 
to the process industries.
 All right. We're going to look at the Q&A now.
 So, if you would like to send through your questions, please type your question in the Q&A section
 and send it to all panelists. 
Our first question received: 
you stated that digital twins accurately mimic actual performance in real-time. 
How do you define real time? 
Kevin, do you want to tell us more about this?
Real-time is relative to the speed of the process. 
The key point is that it is live as opposed to historical, 
and it has to use sufficient resolution in terms of time to capture process dynamics.
For example, turbomachinery requires a resolution in terms of tens of milliseconds.
 Flow processes are typically about one second, 
and level or temperature processes are slower. 
They could be in multiple seconds or even minutes. 
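As a small illustration of that rule of thumb, here is a sketch of picking a polling interval by process class. The mapping values follow the ranges Kevin mentions, but the exact numbers are illustrative assumptions, not requirements.

```python
# Illustrative only: rough sampling intervals by process class, following the
# rule of thumb above. Real values depend on the actual process dynamics.

SAMPLE_INTERVAL_S = {
    "turbomachinery": 0.02,   # tens of milliseconds
    "flow": 1.0,              # about one second
    "level": 10.0,            # multiple seconds
    "temperature": 60.0,      # up to minutes
}

def polling_interval(process_class: str) -> float:
    """Return a sampling interval (seconds) fast enough to capture the dynamics."""
    return SAMPLE_INTERVAL_S[process_class]

print(polling_interval("flow"))   # 1.0
```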
Thank you, Kevin. 
A second question received: are digital twins either real-time or historical, but not both? 
They are both. A digital twin works in the present, mirroring the actual human, device, system, 
or process in simulated mode, 
but it has full knowledge of its historical performance 
and an accurate understanding of its future potential. 
In this way, the digital twin allows the full scope of hindsight, insight, foresight, and oversight to be achieved. 
Thank you. 
We will gather up the rest of the questions and answer them by email. 
We appreciate our presenters, Kevin and Duncan, joining us today. 
Thank you very much. 
And thank you for joining us. 
We look forward to having you on another Yokogawa webinar in the future. 
Everyone have a good day.
 Thank you so much.
