>> You'll use data entities
and data packages
when you manage data
in Microsoft Dynamics
365 for Finance and Operations.
The activities that you'll
use them for include:
Initial configuration
of a new implementation
from an empty environment.
Data migration from
legacy systems.
Copying a company configuration
within an existing environment.
And copying data
across environments.
In this video, we'll
share tips and tricks
for working with data entities
and data packages.
Finance and Operations includes
a data management framework
that can be used
in multiple ways.
Any data entity that
a developer creates on
top of this framework can
be used for data migration,
Microsoft Office integration,
and other types of integration.
Microsoft Dynamics
365 for Finance and
Operations includes
over 2200 data entities.
You can view the list of
standard data entities
in the Data
Management Workspace.
Note that in
a brand new environment,
this page might take
a few seconds to load
the first time
that it's opened.
If there are any
changes, for example,
a new data entity was
introduced in
a recent deployment,
you can update
the data entity list.
In the Data
Management Workspace,
click "Framework Parameters."
Then, on the Entity
Settings tab,
click "Refresh Entity List."
To move data into and out
of Finance and Operations,
you must create
an import/export data project
in the Data
Management Workspace.
To import, you must create
an import data project by
specifying the name of
the data project, a description,
the name of the data entity,
and the source data format
such as comma-separated values,
also known as CSV,
Microsoft Excel, or XML.
You then supply a sample file
to help Finance and Operations
build a mapping between
your source file
and the internal staging table.
Automatic field mapping between
your source and
staging is based on
the column header in your file
and the column name
in the staging table.
If your source file has
different column headings,
you can manually map them
by using the View Map page.
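The header-matching behavior described here can be sketched in a few lines of Python. The source headers and staging column names below are purely hypothetical, chosen only to illustrate the matching rule.

```python
# A minimal sketch of header-based automatic field mapping.
# Column names are illustrative, not actual staging table fields.

def auto_map(source_headers, staging_columns):
    """Match source headers to staging columns by name (case-insensitive);
    anything left unmatched must be mapped manually."""
    staging = {c.lower(): c for c in staging_columns}
    mapped, unmapped = {}, []
    for header in source_headers:
        target = staging.get(header.lower())
        if target:
            mapped[header] = target
        else:
            unmapped.append(header)
    return mapped, unmapped

mapped, unmapped = auto_map(
    ["CustomerGroup", "Description", "PayTerm"],
    ["CUSTOMERGROUP", "DESCRIPTION", "PAYMENTTERMID"],
)
# "PayTerm" is unmatched and would be mapped by hand on the View Map page.
```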
To export, you must create
an export data project by
specifying the name of
the data project, a description,
the data entity, the target
data formats such as CSV,
Excel, tab-separated
values or package,
and the default refresh type.
If you want to export data
based on specific criteria,
you can specify a filter.
Sometimes, you might want to
export just the changes
to an entity.
In this case, you
can set the Default
refresh type field to
"Incremental push only."
This option lets you
export changes based
on Microsoft SQL
Server change tracking.
To use change tracking to
export changes to an entity,
you must first configure change
tracking for
your target data entity.
From the Data
Management Workspace,
open the entities page,
then select your
target data entity,
and turn on the
appropriate type of change
tracking based on
your requirements.
Note that Microsoft recently
introduced Data
Entity Versioning.
Whenever Microsoft makes
any breaking changes
to a data entity,
a new version of
the entity is released.
For example, check out
the Customer and Customer
V2 data entities in
Microsoft Dynamics 365 for
Finance and Operations
with Platform update 9.
Often, you might have
to handle large volumes
of data as part of
data management activities.
For example, you
might have to migrate
your customer master data from
a legacy system to
Finance and Operations.
For those data entities that
don't support
set-based operations,
you can set up
parallel data loading to
occur during the movement of
data from staging to target.
To use parallel data loading,
you must first configure
entity execution
parameters for
your target data entity.
In the Data
Management Workspace,
click "Framework Parameters."
Then on the Entity Settings tab,
click "Configure Entity
Execution Parameters."
The current configuration
indicates that
the customer groups entity
uses the following algorithm.
If the record count
is less than 50,
a single batch task is
used to import all records.
If the record count
is greater than or equal to 50
but less than 100,
two batch tasks are
used to import records.
If the record count
is 100 or more,
four batch tasks are
used to import records.
For all other data entities,
the following algorithm is used.
If the record count
is less than 30,
a single batch task is
used to load all records.
Otherwise, two batch tasks
are used to load records.
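Under the assumption that the thresholds behave exactly as stated in this video, the batch-task allocation can be sketched as:

```python
# A sketch of the batch-task counts described above.
# The thresholds are taken directly from the narration.

def batch_tasks(record_count, entity="other"):
    """Return the number of batch tasks used to load records."""
    if entity == "customer groups":
        if record_count < 50:
            return 1
        if record_count < 100:
            return 2
        return 4
    # Default algorithm for all other data entities.
    return 1 if record_count < 30 else 2

# e.g. batch_tasks(75, "customer groups") → 2; batch_tasks(40) → 2
```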
Parallel data loading is
built on top of
the batch framework.
Before you run an import job,
make sure that the "Import
in batch" option is set in
the project import options
in the Data
Management Workspace.
Data packages are
a new concept that
was introduced in
Finance and Operations.
A data package is
a single compressed file,
that is, a zip file,
that contains a data project
manifest and data files.
A data package lets
you import or export
multiple data entities
in a defined sequence
and in a single event.
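Because a data package is just a zip archive, its structure can be illustrated with any zip library. The file names below are made up for the example, not the actual manifest names in a shipped package.

```python
# A toy in-memory "data package": a zip with a manifest and a data file.
import io
import zipfile

def package_contents(data: bytes):
    """List the files inside a data package (a zip archive)."""
    with zipfile.ZipFile(io.BytesIO(data)) as pkg:
        return pkg.namelist()

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("Manifest.xml", "<Manifest/>")          # data project manifest
    pkg.writestr("Customer groups.xlsx", "placeholder")  # entity data file
contents = package_contents(buf.getvalue())
# contents → ["Manifest.xml", "Customer groups.xlsx"]
```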
Out of the box,
Finance and Operations
includes approximately
250 data packages.
To load Out-of-the-Box
data packages,
open Microsoft Dynamics
Lifecycle Services,
also known as LCS,
and click the "Asset
Library" tile.
Select Process Data Package
as the asset type,
and then click "Import."
You can then select packages
based on the industry
that you require.
After the data package
publishing process
has been completed,
the relevant data
package appears in
the Data Package Files List
in the Asset Library.
Data packages let you load
multiple data entities
at the same time.
For data entities that have
dependencies on each other,
the import sequence
is crucial for
successful and
correct data loading.
The order in which entities
are loaded is defined by
the entity sequence in the
data project that you created.
When a user adds data entities
to a data project,
a default sequence
is set for entities.
The first entity that's
added to the project
will be set as
the first entity to load.
The next entity that's
added will be second,
and the next entity will
be third, and so on.
For example, a user
adds three entities in
this order: customer
group product filters,
customer groups,
and vendor groups.
In this case, the customer
group product filters entity
will be assigned
an entity sequence of 1.1.1.
The customer groups
entity will be
assigned an entity
sequence of 1.1.2.
And the vendor groups entity
will be assigned
an entity sequence of 1.1.3.
The rules for entity sequences
are as follows:
Execution units
run completely in
parallel and have
no dependency on each other.
Within each execution unit,
the import is done one level
at a time in ascending order.
Within each level,
the import is done
one sequence at a time
in ascending order.
Entities that have
the same execution unit,
level, and sequence are
imported in parallel.
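Within a single execution unit, these rules amount to sorting entities by their (execution unit, level, sequence) tuple and running ties in parallel. A sketch, using the sequence numbers from the example in this video:

```python
# A sketch of sequencing within one execution unit. Each entity is
# tagged with an assumed (execution unit, level, sequence) tuple.
from itertools import groupby

def execution_plan(entities):
    """Group entities into ordered batches; entities that share
    the same unit, level, and sequence run in parallel."""
    ordered = sorted(entities, key=lambda e: e[1])
    return [[name for name, _ in group]
            for _, group in groupby(ordered, key=lambda e: e[1])]

plan = execution_plan([
    ("Customer group product filters", (1, 1, 2)),
    ("Vendor groups", (1, 1, 3)),
    ("Customer groups", (1, 1, 1)),
])
# plan → [["Customer groups"],
#         ["Customer group product filters"],
#         ["Vendor groups"]]
```

Here Customer groups runs first, satisfying the dependency of the customer group product filters entity.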
In our case, the customer
group product filters entity
has a dependency on
the customer groups entity.
Therefore, for data to
be loaded successfully,
the customer group
product filters entity
must be imported after
the customer groups entity.
So the entity sequence
should look like this.
Data entities are then
loaded based on
a defined sequence.
>> This brings us to the end
of this presentation.
We hope you found
this information
useful. Thank you for watching.
