1 Change History. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
6 Foundation Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.1 Importing Foundation Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.2 Foundation Data Translation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Importing Data Translations for Legacy Foundation Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Importing Data Translations for Metadata Framework Foundation Objects. . . . . . . . . . . . . . . . . . . . .37
Learn about changes to the documentation for Mass Changes in Employee Central in recent releases.
1H 2022
Changed: We removed a note that mentioned the Termination Details data import doesn't support attachments through .zip files. See Attaching Documents with Data Imports [page 157].
Changed: We removed the configuration information about Centralized services for Job History and Job Relationships imports, since both features are now universal. See Job History Imports [page 70] and Job Relationships Imports [page 73].
Changed: We updated the procedure to monitor Import jobs in Scheduled Job Monitor. See Monitoring Jobs for Imports [page 108].
Changed: Updated the list of Centralized services entities that are now universally supported. See Centralized Services for Employee Data Imports [page 43].
Changed: Updated permission for Foundation Data. See Foundation Data [page 27].
Changed: Updated the list of entities supported for identical record suppression. See Identical Record Suppression with Data Imports [page 126].
Changed: Removed use cases related to Job Information. See Use Cases: Identical Record Suppression [page 128].
Changed: Updated the list of supported data imports for forward propagation. See Forward Propagation of Data with Imports [page 133].
Changed: Removed a list item from the prerequisites section related to Job History. See Configuring Business Rules for Data Imports [page 159].
Changed: Removed a note from the prerequisite section. See Assigning a Business Rule to an Employee Central Entity [page 162].
Changed: Added a note regarding the validation of inactive Foundation Objects and Generic Objects for imports. See Data Validation and Error Management with Data Imports [page 115].
Changed: Added a note for Non-Recurring Pay Component. See Configuring Workflows for Data Imports [page 171].
Changed: Added a note in the Performance Benchmarks topic. This topic has been moved under Prerequisites for Importing Employee Data. See Performance Benchmarks [page 52].
New: Created a new topic on working with view-only fields. See Working with View-only Fields [page 50].
Changed: Updated a note about retaining filter values. See Creating Mass Change Requests [page 13].
Changed: Updated a note regarding the text input in the search field no longer being case sensitive. See Creating Mass Change Requests [page 13].
Changed: Updated a note about the updates related to Matrix Relationship for Position. See Creating Mass Change Requests [page 13].
Changed: Updated a note about the Modified tab displaying the list of all modified records irrespective of any sort or filter options applied to the list of Position records. See Creating Mass Change Requests [page 13].
Changed: Updated the bullet list regarding information about the custom date filter. See Mass Data Management [page 10].
Changed: Updated a note regarding modifying or deleting any field from the mass data configuration. See Configuring the Mass Change Request UI [page 11].
New: Added data validations, changes to workflow configuration, and behavior of the sequence number in a non-recurring pay component data import process. See Non-Recurring Pay Component Imports [page 77].
Changed: Updated the behavior of the sequence number when enabled in Full Purge mode during a non-recurring pay component data import process. See Non-Recurring Pay Component Imports [page 77].
Changed: Removed non-recurring pay component from the list of entities supported with Centralized services. See Identical Record Suppression with Data Imports [page 126].
Changed: Address Imports, Work Permit Imports, and Social Accounts Information Imports are now universally supported by Centralized Services. See Centralized Services for Employee Data Imports [page 43].
Changed: Added information about forward propagation with same and different entity types. Currently, if you insert the same entity type within an existing time slice, the data isn't propagated forward. Only insertion of a different entity type within an existing time slice is propagated forward. See Forward Propagation of Data with Imports [page 133].
Added: Added a note stating that Termination Details doesn't support imports through a .zip file. Ensure that the imported file is in CSV format. See Attaching Documents with Data Imports [page 157].
Added: Added a new topic "Monitoring Jobs". See Monitoring Jobs for Imports [page 108].
Added: Added information about downloading the base model fields for CSF-configured entities. See Downloading Import File Templates [page 58].
Changed: Added a note of type 'Restriction' stating that forward propagation of data for the Pay Component Recurring entity currently doesn't support calculated fields. See Forward Propagation of Data with Imports [page 133].
Added: Added a note mentioning that when a full purge is performed by changing only the end dates, with the flag Suppress update of identical records during Employee Central import for supported entities enabled on the Company System and Logo Settings page, a warning is displayed stating that identical records are suppressed. However, this warning can be ignored; the records have been saved successfully. See Uploading Import Files in Full Purge Mode [page 97].
July 9, 2021
Changed: Added a note saying that before importing data for new dependents with Consolidated Dependent import, ensure that their Biographical Information is imported. See Consolidated Dependents Imports [page 93].
May 7, 2021
Changed: Created a separate topic for Foundation data and related information. Moved the related content out of the Employee Data Imports topic. See Foundation Data [page 27].
Added: Added a use case about how business rules derive a previous value while evaluating conditions, as part of the data import process. See Use Cases: Business Rules [page 164].
Added: Added a use case to explain the behavior of a permission that lets you update existing Global Information records when importing data in Full Purge mode. See Global Information Imports [page 80].
New: Job History, Compensation Information, Pay Component Recurring, and Social Accounts Information are new entities supported by Centralized services. See Centralized Services for Employee Data Imports [page 43].
Changed: The Addresses entity is now universally supported by Centralized services. See Centralized Services for Employee Data Imports [page 43].
Added: Added a topic about how Centralized services support end date correction with various data imports. See End Date Correction with Data Imports [page 116].
Added: Added a topic about the error management and data validations by Centralized services. See Data Validation and Error Management with Data Imports [page 115].
Changed: Added information that the base model of an entity is always considered while importing data. See Uploading Import Files in Full Purge Mode [page 97].
Added: Added a note describing the characteristics of workflows associated with the Job History entity. See Configuring Workflows for Data Imports [page 171].
Changed: Revised information about setting a batch size for employee and foundation data imports. See Configuring Data Import Settings [page 47].
Added: Added information about automatic interruption and cancellation of data import jobs upon failing to meet the required criteria. See Automatic Interruption and Cancellation of Import Jobs [page 176].
Added: Added a note about business rules getting executed based on rule context settings. See Assigning a Business Rule to an Employee Central Entity [page 162].
Added: Added information about managing passwords for users created as part of basic user imports. See Basic User Information Imports [page 64].
Added: Added a prerequisite about the permissions required to work with fields that are Generic Objects when creating a mass change request. See Creating Mass Change Requests [page 13].
Added: Added a note about how to resolve a dependency when a field added to the mass change configuration has a parent. See Configuring the Mass Change Request UI [page 11].
Changed: Refactored the topic about suppressing identical records from a task to a concept, as there is no configuration required. The setting is enabled by default. See Identical Record Suppression with Data Imports [page 126].
Added: Added a new topic to describe how sequence number generation and correction works with Centralized services, with related use cases. See Sequence Number Generation and Correction with Data Imports [page 146].
Added: Added a topic to list unique use cases related to identical record suppression with various data imports. See Use Cases: Identical Record Suppression [page 128].
Added: Added a topic to list unique use cases related to business rule execution with various data imports. See Use Cases: Business Rules [page 164].
Added: Added a sub-section about the sequence number and its significance while preparing data for Job History and Compensation Information imports. See Job History Imports [page 70] and Compensation Information Imports [page 74].
Everything about the different methods in Employee Central to perform mass transactions on employee and/or
company data.
The scenarios for performing mass changes can be many, ranging from a system-wide data migration to ad-hoc
data updates.
SAP SuccessFactors provides dedicated solutions to help achieve your objective. This guide elaborates on the
different options available in Employee Central for mass data changes.
Mass Data Management [page 10]: Use this solution to perform bulk updates to data objects from the UI. Supported objects: Position data objects. Users: System Administrators, HR Representatives.
Foundation Data [page 27]: Use this solution to import foundation data, and perform mass changes to foundation data. Supported objects: Foundation data objects, including MDF (Generic) data objects. Users: System Administrators.
Employee Data Imports [page 43]: Use this solution to import employee data, and perform mass changes to employee data. Supported objects: All HRIS entities. Users: System Administrators.
Managing Mass Changes [page 17]: Use this solution to perform mass changes to Job Information and Job Relationship data. This solution performs mass changes on the basis of business rules. Supported objects: Job Information and Job Relationship entities. Users: System Administrators.
Mass Changes to Positions [page 19]: Use this solution to perform mass changes to Position data. This solution performs mass changes on the basis of business rules. Supported objects: Position data objects. Users: System Administrators.
Mass Data Management is a UI based solution to perform bulk changes to data in Employee Central.
Mass Data Management is a simple and effective solution for performing data modifications in bulk. It moves away
from the traditional approach of configuring rules and a backend job for performing mass changes, to a UI based
approach where you can exercise much better control over the data you want to modify.
Whether you are an administrator with a strong technical background or an HR representative with a functional
(non-technical) background, you can use this solution to address your mass change requirements.
• A Fiori based UI, providing a simple, adaptive, and intuitive user experience.
• Comparatively flat learning curve, requires little to no prior technical knowledge.
• Readily extensible to support a wide range of data objects in Employee Central.
Currently, Mass Data Management supports changes to Position data objects only.
To perform mass changes and related tasks, you must have the following permissions.
Permission category: Administrator > Manage Mass Data Management
Permission: Enable Mass Data Management
Description: Provides access to the Admin Center > Mass Data Management page.
Before you create your first mass change request, you must configure the UI to include data object fields that can
be filterable, editable, and visible.
Context
The mass change UI configuration acts as a canvas for all your mass data transactions. The different sections of
the mass change UI aren't preconfigured by default. As the administrator, you can configure these sections by
adding fields from the target data object to:
• Search data
• Display data, and
• Modify data.
Therefore, creating a mass change UI configuration is necessary to be able to create change requests.
Note
You can only have one mass change UI configuration for each instance.
Procedure
Based on this value, your mass change request is processed synchronously (instantly) or asynchronously (in
the background). If the number of records you’ve modified exceeds this value, the mass change request is
processed asynchronously. The data to process includes your changes, and changes resulting from
associated business rule execution. Changes resulting from business rule execution are always processed
asynchronously.
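The threshold behavior described above can be sketched as follows. This is an illustrative sketch only; the function name, the threshold value, and the record counts are assumptions for illustration, not actual system values.

```python
# Illustrative sketch (not actual SAP SuccessFactors code): decide whether a
# mass change request is processed instantly or in the background based on a
# configured threshold. Values here are hypothetical.

def processing_mode(record_count: int, threshold: int) -> str:
    """Return the processing mode for a mass change request.

    If the number of modified records exceeds the threshold, the request
    is processed asynchronously (in the background); otherwise it is
    processed synchronously (instantly).
    """
    return "asynchronous" if record_count > threshold else "synchronous"

print(processing_mode(record_count=50, threshold=100))   # synchronous
print(processing_mode(record_count=150, threshold=100))  # asynchronous
```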
Note
5. Add fields from the position data object to the configuration, by selecting from the fieldName dropdown.
Note
• The <effectiveStatusStr> field is added to the configuration by default, and can’t be removed.
• Fields of data type User aren’t supported currently.
6. To configure a field to be filterable by default on the mass change UI, set the corresponding defaultFilter
value to Yes.
7. To configure a field as a filter, select Details and set it as filterable.
Note
If the field has a parent, ensure to add the parent field to the configuration and set it as filterable.
8. To configure a field to be editable as part of a mass change request, select Details and set it as editable.
Note
• If the field has a parent, ensure to add the parent field to the configuration and set it as editable.
• Except <externalCode>, all other fields can be configured as editable.
9. Optional: Add matrix relationship to the configuration by selecting from the associationName dropdown.
Note
By default, matrix relationship is editable as part of a mass change request. You can’t configure the
association to be filterable as it isn't supported.
Results
You’ve created a job configuration. You can perform either of the following actions from Take Action:
• Modify a configuration.
Note
You can't modify a configuration if there are mass change requests in the Processing Draft status. You can
edit once the requests are moved to the Draft status.
Caution
All drafts associated with a mass change configuration are deleted if you:
• Modify or delete fields from the job configuration.
• Delete the job configuration that includes draft requests.
Next Steps
Prerequisites
Context
You can create a mass change request to update position records only.
Procedure
Note
Effective Date is the only filter criteria that's mandatory. Though it's initially preselected to show the current
date, you can select any other date in the past or future, as applicable.
You can retain filter values while viewing the Draft screen in Mass Data Management. By hovering over
Other Filters, you can also view the list of filter values applicable for that search result. Currently, a filter
group can contain a maximum of 8 filters (excluding Effective Date).
Records matching the filter criteria are fetched. Position and Start Date columns are added to the table
configuration by default.
Note
If you initiate a search with no criteria in the search box, an empty page appears with no data. You need to
refresh the page to load the data again.
To change the table configuration, select (Columns). All fields added to your mass change UI configuration
are displayed here.
Recommendation
For a better user experience, we recommend adding not more than 15 columns.
Note
• In the Matrix Relationship for Position section, you should select the values in sequential order to
ensure the values are editable; otherwise, the values are in read-only mode.
For example, suppose you configure 5 fields for Matrix Relationship For Position: Type, Related
Position, Custom Field 1, Custom Field 2, and Custom Field 3. The Type and Related Position values
must be provided to make Custom Field 1 editable. If the Custom Field 1 value is
provided, the system enables Custom Field 2, and so on.
• When you modify a matrix relationship for a position, you can view the changes in the Matrix
Relationship for Position dialog box. The old matrix relationship record is crossed out and the new
value is displayed.
• You can't add a new matrix relationship by selecting a relationship type that already exists for any
of the selected positions.
Modified records are shown with a clickable link Modified. Select the link to see your edits, and the changes
resulting out of business rule execution.
11. Choose Apply and then OK to apply the changes to the selected records.
The Modified page appears. You can view a list of all modified records on this page.
Note
The onChange rules configured on the modified fields are triggered on applying the changes. All changes
are applied to the selected records, and your mass change request is saved as a draft. Depending on the
number of records you’ve modified, your mass change request is processed synchronously or
asynchronously. During asynchronous processing, your changes are processed in the background and the
results become available once processing completes. The status of your mass change request is
temporarily set to DRAFT PROCESSING. You can't make any changes to the mass change request until the
status changes back to DRAFT. You can have at most 10 drafts active at a given point in time. Drafts that
remain unchanged for 30 days from the date of creation are automatically deleted.
12. Optional: If you want to edit other records from the list, choose the All tab and make the necessary changes.
13. If you want to review records containing errors, choose the Containing Errors tab.
You can also cancel the mass change request anytime as long as the request is in the draft status.
Note
16. Optional: At this point, you can share the mass change request with other users. To do so, go to the Mass
Actions Overview page and select the checkbox against your mass change request. Then select Share.
You can also add a custom message before sharing. An email notification is sent to all the recipients.
Note
A mass change request is successfully created, and you’re redirected to the Mass Actions Overview page. The
change request is saved in the PROCESSING status.
Results
You’ve successfully initiated a mass change request. You'll receive an email notification after the request is
processed successfully. A log file is also created for reference purposes, which is available for 6 months from its
creation date.
Mass change requests failing to complete successfully or containing any errors are classified with the status
ERROR. You can review the change request or notify someone to resolve the errors.
Related Information
Simultaneously make changes to a lot of job information and job relationship records.
Prerequisites
You have view and edit permissions for mass changes: Administrator Permissions Manage Mass Changes
Context
You can make effective-dated changes to employee records in one batch job. A common example of a mass
change is moving many employees to a new manager, a new building, or a new project.
Procedure
Employee Group: Select a group from the list or create a new group. If you select an existing group, you can select View to review the members of that group.
Effective Date: Select the date the change comes into effect.
Mass Change Area: Select Job Information or Job Relationship, or fields in those areas.
Event Reason: Select the relevant event reason, for example, Data Change.
4. Select Save to save this as a draft version. Select Save and Initiate to run the batch immediately.
In the Mass Changes list, you can see all mass changes created in the system or filter to see those created by
you. For mass changes with the status Draft, select the mass change to open it again.
Once the batch is run, the status changes to Completed Successfully or Completed with Errors. For batches with
errors, you cannot reinitiate them. However, you can copy them using Actions to create a new batch.
For successful batches, new time slices are created in the Job History for the affected employees.
You can see the results of the batch job in a CSV where necessary.
There is a mass change feature you can use to make changes simultaneously to a large number of positions.
Overview
• You can use a single rule to define the position target population and the change attributes.
• Changes are effective dated.
• Changes to positions can be synced to incumbents.
• The default is for the Mass Change Run object to be secured by a role-based permission (RBP). Only users with
the relevant permission can make mass changes.
Prerequisites
• The Mass Change Run object is RBP-secured by default. You grant access to it by going to the Admin Center
and choosing Manage Permission Roles . You can find the permission under Miscellaneous Permissions.
• You grant access to Manage Mass Changes for Metadata Objects also in Manage Permission Roles, this time
under Metadata Framework.
Restrictions
• If a pending position is valid for the change, but the effective start date is before the change date, the relevant
record can't be updated.
• No role-based permissions are applied to selecting and changing the positions.
• Only Set statements are allowed in the Select And Update Rule THEN condition.
• If any record in the mass change run runs into an optimistic locking exception, the whole batch is
rolled back. For more information, see the Related Links section below.
You create a mass change run by going to the Admin Center and choosing Manage Mass Changes for Metadata
Objects. Here's an example, showing the sort of entries you can make:
• Code
Unique code for the new mass change run.
• Name
Translatable name for the new mass change run.
• Object Type To Be Changed
Indicates whether the object is a position or a time object.
• Change Date
This is the date on which the changes take effect. All records (active, inactive, pending) valid from this date that
match the IF condition of the Select And Update Rule are included.
• Synchronize To Incumbents
This indicates whether the changes in the position objects should be synchronized to the incumbents.
• Select And Update Rule
Enter a rule that defines which objects are selected and what is updated. Use IF conditions to restrict the
number of objects to be changed by this mass change run. Use SET statements in the THEN condition to
define the new values of objects.
Note
Only rules created with the Update Rule for Mass Change Run rule scenario can be selected. Only SET
statements are supported in the THEN condition.
• Execution Mode
You can select Run or Simulate. When you choose Simulate, the mass changes aren't saved, but you can see
the result in the log. When you choose Run, the mass changes are executed and saved. You can see the result in
the log.
• Execution Status
This field shows the status of the mass change run:
• Scheduled means that the mass change run is scheduled and will run soon.
• In Progress means that the mass change run is still running.
Note
If you select Simulate or Run as the execution mode and then save the mass change run, the system triggers
a QUARTZ job (name: MassChangeRun_<Code><UniqueNumber>) that processes the change. So, it may take
a while before the run starts. You can review the status of the QUARTZ job in the Execution Status field by
reloading the mass change run, or review the job directly by going to the Admin Center and choosing Monitor Job.
After the mass change run has finished, the user who started the mass change run receives an email. The
details can be found in the Log section of the mass change run.
Note
In order to ensure that the mass change runs execute as smoothly as possible, we recommend that you enable
the rule cache.
The result is that the mass change run is much quicker than before, and you're significantly less likely to
encounter a timeout.
Processing Details
Let's assume the mass change run discussed above is executed. The assigned rule "PosJobTitleChange" would
look like this:
It is also possible to modify the data of a composite object, such as Matrix Relationship. The rule shown in the
picture below is an example of such a rule.
The IF condition returns all positions that are assigned to company = SAP and that already have a matrix
relationship maintained with Type = HR Manager Position and Related Positions is not equal to Expert Developer.
The SET statement updates this existing matrix relationship and sets the related positions to Expert Developer.
The SET statement on a composite object creates a new record if no record specified by the Select-Statement
in the SET-Condition was found. So, it's important to restrict this in the IF condition as shown in the screenshot
above.
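The IF/SET semantics described above can be sketched as follows. The in-memory data model and field names here are illustrative assumptions; the actual rule is configured in the rule engine, not written as code. The sketch only updates existing matrix relationship records, mirroring the restricted IF condition that prevents new records from being created.

```python
# Illustrative sketch of a mass change rule on a composite object
# (matrix relationship). Data model and field names are hypothetical.

positions = [
    {"company": "SAP", "matrix_relationships": [
        {"type": "HR Manager Position", "related_position": "Developer"}]},
    {"company": "SAP", "matrix_relationships": []},  # no matching record: untouched
]

for pos in positions:
    # IF: company = SAP AND an existing relationship of type 'HR Manager
    # Position' whose related position isn't already 'Expert Developer'.
    if pos["company"] == "SAP":
        for rel in pos["matrix_relationships"]:
            if (rel["type"] == "HR Manager Position"
                    and rel["related_position"] != "Expert Developer"):
                # SET: update the existing matrix relationship record.
                rel["related_position"] = "Expert Developer"

print(positions[0]["matrix_relationships"][0]["related_position"])  # Expert Developer
```

Because a SET on a composite object would create a new record when no matching record is found, the IF condition in the sketch checks for an existing relationship of the right type before updating, just as the documentation recommends.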
Related Information
Optimistic Locking
Foundation Data consists of information that defines your company, and everything else related to it such as the
organization, pay, job structures, and so on.
Foundation Data is commonly shared across the company. Therefore, it’s important to import foundation data
before importing employee data. You can import foundation data and also translate it to ensure that the
proprietary information of your company is available in different languages.
There are preconfigured templates available in Employee Central to import Foundation Data. Each template
corresponds to a data object, also known as a Foundation Object (FO). Each Foundation Object stores a specific
type of foundation data.
Foundation Objects can be broadly classified into the following types, which include legacy Foundation Objects as
well as Metadata Framework Foundation Objects (Generic Objects):
Remember
Ensure you have the Administrator Permission Manage Foundation Objects Import Translations
permission to import data and perform translations for selected foundation objects.
Related Information
Prerequisites
Corporate Data Model must be configured with the required Foundation Object definitions.
Importing Foundation Data is one of the first steps in the data import process. You can import Foundation Data
using preconfigured file templates available in Employee Central for each Foundation Object.
Tip
To ensure you’re using the up-to-date version of the CSV template, always download a copy from the system.
This topic covers the process of importing Foundation Data with legacy Foundation Objects. For information about
importing Foundation Data and Translations with MDF Objects (also known as Generic Objects), refer to the
Related Information section.
• Legal Entity
• Local Legal Entity
• Pay Group
• Department
• Business Unit
• Job Function
• Job Classification
• Local Job Classification
• Cost Center
• Pay Calendar
• Division
Procedure
Before importing Foundation Data, it’s important that you keep note of the associations established between
Foundation Objects while setting up the Corporate Data Model. To ensure that the import is successful, you
must import data with a 'Child' Foundation Object before importing data with a 'Parent' Foundation Object.
Example
Let's suppose that you’ve created an association between Legal Entity and Department Foundation Objects,
where Legal Entity is the parent of Department Foundation Object. As a result, the import file template of
For more information about associations between different Foundation Objects, refer to the Related
Information section.
7. Choose a method of data import.
• Select Full Purge, if you’re importing the data for the first time, or you want to completely overwrite existing
data with information included in the template. Existing records not included in your import file are
unaffected.
Example
You add a new record for Cost Center B by importing data in Full Purge mode.
RESULT:
• Existing records with Cost Center ID as A remain unchanged.
• There is a single record for Cost Center ID as B.
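The Full Purge behavior in this example can be sketched as a key-wise replacement; the record structure, IDs, and function name are illustrative assumptions, not the actual import implementation.

```python
# Illustrative sketch: Full Purge fully replaces existing data key-wise with
# the records in the import file, while existing records whose keys aren't in
# the file remain unchanged. Cost Center IDs and fields are hypothetical.

existing = {"A": {"name": "Cost Center A"}}       # already in the system
import_file = {"B": {"name": "Cost Center B"}}    # records being imported

def full_purge(existing: dict, import_file: dict) -> dict:
    result = dict(existing)      # records not in the import file are unaffected
    result.update(import_file)   # records in the file fully replace any match
    return result

after = full_purge(existing, import_file)
print(sorted(after))  # ['A', 'B'] -> A unchanged, single record for B
```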
Note
This option is selectable for all Foundation Objects except Dynamic Role and Position Dynamic Role, which
support data imports in Full Purge mode only.
Note
• If the number of records is greater than 10, the data is imported in the background.
Next Steps
Repeat the process to import data with other Foundation Objects as required.
Related Information
By translating Foundation Data, you can ensure that proprietary information of your company is available in
different languages.
Since foundation data includes fundamental information about your organization, pay, and job structure, which is
reflected globally, you can create multilingual versions that let users across the globe view this information in
their preferred language. Foundation data is translatable only if the user's preferred language is supported by the
system. To find the list of supported languages, go to Options Change Language .
Note
Foundation data isn’t translated in the People Profile. The People Profile shows basic foundation data, such as
organizational information (division, department, location, and so on).
Related Information
Prerequisites
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Context
In terms of classification, the following data objects are identified as legacy Foundation Objects.
• Location Group
• Geo Zone
• Location
• Pay Range
• Pay Grade
• Pay Component
• Pay Component Group
• Frequency
• Dynamic Role
• Workflow Config
• Event Reason
You can translate the name and description attributes in all these Foundation Objects.
You can also translate custom fields in the Foundation Objects, provided they are of type translatable.
Note
The following procedure shows how you can perform a mass translation of Foundation Object data. If you want
to add or update translations to specific instances of legacy Foundation Objects, refer to the Related
Information section.
Procedure
Note
A business key for an MDF object is a set of fields of the MDF object that can be used as a unique key.
Hide External Code (visible only when Key Preference is set to Business Key): Select one of the following:
• Yes
• No (default value)
3. Select Download.
A popup window appears prompting you to select a location on your computer to download the zip file. Select
an appropriate location and save the zip file on your computer.
4. Prepare the import file template with data to import.
externalCode: This column must contain the external code assigned to all the existing legacy Foundation Objects by the system. This value doesn’t correspond to any external code that you’ve created or imported as an Administrator. It’s the FoTranslation-specific external code that's created when you configure your system to enable Foundation Object translation.
foType: This column must contain the type of the legacy Foundation Object. Example: businessUnit, jobFunction, company, to name a few.
value.<locale ID>: There's one column for each language pack selected in Provisioning. These columns must contain translations of the data that you’ve entered for the foField of a given foType respectively.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
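A minimal sketch of preparing such an import file programmatically is shown below. The column set (externalCode, foType, and one value.<locale ID> column per language) follows the description above; the example rows, external codes, locales, and file name are purely illustrative assumptions.

```python
import csv

# Sketch: prepare a FoTranslation import file with the columns described above.
# The externalCode values must be the system-generated FoTranslation codes;
# the rows, locales, and file name here are hypothetical examples.
header = ["externalCode", "foType", "value.en_US", "value.de_DE"]
rows = [
    ["4711", "businessUnit", "Products", "Produkte"],        # hypothetical data
    ["4712", "department",   "Development", "Entwicklung"],
]

with open("fo_translation_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```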
Example
5. To upload the template, select Import Data from the Select an action to perform dropdown on the Import and
Export Data page.
6. Select one of the following options:
• CSV File, if you’ve downloaded the template excluding any related dependencies. On select, a form appears
with the following fields:
Field Value
File Encoding Choose the correct file encoding for your data.
Note
A business key for an MDF object is a set of
fields of the MDF object that can be used as a
unique key.
• ZIP File, if you’ve downloaded the template including related dependencies. On select, a form appears with
the following fields:
Field Value
If successful, data translations for legacy Foundation Objects are imported into the system.
Results
After the import, the system determines the language in which foundation data is shown to users based on the
following order of precedence:
1. Log on language of the user: The user's preferred language as selected in the Options Change Language
section.
2. Default company language: Used when no translation is available in the logon language of the user. The default company language is based on the configuration setting made in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Prerequisites
Data translations for legacy foundation objects are imported into the system.
Context
After importing data translations for legacy foundation objects through the mass import method, you can modify
translations in an existing instance of a legacy foundation object if required. To do so, you must edit the respective
foundation object instance and modify the translations by adding or updating data as necessary.
Recommendation
Follow this method only if you want to add or change a few terms in the legacy foundation object instance.
Example: Revising terms that have not been translated appropriately.
For information about mass importing translations for legacy foundation objects, refer to the Related Information
section.
Procedure
Example
Location Group
3. Select a corresponding instance of legacy foundation object type from the second Search dropdown.
The selected data object instance is displayed on screen.
Note
Translatable fields of the data object are marked with an icon.
A popup window appears with a list of supported languages and free text fields for entering translations. The list of languages corresponds to the language packs activated in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
6. Click Finished.
7. Enter the translations for other fields in a similar manner.
8. Click Save.
The foundation object instance is updated with the new translations.
Related Information
Create multilingual versions of data imported with Metadata Framework foundation objects.
Prerequisites
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Context
You can import data as well as translations for Metadata Framework objects at the same time. In Employee Central, the following data objects are identified as Metadata Framework foundation objects (also known as Generic Objects):
• Legal Entity
• Business Unit
• Cost Center
• Division
• Department
• Job Classification
• Job Function
• Pay Group
• Pay Calendar
You can translate the name and description attributes in all these foundation objects.
Note
You can also translate custom fields in the foundation objects only if they are of type translatable.
Note
The following procedure shows how you can perform a mass import of Generic Object data and translations. If
you want to add or update translations in specific Generic Object instances, refer to the Related Information
section.
Procedure
Field Value
Example
Legal Entity
Note
A business key for an MDF object is a set of fields of
the MDF object that can be used as a unique key.
Hide External Code (visible only when Key Preference is set to Business Key): Select one of the following:
• Yes
• No (default value)
3. Select Download.
A popup window appears prompting you to select a location on your computer to download the zip file. Select
an appropriate location and save the zip file on your computer.
4. Prepare the import file template with the required data.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
5. To upload the template, select Import Data from the Select an action to perform dropdown on the Import and
Export Data page.
6. Select one of the following options:
• CSV File, if you’ve downloaded the template excluding any related dependencies. On select, a form appears
with the following fields:
Field Value
Select Generic Object Select the respective foundation object for which you
want to import translations.
File Encoding Choose the correct file encoding for your data.
Note
A business key for an MDF object is a set of
fields of the MDF object that can be used as a
unique key.
• ZIP File, if you’ve downloaded the template including related dependencies. On select, a form appears with
the following fields:
Results
1. Log on language of the user: The user's preferred language as selected in the Options Change Language
section.
2. Default company language: Used when no translation is available in the logon language of the user. The default company language is based on the configuration setting made in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Related Information
Prerequisites
Data translations for Metadata Framework foundation objects (also known as generic objects) are imported into the system.
After importing data translations for generic foundation objects through the mass import method, you can modify
translations in an existing instance of a generic foundation object if required. To do so, you must edit the respective
foundation object instance and modify the translations by adding or updating data as necessary.
Recommendation
Follow this method only if you want to add or change a few terms in the generic foundation object instance.
Example: Revising terms that have not been translated appropriately.
For information about mass importing translations for generic foundation objects, refer to the Related Information
section.
Procedure
Example
Legal Entity
3. Select a corresponding instance of generic foundation object type from the second Search dropdown.
The selected data object instance is displayed on screen.
Note
Translatable fields of the data object are marked with an icon.
A popup window appears with a list of supported languages and free text fields for entering translations. There is also an entry that shows the default value currently set for the selected field. The list of languages corresponds to the language packs activated in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
6. Click Finished.
7. Enter the translations for other fields in a similar manner.
8. Click Save.
The foundation object instance is updated with the new translations.
A solution, part of SAP SuccessFactors Employee Central, for performing mass changes to employee data.
Employee Data Imports is a dedicated method to help you execute changes to employee data in bulk. Using CSV templates corresponding to each HRIS element in Employee Central, Employee Data Imports enables you to manage changes to employee data efficiently.
Employee Data Imports can be used to manage data for different types of users in your company, such as employees, contingent workers, and so on.
The following are some of the prominent scenarios where mass data transactions can be administered using Employee Data Imports:
Currently, Centralized services support several participating entities to bring data consistency and a host of
performance-related improvements to the data import process. To know more about Centralized services, refer to
the Related Information section.
Related Information
Centralized services is an umbrella term for a collection of specialized services governing different processes in
Employee Central.
Centralized services is a framework that acts as a common platform supporting various features in Employee Central, with the aim of bringing consistency without compromising on quality.
Centralized services are enabled by default, and are applicable to data imports initiated from the Import Employee
Data page or OData APIs.
Note
You have the option to deselect settings that are either Admin Opt-out or Admin Opt-in, in which case the
respective entity exhibits legacy behavior without Centralized services.
The following HRIS entities are currently supported by Centralized services on Imports:
Address: Universal
Recurring Pay Component: Admin Opt-out (Admin Center > Company System and Logo Settings > Enable Centralized Services for Recurring Pay Component Information)
The following HRIS entities are currently not supported by Centralized services on Imports:
Unsupported Entities
Basic Import
Background Import
Consolidated Dependents
Extended Import
Emergency Contact
Person Relationship
Before beginning the data import process, you must prepare your system to meet certain prerequisites.
Key Considerations
• To ensure that data is imported successfully, it’s imperative that you have a thorough knowledge of the HR data
to be migrated as well as the data models across both the applications.
• You must account for dependencies or constraints that can impact the data import process, if any. If there’s
Personally Identifiable Information or confidential data present, you must be aware of the parties who can
review and validate the data.
• Employee Central is set up and data models, rules, and foundation objects are configured in accordance with
the configuration work book.
• Role-based permission model is implemented in your system.
• Configuring the system for optimal import performance.
• Implementing input validation check on import data.
• To perform uploads in preferred languages supported by the system, ensure data translations are added to
FO's, GO's and Picklists.
Note
At any given time, you can execute a maximum of three Employee Data Import jobs simultaneously. If a new job is submitted during this time, the new job is added to the queue until one of the ongoing jobs has completed its execution.
Related Information
Review the existing system settings to ensure an optimal performance during the data import process.
Context
Before you begin the data import process, be it foundation data or employee data, there are a few important
system settings you must review. Depending on the volume of data to be imported, you can configure these
settings accordingly. Based on established performance benchmarks, we also provide recommended values for
your convenience.
Procedure
Recommendation
3. Enter a value in the Set a batch size for employee and foundation data imports. (Enter a value between 1-100)
field.
The system divides and groups the records in your import file into smaller units called 'batches'. The batch size
is the number of records that the system processes in each batch. It's recommended that you set a batch size
value between 1-100.
Note
A batch size value greater than 100 is not considered, and is automatically truncated to the maximum limit
of 100.
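The batching behavior can be sketched as follows. This mirrors the documented 1-100 range and the truncation rule; it is an illustration, not SAP code:

```python
def effective_batch_size(requested: int, maximum: int = 100) -> int:
    # Values above the documented maximum are truncated to 100;
    # clamp to at least 1, since the valid range is 1-100.
    return min(max(requested, 1), maximum)

def split_into_batches(records, requested_size):
    # Divide the import records into groups of at most the effective size.
    size = effective_batch_size(requested_size)
    return [records[i:i + size] for i in range(0, len(records), size)]

# A request of 999 is truncated to 100, so 250 records form 3 batches.
batches = split_into_batches(list(range(250)), 999)
```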
This value corresponds to the number of days to retain details of the scheduled import jobs, for monitoring and
reporting purposes.
5. Save the settings.
The maximum number of records supported by Employee Central Imports is 30,000. If your import file has
more than 30,000 records, we recommend that you distribute the data into multiple files and upload each file
separately.
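One way to prepare such files outside the application is a small script that splits the data while repeating the header in each chunk. This is an illustrative sketch; the column names and user IDs are placeholders:

```python
import csv
import io

MAX_RECORDS = 30000  # documented per-file limit for Employee Central Imports

def split_import_file(csv_text, max_records=MAX_RECORDS):
    # Split one CSV (header plus data rows) into several CSV strings,
    # each repeating the header and carrying at most max_records rows.
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    parts = []
    for i in range(0, len(data), max_records):
        out = io.StringIO()
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(data[i:i + max_records])
        parts.append(out.getvalue())
    return parts

sample = "USERID,FIRSTNAME\n" + "".join(f"u{i},Name{i}\n" for i in range(5))
parts = split_import_file(sample, max_records=2)  # tiny limit for the demo
```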
Note
• For Employment-based entities, the maximum number of records supported by Employee Central Imports without rules execution is 30,000. If there is a large number of rules configured or if there are
Results
To import employee data and perform related tasks, you must have the required permissions.
You can define permissions for different roles before assigning them to users, so that they can perform tasks
permitted by their role. It’s useful when there are many users who can access the Import Employee Data page, and
you want to define the level of access for each user.
Administrator Permissions > Employee Central Import Settings > Enable Workflows for selected entities: Workflows attached with the selected Employee Central Import entities are triggered when you're importing employee data.
Manage Hires > Rehire Inactive Employee: You can rehire employees who have previously worked at your company.
Manage User > Allow users to view all the jobs: You can monitor jobs submitted by all users.
Admin Center Permissions > Monitor Scheduled Jobs: Allows you to monitor your import jobs using the Scheduled Job Monitor tool.
User Permissions > Employee Central Import Entities > Employee Central Import Entities (Supported Import Entities): Allows you to import employee data with the selected Employee Central Import entities.
Related Information
Related Information
For HRIS fields that are configured with Visibility as View in combination with Allow Imports status in the Manage
Business Configuration page, you can choose to allow or disallow data uploads.
Visibility = View, Allow Imports = Yes: With imports, you can upload field values for such a configuration. If you do not have these fields as part of the template, values from the previous record are copied over to the record being imported.
Visibility = View, Allow Imports = No: With imports, the HRIS fields are not visible in your template and you are not allowed to import the values. If you do not have the field as part of the template upload, values from the previous record are not copied over to the record being imported.
Note
For entities such as compensation, personal information, and job information, if you are importing data with other fields, the system allows the values of view-only configured fields to be copied over from previous records.
Configure the system to scan the data in the import files and take appropriate actions.
Context
As a security measure, you can set up your system to scan the data included in your import file and reject malicious content, if any.
Procedure
The Platform Feature Settings page appears where you can enable or disable platform features.
2. Select Security Scan of User Inputs.
3. Click Save.
Performance benchmarks are standards established with respect to the amount of time taken to import a set of
records with each employee data import entity.
Note
If you encounter issues when performing mass uploads of more than 10k records, you may choose to
• Disable rule processing by disabling execution of rules from Employee Central Import Settings Enable
Business Rules for selected entities
• Trigger rules selectively by applying rule context when configuring them in the Manage Business
Configuration page.
The benchmarks for each entity have been established in a test environment under the following conditions:
• The maximum number of records in the import file template for each entity is 10,000, unless specified
otherwise.
• Business rules are not applied while importing these records. If you have business rules applicable, there can
be a difference in the reported benchmark data and the actual data.
• Centralized services for Employee Central Imports is enabled, unless specified otherwise.
Note
By default, a single CSV file supports an import of up to 30k records. The time required to import is proportional to the number of records in your import file.
Basic Import: 18
Employment Details: 1
National ID Information (Full Purge): 2 (with Centralized Services enabled). Parallel imports using multiple import files are supported.
Personal Documents Information: 1
Emergency Contact: 1
(Incremental Load): 2
Recurring (Incremental Load): 6
Termination Details: 4
Tip
Suffix the ID with a “d” to distinguish between employees and their corresponding dependents.
Note
The duration for import remains unchanged as these dependent entities are not yet available on Centralized
services.
Related Information
A standard process to follow for performing various transactions on employee data through imports.
The Employee Data Import process involves three main steps. The following diagram highlights the different stages in the process.
You must execute these three steps to perform any kind of bulk transaction on employee data.
The following use cases describe the data import sequence for regular or full-time employees.
Note
To know about the data import sequence for contingent workers, refer to the Related Information section.
Note
1. Basic User Information: This step involves the following data imports:
Example
The social security number (SSN) in the United States, or the Permanent Account Number (PAN) in India.
To modify existing user data, follow the data import process to import the corresponding entity data. There’s no
recommended import sequence in this case.
Example
To modify employment information of users, import data for Employment Details entity.
Deleting user data is a sensitive case as it can cause downstream implications, and must be carried out carefully.
You can import data to delete information associated with a single entity or multiple entities together. For more
information, see Employee Data Deletion with Imports [page 121].
Note
Data deletion through Imports is not supported for non-effective dated entities like Biographical Information,
Employment Information, and so on.
To terminate users, follow the data import process to import Termination Details.
Rehire Users
To rehire users with a new user ID, the recommended import sequence is as follows:
1. Employment Details
2. Job History
To rehire users with an existing user ID, follow the data import process to import Job History details.
For more information, see Rehire Employees with Data Imports [page 154].
Related Information
Begin the data import process by downloading an up-to-date version of the preconfigured import file template.
Prerequisites
• The HRIS elements corresponding to the data you want to import must be configured to include the required
data fields.
• The HRIS elements that support country/region-specific templates must be configured in order to download country/region-specific import templates.
• To include country/region specific address information with Emergency Contact and Person Relationship
imports, Enable CSF support for Address in Emergency Contact and Personal Relationships Imports setting
must be enabled from the Company System and Logo Settings page.
Note
By enabling this setting, country/region-specific fields configured for different countries in the
homeAddress HRIS element will be available for selection while downloading import templates for
Emergency Contact or Person Relationship entities.
Downloading the import file template from the server is recommended to ensure that you have the most up-to-date version of the template. You can also choose to download a country/region-specific template, or customize your template by selecting from the list of available fields.
Procedure
Choose this option to download the template for the required country/region. This option appears only if the
entity you've selected supports Country/Region specific information under the Available Data Fields column.
You can select multiple countries/regions if required.
Note
For the CSF-configured entities, you have an option to download the base model fields by deselecting all
the countries in the Select country/region.
5. (Optional) If you want to encode the template in a format other than Unicode (UTF-8), select an encoding
format from the File encoding dropdown.
6. Select the data fields to include in the import template from Available Data Fields.
This option appears only if the entity you've selected supports data field selection.
The selected fields appear under the Selected Data Fields column.
Note
To remove a selected data field, click (Delete) against the respective field.
Mandatory fields, or 'Business Keys', are automatically included under the Selected Data Fields and can't be removed. In SAP SuccessFactors, each record is identified by a set of unique identifiers known as Business Keys. Based on the business keys, employee records are created or updated respectively.
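The create-versus-update decision driven by business keys can be sketched like this. The key fields shown (user_id, start_date) are illustrative, not the exact business keys of any specific entity:

```python
# Sketch: records whose business-key combination already exists are
# updates; new combinations create records. Keys here are placeholders.
existing_keys = {("emp1", "2023-01-01"), ("emp2", "2023-01-01")}

def classify(record):
    key = (record["user_id"], record["start_date"])
    return "update" if key in existing_keys else "create"

incoming = [
    {"user_id": "emp1", "start_date": "2023-01-01"},  # key exists
    {"user_id": "emp3", "start_date": "2023-05-01"},  # new key
]
actions = [classify(r) for r in incoming]
```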
Note
If you have selected multiple countries/regions, ensure that you select only the required fields as
custom data fields configured for other countries/regions that you haven’t selected can also appear in
the list.
Example: While downloading a Job Information template for Canada, custom fields configured for USA
also appear in the list of Available Data Fields.
Related Information
SAP SuccessFactors provides preconfigured templates for importing different types of employee data.
An import file template is a CSV file intended to serve the purpose of importing a specific type of employee data.
Since Employee Central identifies and stores employee data in what are known as HRIS elements, there is an
import file template corresponding to every HRIS element involved in the data import process.
Import Template | Corresponding HRIS element as present in Business Configuration UI (BCUI) | Supports country-specific templates? | Supports manual data field selection? | Requires Role-based Permissions: Employee Central Import Entities
Extended Import: No, No, No
Background Import: No, No, No
Consolidated Dependents: No, No, No
Compound Delete: No, No, No
• Biographical Information
• Employment Details
• National ID Information
Note
This template supports the import of multiple national IDs, phone numbers and email IDs.
Related Information
After downloading the import file template, the next step in the data import process is to prepare the data to import
in the required format.
Procedure
1. Ensure that all the required fields are present in the import file template.
2. Understand the format, the permitted values, and other attributes of each field in your import file template.
For more information about the business keys, Data Object tables, additional supported configurations, and related use cases, refer to the corresponding subtopic of the entity for which you're importing data.
3. Ensure that your spreadsheet management application supports the file encoding format you selected while
downloading the import file template.
Unsupported file encoding formats can cause special characters in your import file to become corrupted. If the field labels or data contain a comma (,), ensure that the corresponding labels have quotes around them.
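If you generate import files programmatically, a standard CSV library applies this quoting for you. A sketch with invented field values, using Python's csv module, whose default dialect quotes any field containing the delimiter:

```python
import csv
import io

# The default QUOTE_MINIMAL behavior wraps fields that contain the
# delimiter in quotes, which is the quoting rule described above.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["USERID", "JOBTITLE"])
writer.writerow(["emp1", "Director, Operations"])  # comma inside a field
second_line = out.getvalue().splitlines()[1]
```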
4. To retain leading zeros in your import file template, change the format of the required cell. To do so:
a. Right-click the cell, and select Format Cells.
Note
We've described the process in reference to the Microsoft Excel application, but the same logic can be
applied to other popular spreadsheet management applications.
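The same pitfall exists in any tool that auto-converts values to numbers. As a sketch (with an invented user ID): leading zeros survive only while the value is treated as text, which is the same reason the step above has you change the cell format:

```python
import csv
import io

# The csv module returns every field as a string, so leading zeros are
# kept unless the field is later converted to a number.
data = "USERID,DEPARTMENT\n007,Finance\n"
row = next(csv.DictReader(io.StringIO(data)))
as_text = row["USERID"]   # kept as '007'
as_number = int(as_text)  # becomes 7, zeros lost
```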
5. Pay special attention while mapping data with fields of the following data types:
Data Type | Description | Sample Field | Expected input in import file | Sample Value
Remember
• Provide a value corresponding to your locale.
• The system always considers active picklist values.
• Format for UK English: DD/MM/YYYY
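A pre-import check for locale-dependent date formats might look like this sketch. The DD/MM/YYYY rule for UK English comes from the note above; the US entry is an assumed contrast for illustration, not taken from the product documentation:

```python
from datetime import datetime

# Illustrative locale-to-format map; only en_GB reflects the text above.
DATE_FORMATS = {"en_GB": "%d/%m/%Y", "en_US": "%m/%d/%Y"}

def valid_date(value, locale):
    # True if the string parses under the format expected for the locale.
    try:
        datetime.strptime(value, DATE_FORMATS[locale])
        return True
    except ValueError:
        return False

uk_ok = valid_date("31/01/2024", "en_GB")
us_ok = valid_date("31/01/2024", "en_US")  # fails: 31 is not a valid month
```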
6. If your import file contains data of new employees, ensure that the <start-date> value in the initial records of
Job Information, Personal Information, and Compensation Information (with the Hire event) matches with
the Hire Date in their Employment Details.
With Centralized services enabled, the Employment Information <start-date> is updated with the effective
start date of the most recently processed Hire/Rehire record. This sync happens whenever a Hire/Rehire
record is modified or inserted.
Since Employee Central is an effective-dated system, any mismatch in the date values leads to data
inconsistencies.
7. If you're importing data for an entity other than Addresses and Job Relationship (Work Permit) imports, you
don't need to enter the <end-date> value in the import files.
Even if you specify a value, the end dates are calculated automatically based on the corresponding start dates.
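The derivation can be modeled simply: sorted by start date, each effective-dated record ends the day before the next one starts, and the last record stays open. A simplified sketch of that rule, not SAP's actual logic:

```python
from datetime import date, timedelta

def derive_end_dates(start_dates):
    # Each record ends the day before the next record starts; the last
    # record remains open-ended (represented here as date.max).
    starts = sorted(start_dates)
    ends = [s - timedelta(days=1) for s in starts[1:]] + [date.max]
    return list(zip(starts, ends))

spans = derive_end_dates([date(2023, 1, 1), date(2023, 6, 1)])
```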
8. Follow the recommended sequence for importing data for different roles.
Before you import employee data, the employee's manager and the HR representative data must be imported.
Related Information
Notable points to help you prepare data for importing Basic User Information.
General Information
Fields:
• STATUS
• USERID
• USERNAME
• FIRSTNAME
• LASTNAME
• GENDER
• EMAIL
• MANAGER
• HR
• DEPARTMENT
• TIMEZONE
Reference: Basic User Information: Data Field Definitions
Remarks: The Basic User entity is currently not supported by Centralized services.
• Assignment ID Definition
For more information about each configuration/process, refer to the Related Information section.
New passwords generated with basic user imports will serve for basic authentication and token-based SSO only.
No emails are generated for resetting passwords. To reset passwords, go to your identity provider's administrator
panel.
Related Information
Notable points to help you prepare data for importing Biographical Information.
General Information
Note
User ID is a case-sensitive field. Ensure that you enter the correct value to avoid validation errors.
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Employment Details.
General Information
Note
User ID is a case-sensitive field. Ensure that you enter the correct value to avoid validation errors.
Remember
You cannot hire a user and set the new hire as a manager in the same import. You must first hire the user and
then update the manager in a second import.
• Assignment ID Definition
• Business Rule Configuration
• Identical Record Suppression
• Rehire Former Employees (with a new User ID)
For more information about each configuration/process, refer to the Related Information section.
You can rehire previous employees either with their old user ID or give them a new ID to start a new employment record. While rehiring inactive employees with a new employment, the <Original Start Date> field located under the Employment Information section of the Employee Profile shows the effective start date of the employee. Since the <Original Start Date> of the new employment differs from that of the old employment, when you import the rehire data, the system automatically updates the <Original Start Date> field with the value provided in your import file.
You can configure your system to keep the Original Start Date of employees unaffected by data imports. To do so,
you must:
As a result, when the data is imported, even if the <Original Start Date> is updated, the custom date field will
always show the start date of the old employment.
Note
To rehire a user with their old employment information and existing user ID, use the Job History import.
Related Information
Notable points to help you prepare data for importing Global Assignment information.
General Information
Fields:
• User ID
• Assignment Start Date
Reference: Global Assignment: Data Fields Definition
Remarks: The global assignment import is currently supported by Centralized services only if Enable Centralized Services for Global Assignments (applicable only for data imports from UI and API) is enabled on the Company System and Logo Settings page.
• Assignment ID Definition
With Centralized services enabled, several supported configurations/processes are enhanced and a few new
configurations/processes are supported, which are as follows:
For more information about each configuration/process, refer to the Related Information section.
Data Validation
• When importing the Global Assignment record, the system checks whether there is an active job information record as of the global assignment start date, and either successfully imports the global assignment record or raises a validation error, meaning that the import fails.
The status of the Job Information record, as of the global assignment start date, cannot contain any of the
following statuses: Termination, Retired, or No Show.
Example
Existing Data | Action | Result
Future termination record for September 30th | Import a global assignment with start date on September 15th | Successful
Future termination record for September 30th | Import a global assignment with start date on Oct 10th | Successful
Future termination record for September 30th | Import a global assignment with start date on Oct 10th | Failed with Validations
Related Information
Notable points to help you prepare data for importing Job History.
General Information
Fields:
• User ID
• Event Date
• Sequence Number
Reference: Job History: Data Fields Definition
Remarks: Job (Information) History imports are universally supported by Centralized services.
• Business Rules
• Document Attachment
• Employee Data Deletion
• Identical Record Suppression
• Forward Propagation of Data
• Workflow Configuration
Points to note for workflow configuration:
• Workflows are triggered only if your import file contains 1 record for each user. If there are multiple records
for the same user with at least 1 record requiring approval, the data for this user isn’t imported.
• Workflows are triggered only by the entity used in the import, for example, Job History. If no workflow is
triggered by the entity in the import, all changes are immediately saved to the database, including derived
entities, such as cross-entity rule results.
• Workflows are triggered only when a new Job History record is inserted in the import.
• Workflows aren't supported for the Update and Delete operations. This means that a workflow isn't
triggered when a Job History record with a configured workflow is updated, with or without derived entities
involved, such as Compensation Information.
• Workflows aren't triggered for a new hire record, with an exception for internal hires.
• Carbon copy (Cc) role notifications are supported.
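A pre-upload check for the one-record-per-user rule above could be sketched as follows. The field name "user_id" is a placeholder for the template's user identifier column:

```python
from collections import Counter

def users_with_multiple_records(rows):
    # Return user IDs appearing more than once in the import file. Per the
    # workflow rule above, such users' data would not be imported, so their
    # records need to be split across separate files.
    counts = Counter(r["user_id"] for r in rows)
    return sorted(user for user, n in counts.items() if n > 1)

rows = [{"user_id": "emp1"}, {"user_id": "emp2"}, {"user_id": "emp1"}]
duplicates = users_with_multiple_records(rows)
```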
With Centralized services, several supported configurations/processes are enhanced and a few new
configurations/processes are supported, which are as follows:
Note
You must import Employment Details before importing Job History for Hire, Rehire, or Termination events,
because the start and/or end dates of the employment are always adjusted with these events.
Data Validation
Centralized services introduce several validations to ensure data consistency. Job History imports are validated for
the following scenarios:
• To ensure that active employees don't have inactive managers on the start date of the newly imported Job Information record, or during its validity period; in the latter case, a warning message is raised. The validation runs for active employees only.
• To ensure that exactly 1 hire record exists for each user, which is also the first record.
• To ensure the employment status doesn't change when the event or event reason is changed.
The validation checks whether your changes are compatible when you edit an existing record. You can change a
Location Change record and make it a Department Change by selecting another event reason, or you can
change 'New Hire' into 'Hire from Affiliate' - meaning it can be the same event or a different event, so long as
neither the old nor the new event introduces a status change.
It is possible that termination information changes the employment status; however, follow-on processes, such as those for Position Management, are not triggered.
• To ensure that the number of working days for each week is greater than or equal to 0 and less than or equal to 7.
• To ensure that only the Set action is used in rules where Job Information or Job Information Model is the target
object.
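The working-days check described above amounts to a simple inclusive range validation. The following is a minimal sketch; the function name and the idea of validating the value in isolation are illustrative assumptions, not the actual Centralized services implementation:

```python
def validate_working_days(working_days: float) -> bool:
    # Mirrors the documented check: the weekly working-days value must be
    # greater than or equal to 0 and less than or equal to 7 (inclusive).
    return 0 <= working_days <= 7
```

An import record with, say, 5 working days per week passes, while 8 or a negative value would be rejected.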
Position to Job Sync Validation: If you have Position Management enabled, you can execute position-specific
processes with Job History imports to validate the modified data. For more information, refer to the Related
Information section.
Higher Duty: During the Job History import of higher duty or temporary assignment records, the validations
related to higher duty or temporary assignment are executed and the reference salaries and higher duty allowance
are calculated and updated, where required.
Employment Information: Derivation to ensure that the start and/or end date of Employment Information is always adjusted whenever a Job Information Hire/Termination/Rehire record is modified.
Global Assignment: Derivation to ensure that the start Global Assignment and end Global Assignment dates on
Global Assignment records are always adjusted when the corresponding records are imported using the Job
History import.
Pension Payouts: Derivation to ensure that the obsolete pension payout dates are always adjusted when the
corresponding records are imported using the Job History import.
• Update fields other than the event reason field for a leave of absence.
Note
A leave of absence can only be added in Time Off. The system does not allow Job Information events to be created in the import.
• Update the expected return date field when it is enabled in Time Management.
Related Information
Notable points to help you prepare data for importing Job Relationships information.
General Information
Required fields:
• User ID
• Event Date
• Relationship Type
For the complete list of fields, refer to Job Relationships: Data Fields Definition.
The Job Relationships entity is universally supported by Centralized services. Several supported configurations/processes are enhanced and a few new configurations/processes are supported, which are as follows:
• Business Rules
• Employee Data Deletion
• End Date Correction
• Identical Record Suppression
• Forward Propagation of Data
For more information about each configuration/process, refer to the Related Information section.
Data Validation
Centralized services introduce several validations to ensure data consistency. Job Relationships imports are
validated for the following scenarios:
• To ensure that the start date is not before the first Job Information Hire record.
Related Information
Notable points to help you prepare data for importing Compensation Information.
General Information
Recommendation
We recommend that you include the corresponding Recurring Pay Component record on the same date as the
Compensation Information record because onSave rules on the recurring pay component are not triggered by a
Compensation Information import. This should save manual effort later.
For more information about each configuration/process, refer to the Related Information section.
Data Validation
Centralized services introduce several validations to ensure data consistency. Compensation Information imports
are validated for the following scenarios:
• When Compensation Information is imported, the system checks that the associated Pay Component
Recurring is imported and validated.
• If the Allow Retroactive Employee Data Changes permission setting is activated, the system validates changes
prior to the earliest retroactive date but only shows a warning message. If the Allow Retroactive Employee Data
Changes permission setting is deactivated, an error message is shown.
Related Information
Notable points to help you prepare data for importing Recurring Pay Component Information.
General Information
Required fields:
• User ID
• Event Date
• Sequence Number
For the complete list of fields, refer to Recurring Pay Component: Data Fields Definition.
The Recurring Pay Component import is currently supported by Centralized services only if Enable Centralized Services for Recurring Pay Component Information (Applicable only for data imports from UI and API) is enabled on the Company System and Logo Settings page.
With Centralized services enabled, several supported configurations/processes are enhanced and a few new
configurations/processes are supported, which are as follows:
For more information about each configuration/process, refer to the Related Information section.
Centralized services introduce several validations to ensure data consistency. Recurring Pay Component imports
are validated for the following scenarios:
• To ensure that the Recurring Pay Component is validated along with the corresponding Compensation
Information when the import is executed.
• To ensure that a business rule with recurring pay components as the base object can't create or delete other
recurring pay components.
• To ensure that only pay components defined as recurring can be created or updated for Compensation
Information.
Pay Component Value: For pay components of type Amount and Percentage, the system ensures that a null value
is not accepted. However, a value of 0.0 is accepted. For pay components of type Number, usually the value is not
provided by the user since it is calculated and set by the system. Any value is accepted, including null.
Frequency: Frequency is only checked if the canOverride field is set to False. An error is raised when the frequency
provided in the import does not match the value provided in the object definition. Both values can be null.
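The pay component value and frequency checks above can be sketched as follows. The parameter names and the error strings are illustrative assumptions; only the rules themselves come from the documentation:

```python
def validate_pay_component(value, pc_type, frequency,
                           defined_frequency, can_override):
    # Returns a list of validation errors, per the documented rules.
    errors = []
    # Amount/Percentage components: null is rejected, but 0.0 is accepted.
    if pc_type in ("Amount", "Percentage") and value is None:
        errors.append("null value not accepted")
    # Number components: any value, including null, is accepted (no check).
    # Frequency is checked only when canOverride is False; both values may
    # be null, but they must match the object definition.
    if not can_override and frequency != defined_frequency:
        errors.append("frequency does not match object definition")
    return errors
```

For example, an Amount component with a null value fails, while the same component with a value of 0.0 passes.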
Related Information
Notable points to help you prepare data for importing non-recurring pay component information.
General Information
Required fields:
• Issue Date
• Sequence Number (if enabled)
• Type
• User ID
For the complete list of fields, refer to Non-Recurring Pay Component Data Fields.
The non-recurring pay component import is currently supported by Centralized services only if Enable Centralized Services for Non-Recurring Pay Component Information (Applicable only for data imports from UI and API) is enabled on the Company System and Logo Settings page.
Note
In the Incremental Load mode, the business key depends on the <allow-import> setting of the sequence number field:
• <allow-import>=false: user_id + pay_date + pay_comp_code is the business key.
• <allow-import>=true: user_id + sequence_number (when the sequence number is enabled) is the business key.
Sequence Number
For the non-recurring pay component data import process, the <sequence-number> field is a required field, but
not a business key if it is disabled. This field accepts an alphanumeric value with the character limit set to 38. With
a unique sequence number, you can import multiple records having the same pay component on the same day,
without any risk of existing data being overwritten.
Based on your business requirements, there are two cases related to sequence number configuration. To check your existing configuration, go to Admin Center > Manage Business Configuration > Employee Central > HRIS Elements > payComponentNonRecurring.
The system automatically generates a sequence number for each user record. However, the value isn’t displayed on
the UI.
• In the Full Purge mode, the sequence number is automatically generated if it doesn't exist in the import record.
• In the Incremental Load mode, ensure that there are unique values in the <sequence-number> field for each
user record.
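The business-key rules noted above, together with the duplicate-sequence-number requirement, can be sketched as follows. The column names (user_id, pay_date, pay_comp_code, sequence_number) follow the note; the record layout as Python dictionaries is an assumption for illustration:

```python
def business_key(record: dict, sequence_number_enabled: bool) -> tuple:
    # <allow-import>=true  -> user_id + sequence_number
    # <allow-import>=false -> user_id + pay_date + pay_comp_code
    if sequence_number_enabled:
        return (record["user_id"], record["sequence_number"])
    return (record["user_id"], record["pay_date"], record["pay_comp_code"])

def find_duplicates(records, sequence_number_enabled):
    # Records sharing a business key would overwrite each other,
    # so flag them before importing.
    seen, duplicates = set(), []
    for record in records:
        key = business_key(record, sequence_number_enabled)
        if key in seen:
            duplicates.append(key)
        seen.add(key)
    return duplicates
```

This illustrates why unique sequence numbers let you import multiple records with the same pay component on the same day: with the sequence number enabled, two such records have distinct business keys.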
Centralized services introduce several validations to ensure data consistency. Non-Recurring Pay Component
imports are validated for these scenarios:
• A non-recurring pay component record isn't updated or deleted if it's referenced by a user's time account
payout record.
• A non-recurring pay component record is imported only if a user's Job Information record exists.
• The pay component associated with a non-recurring pay component record is active and non-recurring.
• An import file doesn't have a non-recurring pay component record with multiple data operations for the same
business key. For example, an import record doesn't have an entry for a Create operation and then a Delete
operation with the same business key.
• Duplicate sequence numbers do not exist in an import record.
Related Information
Notable points to help you prepare data for importing Personal Information.
General Information
Related Information
Notable points to help you prepare data for importing Global Information.
General Information
Required fields:
• Country
• Event Date
• Person ID External
For the complete list of fields, refer to Global Information: Data Fields Definition.
The Global Information entity is universally supported by Centralized services.
For more information about each configuration/process, refer to the Related Information section.
Global Information supports updating existing country/region-specific information, even when you're importing
data in Full Purge mode.
To do so, you must have the Support cumulative update of country-specific data for global information import in full
purge mode permission.
Example
You add new records by importing Global Information in Full Purge mode.
If you have the Support cumulative update of country-specific data for global information import in full purge mode
permission, the existing country/region-specific record of the user is updated.
If you don't have Support cumulative update of country-specific data for global information import in full purge
mode permission, the first record is updated with the new import data since the business keys match and the
second record is deleted.
Related Information
Notable points to help you prepare data for importing Phone Information.
General Information
Required fields:
• Person ID External
• Phone Type
For the complete list of fields, refer to Phone Information: Data Fields Definition.
The Phone Information entity is universally supported by Centralized services.
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Email Information.
General Information
Required fields:
• Person ID External
• Email Type
For the complete list of fields, refer to Email Information: Data Fields Definition.
The Email Information entity is universally supported by Centralized services.
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Social Account Information.
General Information
Required fields:
• Person ID External
• Domain
For the complete list of fields, refer to Social Account Information: Data Fields Definition.
Social Accounts Information is universally supported by Centralized services.
With Centralized services enabled, the Social Accounts Information entity supports the following configurations/processes:
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing National ID Information.
General Information
For more information about each configuration/process, refer to the Related Information section.
Related Information
Provide a temporary national ID in the import template for employees not having a valid national ID during the
hiring process.
Prerequisites
Note
For information about a complete list of fields available for nationalIdCard element, refer to the Related
Information section.
Procedure
Note
This step is optional. You can also leave this column blank.
On a successful import, the employee's temporary national ID will be stored in the system. You can view the
national ID information on the Employee Profile page.
Notable points to help you prepare data for importing Address information.
General Information
Note
If the Suppress update of identical records during Employee Central import for supported entities setting on the Company System and Logo Settings page is active, but the <Country> field in the Home address is missing, then the system warns you that all fields are identical.
Providing an end date in the address record for an employee isn’t a mandatory requirement as it is automatically
calculated based on the respective start date. But if you intend to provide an end date, note the following.
Note
End dates explicitly provided in the import template will be considered only with Full Purge imports. If you are
importing data in Incremental Load mode, end dates are calculated automatically.
End date correction is a method by which the system determines whether or not there can be gaps (time frame
between the end date of one record and the start date of the next record) between different address records of an
employee. While preparing import data, you can choose to specify an end date value along with other information
in your import file template. Based on whether you import the data in Incremental Load or Full Purge mode, the end
dates are adjusted automatically.
Example
Let's suppose you import the following records in Incremental Load mode.
Any date gaps between the address records are automatically adjusted. The end date of the last record is also
adjusted.
Now, if you add a new record that is active anywhere between the start date of the first record and the end date
of the last record.
The end date of the first record is corrected in accordance with the start date of the newly added record. Also,
the end date of the new record is corrected in accordance with the start date of the third record. The reason
being, the end of any record can’t be greater than the start date of the subsequent record. This process of end
date calculation holds true even if you don’t provide an end date value in your import file template.
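The Incremental Load end-date correction described above could be sketched as follows. The record layout (start/end keys) and the far-future end date assigned to the last record are assumptions for illustration:

```python
from datetime import date, timedelta

def correct_end_dates(records: list[dict]) -> list[dict]:
    # Sort address records of one type by start date, then cap each
    # record's end date at the day before the next record's start date,
    # closing any gaps. The end date of the last record is also adjusted
    # (here: to an assumed "no end" sentinel).
    records = sorted(records, key=lambda r: r["start"])
    for current, following in zip(records, records[1:]):
        current["end"] = following["start"] - timedelta(days=1)
    if records:
        records[-1]["end"] = date(9999, 12, 31)
    return records
```

For example, a record ending March 31, 2023 followed by one starting June 1, 2023 has its end date moved to May 31, 2023, because the end date of any record can't be greater than the start date of the subsequent record.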
If you’re importing data in Full Purge mode for the same address type with end dates provided, gaps in between
records are considered and end dates aren't corrected.
Let's suppose your import file template has the following records ready to be imported in Full Purge mode.
Notable points to help you prepare data for importing Emergency Contact Information.
General Information
Required fields:
• Person ID External
• Name
• Primary
• Relationship
For the complete list of fields, refer to Emergency Contact Information: Data Fields Definition.
The Emergency Contact Information entity is currently not supported by Centralized services.
• Document Attachment
• Employee Data Deletion
For more information about each configuration/process, refer to the Related Information section.
• Deleting Emergency Contact Information: Deletion of the primary emergency contact information record of
employees isn’t supported through the data import process. You must either update an existing record as the
primary record or import a new record (that will act as the primary record) before deleting the existing primary
record. However, to delete the primary record only, you can do it manually from the Employee Profile.
• Updating Emergency Contact Information: If you’re importing primary emergency contact information (in
Incremental Load mode) for an employee already having a primary emergency contact record, the existing
information is overwritten. Therefore, to add a new record and keep the existing information intact, you must
Related Information
Notable points to help you prepare data for importing Person Relationship Information.
General Information
Required fields:
• Person ID External
• Event Date
For the complete list of fields, refer to Person Relationship: Data Fields Definition.
The Person Relationship Information entity is currently not supported by Centralized services.
You can assign a dependent as a beneficiary only for Australian employees. To do so, you must enter the details of
the dependent beneficiary in the Is-Beneficiary column of your import file template.
Note
If you’ve assigned a dependent beneficiary for non-Australian employees, the system skips importing such
data. However, if the non-Australian employees have a pension payout, you can assign as many dependent
beneficiaries as required.
When preparing to import work permit data with Centralized services, understand and comply with the following requirements.
General Information
Note
• Document Attachment
• Identical Record Suppression
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Consolidated Dependent information.
The Consolidated Dependents template includes data specific to the following entities:
• Person Relationship
• Biographical Information
• Personal Information
• National ID Information
• Addresses
The recommended method is to upload the dependents' data entity by entity. You can begin by first importing the
biographical information of the dependent by assigning a suitable <person-id-external> to it. Remember to
leave the User ID column blank while importing the Biographical Information for dependents.
1. Import the Person Relationship entity. Enter the unique <person-id-external> assigned while importing Biographical Information in the Related Person Id External column. Then, add the employee's Person ID in the Person Id External field.
2. Import Addresses, National ID, and Personal Information.
However, if you choose to import the dependents in a consolidated manner, import the biographical information of
the dependent by assigning a suitable <person-id-external> to it. Remember to leave the User ID column
blank while importing the Biographical Information for dependents.
Next, import the Consolidated Dependents. Enter the unique <person-id-external> assigned while importing Biographical Information in the Related Person Id External column. Then, add the employee's Person ID in the Person Id External field.
Note
Before importing data for new dependents with Consolidated Dependent import, ensure that their Biographical
Information is imported. Failure to do this leads to data corruption.
In the Consolidated Dependents import file template, you can define a dependent's relationship, and add (or
update) Address information, Personal Information, Biographical Information, and National ID information of an
employee's dependents. However, you must ensure that there exists at least one non-effective dated record per
dependent.
Note
Related Information
KBA for Missing First Name and Last Name after Consolidated Dependents Import
Notable points to help you prepare data for importing employee termination details.
General Information
Required field:
• User ID
For the complete list of fields, refer to Termination Details: Data Fields Definition.
Termination Details imports are currently supported by Centralized services only if Enable Centralized Services for Termination Details (Applicable only for data imports from UI and API) is enabled on the Company System and Logo Settings page.
Note
This setting is not available in systems where Global Assignments or Pension Payouts are activated.
• Document Attachment
• Workflow Configuration
With Centralized services enabled, several supported configurations/processes are enhanced and a few new
configurations/processes are supported, which are as follows:
• Business Rules
• onSave rules with Job Information as source element and Job Information as the target element are
supported with Basic rule scenario.
• onChange rules for Job Information as source element are supported only with Basic rule scenario.
• onPostSave rules are supported with Job Information as the source element.
• We do not recommend making any changes to the hire date or employment end date using rules. It is
technically possible, but may cause data inconsistencies.
• Cross-entity rules with Job Information or Employment Details as the source element have several
changes. Please refer to the Cross-Entity Rules with Centralized Services topic linked in the Related
Information section.
• End Date Correction
• Identical Record Suppression
For more information about each configuration/process, refer to the Related Information section.
The system allows for the termination of multiple employments for an employee in the same import file.
For terminating the main employment of employees with concurrent employment, you must ensure that you also
set one of their active employments as their new main employment.
Example
The effective start date of concurrent employment 1 is July 1, 2019 whereas the effective start date of
concurrent employment 2 is December 1, 2019.
You can terminate the main employment of John only if you assign him a new main employment. In this case, you must assign concurrent employment 1 as the new main employment because its effective start date is before that of concurrent employment 2.
To assign a new main employment, you must enter the user ID of an active concurrent employment under the <New
Main Employment> column in your import file template.
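The selection described in the example above could be sketched as follows. Note the assumptions: the record layout is illustrative, and whether the earliest effective start date is a hard rule or simply this example's choice is not confirmed by the documentation:

```python
def pick_new_main_employment(employments: list[dict]) -> str:
    # Among the active concurrent employments, pick the one with the
    # earliest effective start date as the new main employment (an
    # assumption generalized from the example above). ISO date strings
    # compare correctly as text.
    active = [e for e in employments if e["active"]]
    if not active:
        raise ValueError("no active concurrent employment to promote")
    return min(active, key=lambda e: e["start_date"])["user_id"]
```

The returned user ID would then go into the <New Main Employment> column of the import file template.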
Data Validation
Centralized services introduce several validations to ensure data consistency. Termination Details imports are
validated for the following scenarios:
Related Information
Create new records or update existing records by uploading import files prepared with the required information.
Context
After you’ve downloaded the import file template and prepared the import data, you must upload the file to
Employee Central for creating or updating employee records as applicable. Once the file is successfully uploaded,
data is added or updated in the system accordingly.
Depending on the selected entity, you can configure how the data must be imported based on your requirement.
Employee Central provides two options:
Full Purge: Adding new data, or replacing existing data with new data.
Incremental Load: Selectively updating existing data while retaining the rest.
Note
These options are also available while importing data using OData APIs.
Related Information
Overwrite existing employee data with information in your import file by uploading data in full purge mode.
Prerequisites
• You've updated the import file with the required data to upload.
• There are no dependencies on the import data. For instance, before importing data for an employee, data for
the employee's manager and HR representative must be existing in the system.
• The data import settings are properly configured.
• You've reviewed the configuration of the respective HRIS element on the Business Configuration UI.
Note
During data imports, the base configuration of the HRIS element takes precedence over its country/region-
specific and person type configurations. For instance, if <custom-string 1> is not a mandatory field in
the jobInfo HRIS element but mandatory in its contingent worker person type configuration, it is a non-
mandatory field for Job History data imports. If such fields are present in your import file without a value, or
not present in your import file at all, a null value is automatically added to them.
Full purge imports are recommended when you’re importing data for the first time, or want to completely overwrite
existing information in the employee data files. You also have the option to upload multiple files together. For more
details, refer to the Related Information section.
Procedure
Based on your selection, additional options appear on the page. If you’ve selected Basic Import, select More
Options to further customize your import. For information about configuring advanced options with basic
imports, refer to the Related Information section.
4. If the entity you've selected supports a Purge Type, select Full Purge.
5. Select Browse to attach the import file from your computer.
6. (Optional) Depending on the selected entity, the Import Description field appears. Enter a brief description of the import job.
7. (Optional) To select a data encoding format other than Unicode (UTF-8), choose from the File encoding
dropdown.
By selecting a proper file encoding format, you can represent data in different languages as necessary.
8. (Optional) To select the file locale other than English (United States), choose from the File Locale dropdown.
The options appearing in the dropdown correspond to the language packs selected during Employee Central
configuration. File Locale allows you to choose the language and locale for the data you’re importing, which is
especially important for date, number, and picklist fields, as the format can change based on the locale.
Selecting a file locale wouldn’t affect your default logon language or the language of your UI components.
9. (Optional) Depending on the selected entity, the Real-Time Threshold field is shown. Enter an appropriate value.
You can change the Real-Time Threshold to specify a value lower than the default value shown. The user-defined threshold value can't exceed the default value. A default value is automatically displayed for the selected entity. If the number of records in your import file is greater than the Real-Time Threshold value, data is imported asynchronously. Otherwise, data is imported synchronously.
Note
The mode of importing data (synchronous or asynchronous) is automatically determined depending on the
number of records in your import file.
10. (Optional) Select Validate Import File Data to check if there are any discrepancies in the data provided in the
template.
Run this check to ensure that there are matching import headers, and the CSV file you're uploading contains
valid data. If the entity for which you’re importing data supports Centralized services, and has a parent or child
entity, then data associated with the parent or child entity is also validated.
When importing Personal Information, the associated Global Information is also validated and vice versa.
When importing Compensation Information, the associated Pay Component Recurring information is also
validated and vice versa.
Results
The data is validated and the import process is initiated. After the import process is complete, email notifications
are generated. The email notifications also include details about any errors encountered during the process.
If you are importing data for an entity that has related entities, the related entity records are implicitly copied over.
For instance, Personal Information entity is related to Global Information, and they share a parent-child
relationship. When you're importing Personal Information on a future date, the Global Information associated with
the existing Personal Information is implicitly copied over and vice-versa.
Note
When you perform full purge imports by changing only the end dates while the Suppress update of identical records during Employee Central import for supported entities flag on the Company System and Logo Settings page is enabled, a warning is displayed mentioning that identical records are suppressed. You can ignore this warning; the records are saved successfully.
Next Steps
Tip
If some batches fail because of picklist unavailability, reimport the failed records.
If a job isn’t showing any progress, it can be because the UI isn’t displaying updated results. In such cases,
note the time stamp and report an incident.
At any given time, you can execute a maximum of three Employee Data Import jobs simultaneously. If a
new job is submitted during this time, the new job is added to the queue until one of the ongoing jobs has
completed its execution.
Related Information
Selectively update existing employee data by uploading data in incremental load mode.
Prerequisites
• You've updated the import file with the required data to upload.
• There are no dependencies on the import data. For instance, before importing data for an employee, data for
the employee's manager and HR representative must be existing in the system.
• The data import settings are properly configured.
• You've reviewed the configuration of the respective HRIS element on the Business Configuration UI.
Note
During data imports, the base configuration of the HRIS element takes precedence over its country/region-
specific and person type configurations. For instance, if <custom-string 1> is not a mandatory field in
the jobInfo HRIS element but mandatory in its contingent worker person type configuration, it is a non-
mandatory field for Job History data imports. If such fields are present in your import file without a value, or
not present in your import file at all, a null value is automatically added to them.
Context
Importing data in the Incremental Load mode is beneficial when you want to update specific information while
retaining a majority of existing employee data. You can also perform what is known as a Partial Import.
Note
• You cannot enter &&NO_OVERWRITE&& against fields that are business keys.
• Business rules can update fields marked as not to be overwritten if you have the Enable execution of rules against NO_OVERWRITE permission.
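A Partial Import row can be assembled as sketched below. The &&NO_OVERWRITE&& marker and the rule that business-key fields must never carry it come from this documentation; the column names and the helper itself are illustrative assumptions:

```python
NO_OVERWRITE = "&&NO_OVERWRITE&&"

def build_partial_row(columns, updates, business_keys):
    # Build one Incremental Load row that changes only the fields in
    # `updates`; every other column carries the marker so its existing
    # value is retained. Business-key columns must have real values,
    # never the marker.
    row = {}
    for column in columns:
        if column in updates:
            row[column] = updates[column]
        elif column in business_keys:
            raise ValueError(f"business key {column!r} needs a value")
        else:
            row[column] = NO_OVERWRITE
    return row
```

For instance, updating only the job title for a record leaves the department column marked &&NO_OVERWRITE&&, so the existing department value survives the import.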
Procedure
Based on your selection, additional options appear on the page. If you’ve selected Basic Import, select More
Options to further customize your import. For information about configuring advanced options with basic
imports, refer to the Related Information section.
3. If the entity you've selected supports a Purge Type, select Incremental Load.
4. Select Browse to attach the import file from your computer.
5. (Optional) Depending on the selected entity, the Import Description field appears. Enter a brief description of the import job.
6. (Optional) To select a data encoding format other than Unicode (UTF-8), choose from the File encoding
dropdown.
By selecting a proper file encoding format, you can represent data in different languages as necessary.
7. (Optional) To select the file locale other than English (United States), choose from the File Locale dropdown.
The options appearing in the dropdown correspond to the language packs selected during Employee Central
configuration. File Locale allows you to choose the language and locale for the data you’re importing, which is
especially important for date, number, and picklist fields, as the format can change based on the locale.
Selecting a file locale wouldn’t affect your default logon language or the language of your UI components.
8. (Conditional) Define the Real-Time Threshold value.
You can change the Real-Time Threshold to specify a value lower than the default value shown. The user-
defined threshold value can’t exceed the default value. A default value is automatically displayed when you
select an entity. If the number of records in your import file is greater than the Real-Time Threshold value, the
system imports the data asynchronously. Otherwise, data is imported synchronously. Partial imports are
always asynchronous.
Note
The mode of importing data (synchronous or asynchronous) is automatically determined depending on the
number of records in your import file.
9. (Optional) Select Validate Import File Data to check if there are any discrepancies in the data provided in the
template.
Run this check to ensure that there are matching import headers, and the CSV file you're uploading contains
valid data. If the entity for which you’re importing data supports Centralized services, and has a parent entity,
then data associated with the parent entity is also validated.
Results
Fields not having any values in your import file are substituted with default values. The default value can be:
If you are importing data for an entity that has related entities, the related entity records are implicitly copied over.
For instance, Personal Information entity is related to Global Information, and they share a parent-child
relationship. When you're importing Personal Information on a future date, the Global Information associated with
the existing Personal Information is implicitly copied over and vice-versa.
Note
After the import process is complete, email notifications are generated. The email notifications also include details
about any errors encountered during the process.
Next Steps
Tip
If some batches fail because of picklist unavailability, reimporting the failed records resolves the problem. If a job is taking an unusually long time to complete, it's possible that the UI isn't displaying updated results. In such cases, capture the time stamp and report the incident.
• Repeat the procedure to upload another import file. Generally, you upload a single file to import data
corresponding to a respective entity. However, you also have the option to upload multiple files together. For
more details, refer to the Related Information section.
At any given time, you can execute a maximum of three Employee Data Import jobs simultaneously. If a
new job is submitted during this time, the new job is added to the queue until one of the ongoing jobs has
completed its execution.
Related Information
Only certain fields in your import file template support Partial Imports.
If you want to perform a Partial Import while importing data in Incremental Load mode, you cannot update
information for:
• ADDRESS
• The default Address entity does not support Partial Import.
• Location and Emergency Contact Foundation Objects also refer to this address. Therefore, these two columns do not support Partial Import either.
• WORK_PERMIT_INFO
• PAY_CALENDAR
• JOB_RELATIONSHIPS
• JOB_FAMILY
• DYNAMIC_ROLE*
• WF_CONFIG*
• WF_CONFIG_CONTRIBUTOR*
• WF_CONFIG_CC*
Prerequisites
You have selected More Options while importing basic employee data.
Context
You can configure specific tasks to be performed with basic user imports, such as manager transfer, document removal, and so on.
Procedure
• Automatic Completed Document Copy to New Manager: Moves all the documents from the old manager's Completed folder to the new manager's Completed folder.
• Automatic Inbox Document Transfer To New Manager: Moves all the documents from the old manager's Inbox to the new manager's Inbox.
• Automatic En Route Document Transfer To New Manager: Moves all the documents from the old manager's En-Route folder to the new manager's En-Route folder.
• Automatic insertion of new manager as next document recipient if not already: Makes the new manager a part of the review process and removes the old manager from accountability henceforth.
• Remove Inactive Employees' In-Progress Documents: Removes all in-progress documents of inactive employees.
• Remove Inactive Employees' Completed Documents: Removes all completed documents of inactive employees.
To find the options related to Automatic Manager Transfer and Automatic Document Removal, the Effective Dated fields in Basic Import setting must be enabled in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Next Steps
Compress multiple import files together and upload them simultaneously as a Zip file.
Prerequisites
• You've updated the import file with the required data to upload.
• There are no dependencies on the import data. For instance, before importing data for an employee, data for the employee's manager and HR representative must already exist in the system.
• The data import settings are properly configured.
• You've reviewed the configuration of the respective HRIS element on the Business Configuration UI.
Note
During data imports, the base configuration of the HRIS element takes precedence over its country/region-
specific and person type configurations. For instance, if <custom-string 1> is not a mandatory field in
the jobInfo HRIS element but mandatory in its contingent worker person type configuration, it is a non-
mandatory field for Job History data imports. If such fields are present in your import file without a value, or
not present in your import file at all, a null value is automatically added to them.
Generally, you upload a single file to import data corresponding to a respective entity. However, if you have multiple
files to upload, you can compress them together into a Zip file and upload the single compressed file. The order in
which data is imported is as follows:
• Basic Import
• Biographical Information
• Employment Details
• Personal Information Import
• Job History
If you’re creating new employee records, ensure that your Zip file contains the following import files:
• Basic Import
• Biographical Information
• Employment Details
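As a sketch of the packaging step, the import files can be compressed into a single Zip archive following the order listed above. This is an illustrative Python example using the standard zipfile module; the file names are placeholders, not the actual template names from your instance.

```python
import io
import zipfile

# Illustrative file names only -- use the actual template names
# downloaded from your instance.
IMPORT_ORDER = [
    "basic_import.csv",
    "biographical_information.csv",
    "employment_details.csv",
]

def build_import_zip(files: dict) -> bytes:
    """Compress the given import files (name -> CSV content) into a
    single Zip archive, written in the documented import order."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for name in IMPORT_ORDER:
            if name in files:
                archive.writestr(name, files[name])
    return buffer.getvalue()

payload = build_import_zip({
    "basic_import.csv": "USERID,USERNAME\n1001,jdoe\n",
    "employment_details.csv": "user-id,start-date\n1001,01/01/2020\n",
})
```

The resulting bytes are what you would save as the single compressed file and upload in place of individual CSVs.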
Procedure
• Full Purge, if you’re importing data for the first time or you want to completely overwrite existing data with
information included in the import file.
Caution
With this option, all existing employee data (historic data included) corresponding to the selected entity is removed. To keep existing information, you must include it in the import file or create a data backup, as required.
While importing data, if the system encounters fields without any value, null values are added to them.
• Incremental Load, to update employee data already existing in the system.
You can also use this option to perform a Partial Import, where only the required employee data is updated and all other details remain unchanged. To partially import data, enter &&NO_OVERWRITE&& under the fields in your import file whose values you don't want to modify. As a result, after you upload the file, all values except business keys and fields having the value &&NO_OVERWRITE&& are updated with the values provided in your import file.
While importing data, if the system encounters a field without any value, &&NO_OVERWRITE&&, or null
value is added to it depending on whether it supports partial import or not.
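The &&NO_OVERWRITE&& convention can be illustrated with a short sketch that builds one import row: business keys and updated fields keep their values, and every other field is marked so its existing value stays untouched. The column names here are hypothetical, not the actual template headers.

```python
NO_OVERWRITE = "&&NO_OVERWRITE&&"

def build_partial_import_row(all_fields, updates, keys):
    """Return one import row for a Partial Import: business keys and
    updated fields carry their values; everything else is marked
    &&NO_OVERWRITE&& so the existing value in the system is kept."""
    row = {}
    for field in all_fields:
        if field in keys:
            row[field] = keys[field]
        elif field in updates:
            row[field] = updates[field]
        else:
            row[field] = NO_OVERWRITE
    return row

# Hypothetical column names for illustration only.
fields = ["user-id", "start-date", "job-title", "department", "division"]
row = build_partial_import_row(
    fields,
    updates={"job-title": "Senior Engineer"},
    keys={"user-id": "1001", "start-date": "01/01/2020"},
)
```

Only the job title would change after upload; department and division keep their existing values.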
5. Select Browse to attach the Zip file from your computer.
6. (Conditional) Depending on the entity you’ve selected, the Import Description field appears. Enter a brief job
description.
By selecting a proper file encoding format, you can represent data in different languages as necessary.
8. (Optional) To select the file locale other than English (United States), choose from the File Locale dropdown.
Selecting a file locale doesn't affect your default logon language or the language of your UI components. It corresponds to the locale for which data is being imported. The options appearing in the dropdown are based on the language packs selected during Employee Central configuration.
9. (Conditional) Define the Real-Time Threshold value.
You can change the Real-Time Threshold to specify a value lower than the default value shown. The user-
defined threshold value can’t exceed the default value. A default value is automatically displayed when you
select an entity. If the number of records in your import file is greater than the Real-Time Threshold value, the
system imports the data asynchronously. Otherwise, data is imported synchronously. Partial imports are
always asynchronous.
Note
The mode of importing data (synchronous or asynchronous) is automatically determined by the system
depending on the number of records in your import file. During an asynchronous import, the system
validates the first 10 records. Discrepancies, if any, result in errors. After you rectify the errors and upload
the file again, the import process will continue.
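The mode decision described in the note can be summarized as a small sketch. This is an illustration of the documented rule, not system code.

```python
def import_mode(record_count: int, threshold: int, partial: bool = False) -> str:
    """Decide how an import runs: partial imports are always
    asynchronous; otherwise files with more records than the
    real-time threshold run asynchronously, smaller ones
    synchronously."""
    if partial or record_count > threshold:
        return "asynchronous"
    return "synchronous"
```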
10. (Optional) Select Validate Import File Data to check if there are any discrepancies in the data provided in the
template.
Tip
When importing data for new employees, validations like 'Person-ID-External' is invalid can appear. You can
ignore these validations and proceed with the import.
Results
The data is validated, and the import process is initiated. After the import process is complete, email notifications
are generated. The email notifications also include details about any errors encountered during the process.
Next Steps
After uploading the Zip file, you can monitor the import job by selecting Monitor Job.
Related Information
Check the execution details and other statistics of each job associated with your primary batch.
Prerequisites
To monitor jobs submitted by you, you need the Administrator Permissions Admin Center Permissions Monitor Scheduled Jobs permission. To monitor jobs submitted by others, you need the Administrator Permissions Manage User Allow users to view all the jobs permission.
Context
If the number of records in your import file is greater than the specified real-time threshold value, data imports
happen asynchronously. A job is submitted for further processing.
Procedure
Remember
Alternatively, you can navigate through Admin Center Scheduled Job Monitor to directly access the
Scheduled Job Monitor page.
You are redirected to the Scheduled Job Monitor page, which lists all the jobs (of type Employee Data Import) that you have submitted.
3. Identify the job that you want to monitor.
4. Choose Actions View Details Download Status corresponding to your job, if you want to download a
copy of the job execution report.
The status file in a CSV format is automatically downloaded to your local system.
5. Choose Close to return to the Scheduled Job Monitor page.
7.3.6 Troubleshooting
Information about typical error or warning messages that you may encounter during the data import process along
with their cause and resolution.
Error/Warning Message: Missing required field
While Importing: Manager's data, HR Representative data, or Employee data
What's the Cause? Required information was not provided.
What's the Resolution? Fill out all required information and upload the file again.

Error/Warning Message: Manager ID invalid
What's the Cause? The manager's <USERID> assigned for the employee doesn't exist in the database.
What's the Resolution? Make sure that the manager's <USERID> exists in the database.

Error/Warning Message: Internal system error encountered while importing record.
What's the Cause? The data in your import file is invalid/incorrect, or the existing data is invalid.
What's the Resolution? Verify that the data you're importing is correct/valid.

Error/Warning Message: The Date of Birth does not correspond to the National ID associated with this user
While Importing: Biographical Information
What's the Cause? For some countries, the National ID information includes the date of birth of a user. This message most likely appears when the date of birth you're importing doesn't match that record.
What's the Resolution? Verify the import data, make corrections as necessary, and import the data again.

Error/Warning Message: The Gender does not correspond to the National ID associated with this user
While Importing: Personal Information
What's the Cause? For some countries, the National ID information includes the gender of a user. This message most likely appears when the gender you're importing doesn't match that record.
What's the Resolution? Verify the import data, make corrections as necessary, and import the data again.

Error/Warning Message: Personal information does not exist
While Importing: Personal Information
What's the Cause? This message most likely appears when a user's <person-id-external> or a dependent's <person-id-external> is invalid. This can happen when:
• The <person-id-external> record for a dependent does not exist.
• The <person-id-external> for a dependent exists, but is not associated with an employee.
• You're importing personal information for a user whose <person-id-external> does not exist.
What's the Resolution?
• When the <person-id-external> record for a dependent does not exist, import biographical information to define a new person in the system. Thereafter, import person relationship information to associate the dependent with the respective employee.
• To import personal information of a user whose <person-id-external> does not exist, you must first import their basic user information to create a <person-id-external>, and then import personal information.

Error/Warning Message: Failed to perform country specific validation. Please ensure this record is associated with a valid Company record for given dates
While Importing: Job History
What's the Cause? This error message most likely appears when you're importing job history of employees and the dates do not match the date format associated with the file locale selected during import.
What's the Resolution? Depending on the cause of the error, you can change the file locale or associate a country with the legal entity and try importing the data again.
Example
Suppose your file locale is set to English (United States) and you'd like to import job history for employees from Germany. While Germany supports the date format DD.MM.YYYY, the data you upload is in the format MM/DD/YYYY (corresponding to the English (United States) file locale). If you specify dates in the German date format, the system will generate an error.

Error/Warning Message: Country is a required field and cannot be blank
While Importing: Emergency Contact, Consolidated Dependents
What's the Cause? This error message most likely appears when you leave the <Country> field blank while filling out the <Address> field during emergency contact or dependents imports. <Country> is now a mandatory field when the visibility of the is-address-same-as-person HRIS field is set to No.
What's the Resolution? Update the import file template with an appropriate value in the <Country> field and try importing the data again.

Error/Warning Message: User has already been rehired. You can delete the existing rehire record or terminate the employee to proceed with this rehire record
While Importing: Termination Details
What's the Cause? This error message most likely appears when:
• You're importing a rehire record for an active employee.
• A rehire record already exists for the user.
What's the Resolution?
• You must terminate the active employee before initiating the rehire process.
• You must delete the existing rehire record before importing the new rehire record.
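The locale-dependent date validation behind the "Failed to perform country specific validation" message can be sketched as follows. The locale-to-format mapping is illustrative and covers only the two locales from the example above.

```python
from datetime import datetime

# Illustrative mapping: each file locale implies one expected date format.
LOCALE_DATE_FORMATS = {
    "en_US": "%m/%d/%Y",   # English (United States): MM/DD/YYYY
    "de_DE": "%d.%m.%Y",   # German: DD.MM.YYYY
}

def matches_locale(date_string: str, locale: str) -> bool:
    """Check whether a date string follows the format implied by the
    selected file locale; a mismatch is what triggers the
    country-specific validation error during import."""
    try:
        datetime.strptime(date_string, LOCALE_DATE_FORMATS[locale])
        return True
    except ValueError:
        return False
```

For instance, German-format dates fail validation under the English (United States) file locale, matching the example in the table.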
• While investigating the failed records in your import file, check if any records have the <End Date> field. If the <End Date> in your import file does not correspond to the <End Date> value present in the system, the import process will not be successful. Since the system calculates the end dates based on the hierarchy of event start dates, you can remove this field value and try importing the file again.
• Ensure all the effective-dated records are in a single CSV file, when importing multiple batches in parallel.
• After a Basic Import, if you are unable to find users in your system, it is most likely that the list of users in the
“Everyone” group isn't refreshed. To fix this, re-import basic user information for a few users (not all). This will
refresh the list and you should be able to find the users now.
• After a Biographical Information import, if you face an error after clicking on Employee Information in an
employee profile to review the Biographical information, it is most likely that the import hasn't been completed.
Ensure that the import process is complete before reviewing the data in employee profiles.
• After importing dependents information, if you find that:
• A single dependent is shared by two employees, it might be due to an incorrect <Person-ID-External>
association. To resolve this, create one <Person-ID-External> for each dependent respectively.
Assignment ID can be used to define a relationship between the personnel (temporary or permanent) and your
company.
With Employee Central imports, there are two ways to define the assignment ID for new employees who have multiple employment records:
Prerequisite: The required HRIS elements must be configured to add the <assignment_id_external> field. To
check the existing configuration, go to Admin Center Manage Business Configuration Employee Central
HRIS Elements .
Note
While importing data, if the system finds records where you haven’t entered an Assignment ID, a value is
automatically assigned from the corresponding <users_sys_id> field.
2. Configuring the system to auto-generate an assignment ID: Create an employee setting configuration to
generate assignment ID during the data import process.
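The fallback described in the note above (an empty Assignment ID is filled from the <users_sys_id> field) can be sketched like this; the field names follow the note, but the function itself is an illustration, not system code.

```python
def resolve_assignment_id(record: dict) -> str:
    """If the import row has no assignment ID, fall back to the
    row's users_sys_id value, mirroring the automatic assignment
    described in the note."""
    value = record.get("assignment_id_external", "").strip()
    return value if value else record["users_sys_id"]
```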
Related Information
Auto-generate assignment IDs while importing data by creating an employee setting configuration.
Context
To automate the process of generating the Assignment IDs for new employees, you must create an employee
settings configuration and attach a business rule to auto-generate assignment IDs. The business rule is executed
during the data import process.
Procedure
Note
Ensure that you create a business rule instance belonging to the Generate Assignment ID External scenario
in Employee Central Core rule scenario category.
8. After creating the business rule, select the new rule from the Rule to generate Assignment ID External
dropdown.
9. Save the changes.
Results
Your employee settings configuration is activated. With this configuration in place, the system generates an
assignment ID while importing Employment Details or Global Assignment information of employees.
Note
• The assignment ID generated by the system can be different from the user ID of employees.
• Assignment ID from the import file takes precedence over the assignment ID generated by the system.
Related Information
Data validation and error management are part of Centralized services to help ensure data imports and the
resultant changes are accurate and consistent.
General Validations
Data imports for all entities supported by Centralized services are validated. The validations include:
• Association validation: To check the associations between the various Foundation Objects (FOs) and Generic
Objects (GOs), and verify if the data you’ve entered in the import file is correct.
Note
Inactive FOs and GOs are not considered during imports. During an import, if an event reason status is
specified as Inactive in the import file, then the Event Reason is considered for the import. However, if the
Event Reason status is configured as Inactive in Manage Organization, Pay and Job Structures, then the
event reason is not considered for import.
• Derived entity validation: To check if there are any issues arising as a result of cross-block transactions with
data imports. Cross-block transactions can arise out of cases such as rule execution, forward propagation, and
so on.
Example
While importing Job Information, if a cross-block rule is executed to update Compensation Information, the
corresponding record is validated. Compensation Information is the derived entity in this case.
Note
The Job History import fails in Initial Load Mode if there is a cross-entity rule from Job Information to Recurring Pay Component and there is no corresponding Compensation Information record.
• Duplicate record validation: To check if there are multiple records in your import file with the same data.
• Forward data propagation validation: To validate existing data in all the target records qualifying for forward
propagation.
Example
Personal Information and Global Information share a parent-child relationship. When you're importing
Personal Information on a future date, the Global Information associated with the existing Personal
Information is implicitly copied over. If the existing Global Information has inconsistencies, they must be
resolved before importing Personal Information.
Similarly, with Global Information imports, the associated Personal Information record is validated for
consistency.
Example
When a new time slice for Compensation Information is imported, the Pay Component Recurring record is also copied over from the previous time slice. During the copy, validation is completed for both Compensation Information and the Pay Component Recurring record to reduce data inconsistency. (Only for SL)
• Sequence number validation: To check if a sequence number is entered for all the records having same
business keys. Applicable for data imports with entities that support addition or modification of multiple
records (having same business keys) on the basis of a sequence number, such as Job Information.
• Earliest date validation: To check if the effective date of changes is before the earliest date of the record and
that the logged in user has permission to make those changes. This validation is for all effective-dated entities
supported on Centralized services.
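As an illustration of the duplicate record and sequence number validations, the following sketch flags rows that share the same business keys without a distinct sequence number. It is a simplified stand-in for the actual validation logic; the key and column names are hypothetical.

```python
from collections import Counter

def validate_sequence_numbers(rows, key_fields, seq_field="seq-number"):
    """Return the indexes of rows that repeat the same business keys
    and sequence number -- a simplified version of the duplicate
    record and sequence number validations."""
    seen = Counter()
    errors = []
    for index, row in enumerate(rows):
        key = tuple(row[f] for f in key_fields) + (row.get(seq_field),)
        seen[key] += 1
        if seen[key] > 1:
            errors.append(index)
    return errors
```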
Error Management
Raise Message: While configuring business rules, you can add conditions to handle exceptional scenarios with a
warning or an error message using the Raise Message action. For more information about creating a rule that raises
a message, refer to the Related Information section.
Related Information
End date correction is a part of Centralized services to manage end dates of effective-dated employee records.
When you’re importing data for effective-dated entities, end date correction is required to maintain data
consistency in the following scenarios:
For employees who have multiple records, end date correction is a method to determine whether there can be date
gaps between different records or not.
Note
A 'date gap' is the time frame between the end date of one record and the start date of the next record.
Based on whether you import the data in Incremental Load or Full Purge mode, the end dates are adjusted
automatically.
Global Information: No
Addresses: Yes
Note
Only for data imports in Full Purge mode.
Note
If the Company System and Logo Settings Suppress update of identical records during Employee Central import
Adding new records without providing end date values in the import file
Result: End dates are automatically calculated. Date gaps, if any, between records are adjusted accordingly.
The end date of the last record is also adjusted.
Adding new records with end date values provided in the import file
Example
Let's suppose you're importing address information of an employee, with end dates provided for all records.
Result: Existing address information of employees is deleted and replaced with data from your import file. If there are any date gaps between records, they are not corrected.
Adding new records with start dates coming after the start date of the most recent existing
record
Prerequisites: You’re importing data in Incremental Load mode.
Example
The start date here means before the start date of an existing record.
You add two new records without providing an end date in the import file:
Result: The end date of the existing record is corrected in accordance with the new records added. The end dates of the new records are also calculated.
Adding new records with start dates coming before the start date of the initial record
Prerequisites: You’re importing data in Incremental Load mode.
Example
You add a new record without providing an end date in the import file:
Result: The end date of the new record is corrected in accordance with the existing record.
Adding a new record with start date between the start dates of existing records
Prerequisites: You’re importing data in Incremental Load mode.
Example
Now, you add a new record between the start date of the first record and the end date of the last record.
Result: The end date of the first record is corrected according to the start date of the new record. Also, the end date of the new record (second record) is corrected according to the start date of the third record. The reason is that the end date of any record can't be greater than the start date of the subsequent record. This end date calculation holds true even if you don't provide an end date value in your import file template.
The principles of end date correction demonstrated in these use cases also apply to other effective-dated entities.
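The end date calculation described in these use cases can be sketched as follows, assuming end dates are derived as the day before the next record's start date and the last record stays open-ended. This is a simplified illustration of the behavior, not the actual system logic.

```python
from datetime import date, timedelta

HIGH_DATE = date(9999, 12, 31)  # placeholder for an open-ended record

def correct_end_dates(start_dates):
    """Given the start dates of an employee's effective-dated records,
    derive each record's end date as the day before the next record's
    start date; the last record stays open-ended."""
    ordered = sorted(start_dates)
    ends = [nxt - timedelta(days=1) for nxt in ordered[1:]]
    ends.append(HIGH_DATE)
    return list(zip(ordered, ends))
```

Inserting a new record between two existing ones therefore shortens the earlier record automatically, which is exactly the correction the use cases describe.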
Deleting employee data is a consequential process and must be handled with the utmost care. Data once deleted can be difficult or impossible to recover.
Using data imports, you can choose to delete employee data associated with a single entity, or multiple entities as
applicable.
Prerequisites
• To delete country/region-specific information, the corresponding HRIS elements must be properly configured.
To review your existing configuration, go to the Admin Center Manage Business Configuration page.
• To delete Job History, ensure that the Company field is available in your import file template.
• To delete a leave of absence Job History record, you have Enable Leave of Absence Editing enabled in the Time
Management Configuration MDF object.
Context
Employee data can be deleted in Incremental Load mode, by choosing an operation type between DELETE and
DELIMIT. The operation type depends on the entity you select for importing data.
You cannot delete and add data at the same time. To do both, you must create two separate jobs.
Note
Business key changes are considered as a 'Delete' of existing business keys and 'Insert' of new business keys.
This applies whether the business key is changed or if the same business key is deleted and re-added.
• You must provide the sequence number with the records in your import file template.
• You can delete data associated with multiple employee records.
• For a given employee, if your import file contains data to be deleted and added at the same time, Recurring Pay
Component associated with the Compensation Information (to be deleted) is automatically linked with the new
Compensation Information (to be added).
• Deleting Compensation Information results in the deletion of the corresponding Recurring Pay Component
information also.
Procedure
Addresses: DELIMIT
Results
Related Information
Consolidate and import employee data with multiple entities for deletion.
Prerequisites
Enable Compound Delete must be enabled in Provisioning. This setting is required to download the required import
file template.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product Support.
• Addresses
• Biographical Information
Note
You can only delete the Biographical Information of employees if there is no existing Employment
Information and/or Dependent information for them.
• Email Information
• Job Relationships
• National ID Information
• Pay Component Non Recurring
• Phone Information
Context
You can use a dedicated import template to group together employee data belonging to different entities for
deletion.
Note
Data deletion with multiple entities always happens in Full Purge mode.
Procedure
Note
To find the HRIS element identifier, go to Admin Center Manage Business Configuration page.
Results
Related Information
Addresses, being an effective dated HRIS entity, supports employee data deletion when Centralized services are
enabled. You can delete address information of an employee by importing a record having a start date that doesn’t
exist in the system. If forward propagation is applicable, data changes are cascaded to subsequent records of the
employee.
Note
Only applicable in cases where the start date of the record you’re importing falls within the date range of
records existing in the system.
Example
Result:
• The address type is deleted as per the start date provided in the import file.
• The value is propagated to future records as applicable.
• End dates are adjusted accordingly.
A method to exclude duplicate records and redundant information while importing data.
Multiple records with the same data for a person indicate inaccurate or stale data. They can cause issues with reporting and metrics analysis, leading to performance inefficiency. Identical record suppression enables you to maintain data consistency and prevents duplicate records from being imported.
Identical record suppression is enabled by default with employee data imports. You can manage this setting on the
Admin Center Company System and Logo Settings page, using the Suppress update of identical records
during Employee Central import for supported entities option.
• Addresses
• Biographical Information
• Employment Details
• Email Information
• Global Information
• Job History
• Job Relationships
• National ID Information
• Compensation Information
• Personal Documents Information
• Phone Information
• Social Accounts Information
• Entities supported only with Centralized Services
• Compensation Information
• Recurring Pay Component
• Termination Details
Example
For effective-dated entities, if you're importing data that matches existing data for an employee on a given date,
the record isn’t imported. However, if you're importing data that matches existing data but on a different date,
the record is imported.
For non-effective dated entities, if you're importing data that matches existing data for an employee, the record
isn’t imported.
If you import Job History and Compensation Information with or without providing a seq-number in the import file,
the import data is validated against the existing employee data on the same effective date. If there’s no information
available as of that effective date, the system validates the data you’re importing with the previous effective dated
record. If there are still no changes, the data isn't updated.
Note
Identification of records to suppress isn't a part of the validation process. However, when you do an import, the business rules are executed to see if there are any changes to the import data. If there are changes, the record is saved. If the records are still identical even after rule execution, the record is suppressed.
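A simplified sketch of the suppression decision described above: an incoming effective-dated record is compared with the existing record on the same effective date or, failing that, with the most recent earlier record. Field names are hypothetical, and this is an illustration, not the actual implementation.

```python
def should_suppress(incoming, existing_by_date):
    """Return True if the incoming record's data matches the existing
    record on the same effective date, or the most recent earlier
    record when no same-date record exists."""
    effective_date = incoming["start-date"]
    data = {k: v for k, v in incoming.items() if k != "start-date"}
    if effective_date in existing_by_date:
        return existing_by_date[effective_date] == data
    earlier = [d for d in existing_by_date if d < effective_date]
    if earlier:
        return existing_by_date[max(earlier)] == data
    return False
```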
Related Information
Notable scenarios related to identical record suppression with employee data imports.
• Admin Center Company System and Logo Settings Enable Centralized Services for Job Information is
enabled.
• Admin Center Company System and Logo Settings Suppress update of identical records during Employee
Central import for supported entities is not enabled.
Case 1: You add a new record by importing data in Full Purge mode.
Result: All existing records are deleted and replaced with the new record. The audit details are updated.
Case 2: You add a new record identical to one of the existing records by importing data in Full Purge mode.
Result: The first record is not imported as the data is identical to an existing record, however, the audit details are
updated. The second record is deleted.
Case 3: You add a new record by importing the data in Incremental Load mode.
Result: The new record is added to the existing set of employee records. The audit details are updated.
Case 4: You add a new record identical to an existing record by importing the data in Incremental Load mode.
Result: The record is not imported. However, the audit logs are updated.
• Admin Center Company System and Logo Settings Enable Centralized Services for Job Information is
enabled.
• Admin Center Company System and Logo Settings Suppress update of identical records during Employee
Central import for supported entities is enabled.
Case 1: You add a new record by importing data in Full Purge mode.
Result: The first record is imported as there is a change to Job Title. However, the second record is deleted and the audit details are updated.
Case 2: You add a new record identical to an existing record by importing data in Full Purge mode.
Result: The first record is not imported as the data is identical to an existing record (suppressed) and the audit
details are not updated. However, the second record is deleted.
Case 3: You add a new record identical to an existing record by importing the data in Incremental Load mode.
Result: The record is not imported ('suppressed'). The audit details are not updated.
The following use cases are from the perspective of Compensation Information imports, but the behavior is the
same with Recurring Pay Component imports.
Note
For Compensation Information imports, if identical records are included with the import, the system executes
rules for Compensation Information to check if there are any changes to both Compensation Information and
Recurring Pay Component data. If there are changes to the data, the updated data is saved in the system.
However, for Recurring Pay Component imports, the system checks for changes only by comparing existing Recurring Pay Component data to the Recurring Pay Component data included in the import file.
Importing Compensation Information with Centralized services enabled and Identical record
suppression not enabled
Prerequisites:
• Admin Center Company System and Logo Settings Enable Centralized Services for Compensation
Information is enabled.
• Admin Center Company System and Logo Settings Suppress update of identical records during Employee
Central import for supported entities is not enabled.
You add the following records by importing data in Incremental Load mode.
Result:
• The first record is not imported ('suppressed') as the data is identical to an existing record. The audit details
are updated.
• The second record is imported successfully. But its sequence number is corrected to match the order.
• Admin Center Company System and Logo Settings Enable Centralized Services for Compensation
Information is enabled.
• Admin Center Company System and Logo Settings Suppress update of identical records during Employee
Central import for supported entities is enabled.
Result: The record is not imported ('suppressed') as the data is identical to an existing record, and a warning
message is displayed on the UI. The audit details are not updated.
Forward propagation is a process by which data changes are cascaded or "propagated" to future dated records.
Forward propagation is supported only in Incremental Load mode.
Forward propagation of data is useful when modifications to employee data are applicable to corresponding
effective dated records in the future.
For records where forward propagation of data occurs, the system runs the validation checks against all the fields.
Currently, forward propagation of data is supported with the following data imports on Centralized services:
• Addresses
• Personal Information
• Global Information
• Job History
• Job Relationships
• Compensation Information
• Recurring Pay Component
• Termination Details (Job Information)
The following sections describe the behavior of these supported entities on Centralized services.
Note
Forward propagation of data occurs when field values in the records subsequent to the newly added record match
the values in the first record.
Example
These examples of forward propagation apply to Job History and Personal Information imports as well.
Example
Start Date Custom String 1 Custom String 2 Custom String 3 Custom String 4
April 1, 2019 X A O -
June 1, 2019 X B O -
September 1, 2019 Y C O -
Start Date Custom String 1 Custom String 2 Custom String 3 Custom String 4
May 1, 2019 Z B Q E
Start Date Custom String 1 Custom String 2 Custom String 3 Custom String 4
April 1, 2019 X A O -
May 1, 2019 Z B Q E
June 1, 2019 Z B Q E
September 1, 2019 Y C Q E
For the record with <Start Date> as June 1, 2019, the values in Custom String 1, Custom String 3, and
Custom String 4 match the values in the record with <Start Date> as April 1, 2019, but the value in
Custom String 2 doesn't. Hence, data is propagated only to Custom String 1, Custom String 3, and
Custom String 4. For the record with <Start Date> as September 1, 2019, forward propagation happens
on Custom String 3 and Custom String 4 because the values are the same as in the previous record. For
Custom String 1, however, propagation stops because the value differs from the previous record.
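The field-by-field behavior in the example above can be sketched as follows. This is an illustrative simplification under stated assumptions (records are a date-ordered list of dicts, and the new record is inserted between existing records); it is not the product's implementation.

```python
import bisect

def forward_propagate(existing, new_record, fields):
    """Sketch of field-level forward propagation. For each field, the
    new record's value cascades into a future record only while that
    record's original value matches the original value of the record
    before it (an unbroken chain of unchanged values); a differing
    value stops propagation for that field.

    Assumes `existing` is sorted by date and the new record falls
    between two existing records."""
    dates = [r["date"] for r in existing]
    pos = bisect.bisect_left(dates, new_record["date"])
    prev = existing[pos - 1]                 # record just before the new one
    result = [dict(r) for r in existing]
    result.insert(pos, dict(new_record))
    for f in fields:
        chain = prev[f]                      # walk the chain of unchanged values
        for i in range(pos, len(existing)):
            if existing[i][f] != chain:
                break                        # value changed here: stop for this field
            result[i + 1][f] = new_record[f]
            chain = existing[i][f]
    return result
```

Running this against the April to September sample data reproduces the table above: the June record receives Z, Q, and E but keeps B, and the September record receives only Q and E.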
Restriction
Example
These examples of forward data propagation apply to both Recurring Pay Component and Global Information
imports.
When you add records of a child entity to a time slice without existing child entity records, forward propagation of
data happens if the field value in the future records is empty.
Example
Continuing with the sample database output above, if you delete an existing record by importing the
following data:
In this case, only the pay component data is deleted while all the other data remains intact.
Example
Result:
No forward propagation happens, because the number of children starting from January 1, 2017 is "0" and is
different from the original value of the previous record, which is empty.
Example
Result:
The updated field value "1" is forward propagated and stops on December 31, 2018, because there's no record
of global information from January 1, 2016 to December 31, 2016 and the value of <Number of Children> in
the future record is empty.
Example
Result:
With address imports, forward propagation works when you add or modify (update or delete) data.
The following examples show how data is propagated when you add address information.
Example
The updated field value "30000" is forward propagated to the future record that starts from January 1, 2021
because before import, the future record has the same postal code as that of the previous record.
Example
The updated field value "30000" is forward propagated and stops on December 31, 2020 because there's no
record from January 1, 2021 to April 30, 2021.
Example
The newly added home address information is propagated to future dated records with other details in the
import file template like Address 1 and Attachment ID.
Example
Person ID External Address Type Start Date End Date Address 1 Country/Region
Here, when the same entity type (in this example, Address Type is Home) is imported within an existing time slice,
forward propagation of data doesn't take place.
Example
For the same Sample Address Information mentioned in the previous example, import data with Address Type
as Others.
Person ID External Address Type Start Date End Date Address 1 Country/Region
Here, when a different entity type (in this example, Address Type is Others) is imported within an existing time slice,
forward propagation of data takes place.
Example
Forward propagation is supported only when a record of a new type is inserted between existing records. See the
following example for Job Relationship imports.
1 1.1.2000 HR Manager
Now, insert a record with 1.3.2000 and Custom Manager between record 1 and 2. The result is that forward
propagation doesn't take place because Custom Manager exists in the future records already.
1 1.1.2000 HR Manager
3 1.3.2000 HR Manager
Now, in the following example, insert a record with 1.3.2000 and Matrix Manager between record 1 and 2.
1 1.1.2000 HR Manager
1 1.1.2000 HR Manager
3 1.3.2000 HR Manager
Example
Example
Result: The Billing records with the start date on January 1, 2013 and January 1, 2015 are deleted.
Related Information
Sequence number generation and correction is a part of Centralized services to differentiate between employee
records having identical information.
Example 1
Example
Let's suppose you’re importing four new records for a user without providing a sequence number in your import
file.
These records are not imported because the system is unable to determine the first record of the sequence.
Example 2
Example
Let's suppose there are three records existing for a user as follows:
You add a new record by importing data in Incremental Load mode, without providing a sequence number in
your import file.
After the data is imported, the new record is assigned a sequence number to match the order of the existing
records.
Let's suppose you import another record on the same date in Incremental Load mode. This time you provide a
sequence number in your import file.
Since the business keys (Event Date, User ID, and Sequence Number) match with an existing record, the
corresponding record is updated.
Database Output
Event Date (dd/mm/yyyy), User ID, Country/Region, Supervisor, Job Title, Sequence Number
Remember
The sequence number is part of the key that identifies an employee record. Therefore, to update an existing
record rather than add a new one, you must provide the sequence number of the respective record.
Example
Result: The sequence numbers of the subsequent records are automatically corrected to maintain the order.
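The generation and key-matching behavior described above can be sketched as follows. This is a minimal illustrative model (the function, field names, and return values are assumptions); renumbering of subsequent records is omitted for brevity.

```python
def import_with_sequence(existing, incoming):
    """Sketch of sequence-number handling on import (assumed model).

    Records sharing the business keys Event Date and User ID form a
    group ordered by sequence number. A record imported without a
    number gets the next number in the group; a record whose keys,
    including the number, match an existing record updates it."""
    def key(r):
        return (r["event_date"], r["user_id"])

    group = [r for r in existing if key(r) == key(incoming)]
    if incoming.get("seq") is None:
        incoming = {**incoming,
                    "seq": max((r["seq"] for r in group), default=0) + 1}
        existing.append(incoming)
        return "inserted", incoming["seq"]
    for r in group:
        if r["seq"] == incoming["seq"]:
            r.update(incoming)       # business keys match: update in place
            return "updated", r["seq"]
    existing.append(dict(incoming))
    return "inserted", incoming["seq"]
```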
Sequence number generation and correction for records created by business rules
Example
Let's suppose an employee has the following Job Information, Compensation Information, and Recurring Pay
Component Information:
You add a new record by importing data in Incremental Load mode. You also have a business rule associated
with the Job Information entity to create new Compensation and Recurring Pay Component records.
Result: The new records created by the rule are associated with the Compensation Information and the
Recurring Pay Component Information of the user, and their sequence numbers are automatically adjusted.
Example
You add three new records by importing Job Information in Incremental Load mode, without providing a
sequence number:
Result: These records are not imported because the system is unable to determine the first record of the
sequence.
• Example
Case 1: You add a new record by importing data in Full Purge mode, without providing a sequence number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) don't match the existing record, the existing record is replaced. The audit logs are
updated as well.
Case 2: You add a new record identical to the existing record by importing data in Full Purge mode, without
providing a sequence number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) match with the existing record, the data is not imported ('suppressed'). The audit logs
are not updated.
• Example
Case 1: You add a new record by importing data in Full Purge mode, without providing a sequence number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) match with an existing record, the existing record is updated and the remaining
records are deleted.
Case 2: You add a new record by importing data in Incremental Load mode, without providing a sequence
number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) match with an existing record, the existing record is updated.
Case 3: You add multiple records by importing data in Full Purge mode, without providing a sequence
number.
Result: The records are not imported as the system is unable to determine the first record of the sequence.
• Example
01/01/2021 jdoe 1 4 1
01/01/2021 jdoe 2 4 2
You add recurring pay component information by importing data in Incremental Load mode, without
providing a sequence number.
Result: The Compensation Information record with sequence number 1 doesn't have any recurring pay
component records linked to it, whereas, the Compensation Information record with sequence number 2
has two recurring pay component records linked.
01/01/2021 jdoe 1 4 1
01/01/2021 jdoe 2 4 2
Persons who have previously worked at your company can be rehired if the required qualifications are met.
If a terminated employee is eligible to be rehired in your company, you can initiate the rehire process in Employee
Central. However, if a significant number of persons are to be rehired, you can use Employee Central Imports to
save time and manual effort.
There are two ways of rehiring former employees using employee data imports:
1. Rehire with the existing user ID of their previous employment. In this case, the previous employment data is
visible in the system.
2. Rehire with a new user ID. In this case, the previous employment data isn’t visible in the system. For more
information about rehiring users with a new user ID, refer to the Related Information section.
Retain and refer to the old employment data by rehiring former employees using their existing user ID.
Prerequisites
Context
When you rehire former employees using their existing user ID, their old employment data is visible in the system
and is referenced directly wherever possible. It is also possible to rehire former no-shows.
1. To rehire users with the same user ID, download a copy of the Job History import file template.
2. Prepare data to import.
Along with other information, enter the user ID of their previous employment in the <User ID> column and
the event reason for rehiring users (as configured for your company) in the <Event Reason> column. For
information about other notable points with Job History imports, refer to the Related Information section.
3. Upload the file to import the data.
Results
If successful, users are rehired on a new employment with their existing user ID.
During the import, most of the termination-specific fields are removed. Fields such as Benefits End Date as well as
custom strings are deleted. Only the Ok to Rehire, Eligible for Salary Continuation, and Regret Termination fields are
kept.
If the user had a global assignment, then all specific fields for global assignment details such as Payroll End Date
and custom fields are also removed.
Related Information
Create a fresh employment record by rehiring former employees with a new user ID.
Prerequisites
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
• You must have the Manage Hires Rehire Inactive Employee with New Employment permission.
Context
When you rehire former employees by assigning a new user ID with their new employment, their old employment
data isn’t visible in the system. It is also possible to rehire former no-shows.
If a user being rehired already has an active employment, a concurrent employment is created.
Note
Procedure
Along with other information, enter the person ID of the user in the <Person ID External> column, and the
new user ID in the <User ID> column. The <Is Rehire> field must have a value corresponding to Yes.
Note
The <isRehire> field is a transient field used during import when a user is created for the Rehire with New
Employment scenario. This value is not saved to the database; it is only needed for the import.
To set the value as Yes, you can enter: Y, Yes, T, True, or 1. By default, the system considers the field value as
1.
For information about other notable points with Employment Details imports, refer to the Related Information
section.
3. Upload the file to import the data.
Along with other information, enter the user ID of their previous employment in the <User ID> column and
the event reason for rehiring users (as configured for your company) in the <Event Reason> column. For
information about other notable points with Job History imports, refer to the Related Information section.
If successful, the HRIS sync job is scheduled. After the job is completed, the new user accounts are activated.
Related Information
Update employee records with supporting information by attaching a digital copy of documents with import file
templates.
Prerequisites
• The <attachment-id> field must be enabled and configured as visible in the HRIS elements of Emergency
Contact and Person Relationship entities.
Note
The <attachment-id> field appears by default in the import templates of all other supported entities.
• To download country/region-specific import file templates, the corresponding HRIS element must be
configured.
To check the existing configuration, go to Admin Center Manage Business Configuration page.
Context
You can attach various types of documents, such as Address Proof, Job Info Letter, and so on, with employee
records as verification of claims made by employees. Document attachment is supported with the
following data imports:
Documents in the following formats are supported: *.doc, *.docx, *.pdf, *.txt, *.htm, *.ppt, *.pptx, *.xls, *.xlsx, *.gif,
*.png, *.jpg, *.jpeg, *.html, *.rtf, *.bmp, and *.msg.
Procedure
Example: mhoff_visa.pdf
4. Create a new file in a text editor program and save the file as import.properties.
5. Edit the import.properties file, and add the parameter <importFileName> to hold the name of your import
file template.
Example: Let's suppose that you’ve downloaded the template to import Job History. Then the value for
<importFileName> must be the same as the actual file name of the CSV import file.
6. Compress the import file, the required documents, and the import.properties file into a zip file.
7. Upload the file to import the data.
Results
If successful, the documents included in your zip file are linked with corresponding employee records.
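Steps 4 to 6 can also be scripted. The following sketch assembles the upload zip in memory; the concrete file names (JobHistory.csv, mhoff_visa.pdf) are illustrative assumptions, not values prescribed by the product.

```python
import io
import zipfile

def build_import_zip(buffer, csv_name, csv_bytes, attachments):
    """Assemble the upload zip: the CSV import file, the document files
    it references, and an import.properties file whose importFileName
    parameter carries the CSV's actual file name."""
    with zipfile.ZipFile(buffer, "w") as zf:
        zf.writestr(csv_name, csv_bytes)
        for name, data in attachments.items():
            zf.writestr(name, data)          # e.g. mhoff_visa.pdf
        zf.writestr("import.properties", f"importFileName={csv_name}\n")
    return buffer
```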
Related Information
Prerequisites
• You have set the Permission Settings Administrator Permissions Employee Central Import Settings
Enable Business Rules for selected entities permission.
• You have set the Permission Settings Administrator Permissions Employee Central Import Settings
Enable execution of rules against NO_OVERWRITE permission.
• Supported HRIS entities are as follows:
• Addresses
• Biographical Information
• Compensation Information
• Employment Details
• Email Information
• Global Information
• Job History
• Job Relationships
• National ID Information
• Non-Recurring Pay Component Information
• Personal Information
• Personal Documents Information
• Phone Information
• Recurring Pay Component Information
• Social Accounts Information
• Termination Details
• For configuring rules with Compensation Information entity, the Enable Centralized Services for Compensation
Information option must be enabled on the Company System And Logo Settings page.
Context
You can apply business rules with data imports to execute supplementary tasks when data is changed or saved.
Business rules can be linked to an HRIS element or any of its fields. Rules are triggered based on the event type
selected. For imports, HRIS elements support the onSave and onPostSave event types, whereas HRIS fields support
only the onChange event type.
Note
Restriction
onChange business rules attached to fields of an HRIS entity are triggered during the import process,
irrespective of whether there are any changes. However, this restriction isn’t applicable to HRIS entities
supported by Centralized Services, as onChange rules attached to such entities are triggered only when there
are changes to the corresponding field values.
• With Non-Recurring Pay Component imports, onSave and onChange business rules configured for Job History
information aren't triggered. Similarly, rules aren't triggered when you update an existing record from the
Manager Self Service (MSS) page.
• With Job History imports, you cannot use event reasons in business rules to trigger changes in employment
status. For example, you cannot terminate an employment using a business rule.
• With Job Relationship imports, you can configure business rules that perform CREATE operation only. SET and
DELETE operations aren’t supported.
• With Personal Information imports for dependents, business rules attached to personalInfo HRIS element
for onSave or onChange events having base objects as Dependents or Personal Information aren't triggered.
• With Recurring Pay Component imports, the DELETE operation is not supported. The SET operation is
supported only when setting values in the same record; setting values in other recurring pay components is not
possible. When used to set values in the same record, the CREATE operation works like a SET operation. When
used to set values in or create another record, the CREATE operation is supported only for Centralized services
imports.
In cross-entity rules, forward propagation on a target entity (for example, Compensation Information) is applied if
the source entity (for example, Job Information) has permission for forward propagation.
Procedure
The Rule ID automatically picks up the value entered in the Rule Name field. However, it can be changed.
5. Select a Base Object.
The base object must correspond to the name of the HRIS element. Here are a few suggestions:
You can handle exceptional conditions with the Raise Message action. However, this feature is supported only
for specific entities that support Centralized Services. For all other entities, you can handle exceptional
conditions by:
1. Assigning the error or warning message to a custom string in the corresponding HRIS element.
2. Monitoring the custom string to catch any exceptions.
7. Save the configuration.
Next Steps
Apply the business rule by assigning it to the corresponding Employee Central entity.
Prerequisites
If you’re assigning a business rule to an Employee Central entity optionally supported by Centralized services, and
you want to apply rule context criteria with the rule, ensure that the Centralized services settings for the
optionally supported entities are enabled on the Company System and Logo Settings page.
Context
You can apply a business rule to an HRIS entity or one of its fields. You can also define a rule context to prevent
unnecessary rule triggers.
Note
Rule contexts are applicable for onChange and onSave business rules only.
Procedure
Note
If you have Centralized services enabled, onSave business rules are executed with data imports only if
the Imports setting is enabled.
Note
Note
If you have Centralized services enabled, onChange business rules are executed with data imports only
if the Imports setting is enabled.
Results
Notable scenarios related to business rule execution with different data imports.
Cross-entity rules configured to create new records are now regulated by Centralized services. Newly created
records are added only if they’re unique and don't match existing records. Otherwise, the existing record is
updated.
The following example explains the behavior from the perspective of Job Information, but the logic is applicable to
cross-entity rules between Compensation Information and Recurring Pay Component entities.
Example
You add a new Job Information record by importing data in Incremental Load mode.
An onSave rule is attached to the Job Information entity that creates a new Job Relationship record.
Rule-Created Data
User ID, Relationship Type, Event Date (dd/mm/yyyy), Related User ID, Notes
Result: This record is not added as the business keys (User ID, Start Date, Relationship Type) match with an
existing Job Relationship record.
Add a new Job Information record by importing data in Incremental Load mode.
Since the business keys of the new record match with an existing record, the existing record is updated with
new information instead.
onChange rules configured at the field level of an HRIS entity can update the value of other fields in your import file
only if the field value is null or empty.
Example
You add new Personal Information by importing data in Incremental Load mode.
Now, if there are onChange rules configured on <Custom String 1> and <Custom String 2> such that:
Sample Code
//Rule 1
if(Personal Information.custom-string1 == Personal Information Type C) THEN
SET Personal Information.custom-string2 = Personal Information Type E
//Rule 2
if(Personal Information.custom-string2 == Personal Information Type E) THEN
SET Personal Information.custom-string3 = Personal Information Type F
During the data import process, Rule 1 is evaluated as true. But since <Custom String 2> has a value in the
import file, it takes precedence over the rule evaluation. As a result, Rule 2 is evaluated as false.
Example
You add new Personal Information by importing data in Incremental Load mode.
Now, if there are onChange rules configured on <Custom String 1>, <Custom String 2>, and <Custom String 3> such that:
Sample Code
//Rule 1
if(Personal Information.custom-string1 == Personal Information Type C) THEN
SET Personal Information.custom-string2 = Personal Information Type E
//Rule 2
if(Personal Information.custom-string2 == Personal Information Type E) THEN
SET Personal Information.custom-string3 = Personal Information Type F
//Rule 3
if(Personal Information.custom-string3 == Personal Information Type F) THEN
SET Personal Information.custom-string1 = Personal Information Type G
During the data import process, Rule 1 and Rule 2 are evaluated as true. Therefore, the values of <Custom
String 2> and <Custom String 3> are updated accordingly. But since <Custom String 1> has a value in
the import file, it takes precedence over the rule evaluation. As a result, Rule 3 is evaluated as false.
Result:
User ID: ecsl; Custom String 1: Personal Information Type C; Custom String 2: Personal Information Type E;
Custom String 3: Personal Information Type F
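The precedence behavior in these examples can be sketched as a small rule engine. This is an illustrative simplification (rules are modeled as tuples, and the record is a plain dict), not the product's rule framework.

```python
def run_onchange_rules(record, import_fields, rules):
    """Sketch of onChange precedence: a rule may SET a target field only
    if that field was not supplied in the import file; a rule whose
    condition reads a field kept at its import value then evaluates
    false. Each rule is a tuple (source, expected, target, new_value)."""
    for source, expected, target, new_value in rules:
        if record.get(source) == expected and target not in import_fields:
            record[target] = new_value
    return record
```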
onChange rules configured on the field level of an HRIS entity don't update fields excluded from your import file or
marked as not to be overwritten. But if you have the Enable execution of rules against NO_OVERWRITE permission,
the rules can update fields not present in your import file, or fields marked as not to be overwritten.
Example
Prerequisites: You don't have the Enable execution of rules against NO_OVERWRITE permission.
You add new Personal Information by importing data in Incremental Load mode.
Sample Code
Result: When the data import job is in progress, Rule 1 is evaluated to be true. But <Custom String 2> is not
present in the import file, and has a definite value in the existing record. Therefore, its value isn’t updated.
You add new Personal Information by importing data in Incremental Load mode.
Result: When the data import job is in progress, Rule 1 is evaluated to be true. But <Custom String 2> is not
present in the import file, and doesn’t have a definite value in the existing record. Therefore, its value is
updated.
Example
Prerequisites: You have the Enable execution of rules against NO_OVERWRITE permission.
You add new Personal Information by importing data in Incremental Load mode.
Sample Code
Result: When the data import job is in progress, Rule 1 is evaluated to be true. <Custom String 2> is not
present in the import file and has a definite value in the existing record. But since you have the Enable execution
of rules against NO_OVERWRITE permission, its value is updated.
You add new Personal Information by importing data in Incremental Load mode.
Sample Code
Result: When the data import job is in progress, Rule 1 is evaluated to be true. <Custom String 2> is not
present in the import file, and doesn’t have a definite value in the existing record. Therefore, its value is
updated.
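The four cases above reduce to a small decision function. This is an assumed, illustrative model of the behavior described, not the product's implementation.

```python
def rule_set_field(record, field, value, in_import_file, has_no_overwrite_permission):
    """Sketch of the NO_OVERWRITE behavior: a rule-driven SET on a field
    outside the import file succeeds only if the existing value is
    empty, unless the Enable execution of rules against NO_OVERWRITE
    permission is granted, in which case it always succeeds. A field
    present in the import file keeps its import value."""
    if in_import_file:
        return False                  # the import file's value wins
    if has_no_overwrite_permission or not record.get(field):
        record[field] = value
        return True
    return False
```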
When you're adding multiple records for a user in a chronological order, rules configured to evaluate based on the
previous value of a field (in correspondence with a given date) will refer to the existing records of the user, and not
the records in your import file. If there are no existing records present, the rule will evaluate against a null value.
Example
You add new Job Information for the same user by importing data in Full Purge mode.
You have the following rule configured with the Job Information entity for an onSave event.
Sample Code
When this rule is triggered for the record with event date (01/02/2020), the previous value of the <Business
Unit> is obtained as Business Unit DB, and not Business Unit Import.
In the case of a UI change or a single-record insert, these rules pull the previous value from the previous record.
When you're adding multiple records for a user with imports or APIs, the system derives the previous value by
combining the import records and the existing records of the user.
By default, all transactions supported by Centralized services adhere to this behavior, for example, Position-to-Job
Information Synchronization.
Example
Let's suppose a user has the following Job Information saved in the system:
You add new Job Information for the same user by importing data in Incremental Load mode.
You have a rule configured with the Job Information entity for an onPostSave event to check the previous value
of job-info.custom-string1.
When this rule is triggered for the record with start date (01/01/2005), the previous value of <Custom
String 1> is 100.
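The derivation described above can be sketched as follows; this is an assumed simplification (records as date-keyed dicts, dates compared lexically in ISO format), intended only to illustrate the combine-then-look-back logic.

```python
def previous_value(existing, import_batch, field, event_date):
    """Sketch of previous-value derivation for rules on imports: the
    import records and the user's existing records are combined, and
    the previous value is read from the latest record dated strictly
    before the record being evaluated; None if no earlier record."""
    combined = sorted(existing + import_batch, key=lambda r: r["date"])
    earlier = [r for r in combined if r["date"] < event_date]
    return earlier[-1][field] if earlier else None
```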
Prerequisites
• You've enabled the Enable Business Rules for Workflow Derivation setting in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Context
A workflow is a sequence of connected steps that allows you to notify and seek approval from the concerned
stakeholders before importing data. You can configure a workflow with the following data imports:
• Job History
• Termination Details
• Non-Recurring Pay Component Information
Remember
• Are triggered only if your import file contains one record for each user. If there are multiple records for the
same user with at least one record requiring approval, the data for this user isn’t imported.
• Are triggered only by the entity used in the import, for example, Job History. If no workflow is triggered by
the entity in the import, all changes are immediately saved to the database, including derived entities, such
as cross entity rule results.
• Are triggered only when a new Job History record is inserted in the import.
• Are not supported for the Update operation. This means that a workflow isn't triggered when a Job History
record with a configured workflow is updated, with or without derived entities involved, such as
Compensation Information.
• Aren't triggered for a new hire record. They are triggered for internal hire though.
Workflows are triggered only when data is imported in Incremental Load mode.
Remember
With Centralized services enabled, a workflow is created for each record, so multiple records are processed for
a user. Without the Centralized services setting enabled, a workflow would be created for only one user record;
the remaining records for the user are not processed.
Procedure
Tip
Ensure that the base object of your business rule corresponds to the Employee Central entity with which
you want to attach the workflow.
Results
You've successfully configured and assigned a workflow. When the workflow is triggered, a task to review the import
data is created and assigned to each recipient defined in the workflow. Email notifications are generated to inform
the recipients about the actions. Based on their approval, data is imported.
Related Information
Govern who can, and who can't, import basic user information by managing access to basic user import.
Context
Previously, admins having access to the Import Employee Data page could import all sorts of employee data,
including basic user information. With Q4 2019, you have the option to restrict access to basic user information
imports with the help of role-based permissions. You can configure your system so that admins having the Basic
User Import permission can import basic user information.
Note
If you choose not to configure role-based access to basic user information imports, admins can import basic
user information as long as they can access the Import Employee Data page.
Procedure
Results
Next Steps
Grant the Basic User Import permission to the required user roles.
Related Information
Prevent complications arising out of legal entity-related changes with Job Information imports.
Context
While importing Job Information of employees rehired on a new employment, you must ensure that the legal entity
associated with their new employment doesn’t match the legal entity associated with their previous
employment. Because employees can only be associated with one legal entity, a new employment must be
associated with a new legal entity.
Procedure
Results
Create a recurring job to import a specific type of employee data to your Employee Central instance.
Prerequisites
You have an SFTP account. For security reasons, we recommend that you use the SAP SuccessFactors-hosted
SFTP server. If you don't have an SFTP account, contact your Partner or SAP Cloud Support.
If you want to automate the employee data import process through an SFTP/FTP Job Scheduler, you can create a
recurring job request in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product Support.
Procedure
If you don’t have the up-to-date import template, select Download a blank CSV template.
For Basic Import, there are additional options available. Select the options as applicable.
Some import types provide the option of choosing how you want to import data. Select:
• Full Purge, if you want to delete or overwrite existing employee data with the data in your import template.
• Incremental Load, if you want to retain existing employee data and update only the required data present in
your import template.
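The difference between the two modes can be sketched as follows. This is an illustrative model only (records keyed by date for simplicity), not the product's import engine.

```python
def apply_import(existing, batch, mode):
    """Sketch of the two import modes: Full Purge replaces the user's
    existing data with the batch, while Incremental Load keeps existing
    records and upserts only those present in the batch."""
    if mode == "FULL_PURGE":
        return [dict(r) for r in batch]
    merged = {r["date"]: dict(r) for r in existing}
    for r in batch:
        merged.setdefault(r["date"], {}).update(r)   # update or insert
    return [merged[d] for d in sorted(merged)]
```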
If you want to encode the template in a format other than Unicode (UTF-8), select an encoding format from
the File encoding dropdown.
To select the file locale other than English (United States), choose from the File Locale dropdown.
Note
The options appearing in the dropdown correspond to the language packs selected during Employee
Central configuration. File Locale allows you to choose the language and locale for the data you’re
importing, which is especially important for date, number, and picklist fields, as the format can change
based on the locale. Selecting a file locale doesn't affect your default logon language or the language of
your UI components.
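A minimal illustration of why the file locale matters for date fields. The format strings below are assumptions reflecting common en_US and en_GB conventions, not values taken from the product:

```python
from datetime import datetime

# The same raw date string parses to different dates depending on the
# locale's expected format (illustrative formats, not SAP's).
raw = "03/04/2025"

us = datetime.strptime(raw, "%m/%d/%Y")   # en_US style: March 4, 2025
gb = datetime.strptime(raw, "%d/%m/%Y")   # en_GB style: April 3, 2025

print(us.month, us.day)  # 3 4
print(gb.month, gb.day)  # 4 3
```

Picking the wrong file locale can therefore silently shift dates rather than produce an error, which is why the setting deserves attention before a large import.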
9. Select FTP Passive Mode if applicable, only after confirming with the Operations team.
Tip
If the job scheduler can't find your import file on the server, your scheduled job isn't executed and is
marked with the status "Skipped".
13. Set the job recurrence criteria as applicable.
14. Enter the date and time when you would like the job to start.
15. Enter the email addresses of users who should be notified about events related to the job execution.
16. Select the checkbox to send an email notification when the job starts, if necessary.
17. Complete the job request creation by selecting Create Job.
Results
If successful, your job request is created and listed on the Manage Scheduled Jobs page.
Next Steps
• Manually initiate the job for the first time. To do so, on the Manage Scheduled Jobs page, select
Actions > Submit, followed by Actions > Run It Now for your job.
• Create a new job request for importing another type of employee data.
To safeguard performance and resource utilization, actively running data import jobs are interrupted or
canceled when they fail to meet the required criteria.
Data import jobs are monitored internally against metrics such as elapsed time since the job started and
memory consumption. If a job exceeds a predefined threshold, it's interrupted or canceled as applicable. The
intention is to reduce the overhead of scheduling and running jobs with errors, and to enable job owners to
manage failed jobs more effectively.
This process of automatic interruption or cancellation applies to import jobs initiated from the UI or API, as well as
scheduled jobs in Provisioning.
If a job is canceled, the respective job owners are notified by email.
All interrupted jobs are recovered after some time with the same job request ID, and their status is set to
"Recovered".
A data import job is interrupted and recovered at most twice. Thereafter, the job runs uninterrupted until it completes.
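The interruption-and-recovery policy described above can be modeled roughly as follows. This is a conceptual sketch; the metric names, thresholds, and function are hypothetical and not SAP internals:

```python
# Conceptual model of the policy: a job exceeding a threshold is
# interrupted and recovered at most twice, then runs uninterrupted.

MAX_INTERRUPTIONS = 2

def check_job(job, elapsed_s, memory_mb, limits):
    if job["interruptions"] >= MAX_INTERRUPTIONS:
        return "running"  # past the limit: run uninterrupted to completion
    if elapsed_s > limits["elapsed_s"] or memory_mb > limits["memory_mb"]:
        job["interruptions"] += 1
        job["status"] = "Recovered"  # recovered later, same job request ID
        return "interrupted"
    return "running"

job = {"id": "req-42", "interruptions": 0, "status": "Running"}
limits = {"elapsed_s": 3600, "memory_mb": 2048}  # hypothetical thresholds

print(check_job(job, 5000, 100, limits))  # "interrupted" (1st time)
print(check_job(job, 5000, 100, limits))  # "interrupted" (2nd time)
print(check_job(job, 5000, 100, limits))  # "running" (no further interruptions)
```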
You can monitor the progress of your jobs on the Admin Center > Monitor Jobs page.
Learn how you can keep the personal data of your employees secure and private with SAP SuccessFactors.
Data protection and privacy features work best when implemented suite-wide, and not product-by-product. For this
reason, they’re documented centrally.
The Implementing and Managing Data Protection and Privacy guide provides instructions for setting up and using
data protection and privacy features throughout the SAP SuccessFactors HXM Suite. Refer to the central
guide for details.
Note
SAP SuccessFactors values data protection as essential and is fully committed to helping customers comply
with applicable regulations, including the requirements imposed by the General Data Protection Regulation
(GDPR).
By delivering features and functionality designed to strengthen data protection and security, SAP
SuccessFactors gives customers valuable support in their compliance efforts. However, it remains the
customer's responsibility to evaluate legal requirements and to implement, configure, and use the features
provided by SAP SuccessFactors in compliance with all applicable regulations.
Related Information
Identify which data purge function in the Data Retention Management tool meets your data protection and privacy
requirements.
The Data Retention Management tool supports two different data purge functions: the newer data retention time
management (DRTM) function and legacy non-DRTM function.
Remember
We encourage all customers to stop using the legacy purge function and start using data retention time
management (DRTM) instead. To get started using this and other data protection and privacy features, refer to
the Data Protection and Privacy guide.
If you already use the legacy data purge function as part of your current business process and you're sure
that it meets your company's data protection and privacy requirements, you can continue to use it, as long as
you're aware of its limitations.
Note
If you are using the legacy data purge function, you can only purge a calibration session when there is at least
one facilitator assigned to the session.
Restriction
Be aware that the legacy data purge function may not meet your data protection and privacy requirements. It
doesn't cover the entire HXM Suite and it doesn't permit you to configure retention times for different countries
or legal entities.
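As a rough sketch of what per-country retention configuration makes possible, and which the legacy purge function lacks, consider the following. The retention periods and record shape are hypothetical examples, not legal guidance or product behavior:

```python
from datetime import date, timedelta

# Hypothetical per-country retention periods (illustrative values only).
retention_days = {"DEU": 365 * 10, "USA": 365 * 7}

def is_purgeable(record, today):
    """A record may be purged once its country's retention period expires."""
    limit = timedelta(days=retention_days[record["country"]])
    return today - record["terminated_on"] > limit

today = date(2025, 1, 1)
old_us = {"country": "USA", "terminated_on": date(2015, 1, 1)}
recent_de = {"country": "DEU", "terminated_on": date(2020, 1, 1)}

print(is_purgeable(old_us, today))     # True: past the 7-year US retention
print(is_purgeable(recent_de, today))  # False: within the 10-year DE retention
```

A single global retention rule, as in the legacy function, would have to treat both records the same way regardless of local requirements.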
In the longer term, we recommend that you also consider adopting the newer solution. In the meantime, to use
legacy data purge, please refer to the guide here.
Related Information