
Informatica® Cloud Data Integration

April 2022

What's New
Informatica Cloud Data Integration What's New
April 2022
© Copyright Informatica LLC 2016, 2022

This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.

Informatica, Informatica Cloud, Informatica Intelligent Cloud Services, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks
of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://
www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.

The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
infa_documentation@informatica.com.

Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

Publication Date: 2022-04-20


Table of Contents
Preface
    Informatica Resources
        Informatica Documentation
        Informatica Intelligent Cloud Services web site
        Informatica Intelligent Cloud Services Communities
        Informatica Intelligent Cloud Services Marketplace
        Data Integration connector documentation
        Informatica Knowledge Base
        Informatica Intelligent Cloud Services Trust Center
        Informatica Global Customer Support

Chapter 1: New features and enhancements
    Data Integration Elastic
    Expression autocomplete
    Flat files
    Mapplet transformation names
    Parameter files
    Pushdown optimization
    SQL connection parameters
    Taskflows
    Intelligent structure models
    Transformations
    Data Integration REST API
    Platform REST API

Chapter 2: Changed behavior
    Configuring advanced attributes
    Taskflows
    File listener

Chapter 3: Connectors
    New connectors
    Enhanced connectors
    Changed behavior

Chapter 4: Upgrade
    Preparing for the upgrade
    Post-upgrade tasks for the April 2022 release
        TLS 1.0 and 1.1 disablement for the Secure Agent
        Amazon Redshift V2 Connector
        Amazon S3 V2 Connector
        Amazon S3 bucket policy for elastic mappings
        Connection with TLS 1.0 or 1.1
        Databricks Delta Connector
        Elastic clusters in an AWS environment
        Flat files with UTF-8-BOM encoding
        Microsoft Azure Synapse SQL Connector
        Microsoft SQL Server Connector
        SAP Connector
        SSE-KMS encryption for elastic mappings
        File Integration Service proxy

Chapter 5: Enhancements in previous releases

Preface
Read What's New to learn about new features, enhancements, and behavior changes in Informatica Intelligent
Cloud Services℠ Data Integration for the April 2022 release. You can also learn about upgrade steps that you
might need to perform.

Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.

Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.

If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at infa_documentation@informatica.com.

Informatica Intelligent Cloud Services web site


You can access the Informatica Intelligent Cloud Services web site at http://www.informatica.com/cloud.
This site contains information about Informatica Cloud integration services.

Informatica Intelligent Cloud Services Communities


Use the Informatica Intelligent Cloud Services Community to discuss and resolve technical issues. You can
also find technical tips, documentation updates, and answers to frequently asked questions.

Access the Informatica Intelligent Cloud Services Community at:

https://network.informatica.com/community/informatica-network/products/cloud-integration

Developers can learn more and share tips at the Cloud Developer community:

https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-
developers

Informatica Intelligent Cloud Services Marketplace


Visit the Informatica Marketplace to try and buy Data Integration Connectors, templates, and mapplets:

https://marketplace.informatica.com/

Data Integration connector documentation


You can access documentation for Data Integration Connectors at the Documentation Portal. To explore the
Documentation Portal, visit https://docs.informatica.com.

Informatica Knowledge Base


Use the Informatica Knowledge Base to find product resources such as how-to articles, best practices, video
tutorials, and answers to frequently asked questions.

To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or
ideas about the Knowledge Base, contact the Informatica Knowledge Base team at
KB_Feedback@informatica.com.

Informatica Intelligent Cloud Services Trust Center


The Informatica Intelligent Cloud Services Trust Center provides information about Informatica security
policies and real-time system availability.

You can access the trust center at https://www.informatica.com/trust-center.html.

Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and
incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status
of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage,
it will have the most current information. To ensure you are notified of updates and outages, you can
subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services
components. Subscribing to all components is the best way to be certain you never miss an update.

To subscribe, go to https://status.informatica.com/ and click SUBSCRIBE TO UPDATES. You can then choose to receive notifications sent as emails, SMS text messages, webhooks, RSS feeds, or any combination of the four.

Informatica Global Customer Support


You can contact a Customer Support Center by telephone or online.

For online support, click Submit Support Request in Informatica Intelligent Cloud Services. You can also use
Online Support to log a case. Online Support requires a login. You can request a login at
https://network.informatica.com/welcome.

The telephone numbers for Informatica Global Customer Support are available from the Informatica web site
at https://www.informatica.com/services-and-training/support-services/contact-us.html.

Chapter 1

New features and enhancements


The April 2022 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new
features and enhancements.

Data Integration Elastic


After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were
modified in a specific time interval. Running a reprocessing job allows you to time travel so you can create a
snapshot of the data from a given time interval, debug and discover the source of bad data found in your
target, or restore deleted data.

For example, you have an elastic mapping task that incrementally loads files every day at 12:00:00 p.m. On
Monday, April 4, you realize that bad data was entered the previous Friday, April 1, that affected the jobs over
the weekend. To fix this, you configure a reprocessing job to reload files changed after 04/01/2022 12:00:01
p.m.

For more information, see Tasks.

Expression autocomplete
When you build an expression, Data Integration suggests functions, parameters, system variables, fields, and
user-defined functions to complete the expression.

Data Integration offers autocomplete suggestions for an expression when you configure an Expression
transformation with non-hierarchical data or when you configure a user-defined function.

Flat files
This release includes the following enhancements to flat files:

• When you search for an object in a flat file connection, you can browse and select an object from
subfolders within the default directory. When you create a flat file connection, the directory that you
specify is the default connection directory.

• You can edit the auto-generated field names in lookups for flat files with no header in a mapping.
• You can edit the metadata of flat file lookup return fields in a mapping task.
• When you configure a mapping task, you can edit field metadata for parameterized flat file source and
lookup file list objects.
• You can configure a mapping task to retain design-time metadata for a parameterized flat file object.

For more information about configuring flat file connections, see Data Integration Connections. For more
information about using flat file objects in mappings, see Transformations. For more information about
mapping tasks, see Tasks.

Mapplet transformation names


When you use a mapplet in a mapping created after the April 2022 release, Data Integration prefixes the
transformation names in the mapplet with the Mapplet transformation name at run time.

For example, a mapplet contains an Expression transformation named Expression_1. You create a mapping
and use the mapplet in the Mapplet transformation Mapplet_Tx_1. When you run the mapping, the Expression
transformation is renamed to Mapplet_Tx_1_Expression_1.

Data Integration does not update transformation names in mapplets that are used in mappings created prior
to the April 2022 upgrade.

For more information about mapplets, see Components. For more information about Mapplet
transformations, see Transformations.

Parameter files
You can create a new target when you use a parameter file. If no target exists with the name specified in the
parameter file, Data Integration creates the target.

For more information about parameter files, see Mappings.

Pushdown optimization
This release includes the following enhancements to pushdown optimization:

Optimization context type

You can provide details about the optimization context for multi-insert and slowly changing dimension
type 2 merge scenarios. Based on the context that you provide, Data Integration combines multiple
targets in the mapping and constructs a single query for pushdown optimization.

Cancel the task

If the pushdown optimization mode that you select is not possible, you can choose to cancel the
mapping task.

For more information about pushdown optimization, see Tasks or the help for the appropriate connector.



SQL connection parameters
When you resolve SQL transformation connection parameters in a mapping task, you can configure advanced
attributes for some connection types.

To see if a connector supports configuring advanced attributes, see the help for the appropriate connector.
For more information about SQL transformations, see Transformations.

Taskflows
This release includes the following enhancements to taskflows:

APIs to resume a suspended taskflow


You can use the resumeWithFaultRetry resource to resume a suspended taskflow instance from a faulted
step. You can also use the resumeWithFaultSkip resource to skip a faulted step and resume a suspended
taskflow instance from the next step.

For more information about using the APIs to resume a suspended taskflow, see REST API Reference.
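The following sketch, written with Python and the requests library, illustrates how the two resume calls might be issued. The host name, resource paths, session header, and run ID shown here are assumptions made for this example; take the actual URIs and authentication details from the REST API Reference.

    import requests

    # Assumed values for illustration only. The real base URL, session header, and
    # resource paths for resumeWithFaultRetry and resumeWithFaultSkip are documented
    # in the REST API Reference.
    BASE_URL = "https://<pod>.dm-us.informaticacloud.com/active-bpel/public/rt"
    SESSION_ID = "<session ID returned by the platform login call>"
    RUN_ID = "<run ID of the suspended taskflow instance>"

    headers = {"INFA-SESSION-ID": SESSION_ID}

    # Retry the faulted step and resume the suspended taskflow instance.
    response = requests.post(f"{BASE_URL}/resumeWithFaultRetry/{RUN_ID}", headers=headers)
    response.raise_for_status()

    # Alternatively, skip the faulted step and resume from the next step.
    response = requests.post(f"{BASE_URL}/resumeWithFaultSkip/{RUN_ID}", headers=headers)
    response.raise_for_status()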

Display output fields when a Data Task step fails


When the Data Task step of a taskflow fails, you can view the output fields on the Fault tab of the My Jobs
page in Data Integration, and the All Jobs page and Running Jobs page in Monitor.

You can view the output fields on the Fault tab when one of the following conditions is met:

• The On Error field is set to Ignore or Custom error handling.
• The Fail taskflow on completion option is set to If this task fails.

Using the output fields of the failed data task, you can make decisions and update the taskflow design. When
you use a Decision step in a taskflow and select the entire data task as the decision field, the Decision step
takes the Is set path by default.

For more details about faulted data tasks, see Taskflows and Monitor.

Support for data transfer task and dynamic mapping task in a Data Task step
You can add a data transfer task and dynamic mapping task to a Data Task step of a taskflow.

You can use a data transfer task in a taskflow to transfer data from a source to a target. You can use a
dynamic mapping task in a taskflow to run specific groups and jobs configured in the task.

For more information about using a data transfer task and dynamic mapping task in a Data Task step, see
Taskflows.



Intelligent structure models
This release includes the following enhancements to intelligent structure models:

Include schema elements in models based on Avro, Parquet, and ORC files
When you create an intelligent structure model that is based on an Avro, Parquet, or ORC file, Intelligent
Structure Discovery includes the schema elements in the model, thus making elements that don't contain
data part of the model.

Parse JSON-encoded Avro messages


You can use models that are based on an Avro schema to parse JSON-encoded Avro messages.

For more information about intelligent structure models, see Components.

Transformations
This release includes the following enhancements to transformations.

Hierarchy Builder transformation


The Hierarchy Builder transformation can write data to a flat file. Use the file output type when the
transformation processes a large amount of data and the output field size exceeds 100 MB.

Hierarchy Processor transformation


The Hierarchy Processor transformation includes a flattened option for output data. Use the flattened output
format to convert hierarchical input into denormalized output.

Machine Learning transformation


This release includes the following enhancements to the Machine Learning transformation.

Amazon SageMaker

The Machine Learning transformation can run a machine learning model that is deployed on Amazon
SageMaker.

Sending bulk requests

You can configure the Machine Learning transformation to combine multiple API requests into one bulk
request before sending the data to the machine learning model. Bulk requests can improve performance
by reducing processing overhead and the amount of time that it takes to communicate with the model.

Serverless runtime environments

You can run the Machine Learning transformation in a serverless runtime environment.

For more information about the transformations, see Transformations.



Data Integration REST API
Use the code task API to submit Spark code written in Scala to an elastic cluster. You can view job results in
Monitor.

For more information about the code task API, see the REST API Reference.

Platform REST API


You can assign a Secure Agent to an existing Secure Agent group through the Informatica Intelligent Cloud
Services REST API using the runtimeEnvironment resource.

For more information, see the REST API Reference.
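As a hedged illustration, the following Python sketch lists Secure Agent groups with the runtimeEnvironment resource and then updates one group's agent list. The GET call reflects the v2 resource named above; the group name, agent ID, and the shape of the update request body are assumptions for this example, so check the REST API Reference for the exact payload.

    import requests

    # serverUrl and icSessionId are returned by the platform v2 login call.
    SERVER_URL = "<serverUrl from the login response>"
    SESSION_ID = "<icSessionId from the login response>"
    headers = {"icSessionId": SESSION_ID, "Accept": "application/json"}

    # List the Secure Agent groups (runtime environments) in the organization.
    groups = requests.get(f"{SERVER_URL}/api/v2/runtimeEnvironment", headers=headers).json()
    group = next(g for g in groups if g["name"] == "My Agent Group")  # hypothetical group name

    # Assumed update call: assign an additional Secure Agent to the group by ID.
    group["agents"] = group.get("agents", []) + [{"id": "<agent ID>"}]
    response = requests.post(
        f"{SERVER_URL}/api/v2/runtimeEnvironment/{group['id']}", json=group, headers=headers
    )
    response.raise_for_status()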



Chapter 2

Changed behavior
The April 2022 release of Informatica Intelligent Cloud Services Data Integration includes the following
changed behaviors.

Configuring advanced attributes


When you resolve connection and object parameters for Source, Target, or Lookup transformations in a
mapping task, you configure advanced attributes for each object. If the transformation contains a connection
parameter but no object parameter, the configured object is displayed in the task.

Previously, you configured advanced attributes for each connection parameter.

For more information about the advanced attributes that you can configure, see the help for the appropriate
connector.

Taskflows
The Publish button is added to the taskflow designer page.

Previously, the Publish option was available under the Actions menu on the taskflow designer page.

For more information about publishing taskflows, see Taskflows.

File listener
When you use a file listener as a source in a file ingestion task, if a notification about a file event doesn't
reach the file ingestion task, the file listener queues the event and includes it in the notification it sends to
the next file ingestion job. A file ingestion task thus receives a notification about each file at least once. This
ensures that the file ingestion task transfers all files to the target.

Previously, if a notification about a file event didn't reach the file ingestion task, the file listener didn't
continue to notify the file ingestion task about the event, and the task didn't transfer the files to the target.

For more information about file listener notifications, see Components.

Chapter 3

Connectors
The April 2022 release includes the following new and enhanced connectors and changes in connector behavior.

New connectors
This release includes the following new connectors.

Adabas Connector
You can use Adabas Connector to connect to a PowerExchange Adabas environment to retrieve data in bulk
from an Adabas source database on a z/OS system. The PowerExchange Listener retrieves metadata from
the data map repository and data from the Adabas source. The data is returned to the PowerExchange Bulk
Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported
target for a batch load.

Adabas CDC Connector


You can use Adabas CDC Connector to connect to a PowerExchange CDC environment to retrieve change
records that PowerExchange captures from Adabas PLOG data sets for an Adabas source database on a
z/OS system. Adabas CDC Connector extracts change records from PowerExchange Logger log files and
sends the change records to Data Integration. Data Integration can then transmit the change records to a
supported target.

Business 360 Events Connector


You can use Business 360 Events Connector to publish events from Business 360 applications to targets that
Data Integration supports, such as Kafka and Amazon S3. You can publish events related to actions on
business entity records, such as create, update, and delete.

Db2 for i Connector


You can use Db2 for i Connector to connect to a PowerExchange Db2 environment to move bulk data from or
to a Db2 for i database. For relational sources and targets such as Db2 for i tables, you do not need to create
a data map. The connector can import the metadata that PowerExchange reads from the Db2 catalog to
create a source or target.

Db2 for z/OS Connector


You can use Db2 for z/OS Connector to connect to a PowerExchange Db2 environment to move bulk data
from or to a Db2 for z/OS database. For relational sources and targets such as Db2 for z/OS tables, you do
not need to create a data map. The connector can import the metadata that PowerExchange reads from the
Db2 catalog to create a source or target.

IMS Connector
You can use IMS Connector to connect to a PowerExchange IMS environment to retrieve data in bulk from an
IMS source database on a z/OS system. The PowerExchange Listener retrieves metadata from the data map
repository and data from the IMS source. The data is returned to the PowerExchange Bulk Reader, which
sends the data to Data Integration. Data Integration can then send the data to a supported target for a batch
load.

IMS CDC Connector


You can use IMS CDC Connector to connect to a PowerExchange CDC environment to retrieve change
records that PowerExchange captures in near real time for an IMS source database on a z/OS system. IMS
CDC Connector extracts change records from PowerExchange Logger log files and sends the change records
to Data Integration. Data Integration can then transmit the change records to a supported target.

SAP IQ Connector
You can use SAP IQ Connector to connect to an SAP IQ database from Data Integration. Use SAP IQ Connector
to write data to an SAP IQ database. You can use SAP IQ objects as targets in mappings and mapping tasks.
When you use these objects in mappings, you must configure properties specific to SAP IQ.

You can only insert records when you configure a Target transformation in an SAP IQ mapping.

Sequential File Connector


You can use Sequential File Connector to connect to a PowerExchange sequential file environment to retrieve
data in bulk from sequential source data sets on a z/OS system. The PowerExchange Listener retrieves
metadata from the data map repository and data from the sequential data sets. The data is returned to the
PowerExchange Bulk Reader, which sends the data to Data Integration. Data Integration can then send the
data to a supported target for a batch load.

VSAM Connector
You can use VSAM Connector to connect to a PowerExchange VSAM environment to retrieve data in bulk
from VSAM source data sets on a z/OS system. The PowerExchange Listener retrieves metadata from the
data map repository and data from the VSAM data sets. The data is returned to the PowerExchange Bulk
Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported
target for a batch load.

Enhanced connectors
This release includes enhancements to the following connectors.

Amazon DynamoDB V2 Connector


This release includes the following enhancements for Amazon DynamoDB V2 Connector:

• You can use a serverless runtime environment to run Amazon DynamoDB V2 elastic mappings.
• You can use temporary security credentials, created by AssumeRole, to access AWS resources.
• You can use the Hierarchy Processor transformation in elastic mappings to convert hierarchical input into
relational output and relational input into hierarchical output.

Amazon Redshift V2 Connector
This release includes the following enhancements for Amazon Redshift V2 Connector:

• You can configure client-side encryption for Amazon Redshift V2 sources and targets when you use a
serverless runtime environment.
• When you configure a full or source pushdown optimization for an Expression transformation, you can use
variables to define calculations and store data temporarily.
• You can run elastic mappings on a self-service cluster.

Amazon S3 V2 Connector
This release includes the following enhancements for Amazon S3 V2 Connector:

• You can configure an Amazon S3-compatible storage, such as Scality RING and MinIO, to access and
manage the data that is stored over an S3 compliant interface.
• You can configure client-side encryption for Amazon S3 V2 sources and targets when you use a
serverless runtime environment.
• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that
were modified in a specific time interval.
• You can write to a partition directory incrementally in an elastic mapping and append data to the partition
directory.
• You can run elastic mappings on a self-service cluster.

Databricks Delta Connector


This release includes the following enhancements for Databricks Delta Connector:

• You can use a Hosted Agent to run Databricks Delta mappings.


• You can run elastic mappings on a self-service cluster.
• Pushdown enhancements in mappings using Databricks Delta connection
- You can configure pushdown optimization for mappings in the following scenarios:

- Mappings that read from a Microsoft Azure Data Lake Storage Gen2 source and write to a Databricks
Delta target.
- Mappings that read from an Amazon S3 V2 source and write to a Databricks Delta target.

- When you configure full pushdown optimization for a task that reads from and writes to Databricks
Delta, you can determine how Data Integration handles the job when pushdown optimization does not
work. You can set the task to fail or run without pushdown optimization.
- When you configure a full pushdown optimization for an Aggregator or Expression transformation, you
can use variables to define calculations and store data temporarily.

Google BigQuery V2 Connector


This release includes the following enhancements for Google BigQuery V2 Connector:

• Pushdown enhancements in mappings using Google BigQuery V2 connection


- You can configure source pushdown optimization in mappings that read from Google BigQuery sources
and write to Google BigQuery targets using the Google BigQuery V2 connection.
- When you configure a full or source pushdown optimization for a mapping and a transformation is not
applicable, the task partially pushes down the mapping logic to the point where the transformation is
supported for pushdown optimization.

- You can read data from Google BigQuery standard and materialized views as a source and lookup
object.
- When you configure a full or source pushdown optimization for an Expression transformation, you can
use variables to define calculations and store data temporarily.
- When you configure a mapping enabled for full pushdown optimization to read from a Google BigQuery
source and write to two Google BigQuery Target transformations that represent the same Google
BigQuery table, you can enable the SCD Type 2 merge optimization mode in the task properties. In SCD
Type 2 merge optimization mode, when you use two target transformations in a mapping, one to insert
data and the other to update data to the same Google BigQuery target table, Data Integration combines
the queries for both the Target transformations and issues a Merge query to optimize the task.
- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When
you use clean stop, Data Integration terminates all the issued statements and processes spawned by the
job.
- When you enable full pushdown optimization for a task that reads from and writes to Google BigQuery,
you can determine how Data Integration handles the job when pushdown optimization does not work.
You can set the task to fail or run without pushdown optimization.
• When you configure a mapping to read data from a Google BigQuery source in staging mode, you can
stage the data in a local staging file in Parquet format.
• When you run a mapping to a Google BigQuery target in bulk mode, Data Integration creates a CSV file in
the temporary folder in the Secure Agent directory to stage the data before writing the data to the Google
BigQuery target. The performance of the task is optimized when the connector uses the CSV file for
staging data.

Google Cloud Storage V2 Connector


This release includes the following enhancements for Google Cloud Storage V2 Connector:

• You can run multiple elastic mappings concurrently.


• When you run elastic mappings, you can choose to import metadata for the selected object without
parsing other objects, folders, or sub-folders available in the bucket. Directly importing metadata for the
selected object can improve performance by reducing the overhead and time taken to parse each object
available in the bucket.
• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that
were modified in a specific time interval.
• When you run a mapping, you can read data from or write data to a Google Cloud Storage fixed-width flat
file.

Hive Connector
This release includes the following enhancements for Hive Connector:

• You can configure a Target transformation in a mapping or an elastic mapping to create a target at
runtime.
• When you configure a Target transformation in a mapping or an elastic mapping to create a Hive target at
runtime, you can include the partition fields of the String data type and set the order in which they must
appear in the target.
• When you configure an elastic mapping to read from or write to Hive, you can use Managed Identity
Authentication to stage Hive data on Azure.
• You can enable dynamic schema handling in a Hive task to refresh the schema every time the task runs.
You can choose how Data Integration handles changes in the Hive data object schemas.
• You can configure a dynamic mapping task to create and batch multiple jobs based on the same mapping.

• You can configure an elastic mapping to read or write data that contains Array and Struct complex
data types. To write Array and Struct data types to Hive, you must configure the elastic mapping to create
a new Hive target at runtime. You can also use a Hierarchy Processor transformation in an elastic
mapping to read relational or hierarchical input and convert it to relational or hierarchical output.
Important: This functionality is available for preview. Preview functionality is supported for evaluation
purposes but is unwarranted and is not production-ready. Informatica recommends that you use it in
non-production environments only. Informatica intends to include the preview functionality in an upcoming
release for production use, but might choose not to in accordance with changing market or technical
circumstances. For more information, contact Informatica Global Customer Support.

JDBC V2 Connector
You can run elastic mappings on a self-service cluster.

Microsoft Azure Data Lake Storage Gen2 Connector


This release includes the following enhancements for Microsoft Azure Data Lake Storage Gen2 Connector:

• You can use Managed Identity authentication to connect to Microsoft Azure Data Lake Storage Gen2.
When you use Managed Identity authentication, you do not need to provide credentials, secrets, or Azure
Active Directory tokens.
• You can use shared key authentication to connect to Microsoft Azure Data Lake Storage Gen2 using the
account name and account key in elastic mappings.
• You can use Microsoft Azure Data Lake Storage Gen2 Connector to connect to Microsoft Azure Data Lake
Storage Gen2 on a virtual network with a private endpoint.
• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that
were modified in a specific time interval.

Microsoft Azure Synapse SQL Connector


This release includes the following enhancements for Microsoft Azure Synapse SQL Connector:

• Pushdown enhancements in mappings using Microsoft Azure Synapse SQL connection


- When you configure a mapping enabled for full pushdown optimization to read from a Microsoft Azure
Data Lake Storage Gen2 source and write to a Microsoft Azure Synapse SQL target, you can use the
shared key authentication to connect to Microsoft Azure Data Lake Storage Gen2 using the account
name and account key.
- When you configure full pushdown optimization for a task, you can determine how Data Integration
handles the job when pushdown optimization does not work. You can set the task to fail or run without
pushdown optimization.
- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When
you use clean stop, Data Integration terminates all the issued statements and processes spawned by the
job.
- When you configure a full or source pushdown optimization for an Expression transformation, you can
use variables to define calculations and store data temporarily.
• You can use Managed Identity authentication to connect to Microsoft Azure Data Lake Storage Gen2
when used to stage files for Microsoft Azure Synapse SQL. When you use Managed Identity
authentication, you do not need to provide credentials, secrets, or Azure Active Directory tokens.
• You can use Microsoft Azure Synapse SQL Connector to connect to Microsoft Azure Synapse SQL on a
virtual network with a private endpoint.
• You can map the IDENTITY column for a target object in mappings and elastic mappings.

Microsoft SQL Server Connector
This release includes the following enhancements for Microsoft SQL Server Connector:

• When you configure a full or source pushdown optimization for an Expression transformation, you can
calculate a unique checksum value for a row of data each time you read data from a source object.
• You can push a few additional functions such as data type conversion and string operations to the
Microsoft SQL Server database by using full pushdown optimization.
For more information about the supported functions, see the help for Microsoft SQL Server Connector.

MongoDB V2 Connector
This release includes the following enhancements for MongoDB V2 Connector:

• You can configure both Atlas and self-managed X509 certificate-based SSL authentication in a MongoDB
V2 connection to read from and write data to MongoDB.
• You can parameterize the MongoDB V2 source object, target object, and the connection in elastic
mappings.
• You can use the Hierarchy Processor transformation in elastic mappings to convert hierarchical input into
relational output and relational input into hierarchical output.
• You can use a serverless runtime environment to run MongoDB V2 elastic mappings.
• You can read and write hierarchical data types such as Array, Object, and ObjectID. To write the
hierarchical data types to MongoDB V2, you must configure the mapping to create a new MongoDB V2
target at runtime.

ODBC Connector
This release includes the following enhancements for ODBC Connector:

• You can perform an upsert operation to update or insert data to a Teradata target when you configure a
full pushdown optimization with the Teradata ODBC connection.
• When you configure a full pushdown optimization with the Teradata ODBC connection, you can push the
TO_CHAR(), TO_DATE(), and a few additional functions to the Teradata database.
For more information about the supported functions that you can use with pushdown optimization, see
the help for ODBC Connector.

PostgreSQL Connector
You can choose how Data Integration handles changes that you make to the data object schemas. You can
also refresh the schema every time you run a PostgreSQL task.

REST V2 Connector
You can use the PATCH HTTP method in source, target, and midstream transformations. Use this method to
update existing resources.

REST V3 Connector
You can use REST V3 Connector in a serverless runtime environment.

Salesforce Connector
You can use Salesforce Bulk API 2.0 to perform bulk read and write operations.

Salesforce Analytics Connector


When a connection to Salesforce Analytics fails, the Secure Agent makes a maximum of three attempts at
300-second intervals to re-establish the connection.

SAP HANA Connector
You can use a serverless runtime environment to run SAP HANA mappings.

Snowflake Data Cloud Connector


This release includes the following enhancements for Snowflake Data Cloud Connector:

• Pushdown enhancements in mappings using the Snowflake Data Cloud connection:


- When you configure a mapping enabled for full pushdown optimization to read from a Snowflake source
and write to multiple Snowflake targets, you can enable the following optimization modes in the task
properties based on the target operations you specify:
- Multi-insert. Enable this mode when you insert data to all the Snowflake targets defined in the
mapping. Data Integration combines the queries generated for each of the targets and issues a single
query.
- SCD Type 2 merge. Enable this mode when you use two target transformations in a mapping, one to
insert data and the other to update data to the same Snowflake target table. Data Integration combines
the queries for both the targets and issues a Merge query.
None is selected by default. When you enable the Multi-insert or the SCD Type 2 merge optimization
context, the task is optimized.
- When you configure full pushdown optimization for a task that reads from and writes to Snowflake, you
can determine how Data Integration handles the job when pushdown optimization does not work. You
can set the task to fail or run without pushdown optimization.
- When you configure pushdown optimization for a mapping that contains an Expression transformation,
you can use variables in the expression to define calculations and store data temporarily.
- You can use a reusable sequence in an SQL transformation in a mapping enabled for full pushdown
optimization. When you run multiple jobs with the reusable sequence, each session receives unique
values in the sequence.
- When you configure an SQL transformation in a mapping enabled for pushdown optimization, you can
include functions in an entered query and run queries with the Snowflake target endpoint. For the list of
functions that you can use in an entered query, see the help for Snowflake Data Cloud Connector.
- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When
you use clean stop, Data Integration terminates all the issued statements and processes spawned by the
job.
- You can use full pushdown optimization to push the SESSSTARTTIME variable to the Snowflake
database.
• When you run a mapping to write data to Snowflake, Data Integration, by default, creates a flat file in a
temporary folder on the Secure Agent machine to stage the data before writing to Snowflake. The
performance of the task is optimized when the connector uses the flat file for staging data.
• You can run elastic mappings on a self-service cluster.

Changed behavior
This release includes changes in behavior for the following connectors.

Data type changes in elastic mappings


When you select an existing target to write data in elastic mappings, note the following data type changes:

• When you run an elastic mapping to write to Avro, JSON, ORC, or Parquet files that contain boolean data,
the data is written as boolean in the target.
Previously, boolean data was written as integer in the target.
• When you run an elastic mapping to write to Avro, ORC, or Parquet files that contain float data, the data is
written as float in the target.
Previously, float data was written as double in the target.
• When you run an elastic mapping to write to Avro, ORC, or Parquet files that contain date data, the data is
written as date in the target.
Previously, date data was written as timestamp in the target.

These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.

Delimited format type


Effective in this release, the Delimited format type in the formatting options is renamed to Flat.

This change applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.

Flat files in elastic mappings


When you use an elastic mapping to read data from a flat file, you can change the data types before you write
to the target.

Previously, the default data type for all fields was set to string. If you modified the data types, the change
was not reflected in the target.

Formatting options for a flat file


Effective in this release, the flat file formatting options include the following changes:

• The escape characters in the source data are retained in the target data whether you enable or disable the
Is Escape Character Data Retained option.
Previously, the escape characters were retained in the target data only if you enabled the Is Escape
Character Data Retained option.
• When you set the Qualifier Mode to Minimal and special characters or Unicode characters are enclosed
within a qualifier in the source data, the qualifier is not retained in the target.
Previously, the qualifier was retained in the target.
• If there is an empty row in the source data, the empty row is written as it is in the target.
Previously, a qualifier was added to the first column of the empty row in the target.
• If the columns have a qualifier in the source data, the qualifier is retained only for the non-empty columns
in the target.
Previously, the qualifier was retained for both empty and non-empty columns in the target.
• When you use an escape character to escape a character in the source data that is also specified as a
qualifier, the escaped character is retained in the target data.
Previously, an extra qualifier was added to the escaped character in the target data.
These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.

Multi-character delimiter in flat files
Effective in this release, when you read data from a flat file and specify a multi-character delimiter, all the
characters together are considered as the value of the delimiter.

Previously, if you specified a multi-character delimiter, only the first character was considered as the value of
the delimiter.

For example, if you specify ^|^ as the delimiter, the three characters together are considered as the value of
the delimiter.

Previously, only the first character ^ was considered as the value of the delimiter.

This change applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.

SQL queries in task logs for pushdown optimization


When you run a mapping enabled for pushdown optimization, the issued SQL queries in the task logs are
formatted and user friendly.

Previously, the issued SQL queries were unformatted and appeared in a single line.

This change is not applicable for pushdown optimization through the ODBC connector.

Amazon Redshift V2 Connector


Effective in this release, Amazon Redshift V2 Connector includes the following changes:

• Even when you do not map the NOT NULL columns that have default values in an Amazon Redshift target
table, the insert, update, or upsert operation is successful and the default values for NOT NULL columns
are used.
Previously, if you did not map the NOT NULL columns, the operation failed.
To retain the previous behavior, set the JVM option -DRetainUnmappedNotNullColumnValidation to true in
the Secure Agent properties.
• When you read data that contains columns of the decimal data type, the scale that you set for the decimal
data type columns in the Amazon Redshift UI is honored.
Previously, the scale that you set for decimal data type columns in the Amazon Redshift UI was not
honored and a value greater than the defined scale was also read.
To retain the previous behavior, set the JVM option -honorDecimalScaleRedshift to false in the Secure Agent
properties.
• When you configure an Aggregator transformation in a mapping enabled for pushdown optimization and
you do not include the incoming field from an aggregate function or a group by field in a field mapping,
Data Integration uses the ANY_VALUE() function to return any value.
Previously, when you defined how to group data for aggregate expressions in an Aggregator
transformation, you had to include each of the incoming fields from an aggregate function or a group by
field in the field mapping.
• If the mapping enabled for pushdown optimization contains Union and Aggregator transformations,
include the incoming field from the aggregate function or group by field in the field mapping, or remove
the field from the aggregate function or group by field altogether. Otherwise, the mapping runs without
pushdown optimization.
Previously, the task partially pushed down the mapping logic to the point where the transformation was
supported and ran the rest of the mapping without pushdown optimization.

Databricks Delta Connector


Effective in this release, when you configure mappings, the processing logic is pushed by default to the
Databricks Delta SQL endpoint.

Previously, you had to configure the Secure Agent properties to use the Databricks Delta SQL endpoint.

Google BigQuery V2 Connector


Effective in this release, Google BigQuery V2 Connector includes the following changes:

• When you write data to a Google BigQuery target in bulk mode and use CSV mode as the staging file
format, you can use a precision of up to 15 digits for a column of the Float or Double data type.
Previously, you could set a precision of up to 17 digits for a column of the Float or Double data type.
• When you migrate a mapping or an elastic mapping and write data to a Google BigQuery target created at
runtime and you override the target table and dataset name, the Secure Agent creates the target with the
overridden target table name irrespective of the Create Disposition value.
Previously, the mapping or elastic mapping failed to create the target with the overridden target table
name if the Google BigQuery target did not exist and the Create Disposition property was set to Create
never.

Hive Connector
Effective in this release, when you run a task, Data Integration logs messages in the following directory:
<Secure Agent installation directory>/apps/Data_Integration_Server/logs/tomcat/<version>.log

Previously, Data Integration logged messages in the following directory: <Secure Agent installation
directory>/apps/Data_Integration_Server/<version>/tomcat.out

Snowflake Data Cloud Connector


Effective in this release, Snowflake Cloud Data Warehouse V2 Connector is renamed to Snowflake Data Cloud
Connector. You must use the Snowflake Data Cloud connection type in mappings to read from or write to
Snowflake.

Chapter 4

Upgrade
The following topics provide information about tasks that you might need to perform before or after an
upgrade of Informatica Intelligent Cloud Services Data Integration. Post-upgrade tasks for previous monthly
releases are also included in case you haven't performed these tasks after the previous upgrade.

Preparing for the upgrade


The Secure Agent upgrades the first time that you access Informatica Intelligent Cloud Services after the
upgrade.

Files that you added to the following directory are preserved after the upgrade:

<Secure Agent installation directory>/apps/Data_Integration_Server/ext/deploy_to_main/bin/rdtm-extra

Perform the following steps to ensure that the Secure Agent is ready for the upgrade:

1. Ensure that each Secure Agent machine has sufficient disk space available for upgrade.
The machine must have 5 GB of free space or the amount of disk space calculated using the following
formula, whichever is greater (a worked example follows these steps):
Minimum required free space = 3 * (size of current Secure Agent installation directory - space used for logs directory)
2. Close all applications and open files to avoid file lock issues, for example:
• Windows Explorer
• Notepad
• Windows Command Processor (cmd.exe)

Post-upgrade tasks for the April 2022 release


Perform the following tasks after your organization is upgraded to the April 2022 release.

TLS 1.0 and 1.1 disablement for the Secure Agent
In the April 2022 release of Informatica Intelligent Cloud Services, support for Transport Layer Security (TLS)
1.0 and 1.1 is disabled on the Secure Agent. The Secure Agent uses TLS 1.2.

Data that passes between Informatica Intelligent Cloud Services and the Secure Agent is always encrypted
using TLS 1.2. You do not need to reconfigure the agent or take any action to enable the agent to
communicate with Informatica Intelligent Cloud Services.

Data that passes between the Secure Agent and connector endpoints is also encrypted using TLS 1.2. If you
use a connector or access a connection endpoint that uses TLS 1.0 or 1.1, Informatica recommends that you
upgrade to a version that uses TLS 1.2. If you cannot do this, you can re-enable TLS 1.0 and 1.1 on the Secure
Agent by following the instructions in this KB article:
HOW TO: Enable TLS 1.0 and 1.1 on the Secure Agent in Cloud Data Integration.

Amazon Redshift V2 Connector


Effective in this release, you must map all the fields from the SQL query advanced source property to the
target for the mappings enabled for pushdown optimization to run successfully.

After you upgrade, to successfully run existing mappings enabled for pushdown optimization that map only
some of the fields from the SQL query to the target, you must modify the mappings and map all the fields
from the SQL query to the target.

Amazon S3 V2 Connector
After the upgrade, existing elastic mappings configured to read from a JSON partition column fail if you
chose to override the folder path in the advanced source properties.

To run the existing mappings successfully, click the Refresh button on the Fields tab or select the source
again to refresh the metadata.

Amazon S3 bucket policy for elastic mappings


Effective in this release, when you run an elastic mapping, in addition to the existing minimum required
permissions that you configure for the Amazon S3 buckets, you must configure the additional
ListBucketMultipartUploads permission to successfully read data from and write data to AWS resources.

After you upgrade, to run the existing elastic mappings successfully, you must modify the IAM permission for
the user to include the Amazon S3 bucket permission ListBucketMultipartUploads.
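The following sketch shows one way to grant the additional permission with the AWS SDK for Python (boto3). The user name, policy name, and bucket ARN are placeholders, and the statement shown covers only the incremental permission; the exact policy that your organization needs depends on your existing IAM setup, so treat this as an illustration rather than the required configuration.

    import json
    import boto3

    # Placeholder names; replace with your IAM user, a policy name, and the bucket ARN.
    USER_NAME = "<IAM user used for the elastic mapping>"
    POLICY_NAME = "iics-s3-list-multipart-uploads"
    BUCKET_ARN = "arn:aws:s3:::<your S3 bucket>"

    # Inline policy statement that adds the ListBucketMultipartUploads permission on the bucket.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucketMultipartUploads"],
                "Resource": [BUCKET_ARN],
            }
        ],
    }

    iam = boto3.client("iam")
    iam.put_user_policy(
        UserName=USER_NAME,
        PolicyName=POLICY_NAME,
        PolicyDocument=json.dumps(policy_document),
    )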

This upgrade impact is applicable for Amazon S3 V2 Connector and Amazon Redshift V2 Connector.

Connection with TLS 1.0 or 1.1


After the upgrade, existing mappings fail in the following connectors:

• Microsoft SQL Server Connector


• MySQL Connector
• Oracle Connector
• PostgreSQL Connector

When you run the existing mappings, the mappings fail in the following scenarios:

• The connection uses the TLS 1.0 or 1.1 protocol to connect to the source or target endpoint.
To run mappings successfully, edit the connection properties, and from the Crypto Protocol Version
option, select TLSv1.2 instead of TLSv1 or TLSv1.1.
• The connection uses the TLS 1.2 protocol, but the source or target endpoint that the connector accesses
does not support TLS 1.2 protocol.
To run mappings successfully, Informatica recommends upgrading to an endpoint version that supports
TLS 1.2.

Databricks Delta Connector


Effective in this release, when you configure mappings, the processing logic is pushed by default to the
Databricks SQL endpoint.

After you upgrade, if you want to use existing mappings running on Databricks analytics or Databricks data
engineering cluster, configure the following properties based on the type of operation you want to perform:

• Import metadata: Set the JRE_OPTS property for the Data Integration Service of type Tomcat JRE to the
following value: -DUseDatabricksSql=false.
• Run mappings¹: Set the JVMOption property for the Data Integration Service of type DTM to the following
value: -DUseDatabricksSql=false.
• Run mappings enabled with pushdown optimization¹: Set the JVMOption property for the Data Integration
Service of type DTM to the following value: -DUseDatabricksSqlForPdo=false.

¹Applies only to mappings.

Elastic clusters in an AWS environment


Effective in this release, Data Integration Elastic uses kubeadm as the cluster operator for elastic clusters in
an AWS environment. With this change, the Secure Agent proxy server must have access to certain Amazon
S3 buckets and the ELB security group requires additional inbound traffic rules.

Perform the following tasks:


Configure the Secure Agent proxy server

If your organization uses an outgoing proxy server, allow traffic to the following URLs:

• .s3.amazonaws.com
• <S3 staging bucket>.s3.<bucket region>.amazonaws.com

When you use an Amazon S3 or Amazon Redshift object as a mapping source or target, also allow traffic
to each source and target bucket that the agent will access.

If your organization does not use an outgoing proxy server, contact Informatica Global Customer Support
to disable the proxy settings used for S3 access.



Configure the ELB security group

If you create user-defined security groups, add inbound rules for the ELB security group to allow the
following traffic:

• Incoming traffic from the Secure Agent that creates the cluster.
• Incoming traffic from master nodes in the same cluster.
• Incoming traffic from worker nodes in the same cluster.

For more information, see Data Integration Elastic Administration.

Flat files with UTF-8-BOM encoding


After the upgrade, an elastic mapping configured to read a flat file with UTF-8-BOM encoding does not map
the first column in the source to the target.

To map the first column, you must synchronize all the fields from the source object and rerun the elastic
mapping.

This upgrade impact is applicable for Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and
Microsoft Azure Data Lake Storage Gen2 Connector.

Microsoft Azure Synapse SQL Connector


After the upgrade, an existing elastic mapping configured to read data from and write data to Microsoft Azure
Synapse SQL might fail if the source fields were dropped after the mapping was created.

To run the existing elastic mapping successfully, you must synchronize all fields with the source object and
rerun the elastic mapping.

Microsoft SQL Server Connector


After the upgrade, existing mappings enabled for pushdown optimization in which you push the MD5()
function to the Microsoft SQL Server database through an Expression transformation return a different value
for the nchar data type as compared to a mapping that runs without pushdown optimization.

Previously, the MD5() function configured in an Expression transformation ran without pushdown
optimization even when you enabled the mappings for pushdown optimization.

To retain the previous behavior, run the existing mappings without pushdown optimization.

SAP Connector
After the upgrade, if you use the HTTPS connection in SAP mappings, new and existing SAP Table Reader
and SAP BW Reader mappings might fail in the following scenarios:

• The ABAP Kernel version is 753 or earlier and the CommonCryptoLib version in the SAP system is earlier than 8.4.31.
To run mappings successfully, you must upgrade CommonCryptoLib in the SAP system to version 8.4.31 or
later. For more information about upgrading the SAP system, see the SAP documentation.
• The ABAP Kernel version is 753 or earlier and the CommonCryptoLib version in the SAP system is 8.4.31 or later.
To run mappings successfully, you must enable the TLS 1.2 protocol in the SAP system.
For more information about enabling the TLS 1.2 protocol in the SAP system, see SAP Note 510007.

SSE-KMS encryption for elastic mappings
Effective in this release, an existing elastic mapping enabled for SSE-KMS encryption fails when the
connector uses the default IAM role and uses the credentials from the ~/.aws/credentials location.

After you upgrade, to run the existing mappings successfully, you must perform one of the following steps:

• To use the credentials from the ~/.aws/credentials location, you must create the master instance
profile and the worker instance profile in AWS, attach the KMS policy to the worker profile, and specify the
profiles in the cluster configuration.
• Use the Secure Agent on Amazon EC2, create the master instance profile and the worker instance profile
in AWS, and attach the KMS policy to the worker profile.
• Use the Secure Agent on Amazon EC2, use the default IAM role, and attach the KMS policy to the Secure
Agent role.

This upgrade impact is applicable for Amazon S3 V2 Connector and Amazon Redshift V2 Connector.

File Integration Service proxy


If you use the file integration proxy server, update the server with the latest version of the fis-proxy-
server_<version>.zip file.

For more information, see What's New in the Administrator help.



Chapter 5

Enhancements in previous
releases
You can find information on enhancements and changed behavior in previous Data Integration releases on
Informatica Network.

What's New guides for releases occurring within the last year are included in the following community article:
https://network.informatica.com/docs/DOC-17912
