Informatica Cloud Data Integration What's New
April 2022
© Copyright Informatica LLC 2016, 2022
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
Informatica, Informatica Cloud, Informatica Intelligent Cloud Services, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks
of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at
https://www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
infa_documentation@informatica.com.
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Chapter 3: Connectors
    New connectors
    Enhanced connectors
    Changed behavior
Chapter 4: Upgrade
    Preparing for the upgrade
    Post-upgrade tasks for the April 2022 release
        TLS 1.0 and 1.1 disablement for the Secure Agent
        Amazon Redshift V2 Connector
        Amazon S3 V2 Connector
        Amazon S3 bucket policy for elastic mappings
        Connection with TLS 1.0 or 1.1
        Databricks Delta Connector
        Elastic clusters in an AWS environment
        Flat files with UTF-8-BOM encoding
        Microsoft Azure Synapse SQL Connector
        Microsoft SQL Server Connector
        SAP Connector
        SSE-KMS encryption for elastic mappings
        File Integration Service proxy
Index
Preface
Read What's New to learn about new features, enhancements, and behavior changes in Informatica Intelligent
Cloud Services℠ Data Integration for the April 2022 release. You can also learn about upgrade steps that you
might need to perform.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at infa_documentation@informatica.com.
Informatica Intelligent Cloud Services Communities
Collaborate with other Informatica users and subject matter experts in the Informatica Intelligent Cloud Services Community:
https://network.informatica.com/community/informatica-network/products/cloud-integration
Developers can learn more and share tips at the Cloud Developer community:
https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-developers
Informatica Intelligent Cloud Services Marketplace
Visit the Informatica Marketplace to try and buy Data Integration Connectors, templates, and mapplets:
https://marketplace.informatica.com/
Informatica Knowledge Base
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or ideas about the Knowledge Base, contact the Informatica Knowledge Base team at KB_Feedback@informatica.com.
Informatica Intelligent Cloud Services Trust Center
Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage, it has the most current information. To ensure you are notified of updates and outages, you can subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services components. Subscribing to all components is the best way to be certain you never miss an update.
Informatica Global Customer Support
For online support, click Submit Support Request in Informatica Intelligent Cloud Services. You can also use Online Support to log a case. Online Support requires a login. You can request a login at https://network.informatica.com/welcome.
The telephone numbers for Informatica Global Customer Support are available from the Informatica web site at https://www.informatica.com/services-and-training/support-services/contact-us.html.
Chapter 1
New features and enhancements
For example, you have an elastic mapping task that incrementally loads files every day at 12:00:00 p.m. On Monday, April 4, you realize that bad data entered on the previous Friday, April 1, affected the jobs that ran over the weekend. To fix this, you configure a reprocessing job to reload files changed after 04/01/2022 12:00:01 p.m.
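The reprocessing window described above amounts to a modified-after filter. A minimal Python sketch of the idea (file names and timestamps are hypothetical, not part of the product):

```python
from datetime import datetime

# Reload only files modified after the bad-data cutoff (illustrative values).
cutoff = datetime(2022, 4, 1, 12, 0, 1)
modified = {
    "orders_0331.csv": datetime(2022, 3, 31, 9, 0),   # before the cutoff: skip
    "orders_0402.csv": datetime(2022, 4, 2, 8, 30),   # after the cutoff: reprocess
}
to_reprocess = sorted(f for f, mtime in modified.items() if mtime > cutoff)
print(to_reprocess)  # ['orders_0402.csv']
```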
Expression autocomplete
When you build an expression, Data Integration suggests functions, parameters, system variables, fields, and
user-defined functions to complete the expression.
Data Integration offers autocomplete suggestions for an expression when you configure an Expression transformation with non-hierarchical data or when you configure a user-defined function.
Flat files
This release includes the following enhancements to flat files:
• When you search for an object in a flat file connection, you can browse and select an object from subfolders within the default directory. The default connection directory is the directory that you specify when you create the flat file connection.
• You can edit the auto-generated field names in lookups for flat files with no header in a mapping.
• You can edit the metadata of flat file lookup return fields in a mapping task.
• When you configure a mapping task, you can edit field metadata for parameterized flat file source and
lookup file list objects.
• You can configure a mapping task to retain design-time metadata for a parameterized flat file object.
For more information about configuring flat file connections, see Data Integration Connections. For more
information about using flat file objects in mappings, see Transformations. For more information about
mapping tasks, see Tasks.
For example, a mapplet contains an Expression transformation named Expression_1. You create a mapping
and use the mapplet in the Mapplet transformation Mapplet_Tx_1. When you run the mapping, the Expression
transformation is renamed to Mapplet_Tx_1_Expression_1.
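The renaming convention can be expressed as a one-line rule. This is an illustrative sketch, not Informatica's implementation:

```python
def runtime_name(mapplet_tx_name: str, inner_tx_name: str) -> str:
    """At run time, a transformation inside a mapplet is prefixed with the
    name of the Mapplet transformation that contains it."""
    return f"{mapplet_tx_name}_{inner_tx_name}"

print(runtime_name("Mapplet_Tx_1", "Expression_1"))  # Mapplet_Tx_1_Expression_1
```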
Data Integration does not update transformation names in mapplets that are used in mappings created prior
to the April 2022 upgrade.
For more information about mapplets, see Components. For more information about Mapplet
transformations, see Transformations.
Parameter files
You can create a new target when you use a parameter file. If a target with the same name as the target
specified in the parameter file doesn't exist, a new target is created.
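A parameter file is a plain-text file of name=value assignments. A minimal hypothetical sketch (the section header follows the documented [Global] convention, but the parameter names and values here are illustrative only):

```
[Global]
$$TargetConnection=FF_Target_Conn
$$TargetObject=daily_orders
```

If no target named daily_orders exists at run time, a new target with that name is created.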
Pushdown optimization
This release includes the following enhancements to pushdown optimization:
• You can provide details about the optimization context for multi-insert and slowly changing dimension type 2 merge scenarios. Based on the context that you provide, Data Integration combines multiple targets in the mapping and constructs a single query for pushdown optimization.
• If the pushdown optimization mode that you select is not possible, you can choose to cancel the mapping task.
For more information about pushdown optimization, see Tasks or the help for the appropriate connector.
To see if a connector supports configuring advanced attributes, see the help for the appropriate connector.
For more information about SQL transformations, see Transformations.
Taskflows
This release includes the following enhancements to taskflows:
For more information about using the APIs to resume a suspended taskflow, see REST API Reference.
You can view the output fields on the Fault tab when certain conditions are met.
For more details about faulted data tasks, see Taskflows and Monitor.
Support for data transfer task and dynamic mapping task in a Data Task step
You can add a data transfer task and dynamic mapping task to a Data Task step of a taskflow.
You can use a data transfer task in a taskflow to transfer data from a source to a target. You can use a
dynamic mapping task in a taskflow to run specific groups and jobs configured in the task.
For more information about using a data transfer task and dynamic mapping task in a Data Task step, see
Taskflows.
Include schema elements in models based on Avro, Parquet, and ORC files
When you create an intelligent structure model that is based on an Avro, Parquet, or ORC file, Intelligent
Structure Discovery includes the schema elements in the model, thus making elements that don't contain
data part of the model.
Transformations
This release includes the following enhancements to transformations.
Amazon SageMaker
The Machine Learning transformation can run a machine learning model that is deployed on Amazon
SageMaker.
You can configure the Machine Learning transformation to combine multiple API requests into one bulk
request before sending the data to the machine learning model. Bulk requests can improve performance
by reducing processing overhead and the amount of time that it takes to communicate with the model.
You can run the Machine Learning transformation in a serverless runtime environment.
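The benefit of bulk requests is generic and can be sketched as simple batching. This illustrates the idea only; it is not the transformation's actual implementation:

```python
def to_bulk_requests(rows, batch_size):
    """Group rows into bulk requests so that N rows cost
    ceil(N / batch_size) round trips to the model instead of N."""
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

print(to_bulk_requests(list(range(7)), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```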
For more information about the code task API, see the REST API Reference.
Changed behavior
The April 2022 release of Informatica Intelligent Cloud Services Data Integration includes the following
changed behaviors.
For more information about the advanced attributes that you can configure, see the help for the appropriate
connector.
Taskflows
The Publish button is added to the taskflow designer page.
Previously, the Publish option was available under the Actions menu on the taskflow designer page.
File listener
When you use a file listener as a source in a file ingestion task, if a notification about a file event doesn't
reach the file ingestion task, the file listener queues the event and includes it in the notification it sends to
the next file ingestion job. A file ingestion task thus receives a notification about each file at least once. This
ensures that the file ingestion task transfers all files to the target.
Previously, if a notification about a file event didn't reach the file ingestion task, the file listener didn't
continue to notify the file ingestion task about the event, and the task didn't transfer the files to the target.
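The at-least-once guarantee works by carrying undelivered events forward into the next notification. A minimal sketch of the pattern (class and method names are hypothetical, not Informatica's code):

```python
class ListenerQueue:
    """Queue file events whose notification failed and resend them with
    the next notification, so each event is delivered at least once."""

    def __init__(self, deliver):
        self.deliver = deliver   # callable that notifies the ingestion task
        self.pending = []        # events not yet acknowledged

    def notify(self, events):
        batch = self.pending + events
        try:
            self.deliver(batch)
            self.pending = []        # acknowledged: nothing to carry forward
        except ConnectionError:
            self.pending = batch     # carry forward into the next notification
```

If the first notification fails, the second delivery includes both the queued and the new events.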
Chapter 3
Connectors
The April 2022 release includes the following new and enhanced connectors and connector behavior changes.
New connectors
This release includes the following new connectors.
Adabas Connector
You can use Adabas Connector to connect to a PowerExchange Adabas environment to retrieve data in bulk
from an Adabas source database on a z/OS system. The PowerExchange Listener retrieves metadata from
the data map repository and data from the Adabas source. The data is returned to the PowerExchange Bulk
Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported
target for a batch load.
IMS Connector
You can use IMS Connector to connect to a PowerExchange IMS environment to retrieve data in bulk from an
IMS source database on a z/OS system. The PowerExchange Listener retrieves metadata from the data map
repository and data from the IMS source. The data is returned to the PowerExchange Bulk Reader, which
sends the data to Data Integration. Data Integration can then send the data to a supported target for a batch
load.
SAP IQ Connector
You can use SAP IQ Connector to connect to an SAP IQ database from Data Integration. Use SAP IQ Connector to write data to an SAP IQ database. You can use SAP IQ objects as targets in mappings and mapping tasks. When you use these objects in mappings, you must configure properties specific to SAP IQ.
When you configure a Target transformation in an SAP IQ mapping, you can only insert records.
VSAM Connector
You can use VSAM Connector to connect to a PowerExchange VSAM environment to retrieve data in bulk
from VSAM source data sets on a z/OS system. The PowerExchange Listener retrieves metadata from the
data map repository and data from the VSAM data sets. The data is returned to the PowerExchange Bulk
Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported
target for a batch load.
Enhanced connectors
This release includes enhancements to the following connectors.
Amazon DynamoDB V2 Connector
This release includes the following enhancements for Amazon DynamoDB V2 Connector:
• You can use a serverless runtime environment to run Amazon DynamoDB V2 elastic mappings.
• You can use temporary security credentials, created by AssumeRole, to access AWS resources.
• You can use the Hierarchy Processor transformation in elastic mappings to convert hierarchical input into
relational output and relational input into hierarchical output.
Amazon Redshift V2 Connector
This release includes the following enhancements for Amazon Redshift V2 Connector:
• You can configure client-side encryption for Amazon Redshift V2 sources and targets when you use a
serverless runtime environment.
• When you configure a full or source pushdown optimization for an Expression transformation, you can use
variables to define calculations and store data temporarily.
• You can run elastic mappings on a self-service cluster.
Amazon S3 V2 Connector
This release includes the following enhancements for Amazon S3 V2 Connector:
• You can configure Amazon S3-compatible storage, such as Scality RING or MinIO, to access and manage data that is stored over an S3-compliant interface.
• You can configure client-side encryption for Amazon S3 V2 sources and targets when you use a
serverless runtime environment.
• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were modified within a specific time interval.
• You can write to a partition directory incrementally in an elastic mapping and append data to the partition
directory.
• You can run elastic mappings on a self-service cluster.
Databricks Delta Connector
This release includes the following enhancements for Databricks Delta Connector:
- Mappings that read from a Microsoft Azure Data Lake Storage Gen2 source and write to a Databricks Delta target.
- Mappings that read from an Amazon S3 V2 source and write to a Databricks Delta target.
- When you configure full pushdown optimization for a task that reads from and writes to Databricks
Delta, you can determine how Data Integration handles the job when pushdown optimization does not
work. You can set the task to fail or run without pushdown optimization.
- When you configure a full pushdown optimization for an Aggregator or Expression transformation, you
can use variables to define calculations and store data temporarily.
Google BigQuery V2 Connector
This release includes the following enhancements for Google BigQuery V2 Connector:
- You can read data from Google BigQuery standard and materialized views as a source and lookup object.
- When you configure a full or source pushdown optimization for an Expression transformation, you can
use variables to define calculations and store data temporarily.
- When you configure a mapping enabled for full pushdown optimization to read from a Google BigQuery source and write to two Google BigQuery Target transformations that represent the same Google BigQuery table, you can enable the SCD Type 2 merge optimization mode in the task properties. In SCD Type 2 merge optimization mode, when you use two Target transformations in a mapping, one to insert data and the other to update data in the same Google BigQuery target table, Data Integration combines the queries for both Target transformations and issues a single Merge query to optimize the task.
- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When
you use clean stop, Data Integration terminates all the issued statements and processes spawned by the
job.
- When you enable full pushdown optimization for a task that reads from and writes to Google BigQuery,
you can determine how Data Integration handles the job when pushdown optimization does not work.
You can set the task to fail or run without pushdown optimization.
• When you configure a mapping to read data from a Google BigQuery source in staging mode, you can stage the data in a local staging file in Parquet format.
• When you run a mapping to a Google BigQuery target in bulk mode, Data Integration creates a CSV file in
the temporary folder in the Secure Agent directory to stage the data before writing the data to the Google
BigQuery target. The performance of the task is optimized when the connector uses the CSV file for
staging data.
Hive Connector
This release includes the following enhancements for Hive Connector:
• You can configure a Target transformation in a mapping or an elastic mapping to create a target at
runtime.
• When you configure a Target transformation in a mapping or an elastic mapping to create a Hive target at
runtime, you can include the partition fields of the String data type and set the order in which they must
appear in the target.
• When you configure an elastic mapping to read from or write to Hive, you can use Managed Identity
Authentication to stage Hive data on Azure.
• You can enable dynamic schema handling in a Hive task to refresh the schema every time the task runs.
You can choose how Data Integration handles changes in the Hive data object schemas.
• You can configure a dynamic mapping task to create and batch multiple jobs based on the same mapping.
• You can configure an elastic mapping to read from or write data that contains Array and Struct complex
data types. To write Array and Struct data types to Hive, you must configure the elastic mapping to create
a new Hive target at runtime. You can also use a Hierarchy Processor transformation in an elastic
mapping to read relational or hierarchical input and convert it to relational or hierarchical output.
Important: This functionality is available for preview. Preview functionality is supported for evaluation purposes but is unwarranted and is not production-ready. Informatica recommends that you use it in non-production environments only. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support.
JDBC V2 Connector
You can run elastic mappings on a self-service cluster.
Microsoft Azure Data Lake Storage Gen2 Connector
This release includes the following enhancements for Microsoft Azure Data Lake Storage Gen2 Connector:
• You can use Managed Identity authentication to connect to Microsoft Azure Data Lake Storage Gen2. When you use Managed Identity authentication, you do not need to provide credentials, secrets, or Azure Active Directory tokens.
• You can use shared key authentication to connect to Microsoft Azure Data Lake Storage Gen2 using the
account name and account key in elastic mappings.
• You can use Microsoft Azure Data Lake Storage Gen2 Connector to connect to Microsoft Azure Data Lake
Storage Gen2 on a virtual network with a private endpoint.
• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were modified within a specific time interval.
Microsoft SQL Server Connector
This release includes the following enhancements for Microsoft SQL Server Connector:
• When you configure a full or source pushdown optimization for an Expression transformation, you can calculate a unique checksum value for a row of data each time you read data from a source object.
• You can push a few additional functions such as data type conversion and string operations to the
Microsoft SQL Server database by using full pushdown optimization.
For more information about the supported functions, see the help for Microsoft SQL Server Connector.
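A per-row checksum can be illustrated with a generic CRC. This is an analogy only; the actual function that Data Integration pushes down depends on the connector and is documented in the connector help:

```python
import zlib

def row_checksum(values):
    """Deterministic checksum over a row's column values: the same row
    always yields the same value, so changed rows can be detected."""
    payload = "|".join("" if v is None else str(v) for v in values)
    return zlib.crc32(payload.encode("utf-8"))

print(row_checksum(["Alice", 30, "NYC"]) == row_checksum(["Alice", 30, "NYC"]))  # True
```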
MongoDB V2 Connector
This release includes the following enhancements for MongoDB V2 Connector:
• You can configure both Atlas and self-managed X509 certificate-based SSL authentication in a MongoDB V2 connection to read data from and write data to MongoDB.
• You can parameterize the MongoDB V2 source object, target object, and the connection in elastic
mappings.
• You can use the Hierarchy Processor transformation in elastic mappings to convert hierarchical input into
relational output and relational input into hierarchical output.
• You can use a serverless runtime environment to run MongoDB V2 elastic mappings.
• You can read and write hierarchical data types such as Array, Object, and ObjectID. To write hierarchical data types to MongoDB, you must configure the mapping to create a new MongoDB V2 target at runtime.
ODBC Connector
This release includes the following enhancements for ODBC Connector:
• You can perform an upsert operation to update or insert data to a Teradata target when you configure a
full pushdown optimization with the Teradata ODBC connection.
• When you configure a full pushdown optimization with the Teradata ODBC connection, you can push the
TO_CHAR(), TO_DATE(), and a few additional functions to the Teradata database.
For more information about the supported functions that you can use with pushdown optimization, see
the help for ODBC Connector.
PostgreSQL Connector
You can choose how Data Integration handles changes that you make to the data object schemas. You can also refresh the schema every time you run a PostgreSQL task.
REST V2 Connector
You can use the PATCH HTTP method in source, target, and midstream transformations. Use this method to
update existing resources.
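A PATCH request sends only the fields to change, unlike PUT, which replaces the whole resource. A generic sketch using Python's standard library (the endpoint and payload are hypothetical):

```python
import json
from urllib.request import Request

# Partial update: only the changed field is sent in the request body.
req = Request(
    "https://api.example.com/customers/42",
    data=json.dumps({"status": "active"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
print(req.get_method())  # PATCH
```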
REST V3 Connector
You can use REST V3 Connector in a serverless runtime environment.
Salesforce Connector
You can use Salesforce Bulk API 2.0 to perform bulk read and write operations.
SAP HANA Connector
You can use a serverless runtime environment to run SAP HANA mappings.
Changed behavior
This release includes changes in behavior for the following connectors.
• When you run an elastic mapping to write to Avro, JSON, ORC, or Parquet files that contain data of the boolean data type, the data is written as boolean in the target.
Previously, data of the boolean data type was written as integer in the target.
• When you run an elastic mapping to write to Avro, ORC, or Parquet files that contain data of the float data type, the data is written as float in the target.
Previously, data of the float data type was written as double in the target.
• When you run an elastic mapping to write to Avro, ORC, or Parquet files that contain data of the date data type, the data is written as date in the target.
Previously, data of the date data type was written as timestamp in the target.
These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.
This change applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.
Previously, the default data type for all fields was set to string. If you modified the data types, the change did
not reflect in the target.
• The escape characters in the source data are retained in the target data whether you enable or disable the
Is Escape Character Data Retained option.
Previously, the escape characters were retained in the target data only if you enabled the Is Escape
Character Data Retained option.
• When you set the Qualifier Mode to Minimal and special characters or Unicode characters are enclosed within a qualifier in the source data, the qualifier is not retained in the target.
Previously, the qualifier was retained in the target.
• If there is an empty row in the source data, the empty row is written as is to the target.
Previously, a qualifier was added to the first column of the empty row in the target.
• If the columns have a qualifier in the source data, the qualifier is retained only for the non-empty columns
in the target.
Previously, the qualifier was retained for both empty and non-empty columns in the target.
• When you use an escape character to escape a character in the source data that is also specified as a
qualifier, the escaped character is retained in the target data.
Previously, an extra qualifier was added to the escaped character in the target data.
These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.
Multi-character delimiter in flat files
Effective in this release, when you read data from a flat file and specify a multi-character delimiter, the entire character sequence is treated as the delimiter.
Previously, if you specified a multi-character delimiter, only the first character was treated as the delimiter.
For example, if you specify ^|^ as the delimiter, the three characters together are treated as the delimiter.
Previously, only the first character, ^, was treated as the delimiter.
This change applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure
Data Lake Storage Gen2 Connector.
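The difference is easy to see with a three-character delimiter such as ^|^. A plain Python illustration (sample data is hypothetical):

```python
line = "Alice^|^30^|^New York"

# New behavior: the whole sequence ^|^ is the delimiter.
print(line.split("^|^"))  # ['Alice', '30', 'New York']

# Previous behavior: only the first character ^ acted as the delimiter.
print(line.split("^"))    # ['Alice', '|', '30', '|', 'New York']
```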
Effective in this release, the SQL queries issued for pushdown optimization are formatted for readability.
Previously, the issued SQL queries were unformatted and appeared on a single line.
This change does not apply to pushdown optimization through the ODBC connector.
• Even when you do not map the NOT NULL columns that have default values in an Amazon Redshift target
table, the insert, update, or upsert operation is successful and the default values for NOT NULL columns
are used.
Previously, if you did not map the NOT NULL columns, the operation failed.
To retain the previous behavior, set the JVM option -DRetainUnmappedNotNullColumnValidation to true in the Secure Agent properties.
• When you read data that contains columns of the decimal data type, the scale that you set for the decimal
data type columns in the Amazon Redshift UI is honored.
Previously, the scale that you set for decimal data type columns in the Amazon Redshift UI was not
honored and a value greater than the defined scale was also read.
To retain the previous behavior, set the JVM option -honorDecimalScaleRedshift to false in the Secure Agent properties.
• When you configure an Aggregator transformation in a mapping enabled for pushdown optimization and
you do not include the incoming field from an aggregate function or a group by field in a field mapping,
Data Integration uses the ANY_VALUE() function to return any value.
Previously, when you defined how to group data for aggregate expressions in an Aggregator
transformation, you had to include each of the incoming fields from an aggregate function or a group by
field in the field mapping.
• If the mapping enabled for pushdown optimization contains Union and Aggregator transformations,
include the incoming field from the aggregate function or group by field in the field mapping, or remove
the field from the aggregate function or group by field altogether. Otherwise, the mapping runs without
pushdown optimization.
Previously, the task partially pushed down the mapping logic to the point where the transformation is
supported and runs without pushdown optimization.
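Honoring a defined scale, as described in the decimal data type change above, means a value is reduced to that many fractional digits on read. A Python illustration with a column scale of 2 (whether the boundary digit rounds or truncates is connector-specific; rounding is assumed here):

```python
from decimal import Decimal, ROUND_HALF_UP

def apply_scale(value: str, scale: int) -> Decimal:
    """Reduce a value to the column's defined scale."""
    return Decimal(value).quantize(Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP)

print(apply_scale("123.45678", 2))  # 123.46 (previously all digits were read)
```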
Previously, you had to configure the Secure Agent properties to use the Databricks Delta SQL endpoint.
• When you write data to a Google BigQuery target in bulk mode and use CSV mode as the staging file format, you can use a precision of up to 15 digits for a column of the Float or Double data type.
Previously, you could set a precision of up to 17 digits for a column of the Float or Double data type.
• When you migrate a mapping or an elastic mapping and write data to a Google BigQuery target created at
runtime and you override the target table and dataset name, the Secure Agent creates the target with the
overridden target table name irrespective of the Create Disposition value.
Previously, the mapping or elastic mapping failed to create the target with the overridden target table
name if the Google BigQuery target did not exist and the Create Disposition property was set to Create
never.
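The 15- versus 17-digit limit above reflects IEEE 754 double precision: 17 significant digits are needed to round-trip a double exactly, while 15 digits are always safely representable. A Python illustration:

```python
x = 0.12345678901234567

# 17 significant digits preserve the double exactly.
assert float(f"{x:.17g}") == x

# 15 significant digits lose the trailing precision.
print(f"{x:.15g}")  # 0.123456789012346
```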
Hive Connector
Effective in this release, when you run a task, Data Integration logs messages in the following location:
<Secure Agent installation directory>/apps/Data_Integration_Server/logs/tomcat/<version>.log
Previously, Data Integration logged messages in the following location:
<Secure Agent installation directory>/apps/Data_Integration_Server/<version>/tomcat.out
Chapter 4
Upgrade
The following topics provide information about tasks that you might need to perform before or after an
upgrade of Informatica Intelligent Cloud Services Data Integration. Post-upgrade tasks for previous monthly
releases are also included in case you haven't performed these tasks after the previous upgrade.
Files that you added to the following directory are preserved after the upgrade:
Perform the following steps to ensure that the Secure Agent is ready for the upgrade:
1. Ensure that each Secure Agent machine has sufficient disk space available for upgrade.
The machine must have 5 GB of free space or the amount of disk space calculated with the following formula, whichever is greater:
Minimum required free space = 3 * (size of current Secure Agent installation directory -
space used for logs directory)
2. Close all applications and open files to avoid file lock issues, for example:
• Windows Explorer
• Notepad
• Windows Command Processor (cmd.exe)
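The free-space requirement in step 1 can be expressed as a small helper. This is a minimal sketch in Python; the function name and the gigabyte inputs are illustrative, not part of the product:

```python
def min_required_free_space_gb(agent_install_gb: float, logs_dir_gb: float) -> float:
    """Minimum free disk space in GB for a Secure Agent upgrade: the greater of
    5 GB or 3 x (install directory size minus space used for the logs directory)."""
    return max(5.0, 3 * (agent_install_gb - logs_dir_gb))

# Example: a 6 GB installation with 2 GB of logs needs 12 GB free.
print(min_required_free_space_gb(6, 2))   # 12.0

# A small installation still needs the 5 GB floor.
print(min_required_free_space_gb(2, 1))   # 5.0
```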
TLS 1.0 and 1.1 disablement for the Secure Agent
In the April 2022 release of Informatica Intelligent Cloud Services, support for Transport Layer Security (TLS)
1.0 and 1.1 is disabled on the Secure Agent. The Secure Agent now uses TLS version 1.2.
Data that passes between Informatica Intelligent Cloud Services and the Secure Agent is always encrypted
using TLS 1.2. You do not need to reconfigure the agent or take any action to enable the agent to
communicate with Informatica Intelligent Cloud Services.
Data that passes between the Secure Agent and connector endpoints is also encrypted using TLS 1.2. If you
use a connector or access a connection endpoint that uses TLS 1.0 or 1.1, Informatica recommends that you
upgrade to a version that uses TLS 1.2. If you cannot do this, you can re-enable TLS 1.0 and 1.1 on the Secure
Agent by following the instructions in this KB article:
HOW TO: Enable TLS 1.0 and 1.1 on the Secure Agent in Cloud Data Integration.
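To check whether a client or endpoint can negotiate TLS 1.2, you can pin the minimum protocol version before connecting. A hedged sketch using Python's standard ssl module; the endpoint host name in the comment is a placeholder:

```python
import ssl

# Build a client context that refuses anything below TLS 1.2,
# mirroring the Secure Agent's post-upgrade behavior.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Connecting with this context to an endpoint that only offers
# TLS 1.0/1.1 raises ssl.SSLError, which flags endpoints that
# need to be upgraded. For example (placeholder host):
#
# import socket
# with socket.create_connection(("endpoint.example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="endpoint.example.com") as tls:
#         print(tls.version())   # "TLSv1.2" or "TLSv1.3"
```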
After you upgrade, existing mappings enabled for pushdown optimization that map only some of the fields
from the SQL query to the target might not run successfully. To run these mappings, modify them to map all
of the fields from the SQL query to the target.
Amazon S3 V2 Connector
After the upgrade, existing elastic mappings configured to read from a JSON partition column fail if you
choose to override the folder path in the advanced source properties.
To run the existing mappings successfully, click the Refresh button on the Fields tab or select the source
again to refresh the metadata.
After you upgrade, to run the existing elastic mappings successfully, you must modify the IAM permission for
the user to include the Amazon S3 bucket permission ListBucketMultipartUploads.
This upgrade impact is applicable for Amazon S3 V2 Connector and Amazon Redshift V2 Connector.
When you run the existing mappings, the mappings fail in the following scenarios:
• The connection uses the TLS 1.0 or 1.1 protocol to connect to the source or target endpoint.
To run mappings successfully, edit the connection properties, and from the Crypto Protocol Version
option, select TLSv1.2 instead of TLSv1 or TLSv1.1.
• The connection uses the TLS 1.2 protocol, but the source or target endpoint that the connector accesses
does not support the TLS 1.2 protocol.
To run mappings successfully, Informatica recommends upgrading to an endpoint version that supports
TLS 1.2.
After you upgrade, if you want to run existing mappings on a Databricks analytics or Databricks data
engineering cluster, configure the following properties based on the type of operation that you want to perform:
• Import metadata. Set the JRE_OPTS property for the Data Integration Service of type Tomcat JRE to the
following value: -DUseDatabricksSql=false
• Run mappings. Set the JVMOption property for the Data Integration Service of type DTM to the following
value: -DUseDatabricksSql=false
• Run mappings enabled with pushdown optimization. Set the JVMOption property for the Data Integration
Service of type DTM to the following value: -DUseDatabricksSqlForPdo=false
If your organization uses an outgoing proxy server, allow traffic to the following URLs:
• .s3.amazonaws.com
• <S3 staging bucket>.s3.<bucket region>.amazonaws.com
When you use an Amazon S3 or Amazon Redshift object as a mapping source or target, also allow traffic
to each source and target bucket that the agent will access.
If your organization does not use an outgoing proxy server, contact Informatica Global Customer Support
to disable the proxy settings used for S3 access.
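When auditing proxy rules, it can help to check candidate URLs against the allowed host patterns. A minimal sketch in Python; the wildcard patterns and bucket name below are illustrative stand-ins for your actual staging bucket and region:

```python
import fnmatch
from urllib.parse import urlparse

# Illustrative patterns modeled on the S3 hosts listed above.
ALLOWED_HOST_PATTERNS = [
    "*.s3.amazonaws.com",
    "*.s3.*.amazonaws.com",
]

def host_allowed(url: str) -> bool:
    """Return True if the URL's host matches an allowed S3 host pattern."""
    host = urlparse(url).hostname or ""
    return any(fnmatch.fnmatch(host, pattern) for pattern in ALLOWED_HOST_PATTERNS)

print(host_allowed("https://my-staging-bucket.s3.us-east-1.amazonaws.com/obj"))  # True
print(host_allowed("https://example.com/obj"))                                   # False
```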
If you create user-defined security groups, add inbound rules for the ELB security group to allow the
following traffic:
• Incoming traffic from the Secure Agent that creates the cluster.
• Incoming traffic from master nodes in the same cluster.
• Incoming traffic from worker nodes in the same cluster.
To map the first column, you must synchronize all the fields from the source object and rerun the elastic
mapping.
This upgrade impact is applicable for Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and
Microsoft Azure Data Lake Storage Gen2 Connector.
To run the existing elastic mapping successfully, you must synchronize all fields with the source object and
rerun the elastic mapping.
Previously, the MD5() function configured in an Expression transformation ran without pushdown
optimization even when you enabled the mappings for pushdown optimization.
To achieve the existing behavior, run the existing mappings without pushdown optimization.
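For reference, the MD5() function returns a 32-character hexadecimal MD5 checksum of its input. When comparing results with and without pushdown optimization, you can reproduce the expected value with Python's hashlib; this is a sketch for validation, not the product implementation:

```python
import hashlib

def md5_hex(value: str) -> str:
    """32-character lowercase hex MD5 digest, comparable to MD5() output."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

print(md5_hex("abc"))   # 900150983cd24fb0d6963f7d28e17f72
```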
SAP Connector
After the upgrade, if you use an HTTPS connection in SAP mappings, new and existing SAP Table Reader
and SAP BW Reader mappings might fail in the following scenarios:
• The ABAP Kernel version is 753 or earlier and the CommonCryptoLib version in the SAP system is earlier than 8.4.31.
To run mappings successfully, upgrade CommonCryptoLib in the SAP system to version 8.4.31 or
later. For more information about upgrading the SAP system, see the SAP documentation.
• The ABAP Kernel version is 753 or earlier and the CommonCryptoLib version in the SAP system is 8.4.31 or later.
To run mappings successfully, enable the TLS 1.2 protocol in the SAP system.
For more information about enabling the TLS 1.2 protocol in the SAP system, see SAP Note 510007.
SSE-KMS encryption for elastic mappings
Effective in this release, an existing elastic mapping enabled for SSE-KMS encryption fails when the
connector uses the default IAM role and uses the credentials from the ~/.aws/credentials location.
After you upgrade, to run the existing mappings successfully, you must perform one of the following steps:
• To use the credentials from the ~/.aws/credentials location, you must create the master instance
profile and the worker instance profile in AWS, attach the KMS policy to the worker profile, and specify the
profiles in the cluster configuration.
• Use the Secure Agent on Amazon EC2, create the master instance profile and the worker instance profile
in AWS, and attach the KMS policy to the worker profile.
• Use the Secure Agent on Amazon EC2, use the default IAM role, and attach the KMS policy to the Secure
Agent role.
This upgrade impact is applicable for Amazon S3 V2 Connector and Amazon Redshift V2 Connector.
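For context, the ~/.aws/credentials file referenced above uses the standard AWS shared-credentials format. A typical default profile looks like the following fragment; the key values are placeholders:

```ini
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
```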
Enhancements in previous
releases
You can find information on enhancements and changed behavior in previous Data Integration releases on
Informatica Network.
What's New guides for releases occurring within the last year are included in the following community article:
https://network.informatica.com/docs/DOC-17912
Index
C
Cloud Application Integration community
  URL 5
Cloud Developer community
  URL 5
code task API
  enhancements 11
creating targets
  using parameter files 8
D
Data Integration community
  URL 5
H
Hierarchy Processor transformation
  enhancements 10
I
Informatica Global Customer Support
  contact information 6
Informatica Intelligent Cloud Services
  web site 5
M
maintenance outages 6
P
parameter files
  creating targets 8
platform enhancements 11
R
REST API
  enhancements 11
S
Secure Agents
  upgrade preparation 23
status
  enhancements 10
T
trust site
  description 6
U
upgrade notifications 6
upgrade preparation
  Secure Agent preparation 23
W
web site 5