IBM Watson Data Platform Data Flows Service - Data Asset and Connection Properties

The following information describes the possible values of the 'properties' section of a connection within a data flow. The same properties can also be used in the 'properties' section of a data asset definition, although care must be taken to confirm the type of connection referenced by the data asset. Different types of connections support different properties (termed 'interaction properties' below). Some of these properties apply only when the connection is used as a source, some only when it is used as a target, and some apply in either case.

For example, if an 'IBM Db2 Warehouse on Cloud' connection is used as a source, both the table_name and schema_name properties can be set as follows:
	{  
	   "id":"source1",
	   "type":"binding",
	   "output":{  
	      "id":"source1Output"
	   },
	   "connection":{  
	      "properties":{  
	         "schema_name":"GOSALESHR",
	         "table_name":"EMPLOYEE"
	      },
	      "ref":"{connection_id}"
	   }
	}
Alternatively, the 'IBM Db2 Warehouse on Cloud' connection used as a source also allows just a SQL SELECT statement to be provided:
	{  
	   "id":"source1",
	   "type":"binding",
	   "output":{  
	      "id":"source1Output"
	   },
	   "connection":{  
	      "properties":{  
	         "select_statement":"select * from GOSALES.PRODUCT_NAME_LOOKUP"
	      },
	      "ref":"{connection_id}"
	   }
	}

Note that in the tables below a * character next to a property name denotes that the property must be assigned a value.


Table of contents

Amazon RDS for MySQL
Amazon RDS for PostgreSQL
Amazon Redshift
Amazon S3
Analytics Engine HDFS
Apache Cassandra
Apache HDFS
Apache Hive
Cloud Object Storage
Cloud Object Storage (infrastructure)
Cloudant
Cloudera Impala
Cognos Analytics
Compose for MySQL
Databases for PostgreSQL
Db2
Db2 Big SQL
Db2 Hosted
Db2 Warehouse
Db2 for i
Db2 for z/OS
Db2 on Cloud
Dropbox
FTP
Google BigQuery
Google Cloud Storage
HTTP
Informix
Looker
Microsoft Azure Data Lake Store
Microsoft Azure SQL Database
Microsoft SQL Server
MongoDB
MySQL
Netezza (PureData System for Analytics)
OData
Oracle
Pivotal Greenplum
Planning Analytics
PostgreSQL
SAP OData
Salesforce.com
Snowflake
Sybase
Sybase IQ
Tableau
Teradata




Amazon RDS for MySQL


Description: Amazon RDS for MySQL database
Data source type ID: 9aa630f2-efc4-4d54-b8cb-254f31405b78
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name - Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
username * - The username for accessing the system

Interaction properties (when used as a source)

Name - Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentage - Percentage for each row or block to be included in the sample
sampling_seed - Seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name - Description
create_statement - The Create DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
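
As an illustrative sketch, an 'Amazon RDS for MySQL' connection used as a target could update existing rows by key, combining write_mode with key_column_names. The binding fields mirror the source examples in the introduction; the "input" field name, the EMPLOYEE_ID key column, and {connection_id} are placeholders, not confirmed values:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "schema_name":"GOSALESHR",
	         "table_name":"EMPLOYEE",
	         "write_mode":"update",
	         "key_column_names":"EMPLOYEE_ID"
	      },
	      "ref":"{connection_id}"
	   }
	}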




Amazon RDS for PostgreSQL


Description: Amazon RDS for PostgreSQL database
Data source type ID: 9493d830-882b-445e-96c7-8e4c635a1a5b
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name - Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name - Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentage - Percentage for each row or block to be included in the sample
sampling_seed - Seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from
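
As a sketch of how the sampling properties above fit together, a source's 'properties' section could request a repeatable 10 percent row sample. The schema name, table name, and seed value are illustrative placeholders:
	"properties":{  
	   "schema_name":"GOSALES",
	   "table_name":"ORDER_DETAILS",
	   "sampling_type":"row",
	   "sampling_percentage":10,
	   "sampling_seed":42
	}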

Interaction properties (when used as a target)

Name - Description
create_statement - The Create DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Amazon Redshift


Description: Amazon Redshift database
Data source type ID: 31170994-f54c-4148-9c5a-807832fa1d07
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name - Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
username * - The username for accessing the system

Interaction properties (when used as a source)

Name - Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentage - Percentage for each row or block to be included in the sample
sampling_seed - Seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name - Description
create_statement - The Create DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Amazon S3


Description: Amazon Simple Storage Service (S3)
Data source type ID: a0b1d14a-4767-404c-aac1-4ce0e62818c3
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name - Description
access_key * - The access key ID (username) for authorizing access to AWS
bucket - The name of the bucket that contains the files to access
secret_key * - The password associated with the access key ID for authorizing access to AWS

Interaction properties (when used as a source)

Name - Description
bucket - The name of the bucket that contains the files to read
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - Key to decrypt a sav file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name * - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata information for elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
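
For example, a source's 'properties' section for reading a delimited file from Amazon S3 with an inferred schema might look like the following sketch. The bucket and file names are illustrative placeholders:
	"properties":{  
	   "bucket":"my-example-bucket",
	   "file_name":"sales/products.csv",
	   "file_format":"csv",
	   "first_line_header":true,
	   "infer_schema":true,
	   "infer_record_count":500
	}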

Interaction properties (when used as a target)

Name - Description
bucket - The name of the bucket that contains the files to write
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
create_bucket - Create the bucket that contains the files to write to. Default: false
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - Key to decrypt a sav file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name * - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
partitioned - Write the file as multiple partitions. Default: false
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write
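
As a complementary sketch for the target case, a 'properties' section writing a Parquet file with Snappy compression could look like the following. The bucket and file names are illustrative placeholders:
	"properties":{  
	   "bucket":"my-example-bucket",
	   "file_name":"output/products.parquet",
	   "file_format":"parquet",
	   "codec_parquet":"snappy",
	   "write_mode":"write"
	}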




Analytics Engine HDFS


Description: IBM Analytics Engine HDFS via the WebHDFS API
Data source type ID: 895507b6-f23e-40b2-b40a-5414fc9bd2ca
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name - Description
hive_db - The database in Apache Hive. Default: default
hive_http_path - The path of the endpoint such as gateway/default/hive when the Apache Hive server is configured for HTTP transport mode. Default: gateway/default/hive
hive_host - The hostname or IP address of the Apache Hive server
hive_password - The password associated with the username for connecting to Apache Hive
hive_port - The port of the Apache Hive server. Default: 10000
hive_user - The username for connecting to Apache Hive
password - The password associated with the username for accessing the system
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
use_home_as_root - User home directory is used as the root of browsing. Default: true
username * - The username for accessing the system
url * - The WebHDFS URL for accessing HDFS

Interaction properties (when used as a source)

Name - Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - Key to decrypt a sav file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name * - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata information for elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher

Interaction properties (when used as a target)

Name - Description
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
create_hive_table - Create a table in the database. Default: false
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - Key to decrypt a sav file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name * - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
hive_table - The name of the table to create
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write
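
As a sketch, a target 'properties' section that writes a CSV file to HDFS and also registers it as an Apache Hive table could combine create_hive_table and hive_table as follows. The file path and table name are illustrative placeholders:
	"properties":{  
	   "file_name":"/user/example/products.csv",
	   "file_format":"csv",
	   "first_line_header":true,
	   "create_hive_table":true,
	   "hive_table":"PRODUCTS"
	}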




Apache Cassandra


Description: Apache Cassandra database
Data source type ID: e6ff8c10-4199-4b58-9a93-749411eafacd
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name - Description
host * - The hostname or IP address of the database
keyspace * - The name of the keyspace
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl - The port is configured to accept SSL connections. Default: false
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name - Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name - Description
create_statement - The Create DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Apache HDFS


Description: Apache HDFS via the WebHDFS API
Data source type ID: c10e5224-f17d-4524-844f-e97b1305e489
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name - Description
hive_db - The database in Apache Hive. Default: default
hive_http_path - The path of the endpoint such as gateway/default/hive when the Apache Hive server is configured for HTTP transport mode
hive_host - The hostname or IP address of the Apache Hive server
hive_password - The password associated with the username for connecting to Apache Hive
hive_port - The port of the Apache Hive server. Default: 10000
hive_user - The username for connecting to Apache Hive
password - The password associated with the username for accessing the system
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
use_home_as_root - User home directory is used as the root of browsing. Default: true
username * - The username for accessing the system
url * - The WebHDFS URL for accessing HDFS

Interaction properties (when used as a source)

Name - Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - Key to decrypt a sav file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name * - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata information for elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher

Interaction properties (when used as a target)

Name - Description
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
create_hive_table - Create a table in the database. Default: false
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - Key to decrypt a sav file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name * - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
hive_table - The name of the table to create
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write




Apache Hive


Description: Apache Hive database
Data source type ID: 0fd83fe5-8995-4e2e-a1be-679bb8813a6d
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name - Description
database * - The name of the database
http_path - The path of the endpoint such as gateway/default/hive when the server is configured for HTTP transport mode
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl - The port is configured to accept SSL connections. Default: false
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [block, none]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from
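
For example, following the pattern shown in the introduction, an Apache Hive connection used as a source can supply a schema and table name in the 'properties' section of the binding (the schema and table names below are illustrative placeholders), or alternatively just a select_statement, as in the Db2 Warehouse on Cloud example above:
	{  
	   "id":"source1",
	   "type":"binding",
	   "output":{  
	      "id":"source1Output"
	   },
	   "connection":{  
	      "properties":{  
	         "schema_name":"default",
	         "table_name":"employees"
	      },
	      "ref":"{connection_id}"
	   }
	}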

Interaction properties (when used as a target)

Name Type Description
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format * - The format of the file to write to. Values: [avro, csv, delimited, orc, parquet]. Default: delimited
file_name - The name of the file to write to or delete
null_value - The value that represents null (a missing value) in the file, for example, NULL
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert]. Default: insert




Cloud Object Storage


Description: Cloud Object Storage service on IBM Cloud. Offers S3 API and application binding with regional and cross regional resiliency
Data source type ID: 193a97c1-4475-4a19-b90c-295c4fdc6517
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
api_key * - To find the API Key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of apikey without the quotation marks
access_key - To find the Access Key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of access_key_id without the quotation marks
bucket - The name of the bucket that contains the files to access
credentials - The contents of the Cloud Object Storage service credentials (JSON) file. To find the JSON content, go to the "Service credentials" tab and expand the selected credentials. Copy the whole content, including the {} brackets.
url * - To find this URL, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Endpoint in the left pane. Copy the value of the public endpoint that you want to use
resource_instance_id - To find the Resource Instance ID, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of resource_instance_id without the quotation marks
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
secret_key * - To find the Secret Key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of secret_access_key without the quotation marks

Interaction properties (when used as a source)

Name Type Description
bucket - The name of the bucket that contains the files to read
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt the SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata information of elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
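
As an illustration, a Cloud Object Storage connection used as a source to read a delimited file with a header row, inferring the schema from the data, might use a 'properties' section such as the following (the bucket and file names are placeholders):
	{  
	   "id":"source1",
	   "type":"binding",
	   "output":{  
	      "id":"source1Output"
	   },
	   "connection":{  
	      "properties":{  
	         "bucket":"sales-data",
	         "file_name":"products.csv",
	         "file_format":"csv",
	         "first_line_header":true,
	         "infer_schema":true
	      },
	      "ref":"{connection_id}"
	   }
	}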

Interaction properties (when used as a target)

Name Type Description
bucket - The name of the bucket that contains the files to write
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
create_bucket - Create the bucket that contains the files to write to. Default: false
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt the SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
partitioned - Write the file as multiple partitions. Default: false
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
storage_class - The storage class for the created bucket. Values: [cold_vault, flex, standard, vault]. Default: standard
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write
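
When used as a target, the same connection type might write a partitioned Parquet file with Snappy compression, creating the bucket if it does not already exist. This sketch assumes the target binding references its 'input' in the same way a source binding references its 'output'; the bucket and file names are placeholders:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "bucket":"sales-output",
	         "create_bucket":true,
	         "file_name":"products.parquet",
	         "file_format":"parquet",
	         "codec_parquet":"snappy",
	         "partitioned":true
	      },
	      "ref":"{connection_id}"
	   }
	}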




Cloud Object Storage (infrastructure)


Description: Object storage for workloads requiring AWS SDKs and AWS Signature style authentication
Data source type ID: 4bf2dedd-3809-4443-96ec-b7bc5726c07b
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
access_key - To find the Access Key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of access_key_id without the quotation marks
bucket - The name of the bucket that contains the files to access
credentials - The contents of the Cloud Object Storage service credentials (JSON) file. To find the JSON content, go to the "Service credentials" tab and expand the selected credentials. Copy the whole content, including the {} brackets.
url * - To find this URL, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Endpoint in the left pane. Copy the value of the public endpoint that you want to use
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
secret_key * - To find the Secret Key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of secret_access_key without the quotation marks

Interaction properties (when used as a source)

Name Type Description
bucket - The name of the bucket that contains the files to read
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt the SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata information of elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher

Interaction properties (when used as a target)

Name Type Description
bucket - The name of the bucket that contains the files to write
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
create_bucket - Create the bucket that contains the files to write to. Default: false
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt the SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
partitioned - Write the file as multiple partitions. Default: false
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
storage_class - The storage class for the created bucket. Values: [cold_vault, flex, standard, vault]. Default: standard
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write




Cloudant


Description: IBM Cloudant
Data source type ID: 44e904b5-0cb2-4d8e-a5c0-c48bc3e24fdd
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
custom_url - The URL to the Cloudant database
database - The database to connect to
password * - The password associated with the username for accessing the system
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
database * - The database to connect to
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_limit - The maximum number of rows to return

Interaction properties (when used as a target)

Name Type Description
blob_truncation_size - The maximum size for BLOB values. Values larger than this will be truncated. Default: 8000
batch_size - The number of documents to send per request. Default: 100
clob_truncation_size - The maximum size for CLOB values. Values larger than this will be truncated. Default: 8000
create_database - Create the database to connect to. Default: false
database * - The database to connect to
document_type - The type of the document
input_format - The format of the source data. Values: [json, relational]. Default: relational
write_mode - Whether to write to, or delete, the target. Values: [delete, write]. Default: write
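
As a sketch, a Cloudant connection used as a target could create the database on first write and send documents in larger batches. The database name and batch size below are illustrative placeholders:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "database":"orders",
	         "create_database":true,
	         "input_format":"relational",
	         "batch_size":500
	      },
	      "ref":"{connection_id}"
	   }
	}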




Cloudera Impala


Description: Cloudera Impala database
Data source type ID: 05c58384-862e-4597-b19a-c71ea7e760bc
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [block, none]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
sampling_seed - The seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from
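
For example, to read a repeatable 10 percent block sample of a table, the 'properties' section of a Cloudera Impala source binding might look like the following (the schema and table names are placeholders):
	"properties":{  
	   "schema_name":"default",
	   "table_name":"web_logs",
	   "sampling_type":"block",
	   "sampling_percentage":10,
	   "sampling_seed":42
	}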

Interaction properties (when used as a target)

Name Type Description




Cognos Analytics


Description: Cognos Analytics
Data source type ID: 11f3029d-a1cf-4c4d-b8e7-64422fa54a94
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
url * - The gateway URL to access Cognos
namespace_id - The identifier of the authentication namespace
password - The password associated with the username for accessing the system
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
use_anonymous_access - Connect without providing logon credentials. Default: false
username - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
file_name * - The name of the file to read
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_limit - The maximum number of rows to return
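
As an illustration, a Cognos Analytics source binding only needs the name of the file to read; the path below is a hypothetical placeholder:
	"properties":{  
	   "file_name":"/Public Folders/Samples/Products"
	}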

Interaction properties (when used as a target)

Name Type Description




Compose for MySQL


Description: Compose for MySQL database
Data source type ID: 0cd4b64c-b485-47ed-a8c4-329c25412de3
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
sampling_seed - The seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
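
For example, a merge into a target table keyed on a column other than the table's primary key can combine write_mode and key_column_names; the schema, table, and column names below are placeholders:
	"properties":{  
	   "schema_name":"gosales",
	   "table_name":"PRODUCT",
	   "write_mode":"merge",
	   "key_column_names":"PRODUCT_NUMBER"
	}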




Databases for PostgreSQL


Description: Databases for PostgreSQL database
Data source type ID: 048ed1bf-516c-46f0-ae90-fa3349d8bc1c
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
sampling_seed - The seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Db2


Description: IBM Db2 database
Data source type ID: 8c1a4480-1c29-4b33-9086-9cb799d7b157
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl - The port is configured to accept SSL connections. Default: false
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
sampling_seed - The seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
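
As a sketch, replacing a Db2 target table with a specific layout can combine table_action with create_statement; the schema, table, and column definitions below are illustrative placeholders:
	"properties":{  
	   "schema_name":"GOSALESHR",
	   "table_name":"EMPLOYEE_COPY",
	   "table_action":"replace",
	   "create_statement":"CREATE TABLE GOSALESHR.EMPLOYEE_COPY (EMPLOYEE_ID INTEGER NOT NULL, FIRST_NAME VARCHAR(128))"
	}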




Db2 Big SQL


Description: IBM Db2 Big SQL
Data source type ID: 2bdd9544-f13a-47b6-b6c3-f5964a08066a
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl - The port is configured to accept SSL connections. Default: false
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Db2 Hosted


Description: IBM Db2 hosted database
Data source type ID: 9525f6a6-1c44-4925-b1a0-9a2b731518cb
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port * - The port of the database
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
sampling_seed - The seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Db2 Warehouse


Description: Db2 Warehouse
Data source type ID: cfdcb449-1204-44ba-baa6-9a8a878e6aa7
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
api_key * - An application programming interface key that identifies the calling application or user
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
port - The port of the database. Default: 50001
ssl - [Deprecated] The port is configured to accept SSL connections. Default: true
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
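As a target, the same binding shape applies. For example, a 'Db2 Warehouse' target that merges new records using explicit key columns might look as follows; the table and column names are illustrative, and the 'input' wrapper is assumed to mirror the 'output' wrapper of the source examples:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "schema_name":"GOSALESHR",
	         "table_name":"EMPLOYEE",
	         "write_mode":"merge",
	         "key_column_names":"EMPLOYEE_ID,START_DATE"
	      },
	      "ref":"{connection_id}"
	   }
	}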




Db2 for i


Description: IBM Db2 database for i
Data source type ID: 335cbfe7-e495-474e-8ad7-78ad63c05091
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
host * - The hostname or IP address of the database
database * - The unique name of the Db2 location you want to access
password * - The password associated with the username for accessing the system
port * - The port of the database
ssl - The port is configured to accept SSL connections. Default: false
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Db2 for z/OS


Description: IBM Db2 database for z/OS
Data source type ID: c8d3eab2-25f6-4a90-8e10-0b4226693c45
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
collection - The ID of the collection of packages to use
host * - The hostname or IP address of the database
database * - The unique name of the Db2 location you want to access
password * - The password associated with the username for accessing the system
port * - The port of the database
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
sampling_type - Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentage - The percentage for each row or block to be included in the sample
sampling_seed - The seed to be used for getting a repeatable sample
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Db2 on Cloud


Description: IBM Db2 fully-managed cloud SQL database
Data source type ID: 506039fb-802f-4ef2-a2bf-c1682e9c8aa2
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
database * - The name of the database
host * - The hostname or IP address of the database
password * - The password associated with the username for accessing the system
username * - The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
schema_name - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Description
create_statement - The CREATE DDL statement for recreating the target table
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Dropbox


Description: Dropbox secure file sharing and storage service
Data source type ID: 507b850c-f4a1-41d7-ad64-4182a1264014
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Description
access_token * - The OAuth2 access token that you obtained by following the instructions at https://www.dropbox.com/developers/reference/oauth-guide

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt a SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name * - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata for elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
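For file-based connections such as Dropbox, the source properties describe the file and its format rather than a table. A hypothetical binding that reads a delimited file and infers its schema could look as follows; the file name is illustrative, and passing boolean property values as strings is an assumption:
	{  
	   "id":"source1",
	   "type":"binding",
	   "output":{  
	      "id":"source1Output"
	   },
	   "connection":{  
	      "properties":{  
	         "file_name":"/sales/employees.csv",
	         "file_format":"delimited",
	         "field_delimiter":"comma",
	         "first_line_header":"true",
	         "infer_schema":"true"
	      },
	      "ref":"{connection_id}"
	   }
	}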

Interaction properties (when used as a target)

Name Description
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt a SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name * - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
partitioned - Write the file as multiple partitions. Default: false
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write




FTP


Description: Remote file system (FTP)
Data source type ID: d5dbc62f-7c4c-4d49-8eb2-dab6cef2969c
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
connection_mode * - The connection mode. Values: [anonymous, basic, ftps, sftp]
host * - The hostname or IP address of the remote FTP server
key_passphrase - If the private key is encrypted, this passphrase is needed to decrypt/encrypt it
password - The password associated with the username for connecting to the FTP server
port - The port of the FTP server
private_key - The private key that's uniquely associated with your account and recognized by the SFTP server
username * - The username for connecting to the FTP server. Default: anonymous

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt a SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name * - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata for elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher

Interaction properties (when used as a target)

Name Description
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt a SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name * - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
partitioned - Write the file as multiple partitions. Default: false
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write
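An FTP connection used as a target takes the analogous write-side properties. For example, writing a comma-delimited file with a header row could look as follows; the file path is illustrative, and the 'input' wrapper is assumed to mirror the source-side 'output' wrapper:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "file_name":"/exports/employees.csv",
	         "file_format":"delimited",
	         "field_delimiter":"comma",
	         "first_line_header":"true",
	         "write_mode":"write"
	      },
	      "ref":"{connection_id}"
	   }
	}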




Google BigQuery


Description: Google BigQuery
Data source type ID: 933152db-99e1-453a-8ce5-ae0e6714d1a9
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Description
credentials * - The contents of the Google service account key (JSON) file
project_id - The ID of the Google project

Interaction properties (when used as a source)

Name Description
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit - The maximum number of rows to return
schema_name * - The name of the schema that contains the table to read from
select_statement * - The SQL SELECT statement for retrieving data from the table
table_name * - The name of the table to read from

Interaction properties (when used as a target)

Name Description
key_column_names - A comma-separated list of column names to override the primary key used during an update or merge
schema_name * - The name of the schema that contains the table to write to
table_action - The action to take on the target table to handle the new data set. Values: [append]. Default: append
table_name * - The name of the table to write to
write_mode - The mode for writing records to the target table. Values: [delete, delete_insert, insert, merge, update]. Default: insert
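For a 'Google BigQuery' target, both schema_name and table_name are required. A hypothetical target binding could look as follows; the dataset and table names are illustrative, and the 'input' wrapper is assumed to mirror the 'output' wrapper of the source examples:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "schema_name":"SALES_DATASET",
	         "table_name":"PRODUCT",
	         "write_mode":"insert"
	      },
	      "ref":"{connection_id}"
	   }
	}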




Google Cloud Storage


Description: Google Cloud Storage
Data source type ID: 05b7f0ea-6ae4-45e2-a455-cc280f110825
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Description
credentials * - The contents of the Google service account key (JSON) file
project_id - The ID of the Google project

Interaction properties (when used as a source)

Name Description
bucket - The name of the bucket that contains the files to read
byte_limit - The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
range - The range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
display_value_labels - Display the value labels
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt a SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_name - The name of the Excel worksheet to read from
exclude_missing_values - Set values that have been defined as missing values to null
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fields - The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_format - The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name * - The name of the file to read
first_line - The first line of data or header. Default: 0
first_line_header - The first line of the file contains column headers. Default: false
infer_record_count - The number of records to process to obtain the structure of the data. Default: 1000
infer_schema - Obtain the schema from the file. Default: false
infer_timestamp_as_date - Infer columns containing date and time data as date. Default: true
infer_as_varchar - Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling - How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_path - The path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_names - Set column names to the value of the column label
null_value - The value that represents null (a missing value) in the file, for example, NULL
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_mode - The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limit - The maximum number of rows to return
xml_schema - The schema that specifies metadata for elements, for example, data type, values, min, max
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mapping - Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formats - Format data using specified field formats
use_variable_formats - Format data using specified variable formats
xml_path - The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher

Interaction properties (when used as a target)

Name Description
bucket - The name of the bucket that contains the files to write
codec_avro - The compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orc - The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquet - The compression codec to use when writing. Values: [gzip, snappy, uncompressed]
create_bigquery_table - Whether to load data into a BigQuery table from the GCS file. Default: false
create_bucket - Create the bucket that contains the files to write to. Default: false
date_format - The format of date values, for example, yyyy-[m]m-[d]d
decimal_format - The format of decimal values, for example, #,###.##
decimal_format_grouping_separator - The character used to group digits of similar significance
decimal_format_decimal_separator - The character used to separate the integer part from the fractional part of a number
encoding - The appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_key - The key to decrypt a SAV file
escape_character - The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiter - The character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_format - The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name * - The name of the file to write to or delete
first_line_header - The first line of the file contains column headers. Default: false
include_types - Include data types in the first line of the file. Default: false
names_as_labels - Set column labels to the value of the column name
null_value - The value that represents null (a missing value) in the file, for example, NULL
partitioned - Write the file as multiple partitions. Default: false
quote_character - The character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numerics - Enclose numeric values the same as strings using the quote character. Default: true
row_delimiter - The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
schema_name * - The name of the schema that contains the table to write to
storage_class - The storage class for the created bucket. Values: [coldline, multi_regional, nearline, regional, standard]. Default: standard
table_action - The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append
table_name * - The name of the table to write to
time_format - The format of time values, for example, HH:mm:ss[.f]
timestamp_format - The format of timestamp values, for example, yyyy-MM-dd H:m:s
write_mode - Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write
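Because a Google Cloud Storage target can also load the written file into BigQuery, a single binding can combine bucket, file, and table properties. A hypothetical example follows; all bucket, dataset, and table names are illustrative, and passing boolean property values as strings is an assumption:
	{  
	   "id":"target1",
	   "type":"binding",
	   "input":{  
	      "id":"target1Input"
	   },
	   "connection":{  
	      "properties":{  
	         "bucket":"sales-extracts",
	         "create_bucket":"true",
	         "storage_class":"regional",
	         "file_name":"product.csv",
	         "create_bigquery_table":"true",
	         "schema_name":"SALES_DATASET",
	         "table_name":"PRODUCT"
	      },
	      "ref":"{connection_id}"
	   }
	}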




HTTP


Description: Hypertext Transfer Protocol (HTTP)
Data source type ID: 4210c294-8b0f-46b4-bcdc-1c6ada2b7e6b
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
url * - The URL of the file to be accessed
ssl_certificate - A self-signed certificate that was created by a tool such as OpenSSL

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes; KB, MB, GB, or TB
rangeThe range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_formatThe format of date values, for example, yyyy-[m]m-[d]d
decimal_formatThe format of decimal values, for example, #,###.##
decimal_format_grouping_separatorThe character used to group digits of similar significance
decimal_format_decimal_separatorThe character used to separate the integer part from the fractional part of a number
display_value_labelsDisplay the value labels
encodingThe appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_keyKey to decrypt sav file
escape_characterThe character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value.. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_nameThe name of the Excel worksheet to read from
exclude_missing_valuesSet values that have been defined as missing values to null
field_delimiterThe character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fieldsThe path that identifies the specified elements to retrieve from the root path of a XML document, for example, ../publisher
file_formatThe format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_nameThe name of the file to read
first_lineThe first line of data or header. Default: 0
first_line_headerThe first line of the file contains column headers. Default: false
infer_record_countThe number of records to process to obtain the structure of the data. Default: 1000
infer_schemaObtain the schema from the file. Default: false
infer_timestamp_as_dateInfer columns containing date and time data as date. Default: true
infer_as_varcharTreat the data in all columns as VARCHARs. Default: false
invalid_data_handlingHow to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_pathThe path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_namesSet column names to the value of the column label
null_valueThe value that represents null (a missing value) in the file, for example, NULL
quote_characterThe character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_modeThe method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiterThe character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limitThe maximum number of rows to return
xml_schemaThe schema that specifies metadata about elements, for example, data type, values, min, max
time_formatThe format of time values, for example, HH:mm:ss[.f]
timestamp_formatThe format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mappingOverrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formatsFormat data using specified field formats
use_variable_formatsFormat data using specified variable formats.
xml_pathThe path that identifies the root elements to retrieve from an XML document, for example, /book/publisher

Interaction properties (when used as a target)

Name Type Description




Informix


Description: IBM Informix database
Data source type ID: 029e5d1c-ba73-4b09-b742-14c3a39b6cf9
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
server *The name of the database server to connect to
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from
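
For example, following the pattern shown in the introduction, an Informix connection used as a source could combine a schema and table with random sampling (the schema name, table name, and connection ID below are illustrative, and the numeric form of sampling_percentage is an assumption):
	{
	   "id":"source1",
	   "type":"binding",
	   "output":{
	      "id":"source1Output"
	   },
	   "connection":{
	      "properties":{
	         "schema_name":"MYSCHEMA",
	         "table_name":"EMPLOYEE",
	         "sampling_type":"random",
	         "sampling_percentage":10
	      },
	      "ref":"{connection_id}"
	   }
	}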

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Looker


Description: Looker
Data source type ID: 69857d6b-2be8-4a59-8a70-723405f09708
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
client_id *The client ID for authorizing access to Looker
client_secret *The password associated with the client ID for authorizing access to Looker
host *The hostname or IP address of the Looker server
portThe port of the Looker server. Default: 19999

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
rangeThe range of cells to retrieve from the Excel worksheet, for example, C1:F10
sheet_nameThe name of the Excel worksheet to read from
file_formatThe format of the file. Values: [csv, delimited, excel, json]. Default: csv
file_name *The name of the file to read
infer_record_countThe number of records to process to obtain the structure of the data. Default: 1000
infer_schemaObtain the schema from the file. Default: false
infer_as_varcharTreat the data in all columns as VARCHARs. Default: false
invalid_data_handlingHow to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
read_modeThe method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_limitThe maximum number of rows to return
type_mappingOverrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2

Interaction properties (when used as a target)

Name Type Description




Microsoft Azure Data Lake Store


Description: Microsoft Azure Data Lake Store via the WebHDFS API
Data source type ID: 6863060d-97c4-4653-abbe-958bde533f8c
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
client_id *The client ID for authorizing access to Microsoft Azure Data Lake Store
client_secret *The authentication key associated with the client ID for authorizing access to Microsoft Azure Data Lake Store
tenant_id *The Azure Active Directory tenant ID
url *The WebHDFS URL for accessing HDFS

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
rangeThe range of cells to retrieve from the Excel worksheet, for example, C1:F10
date_formatThe format of date values, for example, yyyy-[m]m-[d]d
decimal_formatThe format of decimal values, for example, #,###.##
decimal_format_grouping_separatorThe character used to group digits of similar significance
decimal_format_decimal_separatorThe character used to separate the integer part from the fractional part of a number
display_value_labelsDisplay the value labels
encodingThe appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_keyKey to decrypt sav file
escape_characterThe character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [, backslash, double_quote, none, single_quote]. Default: none
sheet_nameThe name of the Excel worksheet to read from
exclude_missing_valuesSet values that have been defined as missing values to null
field_delimiterThe character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
xml_path_fieldsThe path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher
file_formatThe format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv
file_name *The name of the file to read
first_lineThe line number at which the data (or the header, if present) begins. Default: 0
first_line_headerThe first line of the file contains column headers. Default: false
infer_record_countThe number of records to process to obtain the structure of the data. Default: 1000
infer_schemaObtain the schema from the file. Default: false
infer_timestamp_as_dateInfer columns containing date and time data as date. Default: true
infer_as_varcharTreat the data in all columns as VARCHARs. Default: false
invalid_data_handlingHow to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
json_pathThe path that identifies the elements to retrieve from a JSON document, for example, /book/publisher
labels_as_namesSet column names to the value of the column label
null_valueThe value that represents null (a missing value) in the file, for example, NULL
quote_characterThe character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
read_modeThe method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_delimiterThe character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
row_limitThe maximum number of rows to return
xml_schemaThe schema that specifies metadata about elements, for example, data type, values, min, max
time_formatThe format of time values, for example, HH:mm:ss[.f]
timestamp_formatThe format of timestamp values, for example, yyyy-MM-dd H:m:s
type_mappingOverrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
use_field_formatsFormat data using specified field formats
use_variable_formatsFormat data using specified variable formats.
xml_pathThe path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
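
For example, a Microsoft Azure Data Lake Store connection used as a source could read a delimited file and infer its schema as follows (the file name and connection ID are illustrative, and the boolean values are shown as JSON booleans):
	{
	   "id":"source1",
	   "type":"binding",
	   "output":{
	      "id":"source1Output"
	   },
	   "connection":{
	      "properties":{
	         "file_name":"/sales/transactions.csv",
	         "file_format":"csv",
	         "first_line_header":true,
	         "infer_schema":true
	      },
	      "ref":"{connection_id}"
	   }
	}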

Interaction properties (when used as a target)

Name Type Description
codec_avroThe compression codec to use when writing. Values: [bzip2, deflate, null, snappy]
codec_orcThe compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib]
codec_parquetThe compression codec to use when writing. Values: [gzip, snappy, uncompressed]
date_formatThe format of date values, for example, yyyy-[m]m-[d]d
decimal_formatThe format of decimal values, for example, #,###.##
decimal_format_grouping_separatorThe character used to group digits of similar significance
decimal_format_decimal_separatorThe character used to separate the integer part from the fractional part of a number
encodingThe appropriate character encoding for your data, for example, UTF-8. Default: utf-8
encryption_keyKey to decrypt sav file
escape_characterThe character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value.. Values: [, backslash, double_quote, none, single_quote]. Default: none
field_delimiterThe character that separates each value from the next value, for example, a comma. Values: [, colon, comma, tab]. Default: comma
file_formatThe format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv
file_name *The name of the file to write to or delete
first_line_headerThe first line of the file contains column headers. Default: false
include_typesInclude data types in the first line of the file. Default: false
names_as_labelsSet column labels to the value of the column name
null_valueThe value that represents null (a missing value) in the file, for example, NULL
quote_characterThe character that's used to enclose string values, for example, a double quotation mark. Values: [", ', double_quote, none, single_quote]. Default: none
quote_numericsEnclose numeric values the same as strings using the quote character. Default: true
row_delimiterThe character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [carriage_return, carriage_return_line_feed, line_feed, new_line]. Default: new_line
time_formatThe format of time values, for example, HH:mm:ss[.f]
timestamp_formatThe format of timestamp values, for example, yyyy-MM-dd H:m:s
write_modeWhether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write
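
For example, when the same connection is used as a target, its 'properties' section could specify the file to write and the write mode (the values are illustrative; only the 'connection' section of the target binding is shown):
	"connection":{
	   "properties":{
	      "file_name":"/sales/output.csv",
	      "file_format":"csv",
	      "first_line_header":true,
	      "write_mode":"write"
	   },
	   "ref":"{connection_id}"
	}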




Microsoft Azure SQL Database


Description: Microsoft Azure SQL Database
Data source type ID: e375c0ae-cba9-47fc-baf7-523bef88c09e
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Microsoft SQL Server


Description: Microsoft SQL Server database
Data source type ID: 48695e79-6279-474a-b539-342625d3dfc2
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
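
For example, a Microsoft SQL Server connection used as a target could replace an existing table before inserting the new data set (the schema and table names are illustrative; only the 'connection' section of the target binding is shown):
	"connection":{
	   "properties":{
	      "schema_name":"dbo",
	      "table_name":"SALES_SUMMARY",
	      "table_action":"replace",
	      "write_mode":"insert"
	   },
	   "ref":"{connection_id}"
	}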




MongoDB


Description: MongoDB database
Data source type ID: c6fb9293-51eb-4f2b-b20c-4dafa3136744
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
auth_databaseThe name of the database in which the user was created
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
sslThe port is configured to accept SSL connections. Default: false
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, update]. Default: insert




MySQL


Description: MySQL database
Data source type ID: b2cc3dc2-aff7-4a80-8f80-5e8c5703e9d2
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
encodingThe character encoding for your data. If not specified, the default character set of the database server is used. If you change the value, enter a valid character encoding, for example, UTF-8
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
sslThe port is configured to accept SSL connections. Default: false
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Netezza (PureData System for Analytics)


Description: IBM Netezza (PureData System for Analytics) database
Data source type ID: c2a82a72-0711-4376-a468-4e9951cabf22
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




OData


Description: OData
Data source type ID: 27c3e1b0-b7d2-4e32-9511-1b8aaa197de0
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
api_key *The API key to use for connecting to the service root
auth_type *The type of authentication to be used to access the service root. Values: [api_key, basic, none]
password *The password associated with the username for accessing the system
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
service_root *The URL used to access the service root of a site implementing the OData protocol.
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
entity_set_name *The entity set to be processed
row_limitThe maximum number of rows to return
row_startThe first row of data to read

Interaction properties (when used as a target)

Name Type Description




Oracle


Description: Oracle database
Data source type ID: 971223d3-093e-4957-8af9-a83181ee9dd9
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
sid *The unique name of the database instance. If you provide a SID, do not provide a service name
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
sslThe port is configured to accept SSL connections. Default: false
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
service_name *The name of the service. If you provide a service name, do not provide a SID
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Pivotal Greenplum


Description: Pivotal Greenplum database
Data source type ID: e278eff1-a7c4-4d60-9a02-bde1bb1d26ef
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
sslThe port is configured to accept SSL connections. Default: false
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




Planning Analytics


Description: Planning Analytics
Data source type ID: c8f3d379-78b2-4bad-969d-2e928277377e
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
auth_type *The type of authentication to be used to access the TM1 server API. Values: [basic, cam_credentials, cam_passport, wia_token]. Default: basic
cam_passportThe CAM passport to use for connecting to the TM1 server API
namespaceThe namespace to use for connecting to the TM1 server API
password *The password associated with the username for accessing the system
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
service_root *The URL used to access the TM1 server API implementing the OData protocol
username *The username for accessing the system
wia_tokenThe WIA token to use for connecting to the TM1 server API

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
cube_name *The cube to be processed
row_limitThe maximum number of rows to return
row_startThe first row of data to read
view_name *The view to be processed
view_group *The group that the view belongs to
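
For example, a Planning Analytics connection used as a source identifies the cube, view, and view group to read (the cube, view, and group names below are illustrative):
	{
	   "id":"source1",
	   "type":"binding",
	   "output":{
	      "id":"source1Output"
	   },
	   "connection":{
	      "properties":{
	         "cube_name":"plan_BudgetPlan",
	         "view_name":"Budget Input Detailed",
	         "view_group":"Public"
	      },
	      "ref":"{connection_id}"
	   }
	}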

Interaction properties (when used as a target)

Name Type Description
cube_name *The cube to be processed




PostgreSQL


Description: PostgreSQL database
Data source type ID: e1c23729-99d8-4407-b3df-336e33ffdc82
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
database *The name of the database
host *The hostname or IP address of the database
password *The password associated with the username for accessing the system
port *The port of the database
sslThe port is configured to accept SSL connections. Default: false
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
sampling_typeIndicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentagePercentage for each row or block to be included in the sample
sampling_seedSeed to be used for getting a repeatable sample
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, merge, update]. Default: insert




SAP OData


Description: SAP OData
Data source type ID: 79a0a133-cbb6-48d0-a3b0-0956a9655401
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Type Description
api_key *The API key to use for connecting to the service root
auth_type *The type of authentication to be used to access the service root. Values: [api_key, basic, none]
password *The password associated with the username for accessing the system
sap_gateway_url *The URL used to access the SAP gateway catalog
ssl_certificateA self-signed certificate that was created by a tool such as OpenSSL
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
entity_set_name *The entity set to be processed
row_limitThe maximum number of rows to return
row_startThe first row of data to read
service_name *The name of the service containing the entity set to be processed
service_versionThe version of the service containing the entity set to be processed
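
For example, an SAP OData connection used as a source identifies the service and the entity set to process (the service and entity set names below are illustrative):
	{
	   "id":"source1",
	   "type":"binding",
	   "output":{
	      "id":"source1Output"
	   },
	   "connection":{
	      "properties":{
	         "service_name":"ZSALES_ORDER_SRV",
	         "service_version":"1",
	         "entity_set_name":"SalesOrderSet"
	      },
	      "ref":"{connection_id}"
	   }
	}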

Interaction properties (when used as a target)

Name Type Description




Salesforce.com


Description: Salesforce.com
Data source type ID: 06847b16-07b4-4415-a924-c63d11a17aa1
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Type Description
password *The password associated with the username for accessing the system
username *The username for accessing the system

Interaction properties (when used as a source)

Name Type Description
byte_limitThe maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limitThe maximum number of rows to return
schema_nameThe name of the schema that contains the table to read from
select_statement *The SQL SELECT statement for retrieving data from the table
table_name *The name of the table to read from

Interaction properties (when used as a target)

Name Type Description
create_statementThe Create DDL statement for recreating the target table
key_column_namesA comma separated list of column names to override the primary key used during an update or merge
schema_nameThe name of the schema that contains the table to write to
table_actionThe action to take on the target table to handle the new data set. Values: [append, replace]. Default: append
table_name *The name of the table to write to
write_modeThe mode for writing records to the target table. Values: [insert, update]. Default: insert




Snowflake


Description: Snowflake database
Data source type ID: 2fc1372f-b58c-4d45-b0c4-dfb32a1c78a5
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
account_name *: The full name of your account (provided by Snowflake)
database *: The name of the database
password *: The password associated with the username for accessing the system
role: The default access control role to use in the Snowflake session
username *: The username for accessing the system
warehouse *: The virtual warehouse

Interaction properties (when used as a source)

Name Description
byte_limit: The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit: The maximum number of rows to return
sampling_type: Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none
sampling_percentage: Percentage for each row or block to be included in the sample
sampling_seed: Seed to be used for getting a repeatable sample
schema_name: The name of the schema that contains the table to read from
select_statement *: The SQL SELECT statement for retrieving data from the table
table_name *: The name of the table to read from
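
For example, to read a repeatable 10% row-level sample of a Snowflake table, the 'properties' section of the source binding might combine the sampling properties as follows (the schema and table names are hypothetical, and values are shown as strings to match the examples in the introduction):
	"properties":{  
	   "schema_name":"PUBLIC",
	   "table_name":"ORDERS",
	   "sampling_type":"row",
	   "sampling_percentage":"10",
	   "sampling_seed":"42"
	}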

Interaction properties (when used as a target)

Name Description
create_statement: The CREATE DDL statement for recreating the target table
key_column_names: A comma-separated list of column names to override the primary key used during an update or merge
schema_name: The name of the schema that contains the table to write to
table_action: The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *: The name of the table to write to
write_mode: The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
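
As a sketch, a Snowflake connection used as a target could merge incoming records into an existing table, overriding the primary key with key_column_names; the 'properties' section might look like this (all names shown are hypothetical):
	"properties":{  
	   "schema_name":"PUBLIC",
	   "table_name":"ORDERS",
	   "write_mode":"merge",
	   "key_column_names":"ORDER_ID"
	}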




Sybase


Description: Sybase database
Data source type ID: 6976a3fc-b2ad-4db6-818c-ea049cac309d
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
database *: The name of the database
host *: The hostname or IP address of the database
password *: The password associated with the username for accessing the system
port *: The port of the database
username *: The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit: The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit: The maximum number of rows to return
schema_name: The name of the schema that contains the table to read from
select_statement *: The SQL SELECT statement for retrieving data from the table
table_name *: The name of the table to read from
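
Mirroring the 'IBM Db2 Warehouse on Cloud' example in the introduction, a Sybase source binding may likewise supply just a SELECT statement in place of schema_name and table_name (this is an assumption based on that pattern, and the query shown is hypothetical):
	"properties":{  
	   "select_statement":"select * from dbo.CUSTOMER"
	}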

Interaction properties (when used as a target)

Name Description
create_statement: The CREATE DDL statement for recreating the target table
key_column_names: A comma-separated list of column names to override the primary key used during an update or merge
schema_name: The name of the schema that contains the table to write to
table_action: The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append
table_name *: The name of the table to write to
write_mode: The mode for writing records to the target table. Values: [insert, update]. Default: insert




Sybase IQ


Description: Sybase IQ database
Data source type ID: 49079262-fac2-4762-99d1-452c1caf6b49
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
database *: The name of the database
host *: The hostname or IP address of the database
password *: The password associated with the username for accessing the system
port *: The port of the database
username *: The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit: The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit: The maximum number of rows to return
schema_name: The name of the schema that contains the table to read from
select_statement *: The SQL SELECT statement for retrieving data from the table
table_name *: The name of the table to read from

Interaction properties (when used as a target)

Name Description
create_statement: The CREATE DDL statement for recreating the target table
key_column_names: A comma-separated list of column names to override the primary key used during an update or merge
schema_name: The name of the schema that contains the table to write to
table_action: The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append
table_name *: The name of the table to write to
write_mode: The mode for writing records to the target table. Values: [insert, update]. Default: insert




Tableau


Description: Tableau
Data source type ID: 9ebc33eb-8c01-43fd-be1e-7202cf5c2c82
Can be used as a source: Yes
Can be used as a target: No
Secure gateway: Not applicable / not supported

Connection properties (connection asset)

Name Description
host *: The hostname or IP address of the Tableau server
password *: The password associated with the username for accessing the system
site *: The name of the Tableau site to use
username *: The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit: The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
file_name *: The name of the file to read
infer_record_count: The number of records to process to obtain the structure of the data. Default: 1000
infer_schema: Obtain the schema from the file. Default: false
infer_as_varchar: Treat the data in all columns as VARCHARs. Default: false
invalid_data_handling: How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail
read_mode: The method for reading files. Values: [read_multiple_regex, read_multiple_wildcard, read_raw, read_single]. Default: read_single
row_limit: The maximum number of rows to return
type_mapping: Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2
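
As an illustrative sketch, the 'properties' section of a Tableau source binding that reads a single file and infers its schema might look like the following (the file name and values are hypothetical, and values are shown as strings to match the examples in the introduction):
	"properties":{  
	   "file_name":"SalesExtract",
	   "read_mode":"read_single",
	   "infer_schema":"true",
	   "infer_record_count":"500"
	}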

Interaction properties (when used as a target)

(none; this connection cannot be used as a target)




Teradata


Description: Teradata database
Data source type ID: 96ec8f53-a818-4ba1-bd8d-c86cc33a0b45
Can be used as a source: Yes
Can be used as a target: Yes
Secure gateway: Supported but optional

Connection properties (connection asset)

Name Description
database *: The name of the database
host *: The hostname or IP address of the database
password *: The password associated with the username for accessing the system
port *: The port of the database
username *: The username for accessing the system

Interaction properties (when used as a source)

Name Description
byte_limit: The maximum number of bytes to return. Use any of these suffixes: KB, MB, GB, or TB
row_limit: The maximum number of rows to return
sampling_type: Indicates which data sampling type should be used in the select statement. Values: [block, none]. Default: none
sampling_percentage: Percentage for each row or block to be included in the sample
schema_name: The name of the schema that contains the table to read from
select_statement *: The SQL SELECT statement for retrieving data from the table
table_name *: The name of the table to read from

Interaction properties (when used as a target)

Name Description
create_statement: The CREATE DDL statement for recreating the target table
key_column_names: A comma-separated list of column names to override the primary key used during an update or merge
schema_name: The name of the schema that contains the table to write to
table_action: The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append
table_name *: The name of the table to write to
write_mode: The mode for writing records to the target table. Values: [insert, merge, update]. Default: insert
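
For instance, to empty an existing Teradata table before loading new rows, a target binding's 'properties' section might combine table_action and write_mode as follows (the schema and table names are hypothetical):
	"properties":{  
	   "schema_name":"SALESDB",
	   "table_name":"EMPLOYEE",
	   "table_action":"truncate",
	   "write_mode":"insert"
	}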



Generated on: 2020-08-09T10:40:40.101Z
Generated from: GET /v2/datasource_types?interactive_properties=true&connection_properties=true