{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "schema_name":"GOSALESHR", "table_name":"EMPLOYEE" }, "ref":"{connection_id}" } }alternatively the 'IBM Db2 Warehouse on Cloud' connection used as a source also allows just a SQL select statement to be provided:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "select_statement":"select * from GOSALES.PRODUCT_NAME_LOOKUP" }, "ref":"{connection_id}" } }
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
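For example, a source binding that reads a table by name instead of a SQL statement can combine the table properties with the optional limit and sampling properties. The schema and table come from the earlier example; the limit and sampling values are illustrative, and the exact value types accepted for numeric properties are not shown in the tables above:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "schema_name":"GOSALESHR", "table_name":"EMPLOYEE", "row_limit":1000, "sampling_type":"random", "sampling_percentage":10 }, "ref":"{connection_id}" } }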
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
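As a rough sketch, the properties object for a target binding that merges rows into an existing table using a user-specified key could look like the following; the schema, table, and key column names are hypothetical:
{ "schema_name":"GOSALESHR", "table_name":"EMPLOYEE_COPY", "write_mode":"merge", "key_column_names":"EMPLOYEE_CODE", "table_action":"append" }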
Name | Type | Description |
---|---|---|
connection_mode | Specifies whether the database is identified by a SID or by a service name | |
sid * | The unique name of the database instance. If you provide a SID, do not provide a service name | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
metadata_discovery | Determines which types of metadata can be discovered. The 'No Remarks' option is the default. Values: [no_remarks, no_remarks_or_synonyms, no_synonyms, remarks_and_synonyms] | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
service_name * | The name of the service. If you provide a service name, do not provide a SID | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
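As a sketch, the connection properties for a database identified by a service name rather than a SID might look like the following; all values are hypothetical, and you provide either service_name or sid, not both:
{ "host":"db.example.com", "port":"1521", "service_name":"SALESDB", "username":"dbuser", "password":"********", "ssl":"false" }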
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: true | |
query_timeout | Sets the default query timeout in seconds for all statements created by a connection. If not specified, the default value of 300 seconds is used. Default: 300 | |
retry_limit | Specify the maximum number of retry connection attempts to be made by the connector with an increasing delay between each retry. If no value is provided, two attempts will be made by default if necessary. | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
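A sketch of connection properties that enable SSL with a certificate that was not signed by a known authority and that tune the timeout and retry behavior; the host, database, and credential values are hypothetical, and the certificate text is truncated for brevity:
{ "database":"SALESDW", "host":"db.example.com", "port":"50001", "username":"dbuser", "password":"********", "ssl":"true", "ssl_certificate":"-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----", "query_timeout":600, "retry_limit":3 }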
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
time_type | Choose the required time type for time values in the data source. Values: [time, timestamp, varchar]. Default: timestamp | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the query timeout. If not specified, the default value of 300 seconds (5 minutes) is used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
access_key * | The access key ID (username) for authorizing access to AWS | |
bucket * | The name of the bucket that contains the files to access | |
create_statement | The Create DDL statement for recreating the target table | |
file_name * | Name of a temporary file to be stored in the S3 bucket. | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the query timeout. If not specified, the default value of 300 seconds (5 minutes) is used. | |
region * | Amazon Web Services (AWS) region | |
schema_name | The name of the schema that contains the table to write to | |
secret_key * | The password associated with the access key ID for authorizing access to AWS | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, load, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
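The target properties in this table include AWS credentials, a bucket, a region, and a temporary file name, which suggests that the load write mode stages data through Amazon S3 before loading it into the table. A sketch with hypothetical names and credentials:
{ "schema_name":"PUBLIC", "table_name":"SALES_FACT", "write_mode":"load", "table_action":"append", "access_key":"AKIAEXAMPLE", "secret_key":"********", "bucket":"my-staging-bucket", "region":"us-east-1", "file_name":"staging/sales_fact.tmp" }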
Name | Type | Description |
---|---|---|
access_key | The access key ID (username) for authorizing access to AWS | |
auth_method | ||
bucket | The name of the bucket that contains the files to access | |
duration_seconds | The duration in seconds of the temporary security credentials | |
url | The endpoint URL to use for access to AWS S3 | |
external_id | The external ID of the organization that is attempting to assume a role | |
proxy_host * | The server proxy host | |
proxy_password | The password used to authenticate with the server proxy | |
proxy_port * | The server proxy port | |
proxy_user | The name of the user used to connect to the server proxy | |
region | Amazon Web Services (AWS) region | |
role_arn | The Amazon Resource Name (ARN) of the role that the connection should assume | |
role_session_name * | A name such as your IAM user name to identify the session to S3 administrators | |
secret_key * | The password associated with the access key ID for authorizing access to AWS | |
proxy | Use server proxy. Default: false | |
session_token | The session token (only needed with temporary credentials) |
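A minimal sketch of connection properties that authenticate with an access key and secret key; the credentials, bucket, and region are hypothetical, and the proxy, role, and session token properties are only needed for the corresponding setups:
{ "access_key":"AKIAEXAMPLE", "secret_key":"********", "bucket":"my-data-bucket", "region":"us-east-1" }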
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to read | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
_file_format._delimited_syntax._data_format | Select how binary data is represented. Binary data includes data that is of integer, float, double, or binary data types. If variable length binary fields are written as binary, they are prefixed with a 4-byte integer that represents the size of the field. Values: [1, 0]. Default: 0 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be different. If you get an error about them not being unique when only one of them was provided, provide the missing one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be different. If you get an error about them not being unique when only one of them was provided, provide the missing one explicitly. | |
_file_format._delimited_syntax._record_def._record_def_source | If the record definition is a delimited string, enter a delimited string that specifies the names and data types of the fields. Use the format name:data_type, and separate each field with the delimiter specified as the Field delimiter property. If the record definition is in a delimited string file or OSH schema file, specify the full path of the file. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
_file_format._delimited_syntax._escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
_exclude_files | Specify a comma-separated list of file prefixes to exclude from the files that are read. If a prefix includes a comma, escape the comma by using a backslash (\). | |
exclude_missing_values | Set values that have been defined as missing values to null | |
_file_format._delimited_syntax._field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: , | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of a XML document, for example, ../publisher | |
_file_format | . Values: [2, 4, 1, 0, 6, 7]. Default: 0 | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name | The name of the file to read | |
_filename_column | Specify the name of the column to write the source file name to. | |
first_line | Indicates the row at which to start reading. Default: 0 | |
_first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_recurse | Specify whether to read files that are in child folders of the prefix that is specified for the File name property. If you exclude child folders, the prefix that is specified must include a trailing forward slash (/). Default: true | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
_file_format._avro_source._output_j_s_o_n | Specify whether each row in the Avro file should be exported as JSON to a string column. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
_read_mode | Select Read single file to read from a single file or Read multiple files to read from the files that match a specified file prefix. Select List buckets to list the buckets for your account in the specified region. Select List files to list the files for your account in the specified bucket. Values: [2, 3, 1, 0]. Default: 0 | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
_file_format._delimited_syntax._record_def | Select whether the record definition is provided to the Amazon S3 connector from the source file, a delimited string, a file that contains a delimited string, or a schema file. When runtime column propagation is enabled, this metadata provides the column definitions. If a schema file is provided, the schema file overrides the values of formatting properties in the stage and the column definitions that are specified on the Columns page of the output link.. Values: [1, 2, 3, 0, 4]. Default: 0 | |
_reject_mode | Specify what the connector does when a record that contains invalid data is found in the source file. Select Continue to read the rest of the file, Fail to stop the job with an error message, or Reject to send the rejected data to a reject link.. Values: [0, 1, 2]. Default: 0 | |
_file_format._delimited_syntax._row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code.. Default: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to the row that is treated as the start of the data | |
xml_schema | The schema that specified metadata information of elements, for example, data type, values, min, max | |
_file_format._o_r_c_source._temp_staging_area | Specify a directory on the engine tier with write permission for the user running the job. This directory will be used to create the temporary files during the job run. | |
_file_format._parquet_source._temp_staging_area | Specify a directory on the engine tier with write permission for the user running the job. This directory will be used to create the temporary files during the job run. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
_file_format._delimited_syntax._trace_file | Specify the full path to a file to contain trace information from the parser for delimited files. Because writing to a trace file requires additional processing, specify a value for this property only during job development. | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from a XML document, for example, /book/publisher |
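Continuing the binding format from the earlier examples, here is a sketch that reads a delimited file with a header row and infers the schema from the first 1000 records; the bucket and file name are hypothetical, and the boolean and numeric value types are illustrative:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "bucket":"my-data-bucket", "file_name":"sales/orders.csv", "file_format":"csv", "first_line_header":true, "infer_schema":true, "infer_record_count":1000 }, "ref":"{connection_id}" } }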
Name | Type | Description |
---|---|---|
_create_bucket._append_u_u_i_d | Select whether to append a unique set of characters that identifies the bucket to the name of the bucket that is created. Default: false | |
append_uid | Use this property to choose whether a unique identifier is appended to the file name. When set to Yes, the unique identifier is appended to the file name, and a new file is written for every wave of data that is streamed into the stage. When set to No, the file is overwritten on every wave. Default: false | |
_file_attributes._life_cycle_rule._transition | Specify whether to archive the file in Amazon Glacier. You can specify the date when the file is set to be archived or the number of days before the file is set to be archived. Default: false | |
_file_format._avrotarget._avro_array_keys | If the file format is Avro in a target stage, normalization is controlled via array keys. | |
_file_format._avrotarget._avro_schema | Specify the fully qualified path for a JSON file that defines the schema for the Avro file. | |
_file_format._parquet_target._parquet_block_size | Specify the block size. Default: 10000000 | |
bucket | The name of the bucket that contains the files to write | |
_file_format._o_r_c_target._orc_buffer_size | Buffer size. Default: 10000 | |
_codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
_file_format._o_r_c_target._orc_compress | Specify compression mechanism. Values: [0, 2, 1]. Default: 2 | |
_file_format._parquet_target._parquet_compress | Specify compression mechanism. Values: [2, 3, 0, 1]. Default: 1 | |
_file_attributes._content_type | Specify the content type of the file to write, for example, text/xml or application/x-www-form-urlencoded; charset=utf-8. | |
create_bucket | Create the bucket that contains the files to write to. Default: false | |
_file_format._delimited_syntax._data_format | Select how binary data is represented. Binary data includes data that is of integer, float, double, or binary data types. If variable length binary fields are written as binary, they are prefixed with a 4-byte integer that represents the size of the field. Values: [1, 0]. Default: 0 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
_file_attributes._life_cycle_rule._transition._transition_date * | Specify the date when the file is set to be archived in Amazon Glacier in the format "YYYY-MM-DD". | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be different. If you get an error about them not being unique when only one of them was provided, provide the missing one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be different. If you get an error about them not being unique when only one of them was provided, provide the missing one explicitly. | |
_file_attributes._life_cycle_rule | Specify whether you want to define one or more rules for when a file is set to expire or be archived. Default: false | |
_file_attributes._life_cycle_rule._expiration._expiration_duration * | Specify the number of days that the file will exist in Amazon Web Services before it expires. | |
_file_attributes._life_cycle_rule._transition._transition_duration * | Specify the number of days that the file will exist in Amazon S3 before it is set to be archived in Amazon Glacier. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
_file_format._delimited_syntax._escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
_file_attributes._life_cycle_rule._expiration | Specify whether you want the file to expire. When a file expires, it is deleted from Amazon Web Services. You can specify the date when the file is set to expire or the number of days that the file will exist in Amazon Web Services before it is set to expire.. Default: false | |
_file_attributes._life_cycle_rule._expiration._expiration_date * | Specify the date when the file is set to expire in the format "YYYY-MM-DD". | |
_file_format._delimited_syntax._field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: , | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
_file_format | . Values: [2, 4, 1, 0, 6, 7]. Default: 0 | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name | The name of the file to write to or delete | |
file_size_threshold | Specify the threshold for the file size in megabytes. Processing nodes will start a new file each time the size exceeds the value specified in the threshold. Default: 1 | |
_first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_file_exists | Specify what the connector does when it tries to write a file that already exists. Select Overwrite file to overwrite a file if it already exists, Do not overwrite file to not overwrite the file and stop the job, or Fail to stop the job with an error message. Values: [1, 2, 0]. Default: 0 | |
_file_format._delimited_syntax._encoding._output_b_o_m | Specify whether to include a byte order mark in the file when the file encoding is a Unicode encoding such as UTF-8, UTF-16, or UTF-32. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
_file_format._avrotarget._input_j_s_o_n | Specify whether the input data is provided as JSON. Default: false | |
_log_interval | Specify the amount of data in MB that the connector writes to Amazon S3 before the connector writes a progress message to the job log. For example, if the interval is 20 MB, the connector writes a progress message to the log after the connector writes 20 MB of data, 40 MB of data, and so on. If you do not specify an interval, progress messages are not written. | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
_thread_count | Specify the number of writers that will write parts of the file at the same time. Default: 5 | |
_file_format._parquet_target._parquet_page_size | Specify the page size. Default: 10000 | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
_file_format._delimited_syntax._row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code.. Default: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
_file_attributes._life_cycle_rule._l_c_rule_scope | Specify whether to apply the rule to the file only or to all of the files in the folder that contains the file. If the connector runs in parallel, this property is ignored, and the rule is applied to all of the files in the folder.. Values: [0, 1]. Default: 0 | |
_file_attributes._encryption | . Values: [1, 2, 0]. Default: 0 | |
_file_attributes._storage_class | Specify the storage class for the file. The reduced redundancy storage class provides less redundancy for files than the standard class. For more information, see the Amazon S3 documentation.. Values: [1, 0]. Default: 0 | |
_file_format._o_r_c_target._orc_stripe_size | Stripe size. Default: 100000 | |
_file_format._o_r_c_target._temp_staging_area | Specify a directory on the engine tier with write permission for the user running the job. This directory will be used to create the temporary files during the job run. | |
_file_format._parquet_target._temp_staging_area | Specify a directory on the engine tier with write permission for the user running the job. This directory will be used to create the temporary files during the job run. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
_file_attributes._life_cycle_rule._l_c_rule_format | Specify whether the lifecycle rule is based on the number of days from the date that the file is created or based on a specific date.. Values: [0, 1]. Default: 0 | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
_file_format._avrotarget._use_schema | Specify whether to provide the Avro schema by using a schema file. It is recommended to use No for primitive data types and Yes for complex data types. Default: false | |
_file_attributes._user_metadata | Specify metadata in a list of name-value pairs. Separate each name-value pair with a semicolon, for example, Topic=News;SubTopic=Sports. All characters that you specify must be in the US-ASCII character set. | |
sheet_name | The name of the Excel worksheet to write to | |
_write_mode | Select Write to write a file per node, or select Delete to delete files.. Values: [1, 0]. Default: 0 | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
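A sketch of target properties for writing a compressed, partitioned Parquet file; the bucket and file name are hypothetical, and the boolean value type is illustrative:
{ "bucket":"my-data-bucket", "file_name":"exports/orders.parquet", "file_format":"parquet", "codec_parquet":"snappy", "partitioned":true, "write_mode":"write" }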
Name | Type | Description |
---|---|---|
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
keyspace * | The name of the keyspace | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
authenticator_type | Choose the authentication method used to connect to the Cassandra cluster. Values: [allow_all_authenticator, password_authentication]. Default: allow_all_authenticator | |
cluster_contact_points * | Multiple contact points (host names or IPs) of the target Cassandra cluster, separated by semicolon | |
compression | The type of compression of transport-level requests and responses - you need to provide third-party libraries and make them available on the connector classpath. Values: [lz4, no_compression, snappy]. Default: no_compression | |
ssl_keystore_password | Provide the password that was used when generating the keystore | |
ssl_keystore_path | The path to your keystore file | |
local_datacenter * | The name of the datacenter local to the defined contact points. Default: datacenter1 | |
password * | The user's password used to connect to the Cassandra cluster | |
protocol_version | Choose the CQL native protocol version that should be used to connect to the target Cassandra cluster. Values: [dse_v1, dse_v2, newest_supported, newest_beta, v1, v2, v3, v4, v5, v9]. Default: newest_supported | |
ssl_truststore_password | Provide the password that was used when generating the truststore | |
ssl_truststore_path | The path to your truststore file | |
use_ssl | Use SSL/TLS to secure connection between client and Cassandra cluster. Default: false | |
use_ssl_client_cert_auth | With this option Cassandra nodes verify the identity of the client | |
use_ssl_client_encryption | The traffic between client and cluster nodes is encrypted and the client verifies the identity of the Cassandra nodes it connects to | |
username * | The name of the user used to connect to Cassandra cluster |
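A sketch of connection properties for a password-authenticated Cassandra cluster; the contact points, datacenter name, and credentials are hypothetical:
{ "cluster_contact_points":"cass1.example.com;cass2.example.com", "local_datacenter":"datacenter1", "authenticator_type":"password_authentication", "username":"cassuser", "password":"********", "use_ssl":"false" }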
Name | Type | Description |
---|---|---|
check_schema_agreement | Check if schema is exactly the same on all cluster nodes. Default: true | |
read_consistency_level | The level of consistency used in the read or write operation. Values: [all_nodes, each_data_center_quorum, local_one, local_quorum, one_node, quorum, three_nodes, two_nodes]. Default: quorum | |
custom_typecodecs | You can provide a list of type codec classes that can be used to support your custom mappings between Cassandra and DataStage (semicolon-separated list of classes) | |
tracing_statements | With tracing enabled, the connector provides an execution plan for each CQL statement (SELECT, INSERT, UPDATE, DELETE). Default: false | |
enable_quoted_identifiers | Specifies whether or not to enclose database object names in quotes when generating CQL statements. Default: false | |
ignore_blob_truncation_errors | Whether to ignore BLOB truncation errors when a value's length is bigger than the column field length provided in the link column definition. Default: false | |
ignore_string_truncation_errors | Whether to ignore string truncation errors when a value's length is bigger than the column field length provided in the link column definition. Default: false | |
cassandra_keyspace * | The name of the keyspace in the target Cassandra database | |
lookup_type | Lookup Type. Values: [empty]. Default: empty | |
page_size | The size of the page that is used to retrieve a subset of data. Default: 10 | |
parallel_read_strategy | Parallel read strategy determines how workload is distributed among players. Values: [equal_splitter, host_aware]. Default: equal_splitter | |
prefetching_threshold | Start pre-fetching when the current page contains fewer rows than the value set in this property. Default: 2 | |
cassandra_table * | The name of the table in the target Cassandra database | |
use_json_mapped_rows | Enables selecting and inserting a single row as a JSON encoded map. Default: false | |
use_parallel_read | Split reading data to all available nodes to speed up the process. Default: false |
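A sketch of source properties that read a Cassandra table in parallel with a larger page size and local-quorum consistency; the keyspace and table names are hypothetical, and the boolean and numeric value types are illustrative:
{ "cassandra_keyspace":"retail", "cassandra_table":"orders", "read_consistency_level":"local_quorum", "use_parallel_read":true, "parallel_read_strategy":"host_aware", "page_size":1000 }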
Name | Type | Description |
---|---|---|
check_schema_agreement | Check if schema is exactly the same on all cluster nodes. Default: true | |
write_consistency_level | The level of consistency used in the read or write operation. Values: [all_nodes, any_node, each_data_center_quorum, local_one, local_quorum, one_node, quorum, three_nodes, two_nodes]. Default: quorum | |
custom_typecodecs | You can provide a list of type codec classes that can be used to support your custom mappings between Cassandra and DataStage (semicolon-separated list of classes). Default: com.ibm.is.cc.cassandra.codec.UuidToStringCodec;com.ibm.is.cc.cassandra.codec.TimeUuidToStringCodec;com.ibm.is.cc.cassandra.codec.VarIntToStringCodec;com.ibm.is.cc.cassandra.codec.InetToStringCodec | |
tracing_statements | With tracing enabled, the connector provides an execution plan for each CQL statement (SELECT, INSERT, UPDATE, DELETE). Default: false | |
enable_quoted_identifiers | Specifies whether or not to enclose database object names in quotes when generating CQL statements. Default: false | |
cassandra_keyspace * | The name of the keyspace in the target Cassandra database | |
mutation_type | Choose the type of modification that you would like to perform. Values: [delete_columns, delete_entire_rows, insert, update]. Default: insert | |
save_null_values | Whether to save null values in the target table (this creates tombstones for each null value). Default: true | |
cassandra_table * | The name of the table in the target Cassandra database | |
use_json_mapped_rows | Enables selecting and inserting a single row as a JSON encoded map. Default: false |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
connect_to_apache_hive | Enter the Hive properties if you plan to write tables to the Hive data source using this connection. Default: false | |
hive_ssl | Determines whether to use the SSL protocol for the Hive connection. Default: true | |
hive_db | The database in Apache Hive | |
hive_http_path | The path of the endpoint such as gateway/default/hive when the Apache Hive server is configured for HTTP transport mode | |
hive_host | The hostname or IP address of the Apache Hive server | |
hive_password | The password associated with the username for connecting to Apache Hive | |
hive_port | The port of the Apache Hive server | |
hive_user | The username for connecting to Apache Hive | |
password | The password associated with the username for accessing the data source | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
use_home_as_root | User home directory is used as the root of browsing. Default: true | |
username * | The username for accessing the data source | |
url * | The WebHDFS URL for accessing HDFS |
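A sketch of connection properties for a WebHDFS endpoint that also supplies the Apache Hive details so that tables can be written through the same connection; the URL, hosts, and credentials are hypothetical, and the boolean value type is illustrative:
{ "url":"https://namenode.example.com:9870/webhdfs/v1", "username":"hdfsuser", "password":"********", "connect_to_apache_hive":true, "hive_host":"hive.example.com", "hive_port":"10000", "hive_db":"default", "hive_user":"hiveuser", "hive_password":"********" }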
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
_file_format.impl_syntax.binary | Specify the type of implicit file. Values: [binary]. Default: binary | |
_file_format.delimited_syntax.field_formats.date_format | Specify a string that defines the format for fields that have the Date data type. | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
_file_format.delimited_syntax.field_formats.decimal_format | Specify a string that defines the format for fields that have the Decimal or Numeric data type. | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be different. If you get an error about them not being unique when only one of them was provided, provide the missing one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be different. If you get an error about them not being unique when only one of them was provided, provide the missing one explicitly. | |
_file_format.delimited_syntax.record_def.record_def_source | If the record definition is a delimited string, enter a delimited string that specifies the names and data types of the fields. Use the format name:data_type, and separate each field with the delimiter specified as the Field delimiter property. | |
_file_format.impl_syntax.record_def.record_def_source | Enter a delimited string that specifies the names, data types, and length of each field. Use the format name:data_type[length], and separate each field with the delimiter specified as the Field delimiter property. | |
display_value_labels | Display the value labels | |
_file_format.delimited_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
_file_format.impl_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
_file_format.delimited_syntax.escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
_exclude_files | Specify a comma-separated list of file prefixes to exclude from the files that are read. If a prefix includes a comma, escape the comma by using a backslash (\\). | |
exclude_missing_values | Set values that have been defined as missing values to null | |
_file_format.delimited_syntax.field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of a XML document, for example, ../publisher | |
_file_format | Specify the format of the files to read or write. The implicit file format specifies that the input to the file connector is in binary or string format without a delimiter.. Values: [avro, comma-separated_value_csv, delimited, implicit, orc, parquet, sequencefile]. Default: delimited | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
_filename_source * | Specify the name of the file to read from, or specify a pattern to read from multiple files. | |
file_name * | The name of the file to read | |
_filename_column | Specify the name of the column to write the source file name to. | |
first_line | Indicates the row at which to start reading. Default: 0 | |
_file_format.delimited_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
_file_format.impl_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
_file_format.delimited_syntax.null_value | Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks). | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
_file_format.avro_source.output_json | Specify whether each row in the Avro file should be exported as JSON to a string column. Default: false | |
_file_format.delimited_syntax.quotes | . Values: [double, none, single]. Default: none | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
_read_mode | Select Read single file to read from a single file or Read multiple files to read from the files that match a specified file prefix. Select List buckets to list the buckets for your account in the specified region. Select List files to list the files for your account in the specified bucket. Values: [read_multiple_files, read_single_file]. Default: read_single_file | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
_file_format.delimited_syntax.record_def | Select whether the record definition is provided to the connector from the source file, a delimited string, a file that contains a delimited string, or a schema file. When runtime column propagation is enabled, this metadata provides the column definitions. If a schema file is provided, the schema file overrides the values of formatting properties in the stage and the column definitions that are specified on the Columns page of the output link.. Values: [delimited_string, delimited_string_in_a_file, file_header, infer_schema, none, schema_file]. Default: none | |
_file_format.impl_syntax.record_def | Select whether the record definition is provided to the connector from the source file, a delimited string, a file that contains a delimited string, or a schema file. When runtime column propagation is enabled, this metadata provides the column definitions. If a schema file is provided, the schema file overrides the values of formatting properties in the stage and the column definitions that are specified on the Columns page of the output link.. Values: [delimited_string, delimited_string_in_a_file, file_header, none, schema_file]. Default: none | |
_reject_mode | Specify what the connector does when a record that contains invalid data is found in the source file. Select Continue to read the rest of the file, Fail to stop the job with an error message, or Reject to send the rejected data to a reject link.. Values: [continue, fail, reject]. Default: continue | |
_file_format.delimited_syntax.row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
_file_format.delimited_syntax.record_limit | Specify the maximum number of records to read from the file per node. If a value is not specified for this property, the entire file is read. | |
_file_format.impl_syntax.record_limit | Specify the maximum number of records to read from the file per node. If a value is not specified for this property, the entire file is read. | |
row_limit | The maximum number of rows to return | |
xml_schema | The schema that specified metadata information of elements, for example, data type, values, min, max | |
_file_format.delimited_syntax.field_formats.time_format | Specify a string that defines the format for fields that have the Time data type. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
_file_format.delimited_syntax.field_formats.timestamp_format | Specify a string that defines the format for fields that have the Timestamp data type. | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
_file_format.trace_file | Specify the full path to a file to contain trace information from the parser for delimited files. Because writing to a trace file requires additional processing, specify a value for this property only during job development. | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
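As a hypothetical illustration only, a few of the flow-style read properties above could be set in a source binding shaped like the other JSON examples in this document; the values and the connection reference below are placeholders:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "read_mode":"read_single", "sheet_name":"Sheet1", "row_limit":1000 }, "ref":"{connection_id}" } }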
Name | Type | Description |
---|---|---|
_create_hive_table.additional_driver_params | Specify any additional driver-specific connection attributes. Enter the attributes in the name=value format, separated by semicolons if multiple attributes need to be specified. For information about the supported driver-specific attributes, refer to the Progress DataDirect driver documentation. | |
_wave_handling.append_uid | Use this property to choose whether a unique identifier is appended to the file name. When this property is set to Yes, the unique identifier is appended to the file name and a new file is written for every wave of data that is streamed into the stage. When this property is set to No, the file is overwritten on every wave. Default: false | |
_file_format.avro_target.avro_array_keys | If the file format is Avro in a target stage, then normalization is controlled through array keys. Specify 'ITERATE()' in the description for the corresponding array field in the column definition on the input tab of the file connector. | |
_file_format.avro_target.avro_codec | Specify the compression algorithm that will be used to compress the data. Values: [bzip2, deflate, none, snappy]. Default: none | |
_file_format.avro_target.avro_schema * | Specify the fully qualified path for a JSON file that defines the schema for the Avro file. | |
_file_format.parquet_target.parquet_block_size | Block size. Default: 10000000 | |
_file_format.orc_target.orc_buffer_size | Buffer Size. Default: 10000 | |
_split_on_key.case_sensitive | Select Yes to make the key value case sensitive. Default: false | |
_cleanup | If a job fails, select whether the connector deletes the file or files that have been created. Default: true | |
_file_format.orc_target.orc_compress | Specify the compression mechanism. Values: [none, snappy, zlib]. Default: snappy | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
_file_format.parquet_target.parquet_compress | Specify the compression mechanism. Values: [gzip, lzo, none, snappy]. Default: snappy | |
create_hive_table | Create a table in the database. Default: false | |
_create_hive_table | Select Yes to create or use an existing Hive table after data has been loaded to HDFS. Default: false | |
_create_hive_table.create_hive_schema | Specify Yes to create the schema indicated in the fully qualified table name if it does not already exist. If Yes is specified and the table name does not contain a schema, the job will fail. If Yes is specified and the schema already exists, the job will not fail. Default: false | |
_file_format.impl_syntax.binary | Specify the type of implicit file. Values: [binary]. Default: binary | |
_file_format.delimited_syntax.field_formats.date_format | Specify a string that defines the format for fields that have the Date data type. | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
_file_format.delimited_syntax.field_formats.decimal_format | Specify a string that defines the format for fields that have the Decimal or Numeric data type. | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must have different values. If you get an error about them not being unique when only one of them was provided, explicitly provide the missing one. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must have different values. If you get an error about them not being unique when only one of them was provided, explicitly provide the missing one. | |
_create_hive_table.drop_hive_table | Specify Yes to drop the Hive table if it already exists, or No to append to the existing Hive table. Default: true | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_drop_staging_table | Use this property to drop the staging table. By default, the staging table is dropped once the target table has been created. If you do not want the staging table to be removed, set this property to No. Default: true | |
_file_format.delimited_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
_file_format.impl_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
_file_format.delimited_syntax.escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
_split_on_key.exclude_part_string | Select Yes to exclude the partition string each processing node appends to the file name. Default: false | |
_file_format.delimited_syntax.field_delimiter | Specify a string or one of the following values: | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
_file_format | Specify the format of the files to read or write. The implicit file format specifies that the input to the file connector is in binary or string format without a delimiter. Values: [avro, comma-separated_value_csv, delimited, implicit, orc, parquet, sequencefile]. Default: delimited | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
_filename_target * | Specify the name of the file to write to. | |
file_name * | The name of the file to write to or delete | |
_wave_handling.file_size_threshold | Specify the threshold for the file size in megabytes. Processing nodes will start a new file each time the size exceeds the value specified in the threshold and on reaching the wave boundary. The file will be written only on the wave boundary and hence the threshold value specified is only a soft limit. The actual size of the file can be higher than the specified threshold depending on the size of the wave. Default: 1 | |
_file_format.delimited_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
_file_format.impl_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
_force_sequential | Select Yes to run the connector sequentially on one node. Default: false | |
hive_table | The name of the table to create | |
_create_hive_table.hive_table_type | Specify the Hive table type, as external (default) or internal. Values: [external, internal]. Default: external | |
_file_exists | Specify what the connector does when it tries to write a file that already exists. Select Overwrite file to overwrite a file if it already exists, Do not overwrite file to not overwrite the file and stop the job, or Fail to stop the job with an error message. Values: [do_not_overwrite_file, fail, overwrite_file]. Default: overwrite_file | |
_file_format.delimited_syntax.encoding.output_bom | Specify whether to include a byte order mark in the file when the file encoding is a Unicode encoding such as UTF-8, UTF-16, or UTF-32. Default: false | |
_file_format.impl_syntax.encoding.output_bom | Specify whether to include a byte order mark in the file when the file encoding is a Unicode encoding such as UTF-8, UTF-16, or UTF-32. Default: false | |
_file_format.delimited_syntax.header.include_types | Select Yes to append the data type to each field name that the connector writes in the first row of the output. Default: false | |
_file_format.impl_syntax.header.include_types | Select Yes to append the data type to each field name that the connector writes in the first row of the output. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
_file_format.avro_target.input_json | Specify whether each row in the Avro file should be imported from a JSON string. Default: false | |
_split_on_key.key_column | Specify the key column to use for splitting files. If not specified, the connector will use the first key column on the link. | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_target_table_location | Use this property to set the location of the HDFS files serving as storage for the Hive table | |
_max_file_size | Specify the maximum file size in megabytes. Processing nodes will start a new file each time the size exceeds this value. Default: 0 | |
_create_hive_table.use_staging_table.load_existing_table.max_dynamic_partitions | Use this property to set the maximum number of dynamic partitions to be created while loading into a partitioned table. Default: 1000 | |
names_as_labels | Set column labels to the value of the column name | |
_file_format.delimited_syntax.null_value | Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks). | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_orc_compress | Use this property to set the compression type for the target table when the table format is ORC. Values: [none, snappy, zlib]. Default: zlib | |
_file_format.parquet_target.parquet_page_size | Page size. Default: 10000 | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_parquet_compress | Use this property to set the compression type for the target table when the table format is Parquet. Values: [gzip, lzo, none, snappy]. Default: snappy | |
partitioned | Write the file as multiple partitions. Default: false | |
_file_format.delimited_syntax.quotes | The type of quotation mark used to enclose field values. Values: [double, none, single]. Default: none | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
_file_format.delimited_syntax.row_delimiter | Specify a string or one of the following values: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
_split_on_key | Select Yes to create a new file when the key column value changes. Data must be sorted and partitioned for this to work properly. Default: false | |
_file_format.orc_target.orc_stripe_size | Stripe Size. Default: 100000 | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_orc_stripe_size | Stripe size. Default: 64 | |
_create_hive_table.hive_table * | Enter the name of the table to create. | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_target_table_format | Use this property to set the format of the target table. Values: [orc, parquet]. Default: parquet | |
_create_hive_table.use_staging_table.hive_target_table_properties.hive_target_table_type | Use this property to set the type of the target table. Values: [external, internal]. Default: external | |
_file_format.delimited_syntax.field_formats.time_format | Specify a string that defines the format for fields that have the Time data type. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
_file_format.delimited_syntax.field_formats.timestamp_format | Specify a string that defines the format for fields that have the Timestamp data type. | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
_split_on_key.key_in_filename | Select Yes to use the key value in the generated file name. Default: false | |
_create_hive_table.use_staging_table | Select Yes to use a staging table. This option is enabled only when the file format is Delimited. Default: false | |
sheet_name | The name of the Excel worksheet to write to | |
_write_mode | Select Write single file to write a file per node, select Write multiple files to write multiple files per node (based on size and/or key value), or select Delete to delete files. Values: [delete, write_multiple_files, write_single_file]. Default: write_single_file | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
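As a hedged sketch, the flow-style write properties above would be grouped into a single properties object; the file name, format, and flags below are hypothetical values, and the enclosing target binding is not shown because its exact shape is not documented in this section:
{ "file_name":"sales.csv", "file_format":"csv", "write_mode":"write", "first_line_header":true, "include_types":false }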
Name | Type | Description |
---|---|---|
database * | The name of the database | |
http_path | The path of the endpoint such as gateway/default/hive when the server is configured for HTTP transport mode | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
row_limit_support | Enable if the connector should append a LIMIT operator to queries. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
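A hypothetical properties fragment for this connection type, with every value a placeholder; the surrounding connection payload is omitted:
{ "host":"hive.example.com", "port":"{port}", "database":"{database}", "username":"{username}", "password":"{password}", "ssl":false, "http_path":"gateway/default/hive" }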
Name | Type | Description |
---|---|---|
_before_after._after_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node after all of the data is processed on that node. | |
_before_after._after_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once after all of the data is processed. | |
_before_after._before_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
_before_after._before_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once before any data is processed. | |
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
_session._character_set_for_non_unicode_columns | Select the character set option for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. If you select the Default option, the character set encoding of the engine host system locale is used. If you select the Custom option, you must provide the character set name to be used. Values: [_custom, _default]. Default: _default | |
_session._character_set_for_non_unicode_columns._character_set_name * | Specify the name of the character set encoding for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
_session._default_length_for_columns | Enter the default length for the Char, NChar, Binary, VarChar, NVarChar, and VarBinary link columns for which the Length attribute is not set. Default: 200 | |
_session._default_length_for_long_columns | Enter the default length for the LongVarChar, LongNVarChar and LongVarBinary link columns for which the Length attribute is not set. Default: 20000 | |
_enable_partitioned_reads | Select Yes to run the statement on each processing node. When 'Database partition' is used as the partitioned read method, the statement should include a WHERE clause. | |
_enable_quoted_ids | Select Yes to enclose the specified table name and column names on the links in quoting strings when SQL statements are generated. The connector queries the driver to determine the quoting string. If it fails to obtain this information from the driver, the connector uses the backtick (`) character as the quoting string. The default is No. Default: false | |
_session._fetch_size | Specify the number of rows that the driver must try to fetch from the data source when the connector requests a single row. Fetching rows in addition to the row requested by the connector can improve performance because the driver can complete the subsequent requests for more rows from the connector locally without a need to access the data source. The default value is 0, which indicates that the driver optimizes the fetch operation based on its internal logic. Default: 0 | |
_generate_sql | Select Yes to automatically generate the SQL statements at run time. Default: true | |
_session._generate_all_columns_as_unicode | Always generate columns as NChar, NVarChar and LongNVarChar columns instead of Char, VarChar and LongVarChar columns. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_session._keep_conductor_connection_alive | Select Yes to keep the connection alive in the conductor process while the player processes are processing records. Select No to close the connection in the conductor process before player processes start processing records, and to connect again if necessary after the player processes complete processing the records. Default: true | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
_transaction._end_of_wave | Select Yes to generate an end-of-wave record after each wave of records, where the number of records in each wave is specified in the Record count property. When the Record count property is set to 0, the end-of-wave records are not generated. Values: [_no, _yes]. Default: _no | |
_enable_partitioned_reads._partition_method | Use this property to set the partitioning method to be used when partitioned reads are enabled. Values: [_hive_partition, _minimum_and_maximum_range, _modulus]. Default: _hive_partition | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
_transaction._record_count | Specify the number of rows that the stage reads from or writes to the data source in a single transaction. When this property is set to 0, the transaction is committed only once on each processing node of the stage after the stage processes all the rows on that node. When rows arrive on the input link of the stage in waves, the Record count value applies to each wave separately. Default: 2000 | |
_session._report_schema_mismatch | Select Yes to perform an early comparison of the column definitions on the link with the column definitions in the data source and to issue warning messages for any detected discrepancies, which can result in data corruption. Depending on the environment and the usage scenario, the early detection of discrepancies may not be possible, in which case the error messages are reported only when the actual data corruption is detected. Default: false | |
_limit_rows._limit | Enter the maximum number of rows to be returned by the connector or each node when Partition Read is enabled. | |
row_limit | The maximum number of rows to return | |
_before_after | Select Yes to run SQL statements before and after data is accessed in the database.. Default: false | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
_select_statement * | Enter a SELECT statement or the fully qualified name of the file that contains the SELECT statement. The statement is used to read rows from the database. | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
_hive_parameters | Enter the statement to set the database parameters. | |
_before_after._after_sql_node._fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
_before_after._after_sql._fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
_before_after._before_sql_node._fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
_before_after._before_sql._fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
_hive_parameters._fail_on_error | Select Yes to stop the job if the database parameters are not set. Default: false | |
_table_name * | Enter the fully qualified name of the table that you want to access in the data source. | |
table_name * | The name of the table to read from |
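For example, a source binding in the style of the other JSON examples in this document might rely on generated SQL and supply only a schema and table; the values below are placeholders, not a verified payload:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "read_mode":"general", "schema_name":"default", "table_name":"SALES" }, "ref":"{connection_id}" } }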
Name | Type | Description |
---|---|---|
_before_after._after_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node after all of the data is processed on that node. | |
_before_after._after_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once after all of the data is processed. | |
_session._batch_size | Enter the number of records to include in the batch of records for each statement execution. The value 0 indicates that all input records are passed to the statements in a single batch. Default: 2000 | |
_before_after._before_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
_before_after._before_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once before any data is processed. | |
_session._character_set_for_non_unicode_columns | Select the character set option for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. If you select the Default option, the character set encoding of the engine host system locale is used. If you select the Custom option, you must provide the character set name to be used. Values: [_custom, _default]. Default: _default | |
_session._character_set_for_non_unicode_columns._character_set_name * | Specify the name of the character set encoding for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. | |
_table_action._generate_create_statement._create_statement * | Enter the CREATE TABLE statement to run to create the target database table. | |
_custom_statements | Custom statements to be run for each input row | |
_table_action._generate_create_statement._storage_format | Use this property to specify the storage format of the file that stores the data in the table. Values: [_avro, _orc, _parquet, _rc_file, _sequence_file, _text_file]. Default: _text_file | |
_session._default_length_for_columns | Enter the default length for the Char, NChar, Binary, VarChar, NVarChar, and VarBinary link columns for which the Length attribute is not set. Default: 200 | |
_session._default_length_for_long_columns | Enter the default length for the LongVarChar, LongNVarChar and LongVarBinary link columns for which the Length attribute is not set. Default: 20000 | |
_delete_statement * | Enter a DELETE statement or the fully qualified name of the file that contains a DELETE statement. The statement is used to delete rows from the database. | |
_table_action._generate_drop_statement._drop_statement * | Enter the DROP TABLE statement to run to drop the target database table. | |
_session._drop_unmatched_fields | Select Yes to drop any fields from the input link for which there are no matching parameters in the statements configured for the stage. Select No to issue an error message when an unmatched field is present on the link. Default: false | |
_enable_partitioned_write | Select Yes to insert data into a partitioned table. In the insert query, ORCHESTRATE. | |
_enable_quoted_ids | Select Yes to enclose the specified table name and column names on the links in quoting strings when SQL statements are generated. The connector queries the driver to determine the quoting string. If it fails to obtain this information from the driver, the connector uses the backtick (`) character as the quoting string. The default is No. Default: false | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
_table_action._generate_create_statement._row_format._field_terminator | Use this property to specify the field terminator to create the table. | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, orc, parquet]. Default: delimited | |
file_name | The name of the file to write to or delete | |
_generate_sql | Select Yes to automatically generate the SQL statements at run time. Default: true | |
_table_action._generate_create_statement | Select Yes to automatically generate the CREATE TABLE statement at run time. Depending on the input link column data types, the driver, and the data source, the connector might not be able to determine the corresponding native data types and produce a valid statement. Default: true | |
_table_action._generate_drop_statement | Select Yes to automatically generate the DROP TABLE statement at run time. Default: true | |
_table_action._generate_truncate_statement | Select Yes to automatically generate the TRUNCATE TABLE statement at run time. Default: true | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_insert_statement * | Enter an INSERT statement or the fully qualified name of the file that contains an INSERT statement. The statement is used to insert rows into the database. | |
_session._keep_conductor_connection_alive | Select Yes to keep the connection alive in the conductor process while the player processes are processing records. Select No to close the connection in the conductor process before player processes start processing records, and to connect again if necessary after the player processes complete processing the records. Default: true | |
_table_action._generate_create_statement._row_format._line_terminator | Use this property to specify the line terminator to create the table. | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
_table_action._table_action_first | Select Yes to perform the table action first. Select No to run the Before SQL statements first. Default: true | |
_transaction._record_count | Specify the number of rows that the stage reads from or writes to the data source in a single transaction. When this property is set to 0, the transaction is committed only once on each processing node of the stage after the stage processes all the rows on that node. When rows arrive on the input link of the stage in waves, the Record count value applies to each wave separately. Default: 2000 | |
_session._report_schema_mismatch | Select Yes to perform an early comparison of the column definitions on the link with the column definitions in the data source and to issue warning messages for any detected discrepancies, which can result in data corruption. Depending on the environment and the usage scenario, the early detection of discrepancies may not be possible, in which case the error messages are reported only when the actual data corruption is detected. Default: false | |
_table_action._generate_create_statement._row_format | Select the row format option for table creation. Values: [_delimited, _ser_de, _storage_format]. Default: _storage_format | |
_before_after | Select Yes to run SQL statements before and after data is accessed in the database. Default: false | |
schema_name | The name of the schema that contains the table to write to | |
_table_action._generate_create_statement._row_format._serde_library * | Use this property to specify the library name for SerDe for creating the table | |
_hive_parameters | Enter the statement to set the database parameters. | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
_before_after._after_sql_node._fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
_before_after._after_sql._fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
_before_after._before_sql_node._fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
_before_after._before_sql._fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
_table_action._generate_drop_statement._fail_on_error | Select Yes to stop the job if the DROP TABLE statement fails. Default: false | |
_table_action._generate_create_statement._fail_on_error | Select Yes to stop the job if the CREATE TABLE statement fails. Default: true | |
_table_action._generate_truncate_statement._fail_on_error | Select Yes to stop the job if the TRUNCATE TABLE statement fails. Default: true | |
_hive_parameters._fail_on_error | Select Yes to stop the job if the database parameters are not set. Default: false | |
_table_action * | Select the action to complete before writing data to the table. Values: [_append, _create, _replace, _truncate]. Default: _append | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
_table_action._generate_create_statement._table_location | Use this property to specify the location of the file that serves as storage for the table. | |
_table_name * | Enter the fully qualified name of the table that you want to access in the data source. | |
table_name | The name of the table to write to | |
_table_action._generate_truncate_statement._truncate_statement * | Enter the TRUNCATE TABLE statement to run to truncate the target database table. | |
_update_statement * | Enter an UPDATE statement or the fully qualified name of the file that contains an UPDATE statement. The statement is used to update rows in the database. | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
_write_mode | Select the mode that you want to use to write to the data source. Values: [_custom, _delete, _insert, _update]. Default: _insert | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update_statement, update_statement_table_action]. Default: insert |
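By way of illustration, the target-side interaction properties above might look like the following fragment; the schema, table, and mode values are hypothetical, and the binding structure that would wrap them is assumed rather than documented here:
{ "schema_name":"default", "table_name":"SALES_COPY", "table_action":"append", "write_mode":"insert" }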
Name | Type | Description |
---|---|---|
schema_registry_authentication | Select option to provide credentials for authentication. Values: [none, reuse_sasl_credentials, user_credentials]. Default: none | |
server_name * | Specify the host name and port for the virtual Kafka operational server [HostName:Port]. Use commas to separate multiple servers. | |
key_pem | Private key in the PEM format. Only PKCS#8 keys are supported. If the key is encrypted, the key password must be specified using the 'Key password' property. | |
registry_key_pem | Private key in the PEM format. Only PKCS#8 keys are supported. If the key is encrypted, the key password must be specified using the 'Key password' property. | |
key_chain_pem | Certificate chain for the private key in the PEM format. Only X.509 certificates are supported. | |
registry_key_chain_pem | Certificate chain for the private key in the PEM format. Only X.509 certificates are supported. | |
key_password | Password for key file | |
registry_key_password | Password for key | |
password | Specify the password to use to connect to the virtual Kafka operational server. | |
registry_password | Password | |
schema_registry_url | Schema Registry service URL | |
schema_registry_secure | Select type of secure connection to schema registry. Values: [none, ssl, reuse_ssl]. Default: none | |
secure_connection | Type of secure connection to Kafka operational server. Values: [None, SASL_PLAIN, SASL_SSL, SCRAM-SHA-256, SCRAM-SHA-512, SSL]. Default: None | |
registry_truststore_pem | Trusted certificates in the PEM format. Only X.509 certificates are supported. | |
truststore_pem | Trusted certificates in the PEM format. Only X.509 certificates are supported. | |
use_schema_registry | Use Schema Registry service for message format definition. Default: false | |
registry_username | User name | |
username | Specify the user name to be used to connect to the Kerberized Kafka server or cluster.
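A hypothetical connection properties fragment combining the broker list with the schema registry settings above; all values are placeholders:
{ "server_name":"broker1:9092,broker2:9092", "secure_connection":"SASL_SSL", "username":"{username}", "password":"{password}", "use_schema_registry":true, "schema_registry_url":"https://registry.example.com" }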
Name | Type | Description |
---|---|---|
_advanced_kafka_config_options | Advanced Kafka Client configuration. Default: false | |
_advanced_client_logging | Advanced Kafka Client logging. Default: false | |
schema_registry_authentication | Select option to provide credentials for authentication. Values: [none, reuse_sasl_credentials, user_credentials]. Default: none | |
_kafka_config_options | Additional Kafka Client configuration options. Depending on the context in which the Kafka Connector stage is used, either Kafka Producer or Kafka Consumer properties should be provided. The value of this multiline property must conform to Java Properties class requirements. | |
advanced_kafka_config_options | Additional Kafka Client configuration options. Depending on the context in which the Kafka Connector stage is used, either Kafka Producer or Kafka Consumer properties should be provided. The value of this multiline property must conform to Java Properties class requirements. | |
_consumer_group_name | Consumer group name to be used when reading messages from Kafka topic. | |
consumer_group_name | Consumer group name to be used when reading messages from Kafka topic. | |
continuous_mode | Choose continuous mode on or off. Default: false | |
end_of_data | Specify whether to insert an EOW marker for the final set of records when their number is smaller than the value specified for the transaction record count. Note that if the specified transaction record count value is 0 (representing all available records), there is only one transaction wave, which consists of all the records, so the End of data value should be set to Yes for the EOW marker to be inserted for that transaction wave. Default: false | |
end_of_wave | Specify whether to insert an EOW marker. Default: false | |
heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_isolation_level | Kafka isolation level for messages written transactionally. Read committed will only fetch committed messages; Read uncommitted will fetch all of them. Values: [read_committed, read_uncommitted]. Default: read_uncommitted | |
isolation_level | Kafka isolation level for messages written transactionally. Read committed will only fetch committed messages; Read uncommitted will fetch all of them. Values: [read_committed, read_uncommitted]. Default: read_uncommitted | |
_start_offset | Starting offset per each partition. [partition:offset,partition:offset,...] or [offset,offset,offset] | |
start_offset | Starting offset per each partition. [partition:offset,partition:offset,...] or [offset,offset,offset] | |
registry_key_pem | Private key in the PEM format. Only PKCS#8 keys are supported. If the key is encrypted, the key password must be specified using the 'Key password' property. | |
registry_key_chain_pem | Certificate chain for the private key in the PEM format. Only X.509 certificates are supported. | |
_key_serializer_type | Specify the type of the data key so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
key_serializer_type | Specify the type of the data key so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
_client_logging_level | Logging level. Values: [debug, error, fatal, info, off, trace, warn]. Default: off | |
kafka_client_logging_level | Minimum logging level of messages from the Kafka Client that will be written in the job log. Each entry that is read from the Kafka Client has the special prefix [KAFKA]. Values: [debug, error, fatal, info, off, trace, warn]. Default: off | |
max_messages | Maximum number of messages to be produced to the topic on a per player process basis. This should be a multiple of max poll records.. Default: 100 | |
max_poll_records | Maximum records to be fetched in a single poll. Default: 100 | |
_max_poll_records | Maximum records to be fetched in a single poll. Default: 100 | |
_timeout | Specify the time in seconds after which the consumer would not poll for records. | |
registry_password | Password | |
record_count | Number of records per transaction. The value 0 means all available records. Default: 0 | |
schema_registry_url | Specify schema registry service URL | |
_reset_policy | Set the reset policy to either 'earliest' or 'latest'. Values: [earliest, latest]. Default: latest | |
reset_policy | Set the reset policy to either 'earliest' or 'latest'. Values: [earliest, latest]. Default: earliest | |
schema_registry_secure | Select type of secure connection to schema registry. Values: [none, ssl, reuse_ssl]. Default: none | |
stop_message | Regular expression, which if matched, will stop continuous mode | |
_stop_message | Regular expression, which if matched, will stop continuous mode | |
time_interval | Time interval for transaction. Default: 0 | |
timeout | Specify the time in seconds after which the consumer would not poll for records. Default: 30 | |
_timeout_after_last_message | Timeout after last message (secs). Default: 30 | |
topic_name * | Kafka topic name. | |
_max_messages | Total number of messages | |
registry_truststore_pem | Trusted certificates in the PEM format. Only X.509 certificates are supported. | |
registry_username | User name | |
_value_serializer_type | Specify the type of the data so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
value_serializer_type | Specify the type of the data so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
kafka_warning_and_error_logs | Defines how messages with a severity of WARN or higher (WARN, ERROR, FATAL) will be written in the job log. Values: [keep_severity, log_as_informational, log_as_warning]. Default: log_as_informational
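To make the consumer-side properties concrete, here is a sketch of a source binding in the same assumed shape as the other JSON examples; the topic, group, and counts are placeholders:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "topic_name":"orders", "consumer_group_name":"orders-reader", "key_serializer_type":"string", "value_serializer_type":"string", "reset_policy":"earliest", "max_poll_records":100 }, "ref":"{connection_id}" } }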
Name | Type | Description |
---|---|---|
_advanced_kafka_config_options | Advanced Kafka Client configuration. Default: false | |
_advanced_client_logging | Advanced Kafka Client logging. Default: false | |
schema_registry_authentication | Select option to provide credentials for authentication. Values: [none, reuse_sasl_credentials, user_credentials]. Default: none | |
_kafka_config_options | Additional Kafka Client configuration options. Depending on the context in which the Kafka Connector stage is used, either Kafka Producer or Kafka Consumer properties should be provided. The value of this multiline property must conform to Java Properties class requirements. | |
advanced_kafka_config_options | Additional Kafka Client configuration options. Depending on the context in which the Kafka Connector stage is used, either Kafka Producer or Kafka Consumer properties should be provided. The value of this multiline property must conform to Java Properties class requirements. | |
heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
registry_key_pem | Private key in the PEM format. Only PKCS#8 keys are supported. If the key is encrypted, the key password must be specified using the 'Key password' property. | |
registry_key_chain_pem | Certificate chain for the private key in the PEM format. Only X.509 certificates are supported. | |
_key_serializer_type | Specify the type of the data key so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
key_serializer_type | Specify the type of the data key so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
_client_logging_level | Logging level. Values: [debug, error, fatal, info, off, trace, warn]. Default: off | |
kafka_client_logging_level | Minimum logging level of messages from the Kafka Client that will be written in the job log. Each entry that is read from the Kafka Client has the special prefix [KAFKA]. Values: [debug, error, fatal, info, off, trace, warn]. Default: off | |
registry_password | Password | |
schema_registry_url | Specify schema registry service URL | |
schema_registry_secure | Select type of secure connection to schema registry. Values: [none, ssl, reuse_ssl]. Default: none | |
topic_name * | Kafka topic name. | |
registry_truststore_pem | Trusted certificates in the PEM format. Only X.509 certificates are supported. | |
registry_username | User name | |
_value_serializer_type | Specify the type of the data so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
value_serializer_type | Specify the type of the data so that the appropriate serializer or deserializer is used. Values: [avro, avro_to_json, byte, double, integer, small_integer, string]. Default: string | |
kafka_warning_and_error_logs | Defines how messages with a severity of WARN or higher (WARN, ERROR, FATAL) will be written in the job log. Values: [keep_severity, log_as_informational, log_as_warning]. Default: log_as_informational
Name | Type | Description |
---|---|---|
client_id * | The client ID (username) for authorizing access to Box | |
client_secret * | The password associated with the client ID for authorizing access to Box | |
enterprise_id * | The ID for your organization | |
private_key * | The private key that was generated and provided to you by Box | |
private_key_password * | The password associated with the private key that was generated and provided to you by Box | |
public_key * | The public key that was generated and provided to you by Box | |
username | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must have different values. If you get an error about them not being unique when only one of them was provided, explicitly provide the missing one. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must have different values. If you get an error about them not being unique when only one of them was provided, explicitly provide the missing one. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
first_line | Indicates at which row start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata information of elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
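A hypothetical source binding for a delimited file read using the properties above; the file name and flag values are placeholders:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "file_name":"sales.csv", "file_format":"csv", "first_line_header":true, "infer_schema":true, "row_limit":1000 }, "ref":"{connection_id}" } }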
Name | Type | Description |
---|---|---|
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must have different values. If you get an error about them not being unique when only one of them was provided, explicitly provide the missing one. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must have different values. If you get an error about them not being unique when only one of them was provided, explicitly provide the missing one. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
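And a corresponding hypothetical write-side properties fragment (placeholder values; the enclosing target binding is assumed, not documented here):
{ "file_name":"sales_out.csv", "file_format":"csv", "write_mode":"write", "first_line_header":true, "quote_character":"double_quote" }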
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
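A hypothetical properties fragment for this connection type; every value is a placeholder and the surrounding connection payload is omitted:
{ "host":"db.example.com", "port":"{port}", "database":"{database}", "username":"{username}", "password":"{password}", "ssl":false, "ssl_certificate_validation":true }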
Name | Type | Description |
---|---|---|
_before_after._after_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node after all of the data is processed on that node. | |
_before_after._after_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once after all of the data is processed. | |
_before_after._before_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
_before_after._before_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once before any data is processed. | |
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
_session._character_set_for_non_unicode_columns | Select the character set option for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. If you select the Default option, the character set encoding of the engine host system locale is used. If you select the Custom option, you must provide the character set name to be used. Values: [_custom, _default]. Default: _default | |
_session._character_set_for_non_unicode_columns._character_set_name * | Specify the name of the character set encoding for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
_session._default_length_for_columns | Enter the default length for the Char, NChar, Binary, VarChar, NVarChar, and VarBinary link columns for which the Length attribute is not set. Default: 200 | |
_session._default_length_for_long_columns | Enter the default length for the LongVarChar, LongNVarChar and LongVarBinary link columns for which the Length attribute is not set. Default: 20000 | |
_enable_partitioned_reads | Select Yes to run the statement on each processing node. When 'Database partition' is used as the partitioned read method, the statement should include a WHERE clause. | |
_enable_quoted_ids | Select Yes to enclose the specified table name and column names on the links in quoting strings when SQL statements are generated. The connector queries the driver to determine the quoting string. If it fails to obtain this information from the driver, the connector uses the backtick (`) character as the quoting string. The default is No. Default: false | |
_session._fetch_size | Specify the number of rows that the driver must try to fetch from the data source when the connector requests a single row. Fetching rows in addition to the row requested by the connector can improve performance because the driver can complete the subsequent requests for more rows from the connector locally without a need to access the data source. The default value is 0, which indicates that the driver optimizes the fetch operation based on its internal logic.. Default: 0 | |
_generate_sql | Select Yes to automatically generate the SQL statements at run time.. Default: true | |
_session._generate_all_columns_as_unicode | Always generate columns as NChar, NVarChar and LongNVarChar columns instead of Char, VarChar and LongVarChar columns.. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes.. Default: 256 | |
_session._keep_conductor_connection_alive | Select Yes to keep the connection alive in the conductor process while the player processes are processing records. Select No to close the connection in the conductor process before player processes start processing records, and to connect again if necessary after the player processes complete processing the records.. Default: true | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
_transaction._end_of_wave | Select Yes to generate an end-of-wave record after each wave of records, where the number of records in each wave is specified in the Record count property. When the Record count property is set to 0, the end-of-wave records are not generated. Values: [_no, _yes]. Default: _no | |
_enable_partitioned_reads._partition_method | Use this property to set the type of partitioning to use when partitioned reads are enabled. Values: [_hive_partition, _minimum_and_maximum_range, _modulus]. Default: _hive_partition | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
_transaction._record_count | Specify the number of rows that the stage reads from or writes to the data source in a single transaction. When this property is set to 0, the transaction is committed only once on each processing node of the stage after the stage processes all the rows on that node. When rows arrive on the input link of the stage in waves, the Record count value applies to each wave separately. Default: 2000 | |
_session._report_schema_mismatch | Select Yes to perform an early comparison of the column definitions on the link with the column definitions in the data source and to issue warning messages for any detected discrepancies that can result in data corruption. Depending on the environment and the usage scenario, the early detection of discrepancies may not be possible, in which case the error messages are reported only when the actual data corruption is detected. Default: false | |
_limit_rows._limit | Enter the maximum number of rows to be returned by the connector or each node when Partition Read is enabled. | |
row_limit | The maximum number of rows to return | |
_before_after | Select Yes to run SQL statements before and after data is accessed in the database. Default: false | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
_select_statement * | Enter a SELECT statement or the fully qualified name of the file that contains the SELECT statement. The statement is used to read rows from the database. | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
_before_after._after_sql_node._fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
_before_after._after_sql._fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
_before_after._before_sql_node._fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
_before_after._before_sql._fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
_table_name * | Enter the fully qualified name of the table that you want to access in the data source. | |
table_name * | The name of the table to read from |
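For a table read with this connector, the connector-level (non-underscored) properties above can be supplied in a source binding in the same way as the earlier examples; the IDs, schema, and table below are illustrative placeholders only:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "schema_name":"STAGING", "table_name":"ORDERS", "row_limit":1000 }, "ref":"{connection_id}" } }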
Name | Type | Description |
---|---|---|
_before_after._after_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node after all of the data is processed on that node. | |
_before_after._after_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once after all of the data is processed. | |
_session._batch_size | Enter the number of records to include in the batch of records for each statement execution. The value 0 indicates that all input records are passed to the statements in a single batch. Default: 2000 | |
_before_after._before_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
_before_after._before_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once before any data is processed. | |
_session._character_set_for_non_unicode_columns | Select the character set option for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. If you select the Default option, the character set encoding of the engine host system locale is used. If you select the Custom option, you must provide the character set name to be used. Values: [_custom, _default]. Default: _default | |
_session._character_set_for_non_unicode_columns._character_set_name * | Specify the name of the character set encoding for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. | |
_table_action._generate_create_statement._create_statement * | Enter the CREATE TABLE statement to run to create the target database table. | |
_custom_statements | Custom statements to be run for each input row | |
_table_action._generate_create_statement._storage_format | Use this property to specify the storage format of the file that stores the data in the table. Values: [_avro, _orc, _parquet, _rc_file, _sequence_file, _text_file]. Default: _text_file | |
_session._default_length_for_columns | Enter the default length for the Char, NChar, Binary, VarChar, NVarChar, and VarBinary link columns for which the Length attribute is not set. Default: 200 | |
_session._default_length_for_long_columns | Enter the default length for the LongVarChar, LongNVarChar and LongVarBinary link columns for which the Length attribute is not set. Default: 20000 | |
_delete_statement * | Enter a DELETE statement or the fully qualified name of the file that contains a DELETE statement. The statement is used to delete rows from the database. | |
_table_action._generate_drop_statement._drop_statement * | Enter the DROP TABLE statement to run to drop the target database table. | |
_session._drop_unmatched_fields | Select Yes to drop any fields from the input link for which there are no matching parameters in the statements configured for the stage. Select No to issue an error message when an unmatched field is present on the link. Default: false | |
_enable_partitioned_write | Select Yes to insert data into a partitioned table. In the insert query, ORCHESTRATE. | |
_enable_quoted_ids | Select Yes to enclose the specified table name and column names on the links in quoting strings when SQL statements are generated. The connector queries the driver to determine the quoting string. If it fails to obtain this information from the driver, the connector uses the backtick (`) character as the quoting string. The default is No. Default: false | |
_table_action._generate_create_statement._row_format._field_terminator | Use this property to specify the field terminator to create the table. | |
_generate_sql | Select Yes to automatically generate the SQL statements at run time. Default: true | |
_table_action._generate_create_statement | Select Yes to automatically generate the CREATE TABLE statement at run time. Depending on the input link column data types, the driver, and the data source, the connector might not be able to determine the corresponding native data types and produce a valid statement. Default: true | |
_table_action._generate_drop_statement | Select Yes to automatically generate the DROP TABLE statement at run time. Default: true | |
_table_action._generate_truncate_statement | Select Yes to automatically generate the TRUNCATE TABLE statement at run time. Default: true | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_insert_statement * | Enter an INSERT statement or the fully qualified name of the file that contains an INSERT statement. The statement is used to insert rows into the database. | |
_session._keep_conductor_connection_alive | Select Yes to keep the connection alive in the conductor process while the player processes are processing records. Select No to close the connection in the conductor process before player processes start processing records, and to connect again if necessary after the player processes complete processing the records. Default: true | |
_table_action._generate_create_statement._row_format._line_terminator | Use this property to specify the line terminator to create the table. | |
_table_action._table_action_first | Select Yes to perform the table action first. Select No to run the Before SQL statements first. Default: true | |
_transaction._record_count | Specify the number of rows that the stage reads from or writes to the data source in a single transaction. When this property is set to 0, the transaction is committed only once on each processing node of the stage after the stage processes all the rows on that node. When rows arrive on the input link of the stage in waves, the Record count value applies to each wave separately. Default: 2000 | |
_session._report_schema_mismatch | Select Yes to perform an early comparison of the column definitions on the link with the column definitions in the data source and to issue warning messages for any detected discrepancies that can result in data corruption. Depending on the environment and the usage scenario, the early detection of discrepancies may not be possible, in which case the error messages are reported only when the actual data corruption is detected. Default: false | |
_table_action._generate_create_statement._row_format | Select the row format option for table creation. Values: [_delimited, _ser_de, _storage_format]. Default: _storage_format | |
_before_after | Select Yes to run SQL statements before and after data is accessed in the database. Default: false | |
schema_name | The name of the schema that contains the table to write to | |
_table_action._generate_create_statement._row_format._serde_library * | Use this property to specify the library name for SerDe for creating the table | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
_before_after._after_sql_node._fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
_before_after._after_sql._fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
_before_after._before_sql_node._fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
_before_after._before_sql._fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
_table_action._generate_drop_statement._fail_on_error | Select Yes to stop the job if the DROP TABLE statement fails. Default: false | |
_table_action._generate_create_statement._fail_on_error | Select Yes to stop the job if the CREATE TABLE statement fails. Default: true | |
_table_action._generate_truncate_statement._fail_on_error | Select Yes to stop the job if the TRUNCATE TABLE statement fails. Default: true | |
_table_action * | Select the action to complete before writing data to the table. Values: [_append, _create, _replace, _truncate]. Default: _append | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
_table_action._generate_create_statement._table_location | Use this property to specify the location of the file that serves as storage for the table. | |
_table_name * | Enter the fully qualified name of the table that you want to access in the data source. | |
table_name * | The name of the table to write to | |
_table_action._generate_truncate_statement._truncate_statement * | Enter the TRUNCATE TABLE statement to run to truncate the target database table. | |
_update_statement * | Enter an UPDATE statement or the fully qualified name of the file that contains an UPDATE statement. The statement is used to update rows in the database. | |
_write_mode | Select the mode that you want to use to write to the data source. Values: [_custom, _delete, _insert, _update]. Default: _insert | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update_statement, update_statement_table_action]. Default: insert |
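The write-side properties above can be set on a target binding. The sketch below assumes that a target binding mirrors the source binding shape with an input link instead of an output link (an assumption, since only source bindings are shown in this document), and all values are illustrative:
{ "id":"target1", "type":"binding", "input":{ "id":"target1Input" }, "connection":{ "properties":{ "schema_name":"STAGING", "table_name":"ORDERS_COPY", "table_action":"append", "write_mode":"insert" }, "ref":"{connection_id}" } }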
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
project_id | The ID of the Dremio Cloud project | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
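As with the warehouse source shown earlier, a read can be expressed as a single SQL statement instead of a schema and table name; the statement below is illustrative:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "select_statement":"select * from SALES.ORDERS_SUMMARY" }, "ref":"{connection_id}" } }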
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
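Using the same assumed target-binding shape as above, a merge-style write might combine write_mode with key_column_names; the table and key column are illustrative:
{ "id":"target1", "type":"binding", "input":{ "id":"target1Input" }, "connection":{ "properties":{ "schema_name":"SALES", "table_name":"CUSTOMER", "write_mode":"merge", "key_column_names":"CUSTOMER_ID" }, "ref":"{connection_id}" } }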
Name | Type | Description |
---|---|---|
access_token * | The OAuth2 access token that you obtained by following the instructions at https://www.dropbox.com/developers/reference/oauth-guide |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
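For file-based reads such as this one, the binding typically names the file and describes how to parse it; a minimal sketch with a made-up path and parsing options:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "file_name":"/exports/orders.csv", "file_format":"csv", "first_line_header":true, "infer_schema":true }, "ref":"{connection_id}" } }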
Name | Type | Description |
---|---|---|
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
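A corresponding file write, again assuming the target-binding shape used in the earlier sketches and illustrative values, might select a format and compression codec:
{ "id":"target1", "type":"binding", "input":{ "id":"target1Input" }, "connection":{ "properties":{ "file_name":"/exports/orders.parquet", "file_format":"parquet", "codec_parquet":"snappy", "partitioned":true }, "ref":"{connection_id}" } }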
Name | Type | Description |
---|---|---|
password * | The password associated with the username for accessing the data source | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
url * | The URL to access Elasticsearch | |
use_anonymous_access | Connect without providing logon credentials. Default: false | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
file_name * | The name of the file to read | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
query_body | JSON containing the body of a search request | |
query_string | Search query in the Lucene query string syntax |
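A read from Elasticsearch can combine the file name with a Lucene query; in the sketch below the file_name is treated as the index name, which is an assumption, and the query and limits are illustrative:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "file_name":"orders", "query_string":"status:shipped AND region:EMEA", "row_limit":5000 }, "ref":"{connection_id}" } }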
Name | Type | Description |
---|---|---|
create_index_body | JSON containing the body of a create index request | |
document_type | The type of the document | |
file_action | The action to take on the target file to handle the new data set. Values: [append, replace, truncate]. Default: append | |
file_name * | The name of the file to write to or delete | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
Name | Type | Description |
---|---|---|
mvs_dataset | Select this option if an MVS data set is to be accessed. | |
auth_method | Authentication method. If you use an encrypted private key, you will need a key passphrase. Values: [username_password, username_password_key, username_key] | |
connection_mode * | Connection mode. Values: [anonymous, basic, mvssftp, sftp, ftps] | |
ftadv | Specify File Transfer Advice strings as comma-delimited key-value pairs. | |
host * | The hostname or IP address of the remote FTP server | |
key_passphrase | If the private key is encrypted, this passphrase is needed to decrypt/encrypt it | |
password | The password associated with the username for connecting to the FTP server | |
port | The port of the FTP server | |
private_key | The private key for your account. The key must be an RSA private key that is generated by the ssh-keygen tool and it must be in the PEM format. If the private key is encrypted, you will need a key passphrase. | |
username * | The username for connecting to the FTP server. Default: anonymous |
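The table above does not prescribe a payload format, but as a rough sketch the property values for an SFTP-style connection could be assembled as follows (all values, including the host and key, are illustrative placeholders):
{ "connection_mode":"sftp", "host":"sftp.example.com", "port":22, "username":"ingest", "auth_method":"username_key", "private_key":"-----BEGIN RSA PRIVATE KEY-----..." }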
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
Name | Type | Description |
---|---|---|
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
Name | Type | Description |
---|---|---|
access_key * | The access key ID (username) for authorizing access to S3 | |
bucket | The name of the bucket that contains the files to access | |
disable_chunked_encoding | Set this property if the storage doesn't support chunked encoding. Default: false | |
enable_global_bucket_access | Whether global bucket access should be enabled. Default: true | |
enable_path_style_access | Whether path style access should be enabled | |
url * | The endpoint URL to use for access to S3 | |
region | S3 region | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
secret_key * | The password associated with the access key ID for authorizing access to S3 | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
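As a rough sketch only, an S3-compatible connection might combine these properties as follows (the endpoint, bucket, and credentials are illustrative placeholders):
{ "access_key":"AKIAEXAMPLE", "secret_key":"{secret_access_key}", "url":"https://s3.example-cloud.com", "bucket":"analytics-staging", "region":"us-east-1", "ssl_certificate_validation":true }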
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to read | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
xml_schema | The schema that specifies metadata for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to write | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
create_bucket | Create the bucket that contains the files to write to. Default: false | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
Name | Type | Description |
---|---|---|
access_token * | An access token that can be used to connect to BigQuery | |
auth_method | ||
client_id * | The OAuth client ID | |
client_secret * | The OAuth client secret | |
credentials * | The contents of the Google service account key (JSON) file | |
token_url_headers | HTTP headers for the token URL request in JSON or as a JSON body: "Key1"="Value1","Key2"="Value2" | |
token_url_method | HTTP method that will be used for the token URL request. Values: [get, post, put]. Default: get | |
project_id | The ID of the Google project | |
refresh_token * | A refresh token to be used to refresh the access token | |
token_url_body | The body of the HTTP request to retrieve a token | |
sts_audience * | The Security Token Service audience containing the project ID, pool ID, and provider ID | |
service_account_email * | The e-mail address of the service account | |
service_account_token_lifetime | The lifetime in seconds of the service account access token | |
token_url * | The URL to retrieve a token | |
token_field_name * | The name of the field in the JSON response that contains the token | |
token_format | The format of the token. Values: [json, text]. Default: text | |
token_type | The type of access token. Values: [aws4_request, access_token, id_token, jwt, saml2]. Default: id_token |
Name | Type | Description |
---|---|---|
bucket * | The name of the Google Cloud Storage bucket to be used for staging temporary files | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
schema_name * | Specify the name of the BigQuery dataset that contains the table. When the GCS staging area is selected, the temporary staging table will be created in this schema. | |
database_name | Specify the Google project ID where the table resides. This is an optional property. If this property is not specified, the project ID from the connection will be used for operations. When the GCS staging area is selected, the temporary staging table will be created in this project. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from | |
use_gcs_staging | Specify whether you want to use Google Cloud Storage as a staging area while executing the select statement, to improve performance. Default: false |
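A table read that stages through Google Cloud Storage would set use_gcs_staging and name the staging bucket; the dataset, table, and bucket below are illustrative, and the binding shape follows the earlier source examples:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "schema_name":"sales_dataset", "table_name":"orders", "use_gcs_staging":true, "bucket":"bq-staging-bucket" }, "ref":"{connection_id}" } }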
Name | Type | Description |
---|---|---|
bucket | The name of the Google Cloud Storage bucket to be used for staging temporary files | |
schema_name * | Specify the name of the BigQuery dataset that contains the table. | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
database_name | Specify the Google project ID where the table resides. This is an optional property. If this property is not specified, the project ID from the connection will be used for operations. | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
table_name * | The name of the table to write to | |
update_statement * | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [delete, delete_insert, insert, merge, static_statement, update, update_statement]. Default: insert |
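Using the same assumed target-binding shape, a write that replaces the target table might look like the following, with all names illustrative:
{ "id":"target1", "type":"binding", "input":{ "id":"target1Input" }, "connection":{ "properties":{ "schema_name":"sales_dataset", "table_name":"orders_snapshot", "table_action":"replace", "write_mode":"insert" }, "ref":"{connection_id}" } }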
Name | Type | Description |
---|---|---|
access_token * | An access token that can be used to connect to Google Pub/Sub | |
auth_method | ||
client_id * | The OAuth client ID | |
client_secret * | The OAuth client secret | |
credentials * | The contents of the Google service account key (JSON) file | |
project_id | The ID of the Google project | |
refresh_token * | A refresh token to be used to refresh the access token |
Name | Type | Description |
---|---|---|
row_limit | The maximum number of rows to return | |
subscription_id | The ID of the Google Pub/Sub subscription | |
timeout_after_last_message | The number of seconds to wait after the last received message. Default: 300 |
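A Pub/Sub read is driven by the subscription rather than a table; the sketch below uses an illustrative subscription ID and shortens the idle timeout (whether a short ID or a full resource path is expected here is an assumption):
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "subscription_id":"orders-sub", "timeout_after_last_message":60, "row_limit":10000 }, "ref":"{connection_id}" } }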
Name | Type | Description |
---|---|---|
topic_id | The ID of the Google Pub/Sub topic |
Name | Type | Description |
---|---|---|
access_token * | An access token that can be used to connect to Google Cloud Storage | |
auth_method | ||
client_id * | The OAuth client ID | |
client_secret * | The OAuth client secret | |
credentials * | The contents of the Google service account key (JSON) file | |
project_id | The ID of the Google project | |
refresh_token * | A refresh token to be used to refresh the access token |
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to read | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you encounter an error about them not being unique when only one of them was provided, explicitly provide the missing one as well. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to write | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
create_bucket | Create the bucket that contains the files to write to. Default: false | |
database_name | The name of the database that contains the table to write to | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
create_bigquery_table | Whether to load data into BigQuery table from the GCS file. Default: false | |
location | A region, dual-region or multi-region location where your data will be stored | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
schema_name * | The name of the schema that contains the table to write to | |
storage_class | The storage class for the created bucket. Values: [coldline, multi_regional, nearline, regional, standard]. Default: standard | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
table_name * | The name of the table to write to | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, delete_multiple_prefix, write, write_raw]. Default: write |
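As a sketch only (the bucket and file names are invented placeholders), the connection portion of a target binding that writes a CSV file with a header row could look like:
{ "properties":{ "bucket":"my-output-bucket", "file_name":"results/output.csv", "file_format":"csv", "first_line_header":true, "write_mode":"write" }, "ref":"{connection_id}" }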
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
prepare_statement_support | Enable if the connector should allow prepared statements. Default: false | |
query_timeout | Sets the default query timeout in seconds for all statements created by a connection. If not specified, the default value of 300 seconds is used. Default: 300 | |
retry_limit | Specify the maximum number of connection retry attempts to be made by the connector, with an increasing delay between each retry. If no value is provided, two attempts are made by default if necessary. | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
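For example (the schema and table names are placeholders), the connection portion of a target binding that appends inserted rows to an existing table might set:
{ "properties":{ "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "table_action":"append", "write_mode":"insert" }, "ref":"{connection_id}" }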
Name | Type | Description |
---|---|---|
url * | The URL of the file to be accessed | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata information for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher |
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
crn * | To find the CRN, go to the IBM Cloud Data Engine service and copy the value of CRN from the deployment details | |
password * | The IAM API key for accessing the data source | |
target_cos_url * | Target Cloud Object Storage, where the results should be stored |
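A minimal sketch of these connection properties, with every value replaced by a placeholder (the target_cos_url format shown is only indicative), might be:
{ "crn":"<service_crn>", "password":"<iam_api_key>", "target_cos_url":"cos://<region>/<results_bucket>/" }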
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random, row]. Default: none | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
keyspace * | The name of the keyspace | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: true | |
private_key | The private key | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
key_certificate | The certificate that will be stored along with the private key | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
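As a hedged sketch (the host name, port, keyspace, and credentials are invented placeholders), these connection properties might be supplied as:
{ "host":"cassandra.example.com", "port":"9042", "keyspace":"sales", "username":"dbuser", "password":"<password>", "ssl":true }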
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
auth_database | The name of the database in which the user was created | |
column_discovery_sample_size | The number of rows sampled per collection to determine table schemas. The default is 1000. | |
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
schema_filter | A comma-separated list of database:collection pairs for which the driver should fetch metadata. For more information, see the DataDirect driver documentation. | |
special_char_behavior | Specifies whether special characters in names that do not conform to SQL identifier syntax should be stripped (the default), included, or replaced with underscores. Values: [include, replace, strip] | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
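As an illustration only (the host, port, database names, and credentials below are placeholders), these connection properties might be supplied as:
{ "host":"mongodb.example.com", "port":"27017", "database":"sales", "auth_database":"admin", "username":"dbuser", "password":"<password>", "ssl":true }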
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
query_timeout | Sets the default query timeout in seconds for all statements created by a connection. If not specified, the default value of 300 seconds is used. Default: 300 | |
retry_limit | Specify the maximum number of connection retry attempts to be made by the connector, with an increasing delay between each retry. If no value is provided, two attempts are made by default if necessary. | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
api_key * | To find the API key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of apikey without the quotation marks | |
access_key | To find the Access key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of access_key_id without the quotation marks | |
auth_method | ||
bucket | The name of the bucket that contains the files to access | |
url * | To find this URL, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Endpoint in the left pane. Copy the value of the public endpoint that you want to use | |
resource_instance_id | To find the Resource instance ID, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of resource_instance_id without the quotation marks | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
secret_key * | To find the Secret key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of secret_access_key without the quotation marks | |
credentials | The contents of the Cloud Object Storage service credentials (JSON) file. Find the JSON content by going to the "Service credentials" tab and expanding the selected credentials. Copy the whole content, including the {} brackets. |
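As a sketch only (the endpoint URL, credential values, and bucket name are placeholders, and which credential fields are required depends on the chosen auth_method), these connection properties might be supplied as:
{ "url":"<public_endpoint_url>", "api_key":"<apikey>", "resource_instance_id":"<resource_instance_id>", "bucket":"my-bucket" }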
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to read | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata information for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher |
Name | Type | Description |
---|---|---|
append_uid | Use this property to choose whether a unique identifier is appended to the file name. When set to yes, the file name is appended with a unique identifier and a new file is written for every wave of data that is streamed into the stage. When set to no, the file is overwritten on every wave. Default: false | |
bucket | The name of the bucket that contains the files to write | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
create_bucket | Create the bucket that contains the files to write to. Default: false | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name | The name of the file to write to or delete | |
file_size_threshold | Specify the threshold for the file size in megabytes. Processing nodes start a new file each time the size exceeds the threshold. Default: 1 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
storage_class | The storage class for the created bucket. Values: [cold_vault, flex, standard, vault]. Default: standard | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
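As an illustrative sketch (the bucket and file names are invented placeholders), the connection portion of a target binding that writes a CSV file to a bucket might set:
{ "properties":{ "bucket":"my-output-bucket", "file_name":"exports/employees.csv", "file_format":"csv", "first_line_header":true, "write_mode":"write" }, "ref":"{connection_id}" }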
Name | Type | Description |
---|---|---|
access_key | To find the Access key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of access_key_id without the quotation marks | |
auth_method | ||
bucket | The name of the bucket that contains the files to access | |
url * | To find this URL, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Endpoint in the left pane. Copy the value of the public endpoint that you want to use | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
secret_key * | To find the Secret key, go to https://cloud.ibm.com/resources, expand the Storage resource, click the Cloud Object Storage service, and then click Service credentials in the left pane. Expand the desired Key name. Copy the value of secret_access_key without the quotation marks | |
credentials | The contents of the Cloud Object Storage service credentials (JSON) file. Find the JSON content by going to the "Service credentials" tab and expanding the selected credentials. Copy the whole content, including the {} brackets. |
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to read | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name | The name of the file to read | |
first_line | Indicates at which row to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata information for elements, for example, data type, values, min, max | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher |
Name | Type | Description |
---|---|---|
bucket | The name of the bucket that contains the files to write | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
create_bucket | Create the bucket that contains the files to write to. Default: false | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This character and the decimal separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This character and the decimal grouping separator must be different. If you get an error that they are not unique when only one of them was provided, specify the other one explicitly. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
storage_class | The storage class for the created bucket. Values: [cold_vault, flex, standard, vault]. Default: standard | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
Name | Type | Description |
---|---|---|
custom_url | The URL to the Cloudant database | |
database | The database to connect to | |
password * | The password associated with the username for accessing the data source | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
database * | The database to connect to | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_limit | The maximum number of rows to return |
Name | Type | Description |
---|---|---|
blob_truncation_size | The maximum size for BLOB values. Values larger than this will be truncated. Default: 8000 | |
batch_size | The number of documents to send per request. Default: 100 | |
clob_truncation_size | The maximum size for CLOB values. Values larger than this will be truncated. Default: 8000 | |
create_database | Create the database to connect to. Default: false | |
database * | The database to connect to | |
document_type | The type of the document | |
input_format | The format of the source data. Values: [json, relational]. Default: relational | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write]. Default: write |
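As a sketch only (the database name and values are placeholders), the connection portion of a target binding that writes relational data to a new database might set:
{ "properties":{ "database":"orders", "create_database":true, "input_format":"relational", "batch_size":100, "write_mode":"write" }, "ref":"{connection_id}" }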
Name | Type | Description |
---|---|---|
auth_method | ||
url * | The gateway URL to access Cognos | |
namespace_id * | The identifier of the authentication namespace | |
password * | The password associated with the username for accessing the data source | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
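A minimal sketch of these connection properties, with the gateway URL, namespace, and credentials replaced by placeholders, might be:
{ "url":"<gateway_url>", "namespace_id":"<namespace_id>", "username":"reportuser", "password":"<password>" }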
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
file_name * | The name of the file to read | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data |
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
username_password_encryption | The encryption algorithm for username and password credentials. Values: [aes_256_bit, des_56_bit, default]. Default: default | |
username_password_security | The DRDA security mechanism for username and password credentials. Values: [clear_text, default, encrypted_password, encrypted_username, encrypted_username_password, encrypted_username_password_data]. Default: default |
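As an illustration only (the database, host, port, and credentials are invented placeholders), these connection properties might be supplied as:
{ "database":"SAMPLE", "host":"db2.example.com", "port":"446", "username":"db2user", "password":"<password>", "ssl":false, "username_password_security":"encrypted_username_password" }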
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the query timeout. If not specified, the default value of 300 seconds (5 minutes) is used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the query timeout. If not specified, the default value of 300 seconds (5 minutes) is used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
advanced.api_key | API key to connect to the database. | |
authentication_type | Select type of credentials. Values: [api_key, username_and_password]. Default: username_and_password | |
database * | Specifies the name of the database to connect to. | |
advanced.hostname * | A database hostname to connect to. | |
keep_conductor_connection_alive | Select to keep the connector conductor process connected during the job run. If you do not select this property, the connector conductor process will disconnect from the database while the player processes run, and then reconnect when the player processes complete. Default: false | |
advanced.options | Additional connection options passed as parameters to the ODBC connection string. | |
password * | Specifies the password to use for connecting to the database. | |
advanced.port * | The port that the database process is listening on. Default: 50000 | |
advanced.ssl_certificate | Optional database SSL certificate (arm format) for establishing a secure connection. | |
advanced.ssl_connection | Use an SSL connection. Default: false | |
username * | Specifies the user name to use for connecting to the database. |
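As a hedged sketch (the values are placeholders, and the dotted property names are reproduced literally as listed in the table, which may not match the exact format expected by the service), these connection properties might be supplied as:
{ "database":"BLUDB", "advanced.hostname":"db2.example.com", "advanced.port":50000, "advanced.ssl_connection":false, "authentication_type":"username_and_password", "username":"db2user", "password":"<password>" }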
Name | Type | Description |
---|---|---|
before_after.after | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once after all data is processed. | |
before_after.after_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node after all data is processed on that node. | |
session.array_size | The array size to be used for all read and write database operations. Default: 2000 | |
session.autocommit_mode | Specifies whether the connector commits transactions manually, or allows the database to commit transactions automatically at its discretion. Values: [off, on]. Default: off | |
before_after.before | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once before any data is processed. | |
before_after.before_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
before_after | Before/After SQL properties. Default: false | |
sql.enable_partitioning.partitioning_method.key_field * | Specifies the key column that is used by the selected partitioned reads method. This column must be a numeric data type. | |
session.pass_lob_locator.column * | Use to choose columns containing LOBs to be passed by locator (reference) | |
session.use_external_tables.directory_for_named_pipe | Specifies the location where the named pipe used by the load operation should be created. This property applies to Unix systems only.. Default: /tmp | |
session.pass_lob_locator | Enables/disables the ability to specify LOB columns to be passed using locator (reference) information. LOB columns not specified will be passed inline. Default: false | |
sql.enable_partitioning | Enable or disable partitioned reads by using the selected partitioning method.. Default: false | |
enable_quoted_i_ds | Specifies whether or not to enclose database object names in quotes when generating DDL and DML. Default: true | |
transaction.end_of_wave.end_of_data | Specifies whether to insert an EOW marker for the last set of records when the number is less than the specified transaction record count value. Default: false | |
transaction.end_of_wave | Specify settings for the end of wave handling. None means EOW markers are never inserted, Before means EOW markers are inserted before committing the transaction, After means EOW markers are inserted after committing the transaction. Values: [after, before, none]. Default: none | |
before_after.after.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
session.schema_reconciliation.fail_on_size_mismatch | Fail if the sizes of numeric and string fields are not compatible when validating the design schema against the database. Default: true | |
session.schema_reconciliation.fail_on_type_mismatch | Fail if the types of fields are not compatible when validating the design schema against the database. Default: true | |
generate_sql | Specifies whether to generate SQL statements at run time.. Default: false | |
sql.enable_partitioning.partitioning_method.gen_partitioning_sql | Specifies whether the connector should modify the SELECT statement at run-time and generate the required partitioning clause. Default: true | |
session.isolation_level | Specifies the isolation level that is used for all database transactions.. Values: [cursor_stability, read_stability, read_uncommitted, repeatable_read]. Default: cursor_stability | |
limit_rows.limit | Enter the maximum number of rows that will be returned by the connector.. Default: 1000 | |
limit_rows | Select Yes to limit the number of rows that are returned by the connector.. Default: false | |
lock_wait_mode | Specifies the lock wait strategy that is used when a lock cannot be obtained immediately.. Values: [return_an_sqlcode_and_sqlstate, use_the_lock_timeout_database_configuration_parameter, user_specified, wait_indefinitely]. Default: use_the_lock_timeout_database_configuration_parameter | |
lock_wait_mode.lock_wait_mode_time * | Time to wait for a lock | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
sql.other_clause | The other clause predicate of the SQL statement | |
session.use_external_tables.other_options | Additional options to be passed to the external table statement | |
pad_character | Specifies the pad character that is used in the WHERE clause for string columns that are smaller than the column size. | |
sql.enable_partitioning.partitioning_method | The method to use for partitioned reads.. Values: [db2_connector, minimum_and_maximum_range, modulus]. Default: minimum_and_maximum_range | |
prefix_for_expression_columns | Specifies the prefix for columns that contain the result of expressions.. Default: EXPR | |
before_after.after_node.read_from_file_after_sql_node | Select Yes to read the SQL statement from the file that is specified in the After SQL (node) statement property.. Default: false | |
before_after.after.read_from_file_after_sql | Select Yes to read the SQL statement from the file that is specified in the After SQL statement property.. Default: false | |
before_after.before_node.read_from_file_before_sql_node | Select Yes to read the SQL statement from the file that is specified in the Before SQL (node) statement property.. Default: false | |
before_after.before.read_from_file_before_sql | Select Yes to read the SQL statement from the file that is specified in the Before SQL statement property.. Default: false | |
sql.select_statement.read_from_file_select | Select YES to read the SELECT statement from the file that is specified in the SELECT statement property.. Default: false | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 2000 | |
re_optimization | Specifies the type of reoptimization that is done by Db2.. Values: [always, none, once]. Default: none | |
sql.select_statement * | Statement to be executed when reading rows from the database or the fully qualified name of the file that contains the statement. | |
sql.enable_partitioning.partitioning_method.table_name * | Specifies the table that is used by the selected partitioned reads method. | |
table_name * | The table name to be used in generated SQL. The table name must be schema qualified in order to preview data. | |
session.use_external_tables | Indicates whether external tables are used.. Default: false | |
sql.where_clause | The where clause predicate of the SQL statement |
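To see how a few of these nested source properties combine, the sketch below configures a partitioned read with user-supplied SQL; the statement, table, and key column are placeholders and other combinations are possible:
{ "generate_sql":false, "sql.select_statement":"select ORDER_ID, AMOUNT from MYSCHEMA.ORDERS", "sql.enable_partitioning":true, "sql.enable_partitioning.partitioning_method":"modulus", "sql.enable_partitioning.partitioning_method.key_field":"ORDER_ID", "sql.enable_partitioning.partitioning_method.table_name":"MYSCHEMA.ORDERS", "transaction.record_count":2000 }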
Name | Type | Description |
---|---|---|
load_to_zos.data_file_attributes.discard_data_set.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.error_data_set.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.input_data_files.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: keep | |
load_to_zos.data_file_attributes.map_data_set.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.work1_data_set.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: delete | |
load_to_zos.data_file_attributes.work2_data_set.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: delete | |
load_to_zos.image_copy_function.image_copy_backup_file.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.image_copy_function.image_copy_file.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.image_copy_function.recovery_backup.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.image_copy_function.recovery_file.file_disposition.abnormal_termination | This option specifies the abnormal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
before_after.after | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once after all data is processed. | |
before_after.after_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node after all data is processed on that node. | |
load_control.allow_access_mode | Specifies the level of access on the table that is to be loaded.. Values: [no_access, read]. Default: no_access | |
load_to_zos.image_copy_function.allow_changes | Indicates whether other programs can update the table space while COPY is running. Default: false | |
session.array_size | The array size to be used for all read and write database operations. Default: 2000 | |
session.insert_buffering.atomic_arrays | Specifies whether arrays should be inserted atomically. Insert buffering with non-atomic arrays does not report errors accurately.. Values: [auto, no, simulated, yes]. Default: auto | |
session.autocommit_mode | Specifies whether the connector commits transactions manually, or allows the database to commit transactions automatically at its discretion.. Values: [off, on]. Default: off | |
load_to_zos.batch_pipe_system_id * | If the data is to be transferred to a BatchPipes system on z/OS, this option identifies its name. | |
before_after.before | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once before any data is processed. | |
before_after.before_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
before_after | Before/After SQL properties. Default: false | |
table_action.generate_create_statement.create_table_bufferpool | Specifies the buffer pool to be used for the implicitly created table space and determines the page size of the table space. Do not specify BUFFERPOOL if the table space name is specified by using the IN table-space-name clause or if the IN ACCELERATOR clause is specified. If you do not specify the BUFFERPOOL clause, Db2 selects the buffer pool as described in Implicitly defined table spaces (Db2 for z/OS only). | |
load_to_zos | Determines whether the target table is on Db2 for z/OS.. Default: false | |
load_control.bulkload_with_lob_xml | Indicates whether there are any LOB or XML columns in the target Db2 table.. Default: false | |
load_to_zos.encoding.ccsid * | Specifies the coded character set identifier (CCSID) for SBCS data, Mixed data, and DBCS data in the input file. | |
load_control.cpu_parallelism | Specifies the number of processes or threads that the load utility spawns for processing records.. Default: 0 | |
load_to_zos.image_copy_function.change_limit_percent1 | Specifies the percentage limit of changed pages in the table space at which an incremental image-copy is to be taken | |
load_to_zos.image_copy_function.change_limit_percent2 | Specifies the percentage limit of changed pages in the table space at which an incremental image-copy is to be taken | |
load_to_zos.encoding.character_set | The IANA character set name for the encoding. If not specified, ibm-1047-s390 will be used for EBCDIC, ASCII for ASCII, and UTF-16BE for UNICODE. | |
sql.user_defined_sql.file.character_set | IANA character set name | |
load_control.check_pending_cascade | Specifies whether the check pending state of the loaded table is immediately cascaded to all descendants.. Values: [deferred, immediate]. Default: deferred | |
load_control.partitioned_db_config.check_truncation | If selected, data records are checked for truncation at input and output (CHECK_TRUNCATION).. Default: false | |
load_control.cleanup_on_fail | Clean-up on failures during stage execution.. Default: false | |
logging.log_column_values.delimiter | Specifies the delimiter to use between columns. Values: [comma, newline, space, tab]. Default: space | |
table_action.generate_create_statement.create_table_compress | Specifies whether data compression applies to the rows. On Db2 for z/OS: if the IN table-space-name clause or the IN ACCELERATOR clause is specified, COMPRESS YES or COMPRESS NO must not be specified.. Values: [database_default, no, yes]. Default: database_default | |
load_to_zos.shr_level | The level of concurrent application access to the table space or partition. The value corresponds to the SHRLEVEL option of the LOAD command. Values: [change, none, reference]. Default: none | |
load_control.copy_loaded_data | Specifies the method that is used for making a copy of the loaded data.. Values: [no_copy, use_tivoli_to_make_a_copy, use_device_or_directory, use_shared_library]. Default: no_copy | |
table_action.generate_create_statement.create_statement * | A statement to be executed when creating the target database table | |
load_to_zos.dsn_prefix | This option identifies a prefix to be used when creating MVS dataset names. If omitted, the transfer user name is used. | |
load_control.data_buffer_size | Specifies the number of pages (size 4KB) that are used as buffered space for transferring data within the load utility.. Default: 0 | |
load_to_zos.data_file_attributes.discard_data_set.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.error_data_set.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.input_data_files.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.map_data_set.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work1_data_set.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work2_data_set.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_backup_file.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_file.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_backup.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_file.data_class | Specifies the SMS data class (DATACLAS). The name must be a valid SMS data class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.discard_data_set.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.data_file_attributes.error_data_set.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.data_file_attributes.input_data_files.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.data_file_attributes.map_data_set.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.data_file_attributes.work1_data_set.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.data_file_attributes.work2_data_set.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.image_copy_function.image_copy_backup_file.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.image_copy_function.image_copy_file.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.image_copy_function.recovery_backup.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
load_to_zos.image_copy_function.recovery_file.dataset_name | Dataset name used by the Load utility. If empty (default), then a dataset name is generated using the DSN prefix property. | |
sql.delete_statement * | Statement to be executed when deleting rows from the database | |
load_to_zos.device_type | This option specifies the device type to be used for various datasets that the LOAD utility may need. If omitted, the default is SYSDA.. Default: SYSDA | |
sql.direct_insert | If set to Yes, the connector inserts directly into the target table. In this mode, when running with multiple processing nodes it is possible to have partially committed data if one or more of the processing nodes encounters an error. If set to No, the connector inserts into the temporary work table (TWT) first and then from the TWT into the target. In this mode the data is either completely committed or completely rolled back, guaranteeing consistency. Default: true | |
load_control.data_file_path * | Specifies the location where the command file and data file (used by the load operation) should be created. | |
load_to_zos.transfer.data_file_path | Specifies the location where the data files will be created before transfer to z/OS. | |
session.use_external_tables.log_directory | Specifies the directory for the log and bad files. If it is left blank, the connector uses the value of the environment variable TMPDIR. If TMPDIR is not defined, it defaults to /tmp on Unix and to the system temporary directory on Windows. | |
load_control.directory_for_named_pipe | Specifies the location where the named pipe used by the load operation should be created. This property applies to Unix systems only.. Default: /tmp | |
session.use_external_tables.directory_for_named_pipe | Specifies the location where the named pipe used by the load operation should be created. This property applies to Unix systems only.. Default: /tmp | |
load_control.disk_parallelism | Specifies the number of processes or threads that the load utility spawns for writing data.. Default: 0 | |
table_action.generate_create_statement.create_table_distribute_by | Specifies the database partitioning or the way the data is distributed across multiple database partitions (Db2 LUW only).. Values: [hash, none, random]. Default: none | |
table_action.generate_drop_statement.drop_statement * | A statement to be executed when dropping the database table | |
session.temporary_work_table.drop_table | If set to Yes, the connector will drop the temporary work table.. Default: true | |
session.schema_reconciliation.drop_unmatched_fields | Drop fields that don't exist in the input schema. Default: true | |
load_control.file_type_modifiers.dump_file | Specifies a fully qualified file path to use with the 'dumpfile' file type modifier. If no path is specified, the 'dumpfile' modifier is not used | |
enable_quoted_i_ds | Specifies whether or not to enclose database object names in quotes when generating DDL and DML. Default: false | |
load_to_zos.encoding | Specifies the input dataset encoding. Values: [ascii, ccsid, ebcdic, unicode]. Default: ebcdic | |
load_control.exception_table | The name of the table where rows that violate constraints will be stored. | |
sql.user_defined_sql.fail_on_error | Abort the statement sequence when an error occurs. Default: true | |
before_after.after.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_create_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_drop_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_truncate_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
session.fail_on_row_error_px | Fail the job if a write operation to the target is unsuccessful. Default: true | |
session.schema_reconciliation.fail_on_size_mismatch | Fail if the sizes of numeric and string fields are not compatible when validating the design schema against the database. Default: true | |
session.schema_reconciliation.fail_on_type_mismatch | Fail if the types of fields are not compatible when validating the design schema against the database. Default: true | |
sql.user_defined_sql.file * | File on the conductor node that contains SQL statements to be executed for each input row | |
load_control.file_type | Specifies the format of the data in the data file.. Values: [asc, del]. Default: asc | |
load_control.files_only | Indicates whether input files should be created without executing the load operation.. Default: false | |
load_to_zos.files_only | LOAD is not actually executed; only MVS datasets are created. Default: false | |
generate_sql | Specifies whether to generate SQL statements at run time.. Default: false | |
table_action.generate_create_statement | Specifies whether to generate a create table statement at runtime. Default: true | |
table_action.generate_drop_statement | Specifies whether to generate a drop table statement at runtime. Default: true | |
table_action.generate_truncate_statement | Specifies whether to generate a truncate table statement at runtime. Default: true | |
load_to_zos.encoding.graphic_character_set | The IANA character set name for the graphic encoding. If not specified, UTF-16BE is the default. | |
load_to_zos.transfer.uss_file_directory * | The presence of this option indicates that HFS files are to be used and gives the directory name where the files will be created. The value should be a fully qualified HFS directory name. | |
load_control.partitioned_db_config.port_range.max_value * | Higher port number of the port range | |
load_to_zos.image_copy_function.image_copy_backup_file | Specifies whether or not to create a backup of the image-copy file. Default: false | |
load_to_zos.image_copy_function | Specifies whether to run an Image-copy function after Load. Values: [concurrent, full, incremental, no]. Default: no | |
table_action.generate_create_statement.create_table_in | Identifies the database and/or table space in which the table is created: IN database-name.table-space-name. On Db2 LUW if required add CYCLE / NO CYCLE indication. | |
table_action.generate_create_statement.create_table_index_in | Specifies the table space in which indexes or long column values are to be stored (Db2 LUW only). | |
load_control.indexing_mode | Specifies whether indexes are rebuilt or extended incrementally.. Values: [automatic_selection, do_not_update_table_indexes, extend_existing_indexes, rebuild_table_indexes]. Default: automatic_selection | |
session.insert_buffering | Specifies whether to enable the insert buffering optimization in partitioned database environments.. Values: [default, ignore_duplicates, off, on]. Default: default | |
sql.insert_statement * | Statement to be executed when inserting rows into the database | |
load_to_zos.transfer.retry_connection.retry_interval * | Enter the time in seconds to wait between retries to establish a connection.. Default: 10 | |
load_control.partitioned_db_config.isolate_part_errors | Specifies the reaction of the load operation to errors that occur on individual partitions (ISOLATE_PART_ERRS).. Values: [load_errors_only, no_isolation, setup_and_load_errors, setup_errors_only]. Default: load_errors_only | |
session.isolation_level * | Specifies the isolation level that is used for all database transactions.. Values: [cursor_stability, read_stability, read_uncommitted, repeatable_read]. Default: cursor_stability | |
load_to_zos.resume | Select Yes to add records to the end of the table if the table space is not empty or No to load data into an empty table space. The value corresponds to the RESUME option value in the LOAD command.. Default: true | |
sql.key_columns | A comma-separated list of key column names. | |
table_action.generate_create_statement.create_table_distribute_by.hash_key_columns * | A comma-separated list of key column names. | |
load_control.copy_loaded_data.copy_load_library_name * | Specifies the name of the library that is used to generate the copy. | |
limit_parallelism | By default the connector runs one player process per database partition. If you want to force the connector to run fewer player processes, set this property to Yes. Default: false | |
load_control.load_method | Determines the load method to use.. Values: [named_pipes, sequential_files]. Default: named_pipes | |
load_to_zos.load_method | Load method to be used for loading input data into Db2 for z/OS.. Values: [batch_pipes, mvs_datasets, uss_pipes]. Default: mvs_datasets | |
load_control.load_mode | Specifies the mode where the load operates.. Values: [insert, replace, restart, terminate]. Default: insert | |
load_control.load_timeout | Specifies the time in seconds to attempt opening a socket for the load operation before timing out.. Default: 300 | |
load_to_zos.load_with_logging | Indicates whether logging is to occur during the load process.. Default: false | |
load_control.copy_loaded_data.copy_to_device_or_directory * | A comma-separated list of devices or directories where the copy is generated. | |
load_control.lob_path_list | A list of fully qualified paths or devices to identify the location of the individual LOB files to be loaded. | |
lock_wait_mode | Specifies the lock wait strategy that is used when a lock cannot be obtained immediately.. Values: [return_an_sqlcode_and_sqlstate, use_the_lock_timeout_database_configuration_parameter, user_specified, wait_indefinitely]. Default: use_the_lock_timeout_database_configuration_parameter | |
lock_wait_mode.lock_wait_mode_time * | Time to wait for a lock | |
load_control.lock_with_force | If selected, the load operation forces off other applications that hold conflicting locks.. Default: false | |
logging.log_column_values | Specifies whether to log column values for the first row that fails to be written. Default: false | |
logging.log_column_values.log_keys_only | Specifies whether to log key columns or all columns for failing statements. Default: false | |
load_control.partitioned_db_config.port_range.min_value * | Lower port number of the port range | |
load_to_zos.data_file_attributes.discard_data_set.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.error_data_set.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.input_data_files.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.map_data_set.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work1_data_set.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work2_data_set.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_backup_file.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_file.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_backup.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_file.management_class | Specifies the SMS management class (MGMTCLAS). The name must be a valid SMS management class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_control.partitioned_db_config.max_num_part_agents | Specifies the maximum number of partitioning agents to be used in a load session (MAX_NUM_PART_AGENTS).. Default: 25 | |
session.use_external_tables.max_errors | The number of rejected records at which the system stops processing and immediately rolls back the load. The default is 1 (that is, a single rejected record results in a rollback). Default: 1 | |
load_control.message_file * | Specifies the file where Db2 writes diagnostic messages.. Default: loadMsgs.out | |
load_control.allow_access_mode.table_space | Specifies the table space that is used for building a shadow copy of the index if the indexes are being rebuilt. | |
load_control.non_recoverable_tx | The load transaction is marked as non-recoverable.. Default: false | |
load_to_zos.data_file_attributes.discard_data_set.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.error_data_set.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.input_data_files.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: keep | |
load_to_zos.data_file_attributes.map_data_set.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.work1_data_set.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: delete | |
load_to_zos.data_file_attributes.work2_data_set.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: delete | |
load_to_zos.image_copy_function.image_copy_backup_file.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.image_copy_function.image_copy_file.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.image_copy_function.recovery_backup.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.image_copy_function.recovery_file.file_disposition.normal_termination | This option specifies the normal termination disposition of the dataset(s) used by the LOAD utility: Keep (KEEP), Delete (DELETE), Catalog (CATLG), Uncatalog (UNCATLG).. Values: [catalog, delete, keep, uncatalog]. Default: catalog | |
load_to_zos.data_file_attributes.discard_data_set.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.error_data_set.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.input_data_files.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.map_data_set.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work1_data_set.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work2_data_set.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_backup_file.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_file.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_backup.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_file.number_of_buffers | Specifies the number of buffers to use (BUFNO). If empty (default), then the property is not used. | |
load_to_zos.transfer.retry_connection.retry_count * | Enter the number of times to try to establish a connection after a failure on the initial attempt.. Default: 3 | |
table_action.generate_create_statement.create_table_organize_by | Specifies how the data is organized in the data pages of the table: row-organized table / column-organized table (Db2 LUW only).. Values: [column, database_default, row]. Default: database_default | |
session.use_external_tables.other_options | Additional options to be passed to the external table statement | |
table_action.generate_create_statement.create_table_other_options | Other options/clauses to the CREATE TABLE statement eg. Partitioning-clause. | |
load_control.partitioned_db_config.output_db_part_nums | List of database partition numbers (OUTPUT_DBPARTNUMS). The database partition numbers represent the database partitions on which the load operation is to be performed. Items in the list must be separated by commas. | |
pad_character | Specifies the pad character that is used in the WHERE clause for string columns that are smaller than the column size. | |
load_control.partitioned_db_config.run_stat_db_partnum | Specifies the database partition where statistics are collected (RUN_STAT_DBPARTNUM).. Default: -1 | |
load_to_zos.partition_number | If this option is present, then only the indicated partition will be loaded. If this option is omitted, all partitions of the Db2 for z/OS database table will be loaded. | |
load_control.partitioned_db_config | If selected, data is loaded into a partitioned table.. Default: false | |
load_control.partitioned_db_config.dist_file | If this option is specified, the load utility generates a database partition distribution file with the given name (DISTFILE). | |
load_control.partitioned_db_config.partitioning_db_part_nums | List of database partition numbers that are used in the distribution process (PARTITIONING_DBPARTNUMS). Items in the list must be separated by commas. | |
load_to_zos.transfer.password | Specifies the password to use for the transfer. | |
table_action.table_action_first | Select Yes to perform table action first. Select No to run Before SQL statements first.. Default: true | |
load_control.partitioned_db_config.port_range | Specifies the range of TCP ports that are used to create sockets for internal communications (PORT_RANGE). | |
prefix_for_expression_columns * | Specifies the prefix for columns that contain the result of expressions.. Default: EXPR | |
load_to_zos.data_file_attributes.discard_data_set.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.error_data_set.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.input_data_files.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.map_data_set.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.work1_data_set.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.work2_data_set.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.image_copy_backup_file.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.image_copy_file.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.recovery_backup.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.recovery_file.primary_allocation | Specifies the z/OS disk space primary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
before_after.after_node.read_from_file_after_sql_node | Select Yes to read the SQL statement from the file that is specified in the After SQL (node) statement property.. Default: false | |
before_after.after.read_from_file_after_sql | Select Yes to read the SQL statement from the file that is specified in the After SQL statement property.. Default: false | |
before_after.before_node.read_from_file_before_sql_node | Select Yes to read the SQL statement from the file that is specified in the Before SQL (node) statement property.. Default: false | |
before_after.before.read_from_file_before_sql | Select Yes to read the SQL statement from the file that is specified in the Before SQL statement property.. Default: false | |
table_action.generate_create_statement.read_create_statement_from_file | Select YES to read the CREATE statement from the file that is specified in the CREATE statement property.. Default: false | |
table_action.generate_drop_statement.read_drop_statement_from_file | Select YES to read the DROP statement from the file that is specified in the DROP statement property.. Default: false | |
table_action.generate_truncate_statement.read_truncate_statement_from_file | Select YES to read the TRUNCATE statement from the file that is specified in the TRUNCATE statement property.. Default: false | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 2000 | |
load_to_zos.image_copy_function.recovery_backup | Specifies whether or not to create a backup of the image-copy recovery file. Default: false | |
load_to_zos.image_copy_function.recovery_file | Specifies whether or not to create the image-copy recovery file. Default: false | |
load_control.remove_intermediate_data_file | Select Yes to remove the intermediate data file after completing the load operation.. Default: true | |
re_optimization | Specifies the type of reoptimization that is done by Db2.. Values: [always, none, once]. Default: none | |
load_to_zos.image_copy_function.report_only | Run utility to produce report only. Default: false | |
load_control.restart_phase | Specifies which Db2 load phase is to be restarted. The original input data needs to be reproduced for restarting the Load phase. Build and Delete phases ignore the input data.. Values: [build, delete, load]. Default: load | |
load_to_zos.transfer.retry_connection | Select Yes to try to establish a connection again when the initial attempt to connect is unsuccessful.. Default: true | |
load_control.i_row_count | The number of physical records to be loaded. Allows a user to load only the first rowcnt rows in a file.. Default: 0 | |
load_to_zos.row_count_estimate | This option specifies an estimated count of the total number of rows to be loaded into all partitions combined. This estimate is used to calculate the amount of disk space which will be needed on z/OS for various datasets. If not present, a row estimate of 1000 is used.. Default: 1000 | |
load_control.save_count | Specifies the number of records to load before establishing a consistency point.. Default: 0 | |
load_to_zos.image_copy_function.scope | Scope of the image-copy. Values: [full, single_partition]. Default: full | |
load_to_zos.data_file_attributes.discard_data_set.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.error_data_set.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.input_data_files.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.map_data_set.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.work1_data_set.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.data_file_attributes.work2_data_set.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.image_copy_backup_file.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.image_copy_file.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.recovery_backup.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.image_copy_function.recovery_file.secondary_allocation | Specifies the z/OS disk space secondary allocation amount. The range of values is from 1 to 1677215. If empty (default), then its value equals computed (from the row estimate) space requirements in cylinders or tracks, respectively to Space type setting. For Map dataset, the resulting value is doubled. | |
load_to_zos.set_copy_pending | Specifies whether or not the table space is set to the copy-pending status. (Applicable only when Load with logging = No). Default: false | |
load_control.sort_buffer_size | Specifies the number of pages (size 4KB) of memory that are used for sorting index keys during a load operation.. Default: 0 | |
load_to_zos.data_file_attributes.discard_data_set.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.data_file_attributes.error_data_set.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.data_file_attributes.input_data_files.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.data_file_attributes.map_data_set.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.data_file_attributes.work1_data_set.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.data_file_attributes.work2_data_set.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.image_copy_function.image_copy_backup_file.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.image_copy_function.image_copy_file.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.image_copy_function.recovery_backup.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
load_to_zos.image_copy_function.recovery_file.space_type | Specifies the z/OS disk space allocation unit type (SPACE(?,?)). Valid values are Cylinders (CYL) and Tracks (TRK). The default value is Cylinders.. Values: [cylinders, tracks]. Default: cylinders | |
sql.user_defined_sql.statements * | SQL statements to be executed for each input row | |
load_control.statistics | Instructs load to collect statistics during the load according to the profile defined for this table (STATISTICS).. Default: false | |
load_to_zos.statistics | This option requests that statistics be displayed at the end of the load.. Values: [all, index, none, table]. Default: none | |
session.use_external_tables.statistics | Instructs load to collect statistics during the load according to the profile defined for this table (STATISTICS).. Default: false | |
session.use_external_tables.statistics.run_stats_on_columns | Generates statistics on the columns. If no column specified, statistics are collected on all columns by default. | |
load_to_zos.data_file_attributes.discard_data_set.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.data_file_attributes.error_data_set.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.data_file_attributes.input_data_files.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Values: [new, old, replace]. Default: replace | |
load_to_zos.data_file_attributes.map_data_set.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.data_file_attributes.work1_data_set.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.data_file_attributes.work2_data_set.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.image_copy_function.image_copy_backup_file.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.image_copy_function.image_copy_file.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.image_copy_function.recovery_backup.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_to_zos.image_copy_function.recovery_file.file_disposition.status | Specifies the disposition status of the data set used by the LOAD utility (DISP(?,?,?)). This property is disabled when the batch pipe system ID has a value. Replace deletes an existing data set and creates a new data set (NEW). New indicates that the data set does not currently exist (NEW). Old overwrites an existing data set or fails if the data set does not exist (OLD). Share is identical to Old except that several jobs can read from the data set at the same time (SHR). Append appends to the end of an existing data set or creates a new data set if it does not already exist (MOD). Values: [append, new, old, replace, share]. Default: replace | |
load_control.partitioned_db_config.status_interval | Specifies the number of megabytes of data to load before generating a progress message (STATUS_INTERVAL). Default: 100 | |
load_to_zos.data_file_attributes.discard_data_set.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.error_data_set.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.input_data_files.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.map_data_set.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work1_data_set.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work2_data_set.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_backup_file.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_file.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_backup.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_file.storage_class | Specifies the SMS storage class (STORCLAS). The name must be a valid SMS storage class and must not exceed 8 characters in length. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.system_pages | Specifies whether the copy utility puts system pages at the beginning of the image-copy file. Default: true | |
table_action * | Select the action to perform on the database table. Values: [append, create, replace, truncate]. Default: append | |
session.temporary_work_table.table_name * | The name of the existing temporary work table. | |
table_name * | The table name to be used in generated SQL. The table name must be schema qualified in order to preview data. | |
table_action.generate_create_statement.create_table_on_zos | Determines whether the target table is on Db2 LUW or Db2 for z/OS. Default: false | |
load_control.directory_for_tmp_files | Specifies the path name that is used by the Db2 server to store temporary files. | |
session.temporary_work_table | If set to Automatic, the connector will automatically create the temporary work table using an internally generated name. Values: [automatic, existing]. Default: automatic | |
transaction.time_interval | Specify the amount of time that passes before a commit is issued. A small value forces frequent commits, so if your program terminates unexpectedly your table can still contain partial results, but the high frequency of commits can carry a performance penalty. A large value requires Db2 to log a correspondingly large amount of rollback information, which can also slow your job. Default: 0 | |
limit_parallelism.player_process_limit * | The total number of player processes across all processing nodes. | |
load_control.partitioned_db_config.trace | Specifies the number of records to trace in a dump of the data conversion process and the output of the hashing values (TRACE). Default: 0 | |
load_to_zos.transfer.transfer_cmd | User-entered command sent just before data transfer begins. FTP example: quote site vcount=7 datakeepalive=60 | |
load_to_zos.transfer.transfer_to * | Name of the target machine to which the data is sent. | |
load_to_zos.transfer.transfer_type | Determines the method of transferring data to the z/OS machine. Values: [ftp, lftp, sftp]. Default: ftp | |
sql.user_defined_sql.suppress_warnings | Do not abort the job when a Db2 warning is encountered; report it as an informational message instead. Default: false | |
table_action.generate_truncate_statement.truncate_statement * | A statement to be executed when truncating the database table | |
session.temporary_work_table.truncate_table | If set to Yes, the temporary work table is truncated before any data is written to it. Default: false | |
table_action.generate_create_statement.create_table_compress.create_table_compress_luw | Specifies whether adaptive compression or classic row compression is used (Db2 LUW only). Values: [adaptive, static]. Default: adaptive | |
load_to_zos.uss_pipe_directory * | The presence of this option indicates that USS piping is to be used and gives the directory name where the pipes will be created. The value should be a fully qualified USS directory name. | |
sql.use_unique_key_column.unique_key_column * | Unique key column name used in UPDATE statement | |
load_to_zos.data_file_attributes.discard_data_set.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.error_data_set.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.input_data_files.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.map_data_set.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work1_data_set.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work2_data_set.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_backup_file.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_file.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_backup.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_file.unit | Specifies the device number, device type (generic), or group name for the data set (UNIT). If empty (default), then the property is not used. | |
sql.update_columns | Comma-separated list of columns to be updated with the UPDATE statement. If no column is specified, all columns are updated. | |
sql.update_statement * | Statement to be executed when updating rows in the database | |
session.use_external_tables | Indicates whether external tables are used. Default: false | |
sql.use_unique_key_column | Use a unique key column in the UPDATE statement. Default: false | |
load_to_zos.transfer.user | The user name to use for the data transfer. | |
sql.user_defined_sql * | Source of the user-defined SQL statements. Values: [file, statements]. Default: statements | |
load_to_zos.utility_id | A unique identifier within Db2 for the execution of the LOAD utility. Default: DB2ZLOAD | |
table_action.generate_create_statement.create_table_value_compression | Determines the row format to be used. Each data type has a different byte count depending on the row format (Db2 LUW only). Default: false | |
load_to_zos.data_file_attributes.discard_data_set.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.error_data_set.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.input_data_files.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.map_data_set.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work1_data_set.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.data_file_attributes.work2_data_set.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_backup_file.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.image_copy_file.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_backup.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_to_zos.image_copy_function.recovery_file.volumes | Specifies a list of volume serial numbers for this allocation (VOLUMES(?)). The value is a comma-separated list of string values and may be entered with or without enclosing parentheses; because the TEMPLATE statement requires the parentheses, they are added automatically if omitted. If empty (default), then the property is not used. | |
load_control.warning_count | Specifies the number of warnings that are allowed before the load operation stops. Default: 0 | |
load_control.without_prompting | Select Yes to add the WITHOUT PROMPTING parameter to the LOAD command in the generated command file. Default: false | |
write_mode * | The mode to be used when writing to a database table. Values: [bulk_load, delete, delete_then_insert, insert, insert_new_rows_only, insert_then_update, update, update_then_insert, user-defined_sql]. Default: insert |
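As an illustration only (not defaults or recommendations), a minimal set of target properties drawn from the table above might combine the write mode, table action, schema-qualified table name, and commit interval; the keys are the dotted property names as listed, and every value is a placeholder:
{ "write_mode":"insert", "table_action":"append", "table_name":"MYSCHEMA.MYTABLE", "transaction.time_interval":60 }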
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
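A hypothetical connection definition built from the properties above, with every value a placeholder (the exact value types are not shown in the table):
{ "database":"MYDB", "host":"db2host.example.com", "port":"50000", "username":"dbuser", "password":"********", "ssl":false }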
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
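For illustration, the source interaction properties above can describe either a plain table read or, assuming read_mode is set to select when a SQL statement is supplied, a query-based read; all names and values below are placeholders:
{ "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "row_limit":1000 }
{ "read_mode":"select", "select_statement":"SELECT COL1, COL2 FROM MYSCHEMA.MYTABLE", "query_timeout":600 }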
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
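A sketch of target interaction properties for a merge that overrides the primary key columns; this assumes key_column_names applies to the merge write mode, and all values are placeholders:
{ "write_mode":"merge", "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "table_action":"append", "key_column_names":"ID,ORDER_NO" }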
Name | Type | Description |
---|---|---|
api_key * | An application programming interface key that identifies the calling application or user | |
auth_method | ||
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port | The port of the database. Default: 50001 | |
ssl | The port is configured to accept SSL connections. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
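Because the allowed auth_method values are not documented above, the following is only a hedged sketch of an API-key-based connection; which of the starred fields are actually required may depend on auth_method, and every value is a placeholder:
{ "host":"db.example.com", "port":50001, "database":"MYDB", "api_key":"<api key>", "ssl":true }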
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
host * | The hostname or IP address of the database | |
database * | The unique name of the Db2 location you want to access | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
collection | The ID of the collection of packages to use | |
host * | The hostname or IP address of the database | |
database * | The unique name of the Db2 location you want to access | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
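As a hedged example of the sampling options in this source table (numeric types are assumed, and all values are placeholders), a repeatable random sample of roughly 10 percent of the rows might be requested with:
{ "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "sampling_type":"random", "sampling_percentage":10, "sampling_seed":42 }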
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port | The port of the database. Default: 50001 | |
ssl | The port is configured to accept SSL connections. Default: true | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the Query Timeout. If not specified the default value of 300 seconds or 5 minutes will be used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
db_locale | The value of DB_LOCALE property. | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
server * | The name of the database server to connect to | |
username * | The username for accessing the data source |
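A hypothetical connection definition using the properties above; the server and db_locale values depend on the target database server, and every value shown is a placeholder:
{ "database":"MYDB", "host":"ifxhost.example.com", "port":"9088", "server":"my_db_server", "db_locale":"<db locale>", "username":"dbuser", "password":"********" }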
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
client_channel_definition.channel_name | Name of the channel. | |
client_channel_definition.connection_name | Name of the connection. The format of this value must match the selected transport type. | |
password | Password for the username that connects to the MQ server. | |
queue_manager_name | Name of the queue manager to access. The value must match the queue manager in the client connection channel definition. | |
client_channel_definition.transport_type | Transport protocol to use. Values: [tcp, udp]. Default: tcp | |
username | Name of the user that connects to the MQ server. |
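A hedged sketch of a client channel definition for this connection; the queue manager, channel, and user names are placeholders, and the host(port) form of connection_name is a common convention for the tcp transport rather than a documented requirement here:
{ "queue_manager_name":"QMGR1", "client_channel_definition.channel_name":"APP.SVRCONN", "client_channel_definition.connection_name":"mqhost.example.com(1414)", "client_channel_definition.transport_type":"tcp", "username":"mquser", "password":"********" }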
Name | Type | Description |
---|---|---|
access_mode | Access mode to use when opening source queue. Values: [as_in_queue_definition, exclusive, exclusive_if_granted, shared]. Default: as_in_queue_definition | |
other_queue_settings.alternate_user_id | Alternate user identifier to specify when opening the queue. | |
work_queue.append_node_number | If running in parallel, there may be multiple work queues, each with the node number appended. This property specifies whether to append the node number to the work queue name. Default: true | |
header_fields_filter.appl_identity_data | List of acceptable application identity data values for source messages | |
header_fields_filter.appl_origin_data | List of application origin data values for source messages | |
header_fields_filter.backout_count | List of acceptable backout count values and ranges for source messages | |
transaction.end_of_day | Enable blocking transaction processing. Default: false | |
header_fields_filter.coded_char_set_id | List of acceptable coded character set identifier values and ranges for source messages | |
message_options.message_conversion.coded_char_set_id | Coded character set identifier to which to convert character data in source messages. When omitted or set to the default value 0, the 'DEFAULT' coded character set identifier for the current queue manager connection is assumed. The values -1 and -2 are used to specify special 'INHERIT' and 'EMBEDDED' coded character set identifier values. Default: 0 | |
pub_sub.content_filter | Content filter value to specify when registering and/or deregistering the subscriber. If neither registration nor deregistration is enabled, this value is ignored | |
error_queue.context_mode | Context mode to use when opening error queue. The value should be chosen based on whether messages that may end up on the error queue should preserve identity context fields (Set identity), origin context fields (Set all) or none of the context fields (None). Values: [none, set_all, set_identity]. Default: none | |
work_queue.context_mode | Context mode to use for opening work queue. It defines whether to preserve none, identity or all of the context message fields in the messages moved from the source queue to the work queue.. Values: [none, set_all, set_identity]. Default: set_all | |
pub_sub.deregistration.deregistration_correl_id | Correlation identifier for deregistration | |
pub_sub.registration.registration_correl_id | Correlation identifier for registration | |
header_fields_filter.feedback.custom_value | List of acceptable feedback and reason code custom values and ranges for source report messages | |
header_fields_filter.format.custom_value | List of acceptable format custom values for source messages | |
header_fields_filter.msg_type.custom_value | List of acceptable message type custom values and ranges for source messages | |
header_fields_filter.put_appl_type.custom_value | List of acceptable put application type custom values for source messages | |
pub_sub.deregistration | Switch that controls whether to deregister with the broker when the job ends. Default: false | |
pub_sub.pub_sub_dynamic_reply_to_queue | Switch that controls whether to open the reply queue as dynamic queue. Default: false | |
message_options.pass_by_reference | The option to enable for passing payload data by reference. When this option is not enabled, the payload data is passed inline. Default: false | |
header_fields_filter.encoding | List of acceptable encoding values and ranges for source messages | |
message_options.message_conversion.encoding | Encoding to which to convert numeric data in source messages. When omitted or set to default value -1, the 'NATIVE' numeric encoding for the current queue manager connection is assumed. Default: -1 | |
transaction.end_of_wave.end_of_data | Specify whether to insert an EOW marker for the final set of records when their number is smaller than the value specified for the transaction record count. Note that if the transaction record count is 0 (representing all available records), there is only one transaction wave consisting of all the records, so the End of data value should be set to Yes for an EOW marker to be inserted for that wave. Default: true | |
end_of_data_message_type | Message type that marks the end of message receiving. When the connector receives a message of this message type, it stops reading additional messages from the queue (except other messages from the same group when message group assembly is required) | |
transaction.end_of_wave | Provide settings for the end of wave handling. None means EOW markers are never inserted, Before means EOW markers are inserted before completing the transaction, After means EOW markers are inserted after completing the transaction. Values: [after, before, none]. Default: none | |
error_queue | Switch that controls whether to use error queue. Default: false | |
header_fields_filter.expiry | List of acceptable expiry interval values and ranges (in tenths of seconds) for source messages | |
message_options.extract_key | Switch that controls whether to extract a binary value from the message payload and provide it on output through the column that has the WSMQ.EXTRACTEDKEY data element specified. Default: false | |
header_fields_filter | Switch that controls whether to filter source messages based on the provided filtering criteria. Default: false | |
pub_sub.deregistration.subscriber | General deregistration options for the subscriber. Values: [correlation_id_as_identity, deregister_all, leave_only, variable_user_id] | |
pub_sub.registration.subscriber_general | General registration options for the subscriber. Note: Option [Anonymous] is ignored in MQRFH2 service mode. Values: [anonymous, correlation_id_as_identity, duplicates_ok, local, new_publications_only] | |
other_queue_settings.alternate_security_id.hex | Switch that controls whether the provided alternate security identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Since the only currently supported Alternate security ID values are 40-byte long values that contain the Windows SID of the user, this property value should always be set to Yes. Default: true | |
header_fields_filter.accounting_token.hex | Switch that controls whether the provided accounting token value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
header_fields_filter.correl_id.hex | Switch that controls whether the provided correlation identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
header_fields_filter.group_id.hex | Switch that controls whether the provided group identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
header_fields_filter.msg_id.hex | Switch that controls whether the provided message identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
pub_sub.registration.subscriber_identity | Identity-related registration options for the subscriber. Values: [add_name, join_exclusive, join_shared, no_alteration, variable_user_id] | |
message_options.extract_key.key_length * | Length in bytes of the binary value to extract. The value -1 specifies that the binary key value should contain all bytes from the specified key offset to the end of the message payload. Default: 0 | |
message_options.extract_key.key_offset * | Offset in bytes in the message payload from which to extract the binary key value. Default: 0 | |
header_fields_filter.msg_flags.must_match_all | Switch that controls whether source messages must have ALL of the specified message flag values or ANY of the specified message flag values in order to be accepted. Default: false | |
header_fields_filter.report.must_match_all | Switch that controls whether source messages must have ALL of the specified report values or ANY of the specified report values in order to be accepted. Default: false | |
work_queue.monitor_queue_depth.max_queue_depth * | Maximum queue depth that the connector allows for the queue | |
transaction.message_controlled | Settings for the module and function that the connector should invoke for each input message to determine whether the transaction should be committed after each message. Default: false | |
message_options.message_conversion | Switch that controls whether to perform conversion of numeric and character data in source messages. Default: false | |
message_options | Various options that control ordering, structure and access mode for the message. Default: false | |
message_options.message_order_and_assembly | Value that specifies how to retrieve message segments and group messages from the source queue. Values: [assemble_groups, assemble_logical_messages, individual_ordered, individual_unordered]. Default: individual_ordered | |
message_options.message_padding | Switch that controls whether to pad message payload column for source messages. When selected, message payload text column is padded with space character, and message payload binary column is padded with NULL byte value. Default: false | |
message_quantity | Number of messages to retrieve from the source queue. Note that this is the number of queue messages, not the number of rows. When message group assembly is required, each group counts as one message. Default: -1 | |
message_read_mode | Mode for reading source messages. Messages can be read within or outside of transaction, and they can be received destructively or kept on the source queue. Values: [delete, delete_under_transaction, keep, move_to_work_queue]. Default: delete_under_transaction | |
header_fields_filter.msg_seq_number | List of acceptable message sequence number values and ranges for source messages | |
message_options.message_truncation | Switch that controls whether to allow truncation of the source message payload so that it fits in the message payload column. When selected, message payload for text message column is truncated to the column length in characters, and message payload for binary payload column is truncated to the column size in bytes. Default: true | |
transaction.end_of_day.method_name * | Name of the method that determines whether a message represents a blocking transaction | |
transaction.message_controlled.method_name * | Name of the method | |
work_queue.monitor_queue_depth.min_queue_depth * | Minimum queue depth that the connector tries to maintain on the queue. | |
transaction.end_of_day.module_name * | Fully-qualified name of the module (shared library) that implements the method for identifying blocking transactions | |
transaction.message_controlled.module_name * | Fully-qualified name of the module (shared library) | |
work_queue.monitor_queue_depth | Switch that controls whether to monitor the work queue depth. Default: false | |
work_queue.name * | Name of the work queue. | |
header_fields_filter.offset | List of acceptable offset values and ranges for source messages | |
header_fields_filter.original_length | List of acceptable original length values and ranges for source messages (used for segments of report messages) | |
other_queue_settings | Additional settings for the source queue from which to receive messages. Default: false | |
header_fields_filter.msg_payload_size | List of acceptable message payload size values and ranges for source messages (format headers are not counted towards payload size) | |
refresh.period | Amount of time after which the cursor should be periodically rewound. When omitted, the cursor is rewound each time the end of the queue is reached. Default: -1 | |
header_fields_filter.persistence | List of acceptable persistence values and ranges for source messages. Values: [as_in_queue_definition, not_persistent, persistent] | |
pub_sub.registration.subscriber_persistence | Persistence registration options for the subscriber. Values: [non_persistent, persistent, persistent_as_publish, persistent_as_queue]. Default: persistent_as_publish | |
header_fields_filter.priority | List of acceptable priority values and ranges for source messages | |
end_of_data_message_type.process_end_of_data_message | Switch that controls whether the end of data message should also be processed and provided on output. Default: true | |
pub_sub | Switch that controls whether the connector is in Publish/Subscribe mode of operation. Default: false | |
header_fields_filter.put_appl_name | List of acceptable put application name values for source messages | |
header_fields_filter.put_date | List of acceptable put date values and ranges for source messages (in YYYYMMDD format) | |
header_fields_filter.put_time | List of acceptable put time values and ranges for source messages (in HHMMSSTH format) | |
error_queue.queue_manager_name | Name of the queue manager that hosts the error queue. When the value is not provided, the connector assumes that the error queue resides on the currently connected queue manager. | |
error_queue.name * | Name of the error queue. | |
pub_sub.pub_sub_dynamic_reply_to_queue.name * | Name of the dynamic reply queue. Use an asterisk to specify an incomplete name; the part of the name to the right of the asterisk (including the asterisk) will be generated automatically by the queue manager. Only one asterisk character may be specified. If specified, it must be the last character in the value and its position must be between 1 and 33 (inclusive). Default: * | |
queue_name | Name of the source queue from which to receive messages. In publish/subscribe mode this is used as the subscriber queue. Note: if dynamic queue options are specified, this value is the name of the model queue to use as template for creating the dynamic queue | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 0 | |
refresh | Switch that controls whether to periodically rewind the cursor on the source queue. Default: false | |
pub_sub.registration | Switch that controls whether to register with the broker when the job starts. Default: false | |
message_options.remove_mqrfh2header | Switch that controls whether to remove the MQRFH2 header. When selected, only the message body is passed. Default: false | |
pub_sub.pub_sub_reply_to_queue | Name of the queue to which the broker should send replies for the command messages. Note: If dynamic reply queue usage is specified, the value specified here is used as the model queue name | |
header_fields_filter.reply_to_q | List of acceptable reply to queue values for source messages | |
header_fields_filter.reply_to_q_mgr | List of acceptable reply to queue manager values for source messages | |
pub_sub.service_type | Rules and formatting header version to use for command messages. Values: [mqrfh, mqrfh2]. Default: mqrfh | |
pub_sub.stream_name | Stream name value to specify when registering and/or deregistering the subscriber. If neither registration nor deregistration is enabled, this value is ignored. Default: SYSTEM.BROKER.DEFAULT.STREAM | |
pub_sub.sub_identity | Subscription identity value to specify when registering and/or deregistering the subscriber. If neither registration nor deregistration is enabled, this value is ignored | |
pub_sub.sub_name | Subscription name value to specify when registering and/or deregistering the subscriber. If neither registration nor deregistration is enabled, this value is ignored | |
pub_sub.sub_point | Subscription point value to specify when registering and/or deregistering the subscriber. If neither registration nor deregistration is enabled, this value is ignored | |
header_fields_filter.feedback.system_value | Acceptable feedback and reason code system values for source report messages. Values: [confirm_on_arrival, confirm_on_delivery, expiration, message_too_big_for_queue_mqrc, message_too_big_for_queue_manager_mqrc, negative_action_notification, none, not_authorized_mqrc, persistent_not_allowed_mqrc, positive_action_notification, put_inhibited_mqrc, queue_full_mqrc, queue_space_not_available_mqrc, quit] | |
header_fields_filter.format.system_value | Acceptable format system values for source messages. Values: [mqadmin, mqchcom, mqcics, mqcmd1, mqcmd2, mqdead, mqevent, mqhdist, mqhmde, mqhref, mqhrf, mqhrf2, mqhwih, mqims, mqimsvs, mqnone, mqpcf, mqstr, mqtrig, mqxmit] | |
header_fields_filter.msg_type.system_value | Acceptable message type system values for source messages. Values: [datagram, reply, report, request] | |
header_fields_filter.put_appl_type.system_value | Acceptable put application type system values for source messages. Values: [aix,_unix, broker, channelinitiator, cics, cicsbridge, cicsvse, dos, dqm, guardian,_nsk, ims, imsbridge, java, mvs,_os390,_zos, nocontext, notesagent, os2, os400, qmgr, unknown, user, vms, vos, windows, windowsnt, xcf] | |
transaction.time_interval | Number of seconds per transaction. The value 0 means unlimited time. Default: 0 | |
transaction.end_of_day.timeout | Maximum amount of time to wait for a blocking transaction to complete. The value -1 specifies unlimited time. The operation fails if the transaction does not complete within the specified time. Default: -1 | |
pub_sub.deregistration.deregistration_topic | Topic(s) for deregistration | |
pub_sub.registration.registration_topic * | Topic(s) for registration | |
error_queue.tranmission_queue_name | Name of the transmission queue to use when the error queue is a remote queue. If the value is not specified, the default transmission queue is used. | |
message_options.treat_eol_as_row_terminator | Switch that controls whether end-of-line character in the message payload should be treated as row terminator. When selected, each source message may result in multiple rows of data. Default: false | |
header_fields_filter.accounting_token.use_wildcard | Switch that controls whether the initial asterisk in the provided accounting token value should be treated as a wildcard rather than plain text. Default: false | |
header_fields_filter.correl_id.use_wildcard | Switch that controls whether the initial asterisk in the provided correlation identifier value should be treated as a wildcard rather than plain text. Default: false | |
header_fields_filter.group_id.use_wildcard | Switch that controls whether the initial asterisk in the provided group identifier value should be treated as a wildcard rather than plain text. Default: false | |
header_fields_filter.msg_id.use_wildcard | Switch that controls whether the initial asterisk in the provided message identifier value should be treated as a wildcard rather than plain text. Default: false | |
header_fields_filter.user_identifier | List of acceptable user identifier values for source messages | |
other_queue_settings.alternate_security_id.value | The value for the alternate security identifier. | |
header_fields_filter.accounting_token.value | Acceptable accounting token value for source messages. | |
header_fields_filter.correl_id.value | Acceptable correlation identifier value for source messages. | |
header_fields_filter.group_id.value | Acceptable group identifier value for source messages. | |
header_fields_filter.msg_id.value | Acceptable message identifier value for source messages. | |
header_fields_filter.msg_flags.value | Acceptable message flag values for source messages. Values: [last_message_in_group, last_segment, message_in_group, segment, segmentation_allowed] | |
header_fields_filter.report.value | Acceptable report values for source messages. Values: [confirm_on_arrival, confirm_on_arrival_with_data, confirm_on_arrival_with_full_data, confirm_on_delivery, confirm_on_delivery_with_data, confirm_on_delivery_with_full_data, discard_message, exception, exception_with_data, exception_with_full_data, expiration, expiration_with_data, expiration_with_full_data, negative_action_notification, pass_correlation_id, pass_message_id, positive_action_notification] | |
wait_time | Maximum amount of time (in seconds) to wait for a new message to arrive on the source queue. Default: -1 |
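For illustration, a source binding for this connector might look like the following sketch. The wait time value and the dotted-key spelling of the nested properties are assumptions; verify them against your environment before use:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "wait_time":30, "message_options.treat_eol_as_row_terminator":true, "header_fields_filter.msg_type.system_value":"request" }, "ref":"{connection_id}" } }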
Name | Type | Description |
---|---|---|
other_queue_settings.alternate_user_id | Alternate user identifier to specify when opening the queue. | |
header_fields_setter.appl_identity_data | Application identity data value for destination messages | |
header_fields_setter.appl_origin_data | Application origin data to set for the destination messages | |
other_queue_settings.cluster_queue.binding_mode | Binding mode to use when selecting physical queue instance from the cluster. Options are to resolve the instance when opening the shared cluster queue or each time a message is sent to the shared cluster queue. Additionally, the default binding mechanism for the shared cluster queue may be used. Values: [as_in_queue_definition, not_fixed, on_open]. Default: as_in_queue_definition | |
other_queue_settings.cluster_queue | Switch that controls whether to access destination queue as a shared cluster queue. Default: false | |
header_fields_setter.coded_char_set_id | Coded character set identifier value to set for destination messages. The values 0, -1 and -2 are used to specify the special 'DEFAULT', 'INHERIT' and 'EMBEDDED' coded character set identifier values. Default: 0 | |
context_mode | Context mode to use when opening destination queue. The value should be chosen based on whether destination messages will include identity context fields (Set identity), origin context fields (Set all) or none of the context fields (None). Values: [none, set_all, set_identity]. Default: none | |
error_queue.context_mode | Context mode to use when opening error queue. The value should be chosen based on whether messages that may end up on the error queue should preserve identity context fields (Set identity), origin context fields (Set all) or none of the context fields (None). Values: [none, set_all, set_identity]. Default: none | |
pub_sub.deregistration.deregistration_correl_id | Correlation identifier for deregistration | |
pub_sub.registration.registration_correl_id | Correlation identifier for registration | |
pub_sub.publish.publication_format.custom_value | Format custom value to set for the publication payload | |
header_fields_setter.feedback.custom_value | Feedback and reason code custom value to set for destination messages | |
header_fields_setter.format.custom_value | Format custom value to set for destination messages | |
header_fields_setter.msg_type.custom_value | Message type custom value to set for destination messages | |
header_fields_setter.put_appl_type.custom_value | Put application type custom value for destination messages | |
pub_sub.deregistration | Switch that controls whether to deregister with the broker when the job ends. Default: false | |
other_queue_settings.dynamic_queue | Switch that controls whether to open the queue as dynamic queue. To open the queue as a dynamic queue, use a model queue name where a regular queue name was specified. Default: false | |
pub_sub.pub_sub_dynamic_reply_to_queue | Switch that controls whether to open the reply queue as dynamic queue. Default: false | |
header_fields_setter.encoding | Encoding value to set for destination messages. The value -1 is used to specify the special 'NATIVE' encoding value. Default: -1 | |
error_queue | Switch that controls whether to use error queue. Default: false | |
header_fields_setter.expiry | Expiry interval value (in tenths of seconds) to set for destination messages. Default: -1 | |
pub_sub.deregistration.publisher | General deregistration options for the publisher. Values: [correlation_id_as_identity, deregister_all] | |
pub_sub.registration.publisher | General registration options for the publisher. Values: [anonymous, correlation_id_as_identity, local] | |
header_fields_setter.version | Header version value to set for destination messages. Values: [1, 2]. Default: 2 | |
other_queue_settings.alternate_security_id.hex | Switch that controls whether the provided alternate security identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Since the only currently supported Alternate security ID values are 40-byte long values that contain the Windows SID of the user, this property value should always be set to Yes. Default: true | |
header_fields_setter.accounting_token.hex | Switch that controls whether the provided accounting token value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
header_fields_setter.correl_id.hex | Switch that controls whether the provided correlation identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
header_fields_setter.group_id.hex | Switch that controls whether the provided group identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
header_fields_setter.msg_id.hex | Switch that controls whether the provided message identifier value should be treated as an array of hex-digit pairs rather than a plain text value. Default: false | |
pub_sub.publish.message_content_descriptor | Switch that controls whether to include the message content descriptor (mcd) service folder in the published messages | |
header_fields_setter.msg_flags | Message flags to set for destination messages. Note: if message segmentation was selected, the connector will automatically set the offset values on the generated message segments. The flags for Segmentation Allowed, Segment and Last Segment in this property will be ignored. Values: [last_message_in_group, last_segment, message_in_group, segment, segmentation_allowed] | |
message_options | Various options that control ordering, structure and access mode for the message. Default: false | |
header_fields_setter.msg_seq_number | Message sequence number to set for destination messages. Default: 1 | |
pub_sub.publish.msg_seq_number | Switch that controls whether to update message sequence number in the published messages. Default: false | |
pub_sub.publish.message_content_descriptor.message_service_domain * | Service domain for the publication messages. Values: [idoc, mrm, none, xml, xmlns]. Default: mrm | |
pub_sub.publish.message_content_descriptor.message_set | Name of the message set for the publication messages | |
pub_sub.publish.message_content_descriptor.message_type | Name of the message type for the publication messages | |
message_write_mode | Mode for writing destination messages. Messages can be written within or outside of a transaction, and the connector can be configured to send only messages that have a non-zero length payload. Values: [create, create_under_transaction, create_on_content, create_on_content_under_transaction]. Default: create_under_transaction | |
header_fields_setter.offset | Offset value to set for destination messages. Note: if message segmentation was selected, the connector will automatically set the offset values on the generated message segments. The value of this property will be ignored. Default: 0 | |
other_queue_settings | Additional settings for the source queue from which to receive messages. Default: false | |
header_fields_setter.persistence | Persistence value to set for destination messages. Values: [as_in_queue_definition, not_persistent, persistent]. Default: as_in_queue_definition | |
pub_sub.publish.message_content_descriptor.mrm_physical_format | Name of the MRM physical format in the specified message set used for the publication messages | |
header_fields_setter.priority | Priority value to set for destination messages. Default: -1 | |
pub_sub.publish.publication | Publication options to specify when publishing messages. Note: Option [No Registration] is ignored in MQRFH2 service mode. Values: [correlation_id_as_identity, no_registration, retain_publication] | |
pub_sub | Switch that controls whether the connector is in Publish/Subscribe mode of operation. Default: false | |
header_fields_setter.put_appl_name | Put application name value for destination messages | |
header_fields_setter.put_date | Put date value to set for destination messages (in YYYYMMDD format) | |
header_fields_setter.put_time | Put time value to set for destination messages (in HHMMSSTH format) | |
error_queue.queue_manager_name | Name of the queue manager that hosts the error queue. When the value is not provided, the connector assumes that the error queue resides on the currently connected queue manager. | |
other_queue_settings.cluster_queue.queue_manager_name | Name of the cluster queue manager. If the value is not specified, the queue manager is selected dynamically from the cluster. | |
error_queue.name * | Name of the error queue. | |
other_queue_settings.dynamic_queue.name * | Name of the dynamic queue. Use asterisk to specify incomplete name (stem). Part of the name to the right of the asterisk (including the asterisk) will be automatically generated by the queue manager. Only one asterisk character may be specified. If specified, it must be the last character in the value and its position must be between 1 and 33 (inclusive). Default: * | |
pub_sub.pub_sub_dynamic_reply_to_queue.name * | Name of the dynamic reply queue. Use asterisk to specify incomplete name. Part of the name to the right of the asterisk (including the asterisk) will be automatically generated by the queue manager. Only one asterisk character may be specified. If specified, it must be the last character in the value and its position must be between 1 and 33 (inclusive). Default: * | |
queue_name | Name of the source queue from which to receive messages. In publish/subscribe mode this is used as the subscriber queue. Note: if dynamic queue options are specified, this value is the name of the model queue to use as template for creating the dynamic queue | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 0 | |
pub_sub.registration | Switch that controls whether to register with the broker when the job starts. Default: false | |
pub_sub.publish.registration | Registration options to specify when publishing messages. Note: Option [Anonymous] is ignored in MQRFH2 service mode. The remaining registration options are used in MQRFH2 service mode as publication options. Values: [anonymous, correlation_id_as_identity, local] | |
pub_sub.pub_sub_reply_to_queue | Name of the queue to which the broker should send replies for the command messages. Note: If dynamic reply queue usage is specified, the value specified here is used as the model queue name | |
other_queue_settings.dynamic_queue.close_options | Close options to use when closing dynamic reply queue in request/reply mode of operation. Values: [delete, none, purge_and_delete]. Default: none | |
header_fields_setter.reply_to_q | Reply to queue value to set for destination messages. In request/reply mode, if dynamic reply queue is used, this value is the name of the model queue to use for the dynamic reply queue | |
header_fields_setter.reply_to_q_mgr | Reply to queue manager value to set for destination messages | |
header_fields_setter.report | Report values to set for destination messages. Values: [confirm_on_arrival, confirm_on_arrival_with_data, confirm_on_arrival_with_full_data, confirm_on_delivery, confirm_on_delivery_with_data, confirm_on_delivery_with_full_data, discard_message, exception, exception_with_data, exception_with_full_data, expiration, expiration_with_data, expiration_with_full_data, negative_action_notification, pass_correlation_identifier, pass_message_identifier, positive_action_notification] | |
message_options.row_buffer_count | Number of rows that the connector buffers before sending a message with a payload composed of a concatenation of the buffered rows. Message header fields and message format headers (if any) from the first buffered row are used for the composite destination message. Note that if a Record count value (under Transaction settings) is specified, that value must be a multiple of the Row buffer count value. Default: 1 | |
message_options.create_segmented_message.segment_size * | Size of each segment in bytes. The last segment to be created may have a smaller size than the specified value. Default: 1024 | |
message_options.create_segmented_message | Switch that controls whether to separate data for the destination message into segments and to send those separate segments rather than a single message to the destination queue. Note that if an error occurs while sending some of the segments, the whole input message will be sent to the error queue (if defined) or to the reject link (if defined), rather than the individual segments. Default: false | |
pub_sub.service_type | Rules and formatting header version to use for command messages. Values: [mqrfh, mqrfh2]. Default: mqrfh | |
header_fields_setter | Switch that controls whether to override specified message header fields for destination messages. Default: false | |
message_options.set_message_id_column_value | Switch that controls whether the message ID should be set to the value of the column with the WSMQ.MSGID data element. Default: false | |
pub_sub.publish.msg_seq_number.start_value | The initial message sequence number used for the first published message and regularly incremented for the subsequent published messages. Default: 1 | |
pub_sub.publish.publication_format.system_value | Format system value to set for the publication payload. Values: [mqadmin, mqchcom, mqcics, mqcmd1, mqcmd2, mqdead, mqevent, mqhdist, mqhmde, mqhref, mqhrf, mqhrf2, mqhwih, mqims, mqimsvs, mqnone, mqpcf, mqstr, mqtrig, mqxmit]. Default: mqstr | |
header_fields_setter.feedback.system_value | Feedback and MQRC (reason code) system value to set for destination messages. Values: [confirm_on_arrival, confirm_on_delivery, expiration, message_too_big_for_queue_mqrc, message_too_big_for_queue_manager_mqrc, negative_action_notification, none, not_authorized_mqrc, persistent_not_allowed_mqrc, positive_action_notification, put_inhibited_mqrc, queue_full_mqrc, queue_space_not_available_mqrc, quit]. Default: none | |
header_fields_setter.format.system_value | Format system value to set for destination messages. Values: [mqadmin, mqchcom, mqcics, mqcmd1, mqcmd2, mqdead, mqevent, mqhdist, mqhmde, mqhref, mqhrf, mqhrf2, mqhwih, mqims, mqimsvs, mqnone, mqpcf, mqstr, mqtrig, mqxmit]. Default: mqstr | |
header_fields_setter.msg_type.system_value | Message type system value to set for destination messages. Values: [datagram, reply, report, request]. Default: datagram | |
header_fields_setter.put_appl_type.system_value | Put application type system value for destination messages. Values: [aix,_unix, broker, channelinitiator, cics, cicsbridge, cicsvse, dos, dqm, guardian,_nsk, ims, imsbridge, java, mvs,_os390,_zos, nocontext, notesagent, os2, os400, qmgr, unknown, user, vms, vos, windows, windowsnt, xcf]. Default: nocontext | |
pub_sub.publish.timestamp | Switch that controls whether to include timestamps in the published messages. Default: false | |
pub_sub.deregistration.deregistration_topic | Topic(s) for deregistration | |
pub_sub.publish.publish_topic | Topic of the publication message | |
pub_sub.registration.registration_topic * | Topic(s) for registration | |
error_queue.tranmission_queue_name | Name of the transmission queue to use when the error queue is a remote queue. If the value is not specified, the default transmission queue is used. | |
other_queue_settings.transmission_queue_name | Name of the transmission queue to use when the destination queue is a remote queue. If the value is not specified, the default transmission queue is used. | |
header_fields_setter.user_identifier | User identifier value to set for destination messages | |
other_queue_settings.alternate_security_id.value | The value for the alternate security identifier. | |
header_fields_setter.accounting_token.value | Accounting token value for destination messages | |
header_fields_setter.correl_id.value | Correlation identifier value for destination messages | |
header_fields_setter.group_id.value | Group identifier value for destination messages | |
header_fields_setter.msg_id.value | Message identifier value for destination messages |
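As a sketch only, the write-side properties can be combined as shown below. The binding wrapper for a target is omitted, the queue name is a placeholder, and the dotted-key spelling of nested properties is an assumption:
{ "properties":{ "queue_name":"DEV.TARGET.QUEUE", "message_write_mode":"create_under_transaction", "header_fields_setter":true, "header_fields_setter.msg_type.system_value":"datagram", "header_fields_setter.persistence":"persistent" } }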
Name | Type | Description |
---|---|---|
api_key * | API key from the user account | |
crn | Cloud Resource Name. To find the CRN, go to the [Resource list] at https://cloud.ibm.com/resources. Expand [Services and software]. Select the [IBM Match 360 with Watson] instance, and click the Location column. |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
model_type_name * | The IBM Match 360 data model type, such as records or relationships. Values: [records, relationships]. Default: records | |
record_sub_type_name | The IBM Match 360 record subtype, such as customers, patients, or households. This value only applies when the data category is records | |
model_sub_type_name * | The IBM Match 360 data model subtype. This value can be a record type (such as person or organization) or a relationship type (such as spouse or doctor-patient) | |
row_limit | The maximum number of rows to return |
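For example, a source binding that reads person records might look like the following sketch; the subtype value and the numeric row limit are illustrative only:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "model_type_name":"records", "model_sub_type_name":"person", "row_limit":1000 }, "ref":"{connection_id}" } }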
Name | Type | Description |
---|---|---|
model_type_name * | The IBM Match 360 data model type, such as records or relationships. Values: [records, relationships]. Default: records | |
record_sub_type_name | The IBM Match 360 record subtype, such as customers, patients, or households. This value only applies when the data category is records | |
model_sub_type_name * | The IBM Match 360 data model subtype. This value can be a record type (such as person or organization) or a relationship type (such as spouse or doctor-patient) |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
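To show how the sampling options combine with a table read, a source binding might be sketched as follows; the schema and table names are placeholders and the numeric value types are assumptions:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "sampling_type":"random", "sampling_percentage":10, "sampling_seed":42 }, "ref":"{connection_id}" } }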
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
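A minimal sketch of a merge-style write, with the binding wrapper omitted and the schema, table, and key column names used as placeholders:
{ "properties":{ "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "write_mode":"merge", "key_column_names":"EMPLOYEE_ID", "table_action":"append" } }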
Name | Type | Description |
---|---|---|
database * | Specify the name of the database to connect to. | |
use_separate_connection_for_twt.database * | The name of the database for ETs and TWT. | |
hostname * | A database hostname to connect to. | |
password * | Specify the password to use to connect to the database. | |
use_separate_connection_for_twt.password | Password for authentication purposes. | |
port * | The port that the database process is listening on. Default: 5480 | |
use_separate_connection_for_twt | Use a separate connection for creating/dropping/accessing External tables (ETs) and the Temporary Work Table (TWT). Default: false | |
use_separate_connection_for_twt.username | Name of the user for authentication purposes. | |
username * | Specify the user name to use to connect to the database. |
Name | Type | Description |
---|---|---|
before_after_sql.after_sql | One or more statements to be executed after the connector has finished processing all input rows. Multiple statements are separated by semicolons. Executed once from the conductor node. | |
before_after_sql.after_sql_node | One or more statements to be executed after the connector has finished processing all input rows. Multiple statements are separated by semicolons. Executed once in each processing node. (Parallel canvas only) | |
session.array_size | Enter a number that represents the number of records to process in read and write operations on the database. Enter 0 to process all records in a single array. Enter 1 to process one record at a time. Default: 2000 | |
before_after_sql.after_sql.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.after_sql_node.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.before_sql.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.before_sql_node.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.before_sql | One or more statements to be executed before the connector starts processing any input rows. Multiple statements are separated by semicolons. Executed once from the conductor node. | |
before_after_sql.before_sql_node | One or more statements to be executed before the connector starts processing any input rows. Multiple statements are separated by semicolons. Executed once in each processing node. (Parallel canvas only) | |
before_after_sql | Setting it to Yes enables child properties for specifying Before and After SQL statements. Default: false | |
session.unload_options.directory_for_named_pipe | Specifies the directory for the named pipe on Unix. It is ignored on Windows. If it is left blank, the connector will use the value of the environment variable TMPDIR. If TMPDIR is not defined, it will default to /tmp. | |
enable_case_sensitive_i_ds | If set to Yes, table and column names will be assumed to be case sensitive. Default: true | |
sql.enable_partitioned_reads | If set to Yes, the connector will allow reading in parallel. The level of parallelism will be determined by the number of nodes in the APT configuration file. If set to No, the connector will force sequential execution. Default: false | |
before_after_sql.after_sql.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
before_after_sql.after_sql_node.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
before_after_sql.before_sql.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
before_after_sql.before_sql_node.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
generate_sql | Indicates whether the connector should generate a SELECT statement or use the provided SQL statement. Default: true | |
limit_rows.limit | Enter the maximum number of rows that will be returned by the connector. Default: 1000 | |
limit_rows | Select Yes to limit the number of rows that are returned by the connector. Default: false | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
transaction.mark_end_of_wave | If set to Yes, the connector emits an end-of-wave marker after the specified number of rows (Record count) is read from the Netezza server. If set to No, the connector will not emit end-of-wave markers. Default: false | |
before_after_sql.after_sql.fail_on_error.log_level_for_after_sql | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
before_after_sql.after_sql_node.fail_on_error.log_level_for_after_sql_node | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
before_after_sql.before_sql.fail_on_error.log_level_for_before_sql | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
before_after_sql.before_sql_node.fail_on_error.log_level_for_before_sql_node | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
session.schema_reconciliation.mismatch_reporting_action | The type of message that will be logged if one or more columns are unmatched or mismatched. Values: [info, none, warning]. Default: warning | |
session.schema_reconciliation.mismatch_reporting_action_source | The type of message that will be logged if one or more columns are unmatched or mismatched. Values: [info, none, warning]. Default: warning | |
before_after_sql.after_sql.read_from_file | If set to Yes, the After SQL property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.after_sql_node.read_from_file | If set to Yes, the After SQL (Node) property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.before_sql.read_from_file | If set to Yes, the Before SQL property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.before_sql_node.read_from_file | If set to Yes, the Before SQL (Node) property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
sql.select_statement.read_user_defined_sql_from_file | If set to Yes, the User-defined SQL property specifies a file name/path containing the SQL statement. Default: false | |
transaction.record_count | Indicates the number of records (rows) in a single wave. Default: 2000 | |
sql.select_statement * | Enter a SELECT statement. The statement is used to read rows from the database. | |
table_name * | The name of the target table. This table name will be used in the generated SQL statement(s). Never enter the name with quotes. | |
session.schema_reconciliation.type_mismatch_action | Action to take upon detecting a type mismatch. The value Fail will cause the job to abort. Values: [drop, fail, keep]. Default: drop | |
session.schema_reconciliation.type_mismatch_action_source | Action to take upon detecting a type mismatch. The value Fail will cause the job to abort. Values: [drop, fail, keep]. Default: drop | |
session.schema_reconciliation.unmatched_link_column_action_request_input_link | Action to take when an input link column does not match any columns in the table. Values: [drop, fail]. Default: drop | |
session.schema_reconciliation.unmatched_link_column_action_source | Action to take when an input link column does not match any columns in the table. Values: [drop, fail]. Default: drop | |
session.schema_reconciliation.unmatched_table_or_query_column_action_request | Action to take when a table or a query column does not match any output link columns. Values: [fail, ignore]. Default: ignore | |
session.schema_reconciliation.unmatched_table_or_query_column_action_source | Action to take when a table or a query column does not match any output link columns. Values: [fail, ignore]. Default: ignore |
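For illustration only, a read that supplies its own SQL and caps the result set might use properties like the following; the SELECT text is a placeholder and the dotted-key spelling of nested properties is an assumption:
{ "properties":{ "generate_sql":false, "sql.select_statement":"SELECT * FROM MYSCHEMA.MYTABLE", "limit_rows":true, "limit_rows.limit":500 } }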
Name | Type | Description |
---|---|---|
sql.action_column * | The name of the char(1) column that identifies which action the row should participate in. | |
before_after_sql.after_sql | One or more statements to be executed after the connector has finished processing all input rows. Multiple statements are separated by semicolons. Executed once from the conductor node. | |
before_after_sql.after_sql_node | One or more statements to be executed after the connector has finished processing all input rows. Multiple statements are separated by semicolons. Executed once in each processing node. (Parallel canvas only) | |
before_after_sql.after_sql.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.after_sql_node.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.before_sql.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
before_after_sql.before_sql_node.fail_on_error.atomic_mode | If the atomic mode is yes, execute all statements in one transaction. Otherwise, execute each statement as a separate transaction. Default: true | |
sql.atomic_mode | When set to Yes, all write mode statements will be executed in one transaction. If set to No, each statement will be executed in a separate transaction. Default: true | |
before_after_sql.before_sql | One or more statements to be executed before the connector starts processing any input rows. Multiple statements are separated by semicolons. Executed once from the conductor node. | |
before_after_sql.before_sql_node | One or more statements to be executed before the connector starts processing any input rows. Multiple statements are separated by semicolons. Executed once in each processing node. (Parallel canvas only) | |
before_after_sql | Setting it to Yes enables child properties for specifying Before and After SQL statements. Default: false | |
sql.check_duplicate_rows | When set to Yes, the connector detects duplicate rows. When set to No (Default), the connector does not detect duplicate rows. Default: false | |
sql.enable_record_ordering.order_key.column_name | The name of the column representing the ordering key | |
session.temporary_work_table.create_statement | Specifies a user-defined CREATE TABLE statement. | |
table_action.generate_create_statement.create_statement * | Specifies a user-defined CREATE TABLE statement. | |
sql.direct_insert | If set to Yes, the connector inserts directly into the target table. In this mode, when running with multiple processing nodes, it is possible to have partially committed data if one or more of the processing nodes encounters an error. If set to No, the connector inserts into the temporary work table (TWT) first and then from the TWT into the target. In this mode the data will either be completely committed or completely rolled back, guaranteeing consistency. Default: false | |
session.load_options.directory_for_log_files | Specifies the directory for the nzlog and nzbad files. If it is left blank, the connector will use the value of the environment variable TMPDIR. If TMPDIR is not defined, it will default to /tmp on Unix and to system temporary directory on Windows. | |
session.load_options.directory_for_named_pipe | Specifies the directory for the named pipe on Unix. It is ignored on Windows. If it is left blank, the connector will use the value of the environment variable TMPDIR. If TMPDIR is not defined, it will default to /tmp. | |
table_action.generate_create_statement.distribution_key | If set to Automatic, the Netezza server will choose the key. There is no guarantee of which columns will be used, and the behavior can vary between Netezza software releases. If set to Random, rows will be sent to processing nodes in a random fashion (key-less). If set to User-defined, the columns listed in Key columns below will be used as a distribution key. This property only applies to a generated CREATE TABLE statement (run-time or design-time) and has no effect if the statement is entered manually. Values: [automatic, random, user-defined]. Default: random | |
table_action.generate_drop_statement.drop_statement * | Specifies a user-defined DROP TABLE statement. | |
session.temporary_work_table.drop_table | If set to Yes, the connector will drop the temporary work table. Default: true | |
sql.check_duplicate_rows.duplicate_row_action | When set to Filter (Default), the connector filters out duplicate rows. When set to Fail, the job fails if any duplicate rows are found without making changes to the target table. Values: [fail, filter]. Default: filter | |
enable_case_sensitive_i_ds | If set to Yes, table and column names will be assumed to be case sensitive. Default: false | |
sql.enable_record_ordering | Setting it to Yes enables record ordering and enables child properties for specifying order columns. Default: false | |
session.temporary_work_table.enable_merge_join | Sets the ENABLE_MERGEJOIN configuration parameter. If set to Yes, the connector will enable the Netezza query planner's use of merge-join plan types. If set to No, the planner will choose one of the other algorithms, including hash-join as a highly optimized one. If set to Database default, the property will not be set before query execution, using the current database setting. Values: [database_default, no, yes]. Default: database_default | |
before_after_sql.after_sql.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
before_after_sql.after_sql_node.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
before_after_sql.before_sql.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
before_after_sql.before_sql_node.fail_on_error | If set to Yes, the job will be aborted if the statement fails. If set to No, the statement errors will be ignored. Default: true | |
table_action.generate_create_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_drop_statement.fail_on_error | Abort the job if there is an error executing a command. Default: false | |
table_action.generate_truncate_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_create_statement | If set to Yes, the CREATE TABLE statement will be generated by the connector at runtime based on the columns in the input link and the table name provided in the Table name property. Default: true | |
table_action.generate_drop_statement | If set to Yes, the DROP TABLE statement will be generated at runtime. If set to No, the user needs to provide a custom statement in the Drop statement property. Default: true | |
session.load_options.generate_statistics | If set to Yes, the connector will generate statistics. If set to No, no statistics will be generated. Default: false | |
session.load_options.generate_statistics.generate_statistics_mode | Choose whether to generate statistics for a table or the whole database. Values: [database, table]. Default: table | |
session.load_options.generate_statistics.generate_statistics_columns | Select the columns to generate the statistics for. If no columns are selected (default state), all columns of the target table will be included. | |
table_action.generate_truncate_statement | If set to Yes, the TRUNCATE TABLE statement will be generated at runtime. Default: true | |
sql.key_columns * | A comma-separated list of key column names. | |
table_action.generate_create_statement.distribution_key.key_columns * | Comma-separated list of column names that should comprise the distribution key. | |
session.load_options.max_reject_count | The number of rejected records that are logged before the job aborts. Default: 1 | |
before_after_sql.after_sql.fail_on_error.log_level_for_after_sql | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
before_after_sql.after_sql_node.fail_on_error.log_level_for_after_sql_node | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
before_after_sql.before_sql.fail_on_error.log_level_for_before_sql | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
before_after_sql.before_sql_node.fail_on_error.log_level_for_before_sql_node | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
table_action.generate_create_statement.fail_on_error.log_level_for_create_statement | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
table_action.generate_drop_statement.fail_on_error.log_level_for_drop_statement | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
table_action.generate_truncate_statement.fail_on_error.log_level_for_truncate_statement | The type of message that will be logged if a SQL statement fails. Values: [info, none, warning]. Default: warning | |
session.schema_reconciliation.mismatch_reporting_action | The type of message that will be logged if one or more columns are unmatched or mismatched. Values: [info, none, warning]. Default: warning | |
session.load_options.other_options | Additional options to be passed to the external table create statement. | |
table_action.generate_create_statement.read_create_statement_from_file | If set to Yes, the Create statement property specifies a file name/path containing the SQL statement. Default: false | |
table_action.generate_drop_statement.read_drop_statement_from_file | If set to Yes, the Drop statement property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.after_sql.read_from_file | If set to Yes, the After SQL property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.after_sql_node.read_from_file | If set to Yes, the After SQL (Node) property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.before_sql.read_from_file | If set to Yes, the Before SQL property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
before_after_sql.before_sql_node.read_from_file | If set to Yes, the Before SQL (Node) property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
table_action.generate_truncate_statement.read_truncate_statement_from_file | If set to Yes, the Truncate statement property specifies a file name/path containing the SQL statement. Otherwise it specifies the actual SQL statement. Default: false | |
sql.user_defined_sql.read_user_defined_sql_from_file | If set to Yes, the User-defined SQL property specifies a file name/path containing the SQL statement. Default: false | |
table_action * | Select the action to perform before writing data to the table. Values: [append, create, replace, truncate]. Default: append | |
session.temporary_work_table.table_name * | The name of the temporary work table. | |
table_name * | The name of the target table. This table name will be used in the generated SQL statement(s). Never enter the name with quotes. | |
session.temporary_work_table | If set to Automatic, the connector will automatically create the temporary work table using an internally generated name. If set to User-defined, the connector will automatically create the temporary work table using the specified table name or create table statement. Values: [automatic, existing, user-defined]. Default: automatic | |
truncate_column_names | If set to Yes, the names of the input link columns will be truncated. Default: false | |
truncate_column_names.truncate_length * | The maximum length in characters of column names after truncation. Default: 128 | |
table_action.generate_truncate_statement.truncate_statement * | Specifies a user-defined TRUNCATE TABLE statement. | |
session.temporary_work_table.truncate_table | If set to Yes, the temporary work table is truncated before any data is written to it. Default: false | |
session.schema_reconciliation.type_mismatch_action | Action to take upon detecting a type mismatch. The value Fail will cause the job to abort. Values: [drop, fail, keep]. Default: drop | |
sql.use_unique_key_column.unique_key_column * | The name of the unique key column. | |
session.schema_reconciliation.unmatched_link_column_action | Action to take when an input link column does not match any columns in the table. Values: [drop, fail, keep]. Default: drop | |
session.schema_reconciliation.unmatched_table_column_action | Action to take when a table column does not match any input link columns. If an existing temporary work table is provided, its columns are checked against the input link columns. Otherwise target table columns are checked. Values: [fail, ignore_all, ignore_nullable]. Default: ignore_nullable | |
sql.update_columns | A comma-separated list of column names whose value will be updated by the statement(s). | |
sql.use_unique_key_column | When set to Yes, the connector will generate an update statement that uses the unique key specified below. When set to No, the connector will generate a simpler update statement that does not use a unique column. Default: false | |
sql.user_defined_sql * | These SQL statements in general should update the target table using the data in the temporary work table. | |
write_mode | Type of the SQL statement (or statements) to be executed. Values: [action_column, delete, delete_then_insert, insert, update, update_then_insert, user-defined_sql]. Default: insert |
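A minimal sketch of a write that recreates the target table with a random distribution key; the table name is a placeholder and the dotted-key spelling of nested properties is an assumption:
{ "properties":{ "table_name":"MYTABLE", "write_mode":"insert", "table_action":"create", "table_action.generate_create_statement":true, "table_action.generate_create_statement.distribution_key":"random" } }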
Name | Type | Description |
---|---|---|
auth_type | The type of authentication to be used to access the TM1 server API. Values: [bearer, cam_credentials, basic] | |
gateway_url | The URL of the gateway for Planning Analytics service | |
namespace * | The namespace to use for connecting to the TM1 server API | |
password * | The password associated with the username for accessing the data source | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
service_root * | The URL used to access the TM1 server API implementing the OData protocol | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
use_creation_order | Whether to read cube dimensions using creation order. Default: false | |
cube_name * | The cube to be processed | |
mdx_statement | The MDX statement to select a view | |
row_limit | The maximum number of rows to return | |
view_name * | The view to be processed | |
view_group * | The group that the view belongs to |
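For example, reading a named view from a cube might be sketched as follows; the cube, view, and view group names are placeholders:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "cube_name":"SalesCube", "view_name":"Default", "view_group":"Public" }, "ref":"{connection_id}" } }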
Name | Type | Description |
---|---|---|
cube_name * | The cube to be processed | |
mdx_statement | The MDX statement to select a view | |
view_name | The view to be processed | |
view_group | The group that the view belongs to | |
write_to_consolidation | If writing to consolidation |
Name | Type | Description |
---|---|---|
app_url | Product Master application cluster URL on Cloud Pak for Data | |
company * | Company name for IBM Product Master | |
password * | Password for IBM Product Master | |
username | Username for IBM Product Master |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
catalog_name * | Name of the Catalog | |
category_name | Identifier of the category associated with product | |
hierarchy_name | Name of the primary or secondary hierarchy | |
row_limit | The maximum number of rows to return |
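A hedged sketch of a catalog read; the catalog and hierarchy names are placeholders:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "catalog_name":"MyCatalog", "hierarchy_name":"MyHierarchy", "row_limit":1000 }, "ref":"{connection_id}" } }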
Name | Type | Description |
---|---|---|
catalog_name * | Name of the Catalog | |
category_name | Identifier of the category associated with product | |
hierarchy_name | Name of the primary or secondary hierarchy |
Name | Type | Description |
---|---|---|
api_key * | An application programming interface key that identifies the calling application or user | |
auth_method | ||
database * | The name of the database | |
host * | The hostname or IP address of the database | |
instance_id * | The 36-character instance ID of the Watson Query database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
inherit_access_token | Use your Cloud Pak for Data credentials to authenticate to the data source. Default: false | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
client_id * | The client ID for authorizing access to Looker | |
client_secret * | The password associated with the client ID for authorizing access to Looker | |
host * | The hostname or IP address of the Looker server | |
port | The port of the Looker server. Default: 19999 |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
file_format | The format of the file. Values: [csv, delimited, excel, json]. Default: csv | |
file_name * | The name of the file to read | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
sheet_name | The name of the Excel worksheet to read from |
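For instance, reading a cell range from an Excel worksheet might combine the options as in the following sketch; the file and sheet names are placeholders:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "file_name":"sales.xlsx", "file_format":"excel", "sheet_name":"Sheet1", "range":"C1:F10", "infer_schema":true }, "ref":"{connection_id}" } }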
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [delete, delete_insert, insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
connection_string * | Connection string from the storage account's Access keys page on the Microsoft Azure portal | |
container | The name of the container that contains the files to access |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
container | The name of the container that contains the files to read | |
container_source | Specify the container | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
delimited_syntax.field_formats.date_format | Specify a string that defines the format for fields that have the Date data type. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
delimited_syntax.field_formats.decimal_format | Specify a string that defines the format for fields that have the Decimal or Numeric data type. | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must not use the same character. If you encounter an error about them not being unique when only one of them was provided, specify the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must not use the same character. If you encounter an error about them not being unique when only one of them was provided, specify the other one explicitly. | |
delimited_syntax.record_def.record_def_source | Enter a delimited string that specifies the name, data type, and length of each field. Use the format name:data_type[length], and separate each field with the delimiter specified in the Field delimiter property. If the record definition is in a delimited string file or an OSH schema file, specify the full path of the file. | |
display_value_labels | Display the value labels | |
delimited_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
delimited_syntax.escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping is a string technique that identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
delimited_syntax.field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code.. Default: , | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_name_source * | Specify the file name to read from Azure | |
_file_format | Specify the format of the files to read or write. Values: [comma-separated_value_csv, delimited]. Default: delimited | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
filename_column | Specify the name of the column to write the source file name to. | |
first_line | Indicates the row at which to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
delimited_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
recurse | Specify whether to read files that are in child folders of the prefix that is specified for the File name property. Default: true | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
delimited_syntax.null_value | Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks). | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
delimited_syntax.quotes | The type of quotation marks that enclose field values, if any. Values: [double, none, single]. Default: none | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
_read_mode * | Select the Read mode. Values: [list_containers/fileshares, list_files, read_multiple_files, read_single_file]. Default: read_single_file | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
delimited_syntax.record_def | Select whether the record definition is provided to the connector from the source file, a delimited string, a file that contains a delimited string, or a schema file. When runtime column propagation is enabled, this metadata provides the column definitions. If a schema file is provided, the schema file overrides the values of formatting properties in the stage and the column definitions that are specified on the Columns page of the output link. Values: [delimited_string, delimited_string_in_a_file, file_header, none, schema_file]. Default: none | |
delimited_syntax.record_limit | Specify the maximum number of records to read from the file per node. If a value is not specified for this property, the entire file is read. | |
reject_mode | Specify what the connector does when a record that contains invalid data is found in the source file. Select Continue to read the rest of the file, Fail to stop the job with an error message, or Reject to send the rejected data to a reject link. Values: [continue, fail, reject]. Default: continue | |
delimited_syntax.row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata for the elements, for example, data type, values, minimum, and maximum | |
delimited_syntax.field_formats.time_format | Specify a string that defines the format for fields that have the Time data type. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
delimited_syntax.field_formats.timestamp_format | Specify a string that defines the format for fields that have the Timestamp data type. | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher |
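Only a subset of these properties is typically needed for a single read. For example, a source binding that reads a CSV file with a header row might look like the following sketch; the file name and the specific property choices are illustrative assumptions, not required values:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "file_name":"landing/products.csv", "file_format":"csv", "first_line_header":true, "infer_schema":true }, "ref":"{connection_id}" } }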
Name | Type | Description |
---|---|---|
wave_handling.append_uid | Specify whether to append a unique identifier to the file name. When set to Yes, the unique identifier is appended to the file name and a new file is written for every wave of data that is streamed into the stage. When set to No, the file is overwritten on every wave. Default: false | |
blob_type * | Type of blob to write. Values: [append, block, page]. Default: block | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
_container | Specify the container | |
container | The name of the container that contains the files to write to | |
parallel_write.temp_container | Specify the temporary container to use for temporary files during a parallel write. When no value is specified, the container that is specified in the Container option is used. | |
_create_container | Select this property to create the container if it doesn't exist. Default: false | |
create_container | Create the container that contains the files to write to. Default: false | |
parallel_write.create_temp_container | Use this option to create the container that stores temporary files. Default: false | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
delimited_syntax.field_formats.date_format | Specify a string that defines the format for fields that have the Date data type. | |
decimal_format | The format of decimal values, for example, #,###.## | |
delimited_syntax.field_formats.decimal_format | Specify a string that defines the format for fields that have the Decimal or Numeric data type. | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
parallel_write | Use this option to perform a parallel write in Blob Storage. Default: false | |
delimited_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
delimited_syntax.escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
delimited_syntax.field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: , | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
_file_name * | Specify the file name | |
_file_format | Specify the format of the files to read or write. Values: [comma-separated_value_csv, delimited]. Default: delimited | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
wave_handling.file_size_threshold | Specify the threshold for the file size in megabytes. Processing nodes will start a new file each time the size exceeds the value specified in the threshold. Default: 1 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
delimited_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
file_exists | Specify what the connector does when it tries to write a file that already exists. Select Overwrite file to overwrite a file if it already exists, Do not overwrite file to not overwrite the file and stop the job, or Fail to stop the job with an error message. Values: [do_not_overwrite_file, fail, overwrite_file]. Default: overwrite_file | |
delimited_syntax.encoding.output_bom | Specify whether to include a byte order mark in the file when the file encoding is a Unicode encoding such as UTF-8, UTF-16, or UTF-32. Default: false | |
delimited_syntax.header.include_types | Select Yes to append the data type to each field name that the connector writes in the first row of the output. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
delimited_syntax.null_value | Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks). | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
delimited_syntax.quotes | The type of quotation marks that enclose field values, if any. Values: [double, none, single]. Default: none | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
delimited_syntax.row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
delimited_syntax.field_formats.time_format | Specify a string that defines the format for fields that have the Time data type. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
delimited_syntax.field_formats.timestamp_format | Specify a string that defines the format for fields that have the Timestamp data type. | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
_write_mode * | Select the Write mode. Values: [delete, write]. Default: write | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
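The write-side properties combine in a similar way. A minimal sketch of the interaction properties for writing a CSV file, assuming illustrative container and file names, might be:
{ "container":"reports", "file_name":"daily_summary.csv", "file_format":"csv", "first_line_header":true, "file_exists":"overwrite_file", "write_mode":"write" }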
Name | Type | Description |
---|---|---|
host * | The Azure Cosmos DB database that stores the read-write keys | |
master_key * | The Azure Cosmos DB primary read-write key | |
port | The Azure Cosmos DB database port number. Default: 443 |
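A minimal sketch of these connection properties, using placeholders in place of a real account host and key:
{ "host":"{cosmos_db_host}", "master_key":"{primary_read_write_key}", "port":443 }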
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
collection * | The collection to connect to | |
database * | The database to connect to | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data |
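Reading from a collection usually requires only the database and collection names plus any limits. A sketch of such a source binding, with illustrative database and collection names:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "database":"orders", "collection":"line_items", "row_limit":1000 }, "ref":"{connection_id}" } }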
Name | Type | Description |
---|---|---|
collection * | The collection to connect to | |
create_collection | Create the collection to connect to | |
database * | The database to connect to | |
input_format | The format of the source data. Values: [json, relational]. Default: relational | |
offer_throughput | The throughput allocated for bulk operations out of the collection's total throughput | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write]. Default: write |
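For the target side, the interaction properties might be combined as in the following sketch; the database and collection names are illustrative, and create_collection is optional:
{ "database":"orders", "collection":"line_items_copy", "create_collection":true, "input_format":"relational", "write_mode":"write" }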
Name | Type | Description |
---|---|---|
client_id * | The client ID for authorizing access to Microsoft Azure Data Lake Store | |
client_secret * | The authentication key associated with the client ID for authorizing access to Microsoft Azure Data Lake Store | |
proxy_host * | The server proxy host | |
proxy_port * | The server proxy port | |
proxy_protocol | The proxy server protocol. Values: [http, https] | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
proxy | Use server proxy. Default: false | |
tenant_id * | The Azure Active Directory tenant ID | |
url * | The WebHDFS URL for accessing HDFS |
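A sketch of these connection properties, with placeholders for the service-principal credentials; the URL shown follows the usual WebHDFS pattern for Data Lake Store and may differ for your account:
{ "client_id":"{application_client_id}", "client_secret":"{authentication_key}", "tenant_id":"{azure_ad_tenant_id}", "url":"https://{account_name}.azuredatalakestore.net/webhdfs/v1" }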
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
display_value_labels | Display the value labels | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
first_line | Indicates the row at which to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
xml_schema | The schema that specifies metadata for the elements, for example, data type, values, minimum, and maximum | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher |
Name | Type | Description |
---|---|---|
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
decimal_format | The format of decimal values, for example, #,###.## | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
Name | Type | Description |
---|---|---|
connection_string * | Connection string from the storage account's Access keys page on the Microsoft Azure portal | |
container | The name of the container that contains the files to access |
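A sketch of these connection properties; the connection string shown is only the placeholder pattern of a storage account connection string copied from the Access keys page, and the container name is illustrative:
{ "connection_string":"DefaultEndpointsProtocol=https;AccountName={account_name};AccountKey={account_key};EndpointSuffix=core.windows.net", "container":"shared-data" }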
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
range | The range of cells to retrieve from the Excel worksheet, for example, C1:F10 | |
container | The name of the container that contains the files to read | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
delimited_syntax.field_formats.date_format | Specify a string that defines the format for fields that have the Date data type. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
decimal_format | The format of decimal values, for example, #,###.## | |
delimited_syntax.field_formats.decimal_format | Specify a string that defines the format for fields that have the Decimal or Numeric data type. | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
delimited_syntax.record_def.record_def_source | Enter a delimited string that specifies the name, data type, and length of each field. Use the format name:data_type[length], and separate each field with the delimiter specified in the Field delimiter property. If the record definition is in a delimited string file or OSH schema file, specify the full path of the file. | |
display_value_labels | Display the value labels | |
delimited_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
delimited_syntax.escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
exclude_missing_values | Set values that have been defined as missing values to null | |
delimited_syntax.field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: , | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
xml_path_fields | The path that identifies the specified elements to retrieve from the root path of an XML document, for example, ../publisher | |
file_name_source * | Specify the file name to read from Azure | |
file_share_source * | Specify the File share | |
_file_format | Specify the format of the files to read or write. Values: [comma-separated_value_csv, delimited]. Default: delimited | |
file_format | The format of the file. Values: [avro, csv, delimited, excel, json, orc, parquet, sas, sav, shp, xml]. Default: csv | |
file_name * | The name of the file to read | |
filename_column | Specify the name of the column to write the source file name to. | |
first_line | Indicates the row at which to start reading. Default: 0 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
delimited_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
infer_timestamp_as_date | Infer columns containing date and time data as date. Default: true | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_null_as_empty_string | Treat empty values in string type columns as empty strings instead of null. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
json_path | The path that identifies the elements to retrieve from a JSON document, for example, $.book.publisher | |
labels_as_names | Set column names to the value of the column label | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
delimited_syntax.null_value | Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks). | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
delimited_syntax.quotes | The type of quotation marks that enclose field values, if any. Values: [double, none, single]. Default: none | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
_read_mode * | Select the Read mode. Values: [list_containers/fileshares, list_files, read_multiple_files, read_single_file]. Default: read_single_file | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
delimited_syntax.record_def | Select whether the record definition is provided to the connector from the source file, a delimited string, a file that contains a delimited string, or a schema file. When runtime column propagation is enabled, this metadata provides the column definitions. If a schema file is provided, the schema file overrides the values of formatting properties in the stage and the column definitions that are specified on the Columns page of the output link. Values: [delimited_string, delimited_string_in_a_file, file_header, none, schema_file]. Default: none | |
delimited_syntax.record_limit | Specify the maximum number of records to read from the file per node. If a value is not specified for this property, the entire file is read. | |
reject_mode | Specify what the connector does when a record that contains invalid data is found in the source file. Select Continue to read the rest of the file, Fail to stop the job with an error message, or Reject to send the rejected data to a reject link. Values: [continue, fail, reject]. Default: continue | |
delimited_syntax.row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
xml_schema | The schema that specifies metadata for the elements, for example, data type, values, minimum, and maximum | |
delimited_syntax.field_formats.time_format | Specify a string that defines the format for fields that have the Time data type. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
delimited_syntax.field_formats.timestamp_format | Specify a string that defines the format for fields that have the Timestamp data type. | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 | |
use_field_formats | Format data using specified field formats | |
use_variable_formats | Format data using specified variable formats. | |
sheet_name | The name of the Excel worksheet to read from | |
xml_path | The path that identifies the root elements to retrieve from an XML document, for example, /book/publisher |
Name | Type | Description |
---|---|---|
wave_handling.append_uid | Specify whether to append a unique identifier to the file name. When set to Yes, the unique identifier is appended to the file name and a new file is written for every wave of data that is streamed into the stage. When set to No, the file is overwritten on every wave. Default: false | |
codec_avro | The compression codec to use when writing. Values: [bzip2, deflate, null, snappy] | |
codec_csv | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_delimited | The compression codec to use when writing. Values: [gzip, uncompressed] | |
codec_orc | The compression codec to use when writing. Values: [lz4, lzo, none, snappy, zlib] | |
codec_parquet | The compression codec to use when writing. Values: [gzip, uncompressed, snappy] | |
container | The name of the container that contains the files to write to | |
create_file_share | Select this property to create the file share if it doesn't exist. Default: false | |
create_container | Create the container that contains the files to write to. Default: false | |
date_format | The format of date values, for example, yyyy-[M]M-[d]d | |
delimited_syntax.field_formats.date_format | Specify a string that defines the format for fields that have the Date data type. | |
decimal_format | The format of decimal values, for example, #,###.## | |
delimited_syntax.field_formats.decimal_format | Specify a string that defines the format for fields that have the Decimal or Numeric data type. | |
decimal_format_grouping_separator | The character used to group digits of similar significance. This property and the decimal separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
decimal_format_decimal_separator | The character used to separate the integer part from the fractional part of a number. This property and the decimal grouping separator property must be unique. If you get an error about the separators not being unique when you provided only one of them, provide the other one explicitly. | |
delimited_syntax.encoding | Specify the encoding of the files to read or write, for example, UTF-8. | |
encoding | The appropriate character encoding for your data, for example, UTF-8. Default: utf-8 | |
encryption_key | Key to decrypt sav file | |
delimited_syntax.escape | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data. | |
escape_character | The character that's used to escape other characters, for example, a backslash. Escaping identifies characters as being part of a string value. Values: [>, backslash, double_quote, none, single_quote]. Default: none | |
escape_character_value * | The custom character that is used to escape other characters. | |
delimited_syntax.field_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: , | |
field_delimiter | The character that separates each value from the next value, for example, a comma. Values: [>, colon, comma, tab]. Default: comma | |
field_delimiter_value * | The custom character that separates each value from the next value | |
_file_name * | Specify the file name | |
file_share * | Specify the File share | |
_file_format | Specify the format of the files to read or write. Values: [comma-separated_value_csv, delimited]. Default: delimited | |
file_format | The format of the file to write to. Values: [avro, csv, delimited, excel, json, orc, parquet, sav, xml]. Default: csv | |
file_name * | The name of the file to write to or delete | |
wave_handling.file_size_threshold | Specify the threshold for the file size in megabytes. Processing nodes will start a new file each time the size exceeds the value specified in the threshold. Default: 1 | |
first_line_header | Indicates whether the row where reading starts is the header. Default: false | |
delimited_syntax.header | Select Yes if the first row of the file contains field headers and is not part of the data. If you select Yes, when the connector writes data, the field names will be the first row of the output. If runtime column propagation is enabled, metadata can be obtained from the first row of the file. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
file_exists | Specify what the connector does when it tries to write a file that already exists. Select Overwrite file to overwrite a file if it already exists, Do not overwrite file to not overwrite the file and stop the job, or Fail to stop the job with an error message. Values: [do_not_overwrite_file, fail, overwrite_file]. Default: overwrite_file | |
delimited_syntax.encoding.output_bom | Specify whether to include a byte order mark in the file when the file encoding is a Unicode encoding such as UTF-8, UTF-16, or UTF-32. Default: false | |
delimited_syntax.header.include_types | Select Yes to append the data type to each field name that the connector writes in the first row of the output. Default: false | |
include_types | Include data types in the first line of the file. Default: false | |
names_as_labels | Set column labels to the value of the column name | |
delimited_syntax.null_value | Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks). | |
null_value | The value that represents null (a missing value) in the file, for example, NULL | |
partitioned | Write the file as multiple partitions. Default: false | |
delimited_syntax.quotes | The type of quotation marks that enclose field values, if any. Values: [double, none, single]. Default: none | |
quote_character | The character that's used to enclose string values, for example, a double quotation mark. Values: [double_quote, none, single_quote]. Default: none | |
quote_numerics | Enclose numeric values the same as strings using the quote character. Default: true | |
delimited_syntax.row_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: | |
row_delimiter | The character or characters that separate one line from another, for example, CR/LF (Carriage Return/Line Feed). Values: [new_line, carriage_return, carriage_return_line_feed, line_feed]. Default: new_line | |
delimited_syntax.field_formats.time_format | Specify a string that defines the format for fields that have the Time data type. | |
time_format | The format of time values, for example, HH:mm:ss[.f] | |
delimited_syntax.field_formats.timestamp_format | Specify a string that defines the format for fields that have the Timestamp data type. | |
timestamp_format | The format of timestamp values, for example, yyyy-MM-dd H:m:s | |
sheet_name | The name of the Excel worksheet to write to | |
_write_mode * | Select the Write mode. Values: [delete, write]. Default: write | |
write_mode | Whether to write to, or delete, the target. Values: [delete, write, write_raw]. Default: write |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
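A sketch of these connection properties, using placeholders throughout; whether ssl_certificate or ssl_certificate_host is needed depends on how the server's certificate was issued:
{ "host":"{hostname_or_ip}", "port":"{port}", "database":"{database_name}", "username":"{username}", "password":"{password}", "ssl":true }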
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the query timeout. If not specified, the default of 300 seconds (5 minutes) is used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the query timeout. If not specified, the default of 300 seconds (5 minutes) is used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
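For writes, a small combination of these properties is usually enough. The following sketch shows a merge keyed on two columns; the schema, table, and key column names are illustrative placeholders:
{ "schema_name":"SALES", "table_name":"DAILY_TOTALS", "write_mode":"merge", "key_column_names":"ORDER_ID,LINE_NUMBER" }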
Name | Type | Description |
---|---|---|
database * | The name of the database | |
domain | The name of the domain | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
instance_name * | The name of the instance | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
use_active_directory | Allows the Microsoft SQL Server connection to authenticate using NTLM | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
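A sketch of these connection properties for the NTLM case described by the domain and use_active_directory properties; every value shown is a placeholder, and you supply the instance name or port as appropriate for your server:
{ "host":"{hostname_or_ip}", "port":"{port}", "database":"{database_name}", "instance_name":"{instance_name}", "domain":"{windows_domain}", "use_active_directory":true, "username":"{username}", "password":"{password}" }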
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
query_timeout | Specify the query timeout. If not specified, the default of 300 seconds (5 minutes) is used. | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
query_timeout | Specify the query timeout. If not specified, the default of 300 seconds (5 minutes) is used. | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
auth_database | The name of the database in which the user was created | |
column_discovery_sample_size | The number of rows sampled per collection to determine table schemas. Default: 1000 | |
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
schema_filter | A comma-separated list of database:collection pairs for which the driver should fetch metadata. For more information, see the DataDirect driver documentation. | |
special_char_behavior | Specifies whether special characters in names that do not conform to SQL identifier syntax should be stripped (the default), included, or replaced with underscores. Values: [include, replace, strip] | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
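A sketch of these connection properties, with placeholders; auth_database is shown as admin, a common but not universal choice for where the MongoDB user is defined:
{ "host":"{hostname_or_ip}", "port":"{port}", "database":"{database_name}", "auth_database":"admin", "username":"{username}", "password":"{password}", "ssl":false }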
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
encoding | The character encoding for your data. If not specified, the default character set of the database server is used. If you change the value, enter a valid character encoding, for example, UTF-8 | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
additional_props | A semicolon-separated list of additional connection properties. | |
cluster_nodes * | A comma-separated list of member nodes in your cluster. | |
dsn_type * | The ODBC data source type. Values: [Cassandra, Hive, GreenPlum, DB2, DB2zOS, DB2AS400, Informix, Netezza, Impala, MicrosoftSQLServer, MongoDB, MySQL, Oracle, PostgreSQL, SybaseASE, SybaseIQ]. Default: DB2 | |
database * | Database name. | |
hostname * | The hostname of the database. | |
keyspace | The name of the Keyspace | |
network_address * | Server name or IP address followed by a comma and the port number. | |
password * | The password used to connect to the database | |
port * | The port of the database | |
service_name * | The Oracle service name that specifies the database used for the connection. | |
username * | The username used to connect to the database |
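Several of these properties presumably apply only to particular data source types even though they are marked as required, for example keyspace for Cassandra or service_name for Oracle. A sketch for a generic DSN, using placeholders and an assumed dsn_type of PostgreSQL:
{ "dsn_type":"PostgreSQL", "hostname":"{hostname}", "port":"{port}", "database":"{database_name}", "username":"{username}", "password":"{password}" }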
Name | Type | Description |
---|---|---|
before_after.after_node | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once on each node after all data is processed on that node. | |
before_after.after | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once after all data is processed. | |
session.array_size | The array size to be used for all read and write database operations. Default: 2000 | |
session.autocommit_mode | Specifies whether the connector commits transactions manually or allows the driver to commit automatically at its discretion. Values: [off, on]. Default: off | |
before_after.before_node | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once on each node before any data is processed on that node. | |
before_after.before | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once before any data is processed. | |
session.code_page | Specify a code page to use for this data source. Values: [default, unicode, user-specified]. Default: default | |
session.code_page.code_page_name * | An ICU code page name compatible with this data source | |
sql.enable_partitioning.partitioning_method.key_field * | Specifies the key column that is used by the selected partitioned reads method. This column must be a numeric data type. | |
session.pass_lob_locator.column * | Use to choose columns containing LOBs to be passed by locator (reference) | |
session.pass_lob_locator | Enables/disables the ability to specify LOB columns to be passed using locator (reference) information. LOB columns not specified will be passed inline. Default: false | |
sql.enable_partitioning | Enable or disable partitioned reads by using the selected partitioning method. Default: false | |
enable_quoted_i_ds | Specifies whether or not to enclose database object names in quotes when generating DDL and DML. Default: true | |
transaction.end_of_wave.end_of_data | Specifies whether to insert an EOW marker for the last set of records when the number is less than the specified transaction record count value. Default: false | |
transaction.end_of_wave | Specify settings for the end of wave handling. None means EOW markers are never inserted, Before means EOW markers are inserted before committing the transaction, After means EOW markers are inserted after committing the transaction. Values: [after, before, none]. Default: none | |
before_after.after_node.fail_on_error | Select Yes to stop the job if the After SQL (node) statements fail. Default: true | |
before_after.after.fail_on_error | Select Yes to stop the job if the After SQL statements fail. Default: true | |
before_after.before_node.fail_on_error | Select Yes to stop the job if the Before SQL (node) statements fail. Default: true | |
before_after.before.fail_on_error | Select Yes to stop the job if the Before SQL statements fail. Default: true | |
session.schema_reconciliation.fail_on_size_mismatch | Fail if the sizes of numeric and string fields are not compatible when validating the design schema against the database. Default: true | |
session.schema_reconciliation.fail_on_type_mismatch | Fail if the types of fields are not compatible when validating the design schema against the database. Default: true | |
generate_sql | Specifies whether to generate SQL statements at run time. Default: false | |
session.isolation_level | The isolation level used for all database transactions. Values: [default, read_committed, read_uncommitted, repeatable_read, serializable]. Default: read_uncommitted | |
limit_rows.limit | Enter the maximum number of rows that will be returned by the connector. Default: 1000 | |
limit_rows | Select Yes to limit the number of rows that are returned by the connector. Default: false | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
sql.other_clause | The other clause predicate of the SQL statement | |
sql.enable_partitioning.partitioning_method | The method to use for partitioned reads. Values: [minimum_and_maximum_range, modulus]. Default: minimum_and_maximum_range | |
before_after.after_node.read_from_file_after_sql_node | Select Yes to read the SQL statements from the file that is specified in the After SQL (node) statements property. Default: false | |
before_after.after.read_from_file_after_sql | Select Yes to read the SQL statements from the file that is specified in the After SQL statements property. Default: false | |
before_after.before_node.read_from_file_before_sql_node | Select Yes to read the SQL statements from the file that is specified in the Before SQL (node) statements property. Default: false | |
before_after.before.read_from_file_before_sql | Select Yes to read the SQL statements from the file that is specified in the Before SQL statements property. Default: false | |
sql.select_statement.read_statement_from_file | Select Yes to read the SELECT statement from the file specified in the SELECT statement property. Default: false | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 2000 | |
before_after | Select Yes to run specified SQL statements before and after data is accessed in the database.. Default: false | |
sql.select_statement * | Statement to be executed when reading rows from the database or absolute path to the file containing the SQL statements. | |
sql.enable_partitioning.partitioning_method.table_name * | Specifies the table that is used by the selected partitioned reads method. | |
table_name * | The table name to be used in generated SQL | |
sql.where_clause | The where clause predicate of the SQL statement |
Name | Type | Description |
---|---|---|
before_after.after_node | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once on each node after all data is processed on that node. | |
before_after.after | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once after all data is processed. | |
session.array_size | The array size to be used for all read and write database operations. Default: 2000 | |
session.autocommit_mode | Specifies whether the connector commits transactions manually or allows the driver to commit automatically at its discretion. Values: [off, on]. Default: off | |
before_after.before_node | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once on each node before any data is processed on that node. | |
before_after.before | Enter the SQL statements or the fully-qualified name of the file that contains the SQL statements to run once before any data is processed. | |
sql.user_defined_sql.file.character_set | IANA character set name | |
session.code_page | Specify a code page to use for this data source. Values: [default, unicode, user-specified]. Default: default | |
session.code_page.code_page_name * | An ICU code page name compatible with this data source | |
logging.log_column_values.delimiter | Specifies the delimiter to use between columns. Values: [comma, newline, space, tab]. Default: space | |
table_action.generate_create_statement.create_statement * | A statement to be executed when creating the target database table | |
sql.delete_statement * | Statement to be executed when deleting rows from the database | |
table_action.generate_drop_statement.drop_statement * | A statement to be executed when dropping the target database table | |
session.schema_reconciliation.drop_unmatched_fields | Drop fields that don't exist in the input schema. Default: true | |
enable_quoted_i_ds | Specifies whether or not to enclose database object names in quotes when generating DDL and DML. Default: false | |
sql.user_defined_sql.fail_on_error | Abort the SQL statements when an error occurs. Default: true | |
table_action.generate_create_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_drop_statement.fail_on_error | Abort the job if there is an error executing a command. Default: false | |
table_action.generate_truncate_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_node.fail_on_error | Select Yes to stop the job if the After SQL (node) statements fail. Default: true | |
before_after.after.fail_on_error | Select Yes to stop the job if the After SQL statements fail. Default: true | |
before_after.before_node.fail_on_error | Select Yes to stop the job if the Before SQL (node) statements fail. Default: true | |
before_after.before.fail_on_error | Select Yes to stop the job if the Before SQL statements fail. Default: true | |
session.fail_on_row_error_px | Fail the job if a write operation to the target is unsuccessful. Default: true | |
session.schema_reconciliation.fail_on_size_mismatch | Fail if the sizes of numeric and string fields are not compatible when validating the design schema against the database. Default: true | |
session.schema_reconciliation.fail_on_type_mismatch | Fail if the types of fields are not compatible when validating the design schema against the database. Default: true | |
sql.user_defined_sql.file * | File on the conductor node that contains SQL statements to be executed for each input row | |
table_action.generate_create_statement | Specifies whether to generate a CREATE TABLE statement at run time. Default: true | |
table_action.generate_drop_statement | Specifies whether to generate a DROP TABLE statement at run time. Default: true | |
generate_sql | Specifies whether to generate SQL statements at run time. Default: false | |
table_action.generate_truncate_statement | Specifies whether to generate a TRUNCATE TABLE statement at run time. Default: true | |
sql.insert_statement * | Statement to be executed when inserting rows into the database | |
session.isolation_level | The isolation level used for all database transactions. Values: [default, read_committed, read_uncommitted, repeatable_read, serializable]. Default: read_uncommitted | |
logging.log_column_values | Specifies whether to log column values for the first row that fails to be written. Default: false | |
logging.log_column_values.log_keys_only | Specifies whether to log key columns or all columns for failing statements. Default: false | |
before_after.after_node.read_from_file_after_sql_node | Select Yes to read the SQL statements from the file that is specified in the After SQL (node) statements property. Default: false | |
before_after.after.read_from_file_after_sql | Select Yes to read the SQL statements from the file that is specified in the After SQL statements property. Default: false | |
before_after.before_node.read_from_file_before_sql_node | Select Yes to read the SQL statements from the file that is specified in the Before SQL (node) statements property. Default: false | |
before_after.before.read_from_file_before_sql | Select Yes to read the SQL statements from the file that is specified in the Before SQL statements property. Default: false | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 2000 | |
before_after | Select Yes to run specified SQL statements before and after data is accessed in the database.. Default: false | |
sql.user_defined_sql.statements * | SQL statements to be executed for each input row | |
table_action * | Select the action to perform on the database table. Values: [append, create, replace, truncate]. Default: append | |
table_name * | The table name to be used in generated SQL | |
table_action.generate_truncate_statement.truncate_statement * | A statement to be executed when truncating the database table | |
sql.update_statement * | Statement to be executed when updating rows in the database | |
sql.user_defined_sql * | Source of the user-defined SQL statements. Values: [file, statements]. Default: statements | |
write_mode * | The mode to be used when writing to a database table. Values: [delete, delete_then_insert, insert, insert_new_rows_only, insert_then_update, update, update_then_insert, user-defined_sql]. Default: insert |
Name | Type | Description |
---|---|---|
api_key * | The API key to use for connecting to the service root | |
auth_type | The type of authentication to be used to access the service root. Values: [api_key, none, basic] | |
password * | The password associated with the username for accessing the data source | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
service_root * | The URL used to access the service root of a site implementing the OData protocol. | |
timeout_seconds | Timeout value for HTTP calls in seconds | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
entity_set_name * | The entity set to be processed | |
row_limit | The maximum number of rows to return | |
row_start | The first row of data to read |
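For instance, a source binding that reads a limited number of rows from one entity set could look like the following sketch (the entity set name is a placeholder):
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "entity_set_name":"Products", "row_limit":100 }, "ref":"{connection_id}" } }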
Name | Type | Description |
---|---|---|
entity_set_name * | The entity set to be processed | |
write_mode | The mode to be used when writing to the entity. Values: [insert, update] |
Name | Type | Description |
---|---|---|
connection_mode | ||
sid * | The unique name of the database instance. If you provide a SID, do not provide a service name | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
metadata_discovery | Determines what types of metadata can be discovered, 'No Remarks' option will be set as default. Values: [no_remarks, no_remarks_or_synonyms, no_synonyms, remarks_and_synonyms] | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
service_name * | The name of the service. If you provide a service name, do not provide a SID | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
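Because sid and service_name are mutually exclusive, a connection definition typically supplies exactly one of them, for example (all values below are illustrative placeholders):
{ "host":"oracle.example.com", "port":"1521", "service_name":"ORCLPDB1", "username":"user1", "password":"********", "ssl":false }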
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
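As a sketch, a source binding that reads a repeatable 10 percent row sample instead of the full table might combine the sampling properties like this (schema and table names are placeholders):
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "schema_name":"MYSCHEMA", "table_name":"MYTABLE", "sampling_type":"row", "sampling_percentage":10, "sampling_seed":42 }, "ref":"{connection_id}" } }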
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
oracle_db_host * | Oracle host name | |
password * | Specify the password to use to connect to the database. | |
oracle_db_port * | Oracle port | |
oracle_service_name * | Oracle service name | |
username * | Specify the user name to use to connect to the database. |
Name | Type | Description |
---|---|---|
after_sql_node | Enter the SQL statement to run once on each node after all data is processed on that node. | |
after_sql | Enter the SQL statement to run once after all data is processed. | |
array_size | Enter a number that represents the number of records to process in read and write operations on the database. Default: 2000 | |
before_sql_node | Enter the SQL statement to run once on each node before any data is processed on that node. | |
before_sql | Enter the SQL statement to run once before any data is processed. | |
read_strategy_column_name * | Enter the name of the column that the specified partitioned reads method uses. The column must be of NUMBER(p) type, where p is between 1 and 38, and it must be an existing column in the specified partitioned reads table. | |
pass_lob_locator.column * | Select the LOB columns to pass by reference (locator). Columns that are not selected are passed as actual values (inline). | |
disconnect | Enter the condition under which the connection to the database shall be closed. Values: [0, 1]. Default: 0 | |
pass_lob_locator | Select Yes to use references (locators) for LOB columns instead of their actual values. Default: false | |
enable_partitioned_reads | Select Yes to read data in parallel from multiple processing nodes. Default: false | |
enable_quoted_ids | Select Yes to enclose database object names in quotation marks when SQL statements are generated. Quotation marks preserve the case of object names. Default: true | |
treat_fetch_truncate_as_error | Select Yes to stop the job if a truncation occurs when fetching the data. Select No to only log a warning and resume the job. Default: true | |
after_sql_node.fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
after_sql.fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
before_sql_node.fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
before_sql.fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
generate_sql | Select Yes to automatically generate SQL statements at runtime. Default: false | |
inactivity_period * | Enter the period of inactivity in seconds after which the connection should be closed. Default: 300 | |
retry_interval * | Enter the interval in seconds to wait between attempts to establish a connection. Default: 10 | |
isolation_level | Isolation level. Values: [0, 2, 1]. Default: 0 | |
limit | Enter the maximum number of rows that will be returned by the connector. | |
limit_rows | Select Yes to limit the number of rows that are returned by the connector. Default: false | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
application_failover_control | Select Yes to configure the connector to participate in the Oracle transparent application failover (TAF) process and to report failover progress in the log. Default: false | |
end_of_wave | Select Yes to generate an end-of-wave record after each wave of records, where the number of records in each wave is specified in the Record count property. When the Record count property is set to 0, the end-of-wave records are never generated. Values: [0, 2]. Default: 0 | |
number_of_retries | Enter a number that represents how many retries the connector will allow for completion of transparent application failover (TAF) after it has been initiated. Default: 10 | |
retry_count * | Enter the number of attempts to establish a connection. Default: 3 | |
other_clause | The other clause predicate of the SQL statement | |
pl_sql_statement * | Enter a PL/SQL anonymous block. The block must begin with the keyword BEGIN or DECLARE and end with the keyword END. | |
partition_name * | Enter the name of the partition to access. | |
read_strategy_partition_name | Enter the name of the partition (or subpartition) to use as input for the specified partitioned reads method. This value should typically be set to match the name of the partition (or subpartition) from which the data is fetched. | |
partitioned_reads_strategy | Select the method to use to read data in parallel from multiple processing nodes. Values: [3, 2, 4, 5, 0, 1]. Default: 0 | |
prefetch_memory_size | Enter the size of the buffer (in KB) to use for prefetching rows. Default: 0 | |
prefetch_row_count | Enter the number of rows to prefetch when the query runs. Default: 1 | |
preserve_trailing_blanks | Select Yes to preserve trailing blanks in input text values. Select No for the connector to trim trailing spaces from input text values. Default: true | |
treat_warnings_as_errors | Select Yes to stop the job when the first warning message occurs. Default: false | |
read_mode | Select the mode to use to read from the database. Values: [1, 0]. Default: 0 | |
reconnect | Select Yes to retry establishing a connection to the database when the initial connection is unsuccessful or when the active connection is dropped. Default: false | |
record_count | Enter the number of records to process in each transaction. The record count must be a multiple of the value specified for the Array size property. To process all available records in one transaction, enter 0. Default: 2000 | |
replay_before_sql_node | Select Yes to run the Before SQL (node) statement on each node in a parallel job after a successful transparent application failover (TAF). Default: false | |
replay_before_sql | Select Yes to run the Before SQL statement after a successful transparent application failover (TAF). Default: false | |
resume_write | Select Yes to resubmit the current transaction and to resume sending records to the database after the failover has completed. Default: false | |
before_after | Select Yes to run specified SQL statements before and after data is accessed in the database. Default: false | |
select_statement * | Enter a SELECT statement. The statement is used to read rows from the database. | |
subpartition_name * | Enter the name of the subpartition to access. | |
table_name * | Enter the name of the Oracle database table or view to access. | |
read_strategy_table_name | Enter the name of the table to use as input for the specified partition read method. This value should typically be set to match the name of the source table from which the data is fetched. | |
table_scope | Select the part of the table to access. Values: [0, 1, 2]. Default: 0 | |
wait_time | Enter a number, in seconds, that represents the time to wait between transparent application failover (TAF) retries.. Default: 10 | |
transfer_bfile_cont | Transfer BFILE contents. Default: false | |
where_clause | The where clause predicate of the SQL statement |
Name | Type | Description |
---|---|---|
generate_create_statement.fail_on_error | Select Yes to stop the job if the CREATE TABLE statement fails. Default: true | |
generate_drop_statement.fail_on_error | Select Yes to stop the job if the DROP TABLE statement fails. Default: true | |
generate_truncate_statement.fail_on_error | Select Yes to stop the job if the TRUNCATE TABLE statement fails. Default: true | |
after_sql_node | Enter the SQL statement to run once on each node after all data is processed on that node. | |
after_sql | Enter the SQL statement to run once after all data is processed. | |
enable_parallel_load_sessions | Select Yes to allow multiple concurrent load sessions on the target table, partition or subpartition to which the data is loaded. Default: true | |
array_size | Enter a number that represents the number of records to process in read and write operations on the database. Default: 2000 | |
before_sql_node | Enter the SQL statement to run once on each node before any data is processed on that node. | |
before_sql | Enter the SQL statement to run once before any data is processed. | |
before_after | Before and after SQL. Default: false | |
buffer_size_in_kilobytes | Enter the size, in KB, to use for the direct path load buffer. Default: 1024 | |
cache_size | Enter the size, in elements, of the Oracle date cache. Default: 1000 | |
delimiter | Specifies the delimiter to use between columns. Values: [3, 1, 0, 2]. Default: 0 | |
cont_file | Control file name | |
create_statement * | Enter the CREATE TABLE statement to run to create the target database table. | |
data_file | Data file name | |
degree_of_parallelism | Enter a number that represents the degree of parallelism to use in the parallel clause. Leave this property blank for Oracle database to automatically calculate the optimal parallelism degree. | |
delete_statement * | Enter a DELETE statement. The statement is used to delete rows from the database. | |
directory_cont_file | Control file name | |
disable_when_full | Select Yes to disable the use of the Oracle date cache if it becomes full during the bulk load. Default: false | |
before_load.disable_constraints | Select Yes to disable all constraints on the table before the bulk load starts. Default: false | |
disable_redo_log | Select Yes to disable the generation of Oracle redo and invalidation redo logs. Select No to use the default attributes of the table, partition or subpartition segment to which the data is loaded. Default: false | |
before_load.disable_triggers | Select Yes to disable all triggers on the table before the bulk load starts. Default: false | |
disconnect | Enter the condition under which the connection to the database shall be closed. Values: [0, 1]. Default: 0 | |
drop_statement * | Enter the DROP TABLE statement to run to drop the target database table. | |
drop_unmatched_fields | Select Yes to ignore the input schema columns that could not be mapped to any columns in the target database. Select No to stop the job if any unused columns are detected on the input schema. Default: false | |
enable_constraints | Select Yes to enable constraints on the table after the bulk load ends. Default: false | |
enable_quoted_ids | Select Yes to enclose database object names in quotation marks when SQL statements are generated. Quotation marks preserve the case of object names. Default: false | |
enable_triggers | Select Yes to enable triggers on the table after the bulk load ends. Default: false | |
exceptions_table_name | Enter the name of the table to use to store the row identifiers for rows that failed the constraint checks. If the table exists, it will be truncated before the bulk load. If the table does not exist, it will be created. | |
fail_if_no_rows_deleted | Select Yes to stop the job if the input record does not result in any table deletes and the record is not sent to the reject link. Select No to resume the job. Default: false | |
fail_if_no_rows_updated | Select Yes to stop the job if the input record does not result in any table updates and the record is not sent to the reject link. Select No to resume the job. Default: false | |
after_sql_node.fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
after_sql.fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
before_sql_node.fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
before_sql.fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
fail_on_rebuild_index | Select Yes to stop the job if the rebuilding of the indexes fails. Default: false | |
fail_on_row_error_px | Fail the job if a write operation to the target is unsuccessful. Default: true | |
generate_sql | Select Yes to automatically generate SQL statements at runtime. Default: false | |
generate_create_statement | Select Yes to automatically generate the CREATE TABLE statement at runtime. Default: false | |
generate_drop_statement | Select Yes to automatically generate the DROP TABLE statement at runtime. Default: false | |
generate_truncate_statement | Select Yes to automatically generate the TRUNCATE TABLE statement at runtime. Default: false | |
inactivity_period * | Enter the period of inactivity in seconds after which the connection should be closed. Default: 300 | |
skip_indexes | Select the maintenance option to use for indexes during the bulk load. Values: [0, 2, 1]. Default: 0 | |
insert_statement * | Enter an INSERT statement. The statement is used to insert rows into the database. | |
retry_interval * | Enter the interval in seconds to wait between attempts to establish a connection. Default: 10 | |
isolation_level | Select the isolation level to use for each transaction. Values: [0, 2, 1]. Default: 0 | |
load_opt | Load options | |
log_column_values | Specifies whether to log column values for the first row that fails to be written. Default: false | |
log_keys_only | Specifies whether to log key columns or all columns for failing statements. Default: false | |
logging_clause | Select the logging clause to include in the ALTER INDEX statement when rebuilding indexes. Values: [0, 2, 1]. Default: 0 | |
application_failover_control | Select Yes to configure the connector to participate in the Oracle transparent application failover (TAF) process and to report failover progress in the log. Default: false | |
manual_mode | Manual mode | |
number_of_retries | Enter a number that represents how many retries the connector will allow for completion of transparent application failover (TAF) after it has been initiated.. Default: 10 | |
retry_count * | Enter the number of attempts to establish a connection. Default: 3 | |
pl_sql_statement * | Enter a PL/SQL anonymous block. The block must begin with the keyword BEGIN or DECLARE and end with the keyword END. | |
parallel_clause | Select the parallel clause to include in the ALTER INDEX statement when rebuilding indexes. Values: [0, 1, 2, 3]. Default: 0 | |
partition_name * | Enter the name of the partition to access. | |
after_load | Select Yes to perform selected operations on the table after the bulk load ends. Default: false | |
before_load | Select Yes to perform selected operations on the table before the bulk load starts. Default: false | |
table_action_first | Select Yes to perform the table action first. Select No to run Before SQL statements first. Default: true | |
preserve_trailing_blanks | Select Yes to preserve trailing blanks in input text values. Select No for the connector to trim trailing spaces from input text values. Default: true | |
treat_warnings_as_errors | Select Yes to stop the job when the first warning message occurs. Default: false | |
rebuild_indexes | Select Yes to rebuild indexes on the table after the bulk load ends. Default: false | |
reconnect | Select Yes to retry establishing a connection to the database when the initial connection is unsuccessful or when the active connection is dropped. Default: false | |
record_count | Enter the number of records to process in each transaction. The record count must be a multiple of the value specified for the Array size property. To process all available records in one transaction, enter 0. Default: 2000 | |
replay_before_sql_node | Select Yes to run the Before SQL (node) statement on each node in a parallel job after a successful transparent application failover (TAF). Default: false | |
replay_before_sql | Select Yes to run the Before SQL statement after a successful transparent application failover (TAF). Default: false | |
resume_write | Select Yes to resubmit the current transaction and to resume sending records to the database after the failover has completed. Default: false | |
subpartition_name * | Enter the name of the subpartition to access. | |
table_action * | Select the action to perform before writing data to the table. Values: [0, 1, 2, 3]. Default: 0 | |
table_name * | Enter the name of the Oracle database table or view to access. | |
table_scope | Select the part of the table to access. Values: [0, 1, 2]. Default: 0 | |
wait_time | Enter a number, in seconds, that represents the time to wait between transparent application failover (TAF) retries.. Default: 10 | |
truncate_statement * | Enter the TRUNCATE TABLE statement to run to truncate the target database table. | |
update_statement * | Enter an UPDATE statement. The statement is used to update rows in the database. | |
use_date_cache | Select Yes to use the Oracle date cache. Using the date cache may improve performance when many identical date values are loaded into date columns in the target table. Default: false | |
write_mode | Select the mode to use to write to the database. Values: [6, 2, 5, 0, 9, 3, 8, 1, 4]. Default: 0 | |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
query_timeout | Sets the default query timeout in seconds for all statements created by a connection. If not specified, the default value of 300 seconds will be used. Default: 300 | |
retry_limit | Specify the maximum number of retry connection attempts to be made by the connector with an increasing delay between each retry. If no value is provided, two attempts will be made by default if necessary. | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
host * | The hostname or IP address of the database | |
password | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
catalog_name | The name of the catalog that contains the schema to read from. It is required when a fully qualified table name has not been provided. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from. It is required when a fully qualified table name has not been provided. | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
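When the table name is not fully qualified, both catalog_name and schema_name must be supplied alongside table_name, for example (all three names below are placeholders):
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "catalog_name":"MYCATALOG", "schema_name":"MYSCHEMA", "table_name":"MYTABLE" }, "ref":"{connection_id}" } }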
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
database * | The name of the database | |
ssl_certificate_host | Hostname in the SubjectAlternativeName or Common Name (CN) part of the SSL certificate | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
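This source also accepts a call_statement for invoking a stored procedure rather than reading a table directly; a minimal sketch, assuming the call statement can stand in for the table or SELECT properties (the procedure name is a placeholder), could be:
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "call_statement":"CALL MYSCHEMA.MYPROCEDURE()" }, "ref":"{connection_id}" } }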
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
api_key * | The API key to use for connecting to the service root | |
auth_type | The type of authentication to be used to access the service root. Values: [api_key, none, basic] | |
password * | The password associated with the username for accessing the data source | |
sap_gateway_url * | The URL used to access the SAP gateway catalog | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
timeout_seconds | Timeout value for HTTP calls in seconds | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
entity_set_name * | The entity set to be processed | |
row_limit | The maximum number of rows to return | |
row_start | The first row of data to read | |
service_name * | The name of the service containing the entity set to be processed | |
service_version | The version of the service containing the entity set to be processed |
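As an illustration, reading from an SAP Gateway entity set names the service, its version, and the entity set (all names below are placeholders):
{ "id":"source1", "type":"binding", "output":{ "id":"source1Output" }, "connection":{ "properties":{ "service_name":"ZMY_SERVICE_SRV", "service_version":"0001", "entity_set_name":"MyEntitySet", "row_limit":500 }, "ref":"{connection_id}" } }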
Name | Type | Description |
---|---|---|
csrf_protection | A flag indicating if this service has Cross-Site Request Forgery protection enabled. Default: true | |
entity_set_name * | The entity set to be processed | |
service_name * | The name of the service containing the entity set to be processed | |
service_version | The version of the service containing the entity set to be processed | |
write_mode | The mode to be used when writing to the entity. Values: [insert, update] |
Name | Type | Description |
---|---|---|
password * | The password associated with the username for accessing the data source | |
server_name | The name of the server to log in. Default: login.salesforce.com | |
username * | The username for accessing the data source |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
authentication_type | Select the Authentication Type. Values: [oauth_jwt, oauth_username_and_password, username_and_password]. Default: username_and_password | |
consumer_key | Consumer Key for OAuth | |
consumer_secret_key | Consumer Secret Key for OAuth 2.0 Username-Password Flow | |
password * | Salesforce.com password | |
server_certificate | Security Certificate for OAuth 2.0 JWT Bearer Flow | |
token_expiry_time | Token Expiry Time for OAuth Authentication | |
url * | SOAP endpoint URL from your SOAP project | |
username * | Salesforce.com user name |
Name | Type | Description |
---|---|---|
access_method | Select the access method. Values: [bulk_mode, real_time_mode]. Default: real_time_mode | |
salesforce_object_name * | The business object to retrieve or update. | |
batch_size | Data batch size. Default: 200 | |
end_time * | Enter end time for delta extraction: yyyy-mm-dd hh:mm:ss. Default: CurrentTime | |
delta_extract_id * | Delta extraction ID | |
start_time * | Enter start time for delta extraction: yyyy-mm-dd hh:mm:ss. Default: LastExtractTime | |
pk_chunking | Specify true to enable PK Chunking for the bulk query operation. Default: true | |
enable_flat_file | Enable or disable the ability to specify a column that contains a file path for load or extract large object. Default: false | |
flat_file_column_name * | Specify the name of the column that contains a file full path. UNC and relative paths are not supported. | |
flat_file_content_name * | Specify the field name in the object that contains the name of the file to be downloaded | |
flat_file_folder_location * | Specify the path to the folder location for downloading the files that are extracted from Salesforce. UNC and relative paths are not supported. | |
flat_file_overwrite | When set to true, the file on disk is overwritten. When set to false, a new file is created with the date and time of the file creation added to the beginning of the file name. Default: true | |
job_id * | Enter the job ID or the name of the file for the job ID | |
file_path_job_id * | Specify the absolute file path for saving Salesforce bulk mode job ID | |
sf_job_id_in_file | Set to yes if the job ID is to be specified or saved in a file. Default: false | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
reference_soql | SOQL query generated by the importer program for reference | |
read_mode | Salesforce read operation. Values: [get_deleted_delta, get_the_bulk_load_status, get_updated_delta, query, query_all]. Default: query | |
soql_string * | SOQL query statement to be sent to Salesforce | |
sleep | Number of seconds between job and batch status recheck. Default: 60 | |
tenacity | Maximum number of seconds to recheck the job and batch status. Default: 1800 |
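For example, a query read in real-time mode might pair read_mode with a SOQL statement such as the following (the object and fields are illustrative; Account is a standard Salesforce object used here only as an example):
{ "access_method":"real_time_mode", "read_mode":"query", "salesforce_object_name":"Account", "soql_string":"SELECT Id, Name FROM Account" }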
Name | Type | Description |
---|---|---|
access_method | Select the access method. Values: [bulk_mode, real_time_mode]. Default: real_time_mode | |
salesforce_object_name * | The business object to retrieve or update. | |
batch_size | Data batch size. Default: 200 | |
hard_delete_property | Specify true to empty the recycle bin after delete operation. Default: false | |
enable_flat_file | Enable or disable the ability to specify a column that contains a file path for load or extract large object. Default: false | |
flat_file_column_name * | Specify the name of the column that contains a file full path. UNC and relative paths are not supported. | |
file_path_job_id * | Specify the absolute file path for saving Salesforce bulk mode job ID | |
sf_job_id_in_file | Set to yes if the job ID is to be specified or saved in a file. Default: false | |
keep_temp_file | Specify yes to keep the temporary files. Default: false | |
backend_load_method | Select the Salesforce.com concurrency mode. Values: [parallel, sequential]. Default: parallel | |
write_mode | Salesforce write operation. Values: [create, delete, update, upsert]. Default: upsert |
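A bulk-mode upsert to the same object might be configured with properties along these lines (illustrative values only; the batch size is left at its default):
{ "access_method":"bulk_mode", "write_mode":"upsert", "salesforce_object_name":"Account", "backend_load_method":"parallel" }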
Name | Type | Description |
---|---|---|
database * | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port * | The port of the database | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [none, random]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
account_name * | The full name of your account (provided by Snowflake) | |
auth_method | ||
database * | The name of the database | |
key_passphrase | The key passphrase needed for private key decryption | |
authenticator_url | Authenticate through native Okta. To enable native SSO through Okta, set this property to the Okta URL endpoint for your Okta account. Leave blank to use internal Snowflake authenticator. | |
password * | The password associated with the username for accessing the data source | |
private_key * | The private key | |
role | The default access control role to use in the Snowflake session | |
username * | The username for accessing the data source | |
warehouse * | The virtual warehouse |
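For example, a password-authenticated Snowflake connection might supply the following properties (account, warehouse, role, and credentials are placeholders; private_key and key_passphrase apply only to key-pair authentication):
{ "account_name":"myorg-myaccount", "database":"MYDB", "warehouse":"MYWAREHOUSE", "role":"ANALYST", "username":"user1", "password":"********" }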
Name | Type | Description |
---|---|---|
_before_after._after_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node after all of the data is processed on that node. | |
_before_after._after_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once after all of the data is processed. | |
_auto_commit_mode | Configure the stage to run in auto-commit mode. In auto-commit mode, the transaction is committed automatically after each statement is executed. When the connector writes records to the data source, the transaction is committed after each row is written to the data source. When the stage is configured to run multiple statements on each row, the transaction is committed after each statement is executed on the row. Values: [disable, enable]. Default: enable | |
_before_after._before_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
_before_after._before_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once before any data is processed. | |
_begin_sql | Enter the SQL statement to run one time before any records are processed in the transaction | |
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
_session._character_set_for_non_unicode_columns | Select the character set option for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. If you select the Default option, the character set encoding of the engine host system locale is used. If you select the Custom option, you must provide the character set name to be used. Values: [_custom, _default]. Default: _default | |
_session._character_set_for_non_unicode_columns._character_set_name * | Specify the name of the character set encoding for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
_session._default_length_for_columns | Enter the default length for the Char, NChar, Binary, VarChar, NVarChar, and VarBinary link columns for which the Length attribute is not set. Default: 200 | |
_session._default_length_for_long_columns | Enter the default length for the LongVarChar, LongNVarChar and LongVarBinary link columns for which the Length attribute is not set. Default: 20000 | |
_enable_partitioned_reads | Select Yes to run the statement on each processing node. The [[node-number]], [[node-number-base-one]] and [[node-count]] placeholders in the statement are replaced on each processing node with the actual zero-based node index, one-based node index and total number of nodes, respectively. Default: false | |
_enable_quoted_ids | Select Yes to enclose the specified table name and column names on the links in quoting strings when SQL statements are generated. The connector queries the driver to determine the quoting string. If it fails to obtain this information from the driver, the connector uses the backtick (`) character as the quoting string. The default is No. Default: false | |
_end_sql | Enter the SQL statement to run one time in the transaction after all the records were processed in the transaction and before the transaction completes successfully | |
_session._fetch_size | Specify the number of rows that the driver must try to fetch from the data source when the connector requests a single row. Fetching rows in addition to the row requested by the connector can improve performance because the driver can complete the subsequent requests for more rows from the connector locally without a need to access the data source. The default value is 0, which indicates that the driver optimizes the fetch operation based on its internal logic. Default: 0 | |
_generate_sql | Select Yes to automatically generate the SQL statements at run time. Default: true | |
_session._generate_all_columns_as_unicode | Always generate columns as NChar, NVarChar and LongNVarChar columns instead of Char, VarChar and LongVarChar columns. Default: false | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_isolation_level | Specify how the connector manages statements in transactions. As soon as the connector establishes a connection and issues the first transactional statement, the connector implicitly starts a transaction that uses the specified isolation level. Values: [default, read_committed]. Default: default | |
_session._keep_conductor_connection_alive | Select Yes to keep the connection alive in the conductor process while the player processes are processing records. Select No to close the connection in the conductor process before player processes start processing records, and to connect again if necessary after the player processes complete processing the records. Default: true | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
_end_of_wave | Select Yes to generate an end-of-wave record after each wave of records, where the number of records in each wave is specified in the Record count property. When the Record count property is set to 0, the end-of-wave records are not generated. Values: [_no, _yes]. Default: _no | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
_record_count | Specify the number of rows that the stage reads from or writes to the data source in a single transaction. When this property is set to 0, the transaction is committed only once on each processing node of the stage after the stage processes all the rows on that node. When rows arrive on the input link of the stage in waves, the Record count value applies to each wave separately. Default: 2000 | |
_session._report_schema_mismatch | Select Yes to perform an early comparison of the column definitions on the link with the column definitions in the data source and to issue warning messages for any detected discrepancies that can result in data corruption. Depending on the environment and the usage scenario, the early detection of discrepancies may not be possible, in which case the error messages are reported only when the actual data corruption is detected. Default: false | |
_limit_rows._limit | Enter the maximum number of rows that will be returned by the connector | |
row_limit | The maximum number of rows to return | |
_before_after | Select Yes to run SQL statements before and after data is accessed in the database. Default: false | |
_begin_end_sql | Select Yes to run SQL statements each time a transaction begins and each time before a transaction ends. Default: false | |
_run_end_sql_if_no_records_processed | Select Yes to run the End SQL statement irrespective of the number of records processed in the transaction. Select No to run the End SQL statement only if one or more records were processed in the transaction. Default: false | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_seed | Seed to be used for getting a repeatable sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none, row]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
_select_statement * | Enter a SELECT statement or the fully qualified name of the file that contains the SELECT statement. The statement is used to read rows from the database. | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
_before_after._after_sql_node._fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
_before_after._after_sql._fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
_before_after._before_sql_node._fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
_before_after._before_sql._fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
_table_name * | Enter the fully qualified name of the table that you want to access in the data source. | |
table_name * | The name of the table to read from |
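As a minimal sketch of how a few of these read properties might be combined, the snippet below builds a properties object that selects a table and limits what is returned. Only the property names come from the table above; the surrounding structure and the schema, table, and limit values are illustrative placeholders, not values defined by this document:
{ "properties":{ "schema_name":"MY_SCHEMA", "table_name":"MY_TABLE", "read_mode":"general", "row_limit":1000, "sampling_type":"row", "sampling_percentage":10 } }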
Name | Type | Description |
---|---|---|
_load_from_file._s3._access_key * | Specify the Amazon Web Services access key | |
_before_after._after_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node after all of the data is processed on that node. | |
_before_after._after_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once after all of the data is processed. | |
_auto_commit_mode | Configure the stage to run in auto-commit mode. In auto-commit mode, the transaction is committed automatically after each statement is executed. When the connector writes records to the data source, the transaction is committed after each row is written to the data source. When the stage is configured to run multiple statements on each row, the transaction is committed after each statement is executed on the row. Values: [disable, enable]. Default: enable | |
_load_from_file._azure._storage_area_name * | Specify the name of the Azure Storage account | |
_session._batch_size | Enter the number of records to include in the batch of records for each statement execution. The value 0 indicates that all input records are passed to the statements in a single batch. Default: 2000 | |
_before_after._before_sql_node | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
_before_after._before_sql | Enter the SQL statement or the fully qualified name of the file that contains the SQL statement to run once before any data is processed. | |
_begin_sql | Enter the SQL statement to run one time before any records are processed in the transaction | |
_load_from_file._file_format._binary_as_text | Select to enable binary as text. Default: false | |
_load_from_file._s3._bucket_name * | Specify the S3 bucket name | |
_session._character_set_for_non_unicode_columns | Select the character set option for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. If you select the Default option, the character set encoding of the engine host system locale is used. If you select the Custom option, you must provide the character set name to be used. Values: [_custom, _default]. Default: _default | |
_session._character_set_for_non_unicode_columns._character_set_name * | Specify the name of the character set encoding for the values of Char, VarChar and LongVarChar link columns for which the Extended attribute is not set to Unicode. | |
_load_from_file._file_format._compression | Specify the compression used for the files. Values: [auto, brotli, bz2, deflate, gzip, none, raw_deflate, zstd]. Default: none | |
_load_from_file._create_staging_area | Specify whether to create a staging area or use an existing one. Default: true | |
create_statement | The Create DDL statement for recreating the target table | |
_table_action._generate_create_statement._create_statement * | Enter the CREATE TABLE statement to run to create the target database table | |
_custom_statements | Custom statements to be run for each input row | |
_load_from_file._file_format._date_format | Specify the Date Format | |
_session._default_length_for_columns | Enter the default length for the Char, NChar, Binary, VarChar, NVarChar, and VarBinary link columns for which the Length attribute is not set. Default: 200 | |
_session._default_length_for_long_columns | Enter the default length for the LongVarChar, LongNVarChar and LongVarBinary link columns for which the Length attribute is not set. Default: 20000 | |
_load_from_file._delete_staging_area | Select No if the staging area is to be retained; by default it is deleted. Default: false | |
_delete_statement * | Enter a DELETE statement or the fully qualified name of the file that contains a DELETE statement. The statement is used to delete rows from the database. | |
_table_action._generate_drop_statement._drop_statement * | Enter the DROP TABLE statement to run to drop the target database table. | |
_session._drop_unmatched_fields | Select Yes to drop any fields from the input link for which there are no matching parameters in the statements configured for the stage. Select No to issue an error message when an unmatched field is present on the link. Default: false | |
_enable_quoted_ids | Select Yes to enclose the specified table name and column names on the links in quoting strings when SQL statements are generated. The connector queries the driver to determine the quoting string. If it fails to obtain this information from the driver, the connector uses the backtick (`) character as the quoting string. Default: false | |
_load_from_file._file_format._encoding | Specify the Encoding | |
_load_from_file._staging_area_format._encoding | Specify the Encoding. Default: UTF-8 | |
_load_from_file._azure._encryption | Specify the encryption method (either NONE or AZURE_CSE). If no value is provided, NONE is used. Values: [azure_cse, none]. Default: none | |
_load_from_file._s3._encryption | Specify Encryption. Values: [aws_sse_s3, none]. Default: none | |
_end_sql | Enter the SQL statement to run one time in the transaction after all the records were processed in the transaction and before the transaction completes successfully | |
_load_from_file._staging_area_format._escape_character | Specify the character to use to escape field and row delimiters. If an escape character exists in the data, the escape character is also escaped. Because escape characters require additional processing, do not specify a value for this property if you do not need to include escape characters in the data | |
_load_from_file._file_format._field_delimiter | Specify the Field Delimiter | |
_load_from_file._staging_area_format._field_delimiter | Specify the Field Delimiter. Default: , | |
_load_from_file._gcs._file_name * | Specify the URL path of the file (bucket/folder/filename) | |
_load_from_file._azure._file_format_name * | Specify the name of the predefined file format to be used for this load | |
_load_from_file._gcs._file_format * | Specify the name of the predefined file format to be used for this load. The specified file format should exist in the database | |
_load_from_file._file_format | Specify file format options for when using external staging location. Values: [avro, csv, orc, parquet]. Default: csv | |
_load_from_file._azure._file_name * | Specify the fully qualified filename in the container/folder/filename format | |
_generate_sql | Select Yes to automatically generate the SQL statements at run time. Default: true | |
_table_action._generate_create_statement | Select Yes to automatically generate the CREATE TABLE statement at run time. Depending on the input link column data types, the driver, and the data source in question, the connector may not be able to determine the corresponding native data types and produce a valid statement. Default: true | |
_table_action._generate_drop_statement | Select Yes to automatically generate the DROP TABLE statement at run time. Default: true | |
_table_action._generate_truncate_statement | Select Yes to automatically generate the TRUNCATE TABLE statement at run time. Default: true | |
_java._heap_size | Specify the maximum Java Virtual Machine heap size in megabytes. Default: 256 | |
_insert_statement * | Enter an INSERT statement or the fully qualified name of the file that contains an INSERT statement. The statement is used to insert rows into the database. | |
_isolation_level | Specify how the connector manages statements in transactions. As soon as the connector establishes a connection and issues the first transactional statement, the connector implicitly starts a transaction that uses the specified isolation level. Values: [default, read_committed]. Default: default | |
_session._keep_conductor_connection_alive | Select Yes to keep the connection alive in the conductor process while the player processes are processing records. Select No to close the connection in the conductor process before player processes start processing records, and to connect again if necessary after the player processes complete processing the records. Default: true | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
_load_from_file._azure._master_key | Specify the Master Key, which is required when Encryption = AZURE_CSE | |
_load_from_file._copy_options._on_error | Specify to continue or abort the load if an error occurs. Values: [abort_statement, continue, skip_file]. Default: abort_statement | |
_load_from_file._copy_options._other_copy_options | Specify other copy options to be used while loading from S3 | |
_load_from_file._staging_area_format._other_file_format_options | Specify format options, apart from the ones listed above, that are used either when creating the staging area or when executing the COPY command. If Create staging area = Yes, these format options are used when creating the staging area; if Create staging area = No, they are used directly in the COPY command. Specify the options in option=value format, for example EMPTY_FIELD_AS_NULL=TRUE; if more than one option is specified, separate the options with a space. For the list of available file format options, refer to the Snowflake documentation | |
_load_from_file._file_format._other_format_options | Specify other format options apart from the ones listed above in option=value format | |
_table_action._table_action_first | Select Yes to perform the table action first. Select No to run the Before SQL statements first. Default: true | |
_load_from_file._purge_copied_files | Specify whether to purge the files that are copied into the table from the staging area. For an external staging area, PURGE=TRUE deletes the file from the bucket after it is loaded. Default: true | |
_load_from_file._staging_area_format._quotes | Fields optionally enclosed by - Character used to enclose strings. Value can be NONE, single quote character ('), or double quote character ("). Values: [double, none, single]. Default: none | |
_load_from_file._staging_area_format._record_delimiter | Specify a string or one of the following values: <NL>, <CR>, <LF>, <TAB>. The string can include Unicode escape strings in the form \uNNNN where NNNN is the Unicode character code. Default: | |
_record_count | Specify the number of rows that the stage reads from or writes to the data source in a single transaction. When this property is set to 0, the transaction is committed only once on each processing node of the stage after the stage processes all the rows on that node. When rows arrive on the input link of the stage in waves, the Record count value applies to each wave separately. Default: 2000 | |
_load_from_file._file_format._record_delimiter | Specify the Record Delimiter | |
_session._report_schema_mismatch | Select Yes to perform an early comparison of the column definitions on the link with the column definitions in the data source and to issue warning messages for any detected discrepancies that can result in data corruption. Depending on the environment and the usage scenario, the early detection of discrepancies may not be possible, in which case the error messages are reported only when the actual data corruption is detected. Default: false | |
_run_end_sql_if_no_records_processed | Select Yes to run the End SQL statement irrespective of the number of records processed in the transaction. Select No to run the End SQL statement only if one or more records were processed in the transaction. Default: false | |
_before_after | Select Yes to run SQL statements before and after data is accessed in the database. Default: false | |
_begin_end_sql | Select Yes to run SQL statements each time a transaction begins and each time before a transaction ends. Default: false | |
_load_from_file._s3._file_name * | Specify the location of the S3 file name from which data needs to be moved to table | |
_load_from_file._azure._sastoken * | Specify the SAS token required for connecting to Azure Storage | |
schema_name | The name of the schema that contains the table to write to | |
_load_from_file._s3._secret_key * | Specify the Amazon Web Services secret key | |
_load_from_file._file_format._skip_byte_order_mark | Select to skip the Byte Order Mark (BOM). Default: false | |
_load_from_file._file_format._snappy_compression | Select to enable Snappy Compression. Default: false | |
_load_from_file._gcs._storage_integration * | Specify the name of the Google cloud storage stage integration. This integration must already exist in the Snowflake database | |
_load_from_file._staging_area_name * | Specify the external staging area name | |
_load_from_file._staging_area_type | The type of staging area: either Snowflake-managed or externally managed. Values: [external_azure, external_gcs, external_s3, internal_location]. Default: internal_location | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
_before_after._after_sql_node._fail_on_error | Select Yes to stop the job if the After SQL (node) statement fails. Default: true | |
_before_after._after_sql._fail_on_error | Select Yes to stop the job if the After SQL statement fails. Default: true | |
_before_after._before_sql_node._fail_on_error | Select Yes to stop the job if the Before SQL (node) statement fails. Default: true | |
_before_after._before_sql._fail_on_error | Select Yes to stop the job if the Before SQL statement fails. Default: true | |
_table_action._generate_create_statement._fail_on_error | Select Yes to stop the job if the CREATE TABLE statement fails. Default: true | |
_table_action._generate_drop_statement._fail_on_error | Select Yes to stop the job if the DROP TABLE statement fails. Default: false | |
_table_action._generate_truncate_statement._fail_on_error | Select Yes to stop the job if the TRUNCATE TABLE statement fails. Default: true | |
_table_action * | Select the action to complete before writing data to the table. Values: [_append, _create, _replace, _truncate]. Default: _append | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
_table_name * | Enter the fully qualified name of the table that you want to access in the data source. | |
table_name | The name of the table to write to | |
_load_from_file._file_format._time_format | Specify the Time Format | |
_load_from_file._file_format._timestamp_format | Specify the Timestamp Format | |
_table_action._generate_truncate_statement._truncate_statement * | Enter the TRUNCATE TABLE statement to run to truncate the target database table. | |
_update_statement * | Enter an UPDATE statement or the fully qualified name of the file that contains an UPDATE statement. The statement is used to update rows in the database. | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
_load_from_file._azure._use_existing_file_format * | Specify whether to use an existing file format or use the File format options when creating the stage. Default: true | |
_load_from_file._gcs._use_existing_file_format * | Specify whether to use an existing file format or use the File format options when creating the stage. Default: true | |
_use_merge_statement | Specify Yes to use Snowflake merge statement functionality for the Update, Delete, Insert then Update, and Delete then Insert write modes. If No is selected, the Snowflake driver functionality for these write modes is invoked. Default: true | |
_write_mode | Select the mode that you want to use to write to the data source. Values: [custom, delete, delete_insert, insert, insert_overwrite, insert_update, load_from_file, update]. Default: insert | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
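For the write direction, a minimal sketch of a merge into an existing table might look like the following. The property names and the listed values (merge, append) come from the table above, while the schema, table, and key column names are placeholders and the exact nesting expected by the service is an assumption:
{ "properties":{ "schema_name":"MY_SCHEMA", "table_name":"MY_TABLE", "write_mode":"merge", "key_column_names":"ORDER_ID,ORDER_LINE", "table_action":"append" } }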
Name | Type | Description |
---|---|---|
access_token_name * | The name of the personal access token to use | |
access_token_secret * | The secret of the personal access token to use | |
auth_method | ||
host * | The hostname or IP address of the Tableau server | |
password * | The password associated with the username for accessing the data source | |
port | The port of the Tableau server | |
ssl | The port is configured to accept SSL connections. Default: true | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
site | The name of the Tableau site to use | |
username * | The username for accessing the data source |
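A hypothetical set of Tableau connection properties that uses a personal access token might look like the snippet below. The host, site, and token values are placeholders, and whether access_token_name/access_token_secret or username/password is required depends on the selected auth_method:
{ "properties":{ "host":"tableau.example.com", "port":443, "ssl":true, "site":"MySite", "access_token_name":"my-token", "access_token_secret":"********" } }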
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
file_name * | The name of the file to read | |
infer_as_varchar | Treat the data in all columns as VARCHARs. Default: false | |
infer_record_count | The number of records to process to obtain the structure of the data. Default: 1000 | |
infer_schema | Obtain the schema from the file. Default: false | |
invalid_data_handling | How to handle values that are not valid: fail the job, null the column, or drop the row. Values: [column, fail, row]. Default: fail | |
read_mode | The method for reading files. Values: [read_single, read_raw, read_raw_multiple_wildcard, read_multiple_regex, read_multiple_wildcard]. Default: read_single | |
row_limit | The maximum number of rows to return | |
row_start | Indicates the offset from the row where reading starts to treat as the start of the data | |
type_mapping | Overrides the data types of specified columns in the file's inferred schema, for example, inferredType1:newType1;inferredType2:newType2 |
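As a sketch, reading a single delimited file and inferring its schema could use properties such as the following. The file name and the type names in type_mapping are placeholders that only illustrate the inferredType:newType;... format described above:
{ "properties":{ "file_name":"my_data.csv", "read_mode":"read_single", "infer_schema":true, "infer_record_count":500, "infer_as_varchar":false, "type_mapping":"INTEGER:DOUBLE;DATE:TIMESTAMP" } }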
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
authentication_method | The way of authentication. Values: [ldap, td2]. Default: td2 | |
client_charset | The name of the client character set to be used. This property should be used only in specific cases. Although the CLIENT_CHARSET connection parameter can be used to override the Teradata JDBC Driver's normal mapping of Teradata session character sets to Java character sets, the CLIENT_CHARSET connection parameter is not intended for use in new Teradata deployments. Data corruption will occur if the wrong Java character set is specified with the CLIENT_CHARSET connection parameter. It is a legacy support feature and will be deprecated soon. If you do not provide a client character set, UTF16 is used by default. | |
database | The name of the database | |
host * | The hostname or IP address of the database | |
password * | The password associated with the username for accessing the data source | |
port | The port of the database. Default: 1025 | |
ssl | The port is configured to accept SSL connections. Default: false | |
ssl_certificate | The SSL certificate of the host to be trusted which is only needed when the host certificate was not signed by a known certificate authority | |
username * | The username for accessing the data source | |
ssl_certificate_validation | Validate that the SSL certificate returned by the host is trusted |
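A minimal sketch of Teradata connection properties, with placeholder host and credentials and LDAP authentication selected, might look like this (the port value simply repeats the documented default):
{ "properties":{ "host":"teradata.example.com", "port":1025, "database":"MY_DATABASE", "authentication_method":"ldap", "ssl":false, "username":"my_user", "password":"********" } }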
Name | Type | Description |
---|---|---|
byte_limit | The maximum number of bytes to return. Use any of the following suffixes to change the unit: KB, MB, GB, or TB. A value of 0 returns all data. | |
call_statement | The SQL statement to execute the stored procedure | |
decimal_rounding_mode | Different rounding modes for values in columns of decimal and numeric data types. Values: [ceiling, down, floor, halfdown, halfeven, halfup, up]. Default: floor | |
read_mode | The method for reading records from the table. Values: [general, select]. Default: general | |
row_limit | The maximum number of rows to return | |
sampling_percentage | Percentage for each row or block to be included in the sample | |
sampling_type | Indicates which data sampling type should be used in the select statement. Values: [block, none]. Default: none | |
schema_name | The name of the schema that contains the table to read from | |
select_statement * | The SQL SELECT statement for retrieving data from the table | |
table_name * | The name of the table to read from |
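Besides table and SELECT reads, the table above lists call_statement for executing a stored procedure. A sketch with a placeholder procedure name is shown below; whether call_statement can be combined with the other read properties is not specified here:
{ "properties":{ "call_statement":"CALL MY_SCHEMA.MY_PROCEDURE()" } }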
Name | Type | Description |
---|---|---|
create_statement | The Create DDL statement for recreating the target table | |
key_column_names | A comma separated list of column names to override the primary key used during an update or merge | |
schema_name | The name of the schema that contains the table to write to | |
static_statement * | The SQL used for setup operations, for example a CREATE statement | |
table_action | The action to take on the target table to handle the new data set. Values: [append, replace, truncate]. Default: append | |
table_name | The name of the table to write to | |
update_statement | The SQL INSERT, UPDATE, MERGE, or DELETE statement for updating data in the table. | |
write_mode | The mode for writing records to the target table. Values: [insert, merge, static_statement, update, update_statement, update_statement_table_action]. Default: insert |
Name | Type | Description |
---|---|---|
account | User account ID for resource accounting | |
auto_map_charset_encoding | The default is Yes. Set it to No to specify the required ICU charset encoding. Default: true | |
queryband.read_from_file.character_set | IANA character set name | |
client_character_set | Teradata client character set. Default: UTF8 | |
database | Default database | |
log_on_mech | Select the security mechanism to use to authenticate the user. Select Default to use the default logon mechanism of the Teradata server. Select TD2 to use the Teradata security mechanism. Select LDAP to use an LDAP security mechanism for external authentication. Values: [default, ldap, td2]. Default: default | |
max_bytes_per_character * | Maximum bytes per character | |
nls_map_name * | Specify the ICU charset encoding name to be used | |
password * | Specify the password to use to connect to the database. | |
queryband | Semicolon-separated list of name-value pairs used in the generated query band statement for the session | |
queryband.read_from_file | Select Yes to read the query band expression from the file that is specified in the Query band expression property. Default: false | |
server | Specify the Teradata Director Program ID. | |
transaction_mode | Semantics for SQL transactions. Values: [ansi, teradata]. Default: ansi | |
unicode_pass_thru | Enable or disable Unicode pass through. Default: false | |
username * | Specify the user name to use to connect to the database. |
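As a sketch, a connection definition that sets a query band and ANSI transaction semantics might combine the properties above like this. The server, database, credentials, and query band pairs are placeholders; the queryband value simply follows the semicolon-separated name-value format described in the table:
{ "properties":{ "server":"my_tdpid", "database":"MY_DATABASE", "username":"my_user", "password":"********", "client_character_set":"UTF8", "transaction_mode":"ansi", "queryband":"Job=nightly_load;Org=Finance;" } }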
Name | Type | Description |
---|---|---|
access_method | Specify whether to use immediate or bulk access. Values: [bulk, immediate]. Default: immediate | |
before_after.after | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once after all data is processed. | |
before_after.after_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node after all data is processed on that node. | |
before_after.after_sql_file | File on the conductor node that contains After SQL statements | |
session.array_size | The array size to be used for all read and write database operations. Default: 2000 | |
before_after.before | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once before any data is processed. | |
before_after.before_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
before_after.before_sql_file | File on the conductor node that contains Before SQL statements | |
before_after | Before/After SQL properties. Default: false | |
before_after.after_sql_file.character_set | IANA character set name | |
before_after.before_sql_file.character_set | IANA character set name | |
session.pass_lob_locator.column * | Use to choose columns containing LOBs to be passed by locator (reference) | |
source_temporal_support.transaction_time_qualifier.date_timestamp_expression * | Specifies a date or timestamp expression for the AS OF qualifier | |
source_temporal_support.valid_time_qualifier.date_timestamp_expression * | Specifies a date or timestamp expression for the AS OF qualifier | |
describe_strings_in_bytes | The default is False. Setting this to True describes char data as string instead of ustring. Default: false | |
disconnect | Enter the condition under which the connection to the database shall be closed. Values: [never, period_of_inactivity]. Default: never | |
session.pass_lob_locator | Enables/disables the ability to specify LOB columns to be passed using locator (reference) information. LOB columns not specified will be passed inline. Default: false | |
enable_quoted_i_ds | Specifies whether or not to enclose database object names in quotes when generating DDL and DML. Default: true | |
transaction.end_of_wave.end_of_data | Specifies whether to insert an EOW marker for the last set of records when the number is less than the specified transaction record count value. Default: false | |
transaction.end_of_wave | Specify settings for the end of wave handling. None means EOW markers are never inserted, Before means EOW markers are inserted before committing the transaction, After means EOW markers are inserted after committing the transaction. Values: [after, before, none]. Default: none | |
parallel_synchronization.end_timeout | Maximum number of seconds to wait for the other parallel instances to complete. Default: 0 | |
before_after.after.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_sql_file.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before_sql_file.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
session.schema_reconciliation.fail_on_size_mismatch | Fail if sizes of numeric and string fields mismatch when validating the design schema against the database. Default: true | |
session.schema_reconciliation.fail_on_type_mismatch | Fail if the data types of the fields mismatch when validating the design schema against the database. Default: true | |
generate_sql | Specifies whether to generate SQL statement(s) at runtime. Default: false | |
disconnect.inactivity_period * | Enter the period of inactivity after which the connection should be closed. Default: 300 | |
reconnect.retry_interval * | Enter the interval in seconds to wait between attempts to establish a connection. Default: 10 | |
session.isolation_level | Degree of isolation of an application process from concurrent application processes. Values: [default, read_committed, read_uncommitted, repeatable_read, serializable]. Default: default | |
limit_rows.limit | Enter the maximum number of rows that will be returned by the connector. Default: 1000 | |
limit_rows | Select Yes to limit the number of rows that are returned by the connector. Default: false | |
lookup_type | Lookup Type. Values: [empty, pxbridge]. Default: empty | |
limit_settings.max_buffer_size | Maximum request or response buffer size. Default: 0 | |
limit_settings.max_partition_sessions | Maximum number of connection sessions per partition. Default: 0 | |
limit_settings.max_sessions | Maximum number of connection sessions. Default: 0 | |
limit_settings.min_sessions | Minimum number of connection sessions. Default: 0 | |
reconnect.retry_count * | Enter the number of attempts to establish a connection. Default: 3 | |
sql.other_clause | The other clause predicate of the SQL statement | |
parallel_synchronization | Parallel synchronization properties. Default: false | |
source_temporal_support.valid_time_qualifier.period_expression | Specifies a period expression for the NONSEQUENCED or SEQUENCED qualifier | |
limit_settings.progress_interval | Number of rows per partition before a progress message is displayed, or 0 for no messages. Default: 100000 | |
reconnect | Select Yes to retry establishing a connection to the database when the initial connection is unsuccessful or when the active connection is dropped. Default: false | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 2000 | |
sql.select_statement * | Statement to be executed when reading rows from the database | |
bulk_access.sleep | Number of minutes between logon retries. Default: 0 | |
parallel_synchronization.sync_id | Sync table key value | |
parallel_synchronization.sync_database | Sync table database | |
parallel_synchronization.sync_password | Sync user password | |
parallel_synchronization.sync_poll | Number of seconds between retries to update the sync table. Default: 0 | |
parallel_synchronization.sync_server | Sync table server name | |
parallel_synchronization.sync_table * | Sync table name | |
parallel_synchronization.sync_table_action | Select the table action to perform on the sync table. Values: [append, create, replace, truncate]. Default: create | |
parallel_synchronization.sync_table_cleanup | Select the cleanup action to perform on the sync table. Values: [drop, keep]. Default: keep | |
parallel_synchronization.sync_table_write_mode | The mode to be used when writing to the sync table. Values: [delete_then_insert, insert]. Default: insert | |
parallel_synchronization.sync_timeout | Maximum number of seconds to retry an update of the sync table. Default: 0 | |
parallel_synchronization.sync_user | Sync table user name | |
table_name * | The table name to be used in generated SQL | |
source_temporal_support.temporal_columns | Specifies the temporal columns in the table. Values: [bi-temporal, none, transaction_time, valid_time]. Default: none | |
source_temporal_support | Specifies whether the source table has temporal columns. Default: false | |
bulk_access.tenacity | Maximum number of hours to retry the logon operation. Default: 0 | |
source_temporal_support.temporal_columns.transaction_time_column * | Specifies the TRANSACTIONTIME column. If the Generate create statement at runtime property is set to Yes, the column will be designated as TRANSACTIONTIME in the generated CREATE TABLE statement | |
source_temporal_support.transaction_time_qualifier | Specifies the TRANSACTIONTIME qualifier. Values: [as_of, current, non-sequenced, none]. Default: none | |
session.schema_reconciliation.unused_field_action | Specify whether to drop unused fields or abort the job. Values: [abort, drop, keep, warn]. Default: abort | |
source_temporal_support.temporal_columns.valid_time_column * | Specifies the VALIDTIME column. If the Generate create statement at runtime property is set to Yes, the column will be designated as VALIDTIME in the generated CREATE TABLE statement | |
source_temporal_support.valid_time_qualifier | Specifies the VALIDTIME qualifier. Values: [as_of, current, non-sequenced, none, sequenced]. Default: none | |
sql.where_clause | The where clause predicate of the SQL statement |
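A sketch of a generated-SQL read that restricts rows with a WHERE clause is shown below. The dotted property names are written exactly as they appear in the table above; whether the service expects them flat (as shown) or nested is an assumption, and the table name, predicate, and record count are placeholders:
{ "properties":{ "generate_sql":true, "table_name":"MY_DATABASE.MY_TABLE", "sql.where_clause":"ORDER_DATE >= DATE '2024-01-01'", "transaction.record_count":2000 } }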
Name | Type | Description |
---|---|---|
access_method | Specify whether to use immediate or bulk access. Values: [bulk, immediate]. Default: immediate | |
before_after.after | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once after all data is processed. | |
before_after.after_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node after all data is processed on that node. | |
before_after.after_sql_file | File on the conductor node that contains After SQL statements | |
table_action.generate_create_statement.create_table_options.allow_duplicate_rows | Controls whether to specify a SET or MULTISET qualifier. Values: [default, no, yes]. Default: default | |
session.array_size | The array size to be used for all read and write database operations. Default: 2000 | |
before_after.before | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once before any data is processed. | |
before_after.before_node | Enter the SQL statement or the fully-qualified name of the file that contains the SQL statement to run once on each node before any data is processed on that node. | |
before_after.before_sql_file | File on the conductor node that contains Before SQL statements | |
before_after | Before/After SQL properties. Default: false | |
immediate_access.buffer_usage | Specify whether requests should share the same buffer or use separate buffers. Values: [separate, share]. Default: share | |
sql.user_defined.file.character_set | IANA character set name | |
before_after.after_sql_file.character_set | IANA character set name | |
before_after.before_sql_file.character_set | IANA character set name | |
parallel_synchronization.checkpoint_timeout | Maximum number of seconds to wait for the other instances to reach the checkpoint. Default: 0 | |
bulk_access.cleanup_mode | Specify whether to drop error tables and the work table if loading ends with an error that cannot be restarted. Values: [drop, keep]. Default: drop | |
logging.log_column_values.delimiter | Specifies the delimiter to use between columns. Values: [comma, newline, space, tab]. Default: space | |
session.pass_lob_locator.column * | Use to choose columns containing LOBs to be passed by locator (reference) | |
table_action.generate_create_statement.create_statement * | A statement to be executed when creating the target database table | |
table_action.generate_create_statement.create_table_options.data_block_size | Controls whether to specify a DATABLOCKSIZE clause. Default: 0 | |
bulk_access.update_load.delete_multiple_rows | Controls whether to use a delete task to delete multiple rows from a table. Default: false | |
sql.delete_statement * | Statement to be executed when deleting rows from the database | |
disconnect | Enter the condition under which the connection to the database shall be closed. Values: [never, period_of_inactivity]. Default: never | |
table_action.generate_drop_statement.drop_statement * | A statement to be executed when dropping the target database table | |
bulk_access.error_control.duplicate_insert_rows | Specify whether to reject or ignore duplicate rows in insert operations. Values: [default, ignore, reject]. Default: default | |
bulk_access.error_control.duplicate_update_rows | Specify whether to reject or ignore duplicate rows in update operations. Values: [default, ignore, reject]. Default: default | |
session.pass_lob_locator | Enables/disables the ability to specify LOB columns to be passed using locator (reference) information. LOB columns not specified will be passed inline. Default: false | |
enable_quoted_i_ds | Specifies whether or not to enclose database object names in quotes when generating DDL and DML. Default: false | |
limit_settings.end_row | Row number to end loading. Default: 0 | |
parallel_synchronization.end_timeout | Maximum number of seconds to wait for the other parallel instances to complete. Default: 0 | |
bulk_access.error_limit | Maximum number of rows rejected to error table 1. Default: 0 | |
bulk_access.error_table1 | Table of rows rejected for SQL errors | |
bulk_access.error_table2 | Table of rows rejected for uniqueness violations | |
bulk_access.fail_on_mloa_derrs | Set to Yes to abort the job if errors exist in the error (ET/UV) tables. Set to No to complete the job even if errors exist in the ET/UV tables, in which case the user should check the tables. Default: true | |
sql.user_defined.request_type.fail_on_error | Abort the statement sequence when a statement fails. Default: true | |
before_after.after.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.after_sql_file.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before_node.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
before_after.before_sql_file.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_create_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
table_action.generate_drop_statement.fail_on_error | Abort the job if there is an error executing a command. Default: false | |
table_action.generate_truncate_statement.fail_on_error | Abort the job if there is an error executing a command. Default: true | |
sql.user_defined.file * | File on the conductor node that contains SQL statements to be executed for each input row | |
table_action.generate_create_statement.create_table_options.table_free_space.free_space_percent | Percent of free space to remain during loading operations. Default: 0 | |
generate_sql | Specifies whether to generate SQL statement(s) at runtime. Default: false | |
tmsmevents.generate_uowid * | Select Yes to automatically generate the UOW ID by the TMSM at runtime. Select No to specify the UOW ID to use for the dual load. Default: false | |
table_action.generate_create_statement | Select Yes to automatically generate the create statement at runtime. Default: true | |
table_action.generate_drop_statement | Specifies whether to generate a drop table statement at runtime. Default: true | |
table_action.generate_truncate_statement | Specifies whether to generate a truncate table statement at runtime. Default: true | |
disconnect.inactivity_period * | Enter the period of inactivity after which the connection should be closed. Default: 300 | |
sql.insert_statement * | Statement to be executed when inserting rows into the database | |
reconnect.retry_interval * | Enter the interval in seconds to wait between attempts to establish a connection. Default: 10 | |
bulk_access.load_type | Type of bulk load. Values: [load, stream, update]. Default: load | |
logging.log_column_values | Specifies whether to log column values for the first row that fails to be written. Default: false | |
logging.log_column_values.log_keys_only | Specifies whether to log key columns or all columns for failing statements. Default: false | |
bulk_access.log_table | Restart log table | |
bulk_access.stream_load.macro_database | Database that contains macros used by the Stream load | |
table_action.generate_create_statement.create_table_options.make_duplicate_copies | Controls whether to specify a FALLBACK clause. Values: [default, no, yes]. Default: default | |
limit_settings.max_buffer_size | Maximum request or response buffer size. Default: 0 | |
limit_settings.max_partition_sessions | Maximum number of connection sessions per partition. Default: 0 | |
limit_settings.max_sessions | Maximum number of connection sessions. Default: 0 | |
limit_settings.min_sessions | Minimum number of connection sessions. Default: 0 | |
bulk_access.error_control.missing_delete_rows | Specify whether to reject or ignore missing rows in delete operations. Values: [default, ignore, reject]. Default: default | |
bulk_access.error_control.missing_update_rows | Specify whether to reject or ignore missing rows in update operations. Values: [default, ignore, reject]. Default: default | |
reconnect.retry_count * | Enter the number of attempts to establish a connection. Default: 3 | |
bulk_access.stream_load.pack_size | Number of statements per request. Default: 0 | |
parallel_synchronization | Parallel synchronization properties. Default: false | |
table_action.generate_create_statement.create_table_options.partition_by_expression | Specifies the expression for the PARTITION BY clause | |
target_temporal_support.temporal_qualifier.period_expression | Specifies a period expression for the SEQUENCED VALIDTIME qualifier. This expression can only be used with write modes that include UPDATE or DELETE. | |
table_action.generate_create_statement.create_table_options.primary_index_type | Primary index type for key columns. Values: [no_primary_index, non-unique, unique]. Default: non-unique | |
limit_settings.progress_interval | Number of rows per partition before a progress message is displayed, or 0 for no messages. Default: 100000 | |
reconnect | Select Yes to retry establishing a connection to the database when the initial connection is unsuccessful or when the active connection is dropped. Default: false | |
transaction.record_count | Number of records per transaction. The value 0 means all available records. Default: 2000 | |
sql.user_defined.request_type | Specify whether to separate the statements into individual requests or use a multi-statement request. Values: [individual, multi-statement]. Default: individual | |
bulk_access.stream_load.robust | Robust restart logic. Values: [no, yes]. Default: yes | |
bulk_access.stream_load.serialize | Serialize multiple statements. Values: [no, yes]. Default: yes | |
table_action.generate_create_statement.create_table_options.server_character_set | Server character set for Char and VarChar columns | |
bulk_access.sleep | Number of minutes between logon retries. Default: 0 | |
bulk_access.start_mode | Specify whether to drop error tables before the connector begins loading or to restart an aborted load. Values: [auto, clean, restart]. Default: clean | |
limit_settings.start_row | Row number to start loading. Default: 0 | |
sql.user_defined.statements * | SQL statements to be executed for each input row | |
parallel_synchronization.sync_id | Sync table key value | |
parallel_synchronization.sync_database | Sync table database | |
parallel_synchronization.sync_password | Sync user password | |
parallel_synchronization.sync_poll | Number of seconds between retries to update the sync table. Default: 0 | |
parallel_synchronization.sync_server | Sync table server name | |
parallel_synchronization.sync_table * | Sync table name | |
parallel_synchronization.sync_table_action | Select the table action to perform on the sync table. Values: [append, create, replace, truncate]. Default: create | |
parallel_synchronization.sync_table_cleanup | Select the cleanup action to perform on the sync table. Values: [drop, keep]. Default: keep | |
parallel_synchronization.sync_table_write_mode | The mode to be used when writing to the sync table. Values: [delete_then_insert, insert]. Default: insert | |
parallel_synchronization.sync_timeout | Maximum number of seconds to retry an update of the sync table. Default: 0 | |
parallel_synchronization.sync_user | Sync table user name | |
tmsmevents | Options for the TMSM events. Default: false | |
table_action * | Select the action to perform on the database table. Values: [append, create, replace, truncate]. Default: append | |
table_action.generate_create_statement.create_table_options.table_free_space | Controls whether to specify a FREESPACE clause. Values: [default, yes]. Default: default | |
table_name * | The table name to be used in generated SQL | |
target_temporal_support.temporal_columns | Specifies the temporal columns in the table. Values: [bi-temporal, none, transaction_time, valid_time]. Default: none | |
target_temporal_support.temporal_qualifier | Specifies the temporal qualifier for generated SQL. Values: [current_valid_time, non-sequenced_valid_time, non-temporal, none, sequenced_valid_time]. Default: none | |
target_temporal_support | Specifies whether the target table has temporal columns. Default: false | |
bulk_access.tenacity | Maximum number of hours to retry the logon operation. Default: 0 | |
target_temporal_support.temporal_columns.transaction_time_column * | Specifies the TRANSACTIONTIME column. If the Generate create statement at runtime property is set to Yes, the column will be designated as TRANSACTIONTIME in the generated CREATE TABLE statement | |
table_action.generate_truncate_statement.truncate_statement * | A statement to be executed when truncating the database table | |
tmsmevents.generate_uowid.uowid * | Unique unit of work id | |
tmsmevents.uowclass | The classification of the unit of work | |
tmsmevents.uowsourcesystem | The name of the system the data is sourced from. | |
session.schema_reconciliation.unused_field_action | Specify whether to drop unused fields or abort the job. Values: [abort, drop, keep, warn]. Default: abort | |
sql.update_statement * | Statement to be executed when updating rows in the database | |
sql.user_defined * | Source of the user-defined SQL statements. Values: [file, statements]. Default: statements | |
target_temporal_support.temporal_columns.valid_time_column * | Specifies the VALIDTIME column. If the Generate create statement at runtime property is set to Yes, the column will be designated as VALIDTIME in the generated CREATE TABLE statement | |
bulk_access.work_table | Work table | |
write_mode * | The mode to be used when writing to a database table. Values: [delete, delete_then_insert, insert, insert_then_update, update, update_then_insert, user-defined_sql]. Default: insert |
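Finally, a sketch of a bulk write using the properties above might look like the following. As before, the dotted names are copied from the table, the flat key layout is an assumption, and the table and error/log table names are placeholders:
{ "properties":{ "write_mode":"insert", "generate_sql":true, "table_name":"MY_DATABASE.MY_STAGE_TABLE", "table_action":"append", "access_method":"bulk", "bulk_access.load_type":"load", "bulk_access.error_table1":"MY_STAGE_ET1", "bulk_access.error_table2":"MY_STAGE_ET2", "bulk_access.log_table":"MY_STAGE_LOG" } }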