Apache Kafka Connector for CData Sync

Build 24.0.9175
  • Apache Kafka
    • Establishing a Connection
    • Advanced Features
      • SSL Configuration
      • Firewall and Proxy
    • Data Model
    • Connection String Options
      • Authentication
        • AuthScheme
        • User
        • Password
        • BootstrapServers
        • UseSSL
      • Connection
        • ConsumerGroupId
        • AutoCommit
      • Azure Authentication
        • AzureTenant
        • AzureResource
      • OAuth
        • OAuthClientId
        • OAuthClientSecret
        • DelegatedServiceAccounts
        • RequestingServiceAccount
      • JWT OAuth
        • OAuthJWTCert
        • OAuthJWTCertType
        • OAuthJWTCertPassword
        • OAuthJWTCertSubject
      • Kerberos
        • KerberosKeytabFile
        • KerberosSPN
        • KerberosServiceName
        • UseKerberosTicketCache
      • SSL
        • SSLServerCert
        • SSLServerCertType
        • SSLClientCert
        • SSLClientCertType
        • SSLClientCertPassword
        • SSLIdentificationAlgorithm
      • Schema Registry
        • RegistryUrl
        • RegistryService
        • RegistryAuthScheme
        • RegistryUser
        • RegistryPassword
        • RegistryClientCert
        • RegistryClientCertType
        • RegistryClientCertPassword
        • RegistryClientCertSubject
        • RegistryVersion
        • RegistryServerCert
        • SchemaMergeMode
      • Firewall
        • FirewallType
        • FirewallServer
        • FirewallPort
        • FirewallUser
        • FirewallPassword
      • Proxy
        • ProxyAutoDetect
        • ProxyServer
        • ProxyPort
        • ProxyAuthScheme
        • ProxyUser
        • ProxyPassword
        • ProxySSLType
        • ProxyExceptions
      • Logging
        • LogModules
      • Schema
        • Location
        • BrowsableSchemas
        • Tables
        • Views
      • Miscellaneous
        • AWSWorkloadIdentityConfig
        • CompressionType
        • ConsumerProperties
        • CreateTablePartitions
        • CreateTableReplicationFactor
        • EnableIdempotence
        • FlattenArrays
        • GenerateSchemaFiles
        • MaximumBatchSize
        • MaxRows
        • MessageKeyColumn
        • MessageKeyType
        • OffsetResetStrategy
        • Other
        • Pagesize
        • ProducerProperties
        • PseudoColumns
        • ReadDuration
        • RowScanDepth
        • SchemaRegistryOnly
        • SerializationFormat
        • Timeout
        • TypeDetectionScheme
        • UseConfluentAvroFormat
        • UserDefinedViews
        • ValidateRegistryTopics
        • WorkloadPoolId
        • WorkloadProjectId
        • WorkloadProviderId
    • Third Party Copyrights


Overview

The CData Sync App provides a straightforward way to continuously pipeline your Apache Kafka data to any database, data lake, or data warehouse, making it easily available for Analytics, Reporting, AI, and Machine Learning.

The Apache Kafka connector can be used from the CData Sync application to pull data from Apache Kafka and move it to any of the supported destinations.

Apache Kafka Version Support

The Sync App leverages the Apache Kafka client libraries to enable bidirectional access to Kafka topics.


Establishing a Connection

Adding a Connection to Apache Kafka

To add a connection to Apache Kafka:

  1. In the application console, navigate to the Connections page.
  2. At the Add Connections panel, select the icon for the connection you want to add.
  3. If the Apache Kafka icon is not available, click the Add More icon to download and install the Apache Kafka connector from the CData site.

For required properties, see the Settings tab.

For connection properties that are not typically required, see the Advanced tab.

Connecting to Apache Kafka

.NET-based editions rely on the Confluent.Kafka and librdkafka libraries to function. These assemblies are bundled with the installer and automatically installed alongside the Sync App. If you are using a different installation method, make sure to install Confluent.Kafka 2.6.1 from NuGet along with its dependencies.

To specify the address of your Apache Kafka server, use the BootstrapServers parameter.

By default, the Sync App communicates with the data source in PLAINTEXT, which means that all data is sent unencrypted. If you want communication to be encrypted:

  1. Configure the Sync App to use SSL encryption by setting UseSSL to true.
  2. Configure SSLServerCert and SSLServerCertType to load the server certificates.
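
For example, a minimal sketch of an encrypted connection (the broker address and certificate path are placeholders):

BootstrapServers=kafka1.example.com:9092;UseSSL=true;SSLServerCertType=PEMKEY_FILE;SSLServerCert=C:\certs\broker.pem;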

Note: Proxy settings like ProxyServer and firewall settings like FirewallServer do not affect the connection to the Apache Kafka broker because the Sync App connects to Apache Kafka internally using the official libraries, which do not support proxies. These options are only used when the Sync App connects to the schema registry. For details, see Extracting Metadata From Topics.

Authenticating to Apache Kafka

The Apache Kafka data source supports the following authentication methods:

  • Anonymous
  • Plain
  • SCRAM login module
  • SSL client certificate
  • Kerberos

Anonymous

Certain on-premises deployments of Apache Kafka allow you to connect without setting any authentication connection properties. Such connections are said to be anonymous.

To authenticate anonymously, set this property:

  • AuthScheme: None.

SASL Plain

Plain authentication employs a plain text login module for authentication.

Set these properties:

  • AuthScheme: Plain.
  • User: The authenticating user.
  • Password: The authenticating user's password.
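
For example, a minimal connection string sketch (the hostname and credentials are placeholders):

AuthScheme=Plain;User=kafkauser;Password=kafkapassword;BootstrapServers=kafka1.example.com:9092;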

SCRAM Login Module

To authenticate using a SCRAM login module, set these properties:

  • AuthScheme: Specify SCRAM to use the SCRAM login module with SHA-256 hashing, or SCRAM-SHA-512 to use the SCRAM login module with SHA-512 hashing.
  • User: The authenticating user.
  • Password: The authenticating user's password.
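
For example, to use SHA-512 hashing (the hostname and credentials are placeholders):

AuthScheme=SCRAM-SHA-512;User=kafkauser;Password=kafkapassword;BootstrapServers=kafka1.example.com:9092;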

SSL Client Certificate

To authenticate using an SSL client certificate, set these properties:

  • AuthScheme: SSLCertificate.
  • SSLClientCert: The SSL client certificate used to connect to the Apache Kafka broker.
  • SSLClientCertType: The format of the SSL client certificate used to connect to the Apache Kafka broker: PEMKEY_FILE (default), JKSFILE, or PEMKEY_BLOB.
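
For example, a sketch using a password-protected PEM key (paths and values are placeholders):

AuthScheme=SSLCertificate;SSLClientCertType=PEMKEY_FILE;SSLClientCert=C:\certs\client.pem;SSLClientCertPassword=certpassword;BootstrapServers=kafka1.example.com:9093;UseSSL=true;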

Kerberos

Authenticating via Kerberos requires you to specify the system Kerberos configuration file. Set these properties:

  • AuthScheme: KERBEROS.
  • KerberosServiceName: The principal name of the Kafka brokers. For example, if the principal is kafka/[email protected], the KerberosServiceName is kafka.
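
For example, a minimal sketch (the broker address is a placeholder; depending on your environment, you may also need KerberosKeytabFile or UseKerberosTicketCache):

AuthScheme=KERBEROS;KerberosServiceName=kafka;BootstrapServers=kafka1.example.com:9092;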

Connecting to Azure Event Hubs

The Sync App supports connecting to Azure Event Hubs using OAuth and shared-access signatures. Before you begin, check that your Event Hubs namespace supports connections using the Kafka protocol. The Sync App requires this feature and it may not be available for certain pricing tiers.

All connections to Azure must set these properties, in addition to the scheme-specific properties covered below.

  • BootstrapServers: mynamespace.servicebus.windows.net:9093.
  • UseSSL: True.

Azure AD

Azure AD is Microsoft’s multi-tenant, cloud-based directory and identity management service. It uses user-based authentication and requires that you set AuthScheme to AzureAD.

Authentication to Azure AD over a Web application always requires the creation of a custom OAuth application.

For details about creating a custom OAuth application, see Creating an Azure AD Application.

Azure Service Principal

Azure Service Principal is role-based, application-based authentication: authentication is done per application, rather than per user. All tasks taken on by the application are executed without a default user context, based on the assigned roles. The application's access to resources is controlled through the permissions of the assigned roles.

For information about how to set up Azure Service Principal authentication, see Creating an Azure AD App with Service Principal.

Managed Service Identity (MSI)

If you are running Apache Kafka on an Azure VM and want to automatically obtain Managed Service Identity (MSI) credentials to connect, set AuthScheme to AzureMSI.

User-Managed Identities

To obtain a token for a managed identity, use the OAuthClientId property to specify the managed identity's client_id. If your VM has multiple user-assigned managed identities, OAuthClientId is required to identify which identity to use.

Shared-Access Signature

The Sync App supports password-based authentication using shared-access signatures. After you create the shared secret, set these properties:

  • AuthScheme: Plain.
  • User: $ConnectionString.
  • Password: The Event Hubs connection string from the Shared Access Policies screen.
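
For example, a sketch of a shared-access signature connection (replace the namespace and the Password placeholder with your own values):

AuthScheme=Plain;User=$ConnectionString;Password=<Event Hubs connection string>;BootstrapServers=mynamespace.servicebus.windows.net:9093;UseSSL=true;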

Connecting to GCP Kafka

The Sync App supports connecting to Google Managed Service for Apache Kafka (GCP Kafka). GCP Kafka uses OAuth authentication and supports service accounts, GCP instance accounts, and Workload Identity Federation.

All connections to GCP Kafka must set these properties:

  • BootstrapServers: bootstrap.myclustername.myregion.managedkafka.mygcpproject.cloud.goog:9092. This value is listed on the Cluster Configuration page, under the Configurations tab.
  • UseSSL: True.

You are ready to connect after you set the appropriate scheme-specific properties, described below.

Authenticating to GCP Kafka

You can authenticate to GCP Kafka as a Google service account, a GCP instance account, or using Workload Identity Federation credentials.

Service Account

GCP Kafka supports authenticating as a Google service account. This service account must have the Managed Kafka Client role.

Provide the service account credentials to the Sync App with these properties:

  • AuthScheme: OAuthJWT.
  • OAuthJWTCertType: GOOGLEJSON.
  • OAuthJWTCert: The path of the JSON file containing the service account credentials.
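
For example, a sketch of a service account connection (the key path and cluster address are placeholders):

AuthScheme=OAuthJWT;OAuthJWTCertType=GOOGLEJSON;OAuthJWTCert=C:\keys\service-account.json;BootstrapServers=bootstrap.myclustername.myregion.managedkafka.mygcpproject.cloud.goog:9092;UseSSL=true;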

GCP Instance Account

GCP Kafka supports connections using GCP instance accounts. This requires your Compute Engine instance to have a service account with the Managed Kafka Client role. The instance must also enable the Cloud Platform scope within the Cloud API Access Scopes.

To connect using a GCP instance account, set this property:

  • AuthScheme: GCPInstanceAccount.

Workload Identity Federation Credentials

GCP Kafka supports connections using Workload Identity Federation credentials. However, it only supports these accounts via delegation with the RequestingServiceAccount property. As with normal service accounts, the delegated service account must have the Managed Kafka Client role.

To connect using Workload Identity Federation credentials, set these properties:

  • AuthScheme: AWSWorkloadIdentity.
  • AWSWorkloadIdentityConfig: Varies; the value depends on how you authenticate to AWS.
  • RequestingServiceAccount: The email of the service account to which the AWS principal can delegate.
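
For example, a sketch of a delegated connection (the AWS configuration and the service account email are placeholders):

AuthScheme=AWSWorkloadIdentity;AWSWorkloadIdentityConfig=<AWS credential configuration>;RequestingServiceAccount=sync-sa@mygcpproject.iam.gserviceaccount.com;BootstrapServers=bootstrap.myclustername.myregion.managedkafka.mygcpproject.cloud.goog:9092;UseSSL=true;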


Advanced Features

This section details a selection of advanced features of the Apache Kafka Sync App.

User Defined Views

The Sync App supports the use of user defined views, virtual tables whose contents are decided by a pre-configured user defined query. These views are useful when you cannot directly control the queries being issued to the drivers. For an overview of creating and configuring custom views, see User Defined Views.

SSL Configuration

Use SSL Configuration to adjust how the Sync App handles TLS/SSL certificate negotiation. You can choose from various certificate formats. For further information, see the SSLServerCert property under "Connection String Options".

Firewall and Proxy

Configure the Sync App to connect through firewalls and proxies, including Windows proxies and HTTP proxies. You can also set up tunnel connections.

Query Processing

The Sync App offloads as much of the SELECT statement processing as possible to Apache Kafka and then processes the rest of the query in memory (client-side).

For further information, see Query Processing.

Logging

For an overview of configuration settings that can be used to refine CData logging, see Logging. Only two connection properties are required for basic logging, but numerous features support more refined logging; for example, the LogModules connection property lets you specify subsets of information to be logged.


SSL Configuration

Customizing the SSL Configuration

To enable TLS, set UseSSL to True.

With this configuration, the Sync App attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.

To specify another certificate, see the SSLServerCert connection property.

Client SSL Certificates

The Apache Kafka Sync App also supports setting client certificates. Set the following to connect using a client certificate.

  • SSLClientCert: The name of the certificate store for the client certificate.
  • SSLClientCertType: The type of key store containing the TLS/SSL client certificate.
  • SSLClientCertPassword: The password for the TLS/SSL client certificate.
  • SSLClientCertSubject: The subject of the TLS/SSL client certificate.


Firewall and Proxy

Connecting Through a Firewall or Proxy

HTTP Proxies

To authenticate to an HTTP proxy, set the following:

  • ProxyServer: the hostname or IP address of the proxy server that you want to route HTTP traffic through.
  • ProxyPort: the TCP port that the proxy server is running on.
  • ProxyAuthScheme: the authentication method the Sync App uses when authenticating to the proxy server.
  • ProxyUser: the username of a user account registered with the proxy server.
  • ProxyPassword: the password associated with the ProxyUser.
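
For example, a sketch of proxy settings for the schema registry connection (hostname, port, and credentials are placeholders, and BASIC is one possible ProxyAuthScheme value):

ProxyServer=proxy.example.com;ProxyPort=3128;ProxyAuthScheme=BASIC;ProxyUser=proxyuser;ProxyPassword=proxypassword;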

Other Proxies

Set the following properties:

  • To use a proxy-based firewall, set FirewallType, FirewallServer, and FirewallPort.
  • To tunnel the connection, set FirewallType to TUNNEL.
  • To authenticate, specify FirewallUser and FirewallPassword.
  • To authenticate to a SOCKS proxy, additionally set FirewallType to SOCKS5.
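
For example, a sketch of a SOCKS5 tunnel (the server, port, and credentials are placeholders):

FirewallType=SOCKS5;FirewallServer=192.168.1.100;FirewallPort=1080;FirewallUser=fwuser;FirewallPassword=fwpassword;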


Data Model

Tables

The CData Sync App dynamically models Apache Kafka topics as tables. A complete list of discovered topics can be obtained from the sys_tables system table.

SELECTing from a topic returns existing messages on the topic, as well as live messages posted before the number of seconds specified by ReadDuration has elapsed.
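
For example, assuming a hypothetical topic named SampleTopic has been discovered, the following query returns its messages:

SELECT * FROM SampleTopic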

Stored Procedures

Stored Procedures are function-like interfaces to Apache Kafka. They can be used to create schema files, commit messages, and more.

Consumer Groups

Connections that the Sync App makes to Apache Kafka are always part of a consumer group. You can control the consumer group by setting a value for the ConsumerGroupId connection property. Using the same consumer group ID across multiple connections puts those connections into the same consumer group. The Sync App generates a random consumer group ID if one is not provided.

All members of a consumer group share an offset that determines what messages are read next within each topic and partition. The Sync App supports two ways of updating the offset:

  • If AutoCommit is enabled, the Sync App periodically commits the offset for any topics and partitions that have been read by SELECT queries. The exact interval is determined by the auto-commit properties in the native library. See ConsumerProperties for details on how to configure these properties.
  • The CommitOffset stored procedure stores the offset of the last item read by the current query. Note that this must be called while the query resultset is still open. The Sync App resets the offset when the resultset is closed.

If there is no existing offset, the Sync App uses the OffsetResetStrategy to determine what the offset should be. This may happen if the broker does not recognize the consumer group or if the consumer group never committed an offset.

Bulk Messages

The Sync App supports reading bulk messages from topics using the CSV, JSON, or XML SerializationFormat. When the Sync App reads CSV data like the following block, it splits the CSV and outputs each line as a separate row. The values of other columns like the partition, timestamp, and key are the same across each row.

"1","alpha"
"2","beta"
"3","gamma"

Bulk messages are not supported for key values. When MessageKeyType is set to a bulk format, the Sync App reads only the first row of the key and ignores the rest. For example, when the Sync App reads the above CSV data as a message key, the entries on the alpha row are repeated across every bulk row from the message value. The entries on the beta and gamma rows are lost.

Bulk Limitations

Apache Kafka does not natively support bulk messages, which can lead to rows being skipped in some circumstances. For example:

  1. A Sync App connection is created with ConsumerGroupId=x.
  2. The connection executes the query SELECT * FROM topic LIMIT 3.
  3. The connection commits its offset and closes.
  4. Another connection is created with the same ConsumerGroupId.
  5. The connection executes the query SELECT * FROM topic.

Consider what happens if this procedure is performed on the following topic. The first connection consumes all rows from the first message and one row from the second. However, the Sync App has no way to report to Apache Kafka that only part of the second message was read. This means that step 3 commits the offset 3 and the second connection starts on row 5, skipping row 4.

"row 1"
"row 2"
/* End of message 1 */

"row 3"
"row 4"
/* End of message 2 */

"row 5"
"row 6"
/* End of message 3 */


Connection String Options

The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.

For more information on establishing a connection, see Establishing a Connection.

Authentication


  • AuthScheme: The scheme used for authentication with the Apache Kafka broker.
  • User: The user who is authenticating to Apache Kafka.
  • Password: The password used to authenticate to Apache Kafka.
  • BootstrapServers: The address of the Apache Kafka BootstrapServers to which you are connecting.
  • UseSSL: This field sets whether SSL is enabled. Automatically enabled if AuthScheme is set to SSL.

Connection


  • ConsumerGroupId: Specifies which group the consumers created by the driver should belong to.
  • AutoCommit: Specifies if the Apache Kafka consumer should automatically commit read offsets.

Azure Authentication


  • AzureTenant: Identifies the Apache Kafka tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional.)
  • AzureResource: The Azure Active Directory resource to authenticate to (used during the Azure OAuth exchange).

OAuth


  • OAuthClientId: Specifies the client ID that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.
  • OAuthClientSecret: Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.
  • DelegatedServiceAccounts: A space-delimited list of service account emails for delegated requests.
  • RequestingServiceAccount: A service account email to make a delegated request.

JWT OAuth


  • OAuthJWTCert: The JWT Certificate store.
  • OAuthJWTCertType: The type of key store containing the JWT Certificate.
  • OAuthJWTCertPassword: The password for the OAuth JWT certificate, used to access a certificate store that requires a password. If the certificate store does not require a password, leave this property blank.
  • OAuthJWTCertSubject: The subject of the OAuth JWT certificate, used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate.

Kerberos


  • KerberosKeytabFile: The Keytab file containing your pairs of Kerberos principals and encrypted keys.
  • KerberosSPN: The service principal name (SPN) for the Kerberos Domain Controller.
  • KerberosServiceName: The name of the Kerberos service you want to authenticate with.
  • UseKerberosTicketCache: Set this to use a ticket cache with the logged in user instead of a keytab file.

SSL


  • SSLServerCert: The SSL server certificate used to validate the Apache Kafka broker.
  • SSLServerCertType: The format of the SSL server certificate used to verify the Apache Kafka broker.
  • SSLClientCert: The SSL client certificate used to connect to the Apache Kafka broker.
  • SSLClientCertType: The format of the SSL client certificate used to connect to the Apache Kafka broker.
  • SSLClientCertPassword: The password used to decrypt the certificate in SSLClientCert.
  • SSLIdentificationAlgorithm: The endpoint identification algorithm used by the Apache Kafka data provider client app to validate the server host name.

Schema Registry


  • RegistryUrl: The server for the schema registry. When this property is specified, the driver supports reading Avro and JSON schemas from the server.
  • RegistryService: The Schema Registry service used for working with topic schemas.
  • RegistryAuthScheme: The scheme used to authenticate to the schema registry.
  • RegistryUser: Username to authorize with the server specified in RegistryUrl.
  • RegistryPassword: Password to authorize with the server specified in RegistryUrl.
  • RegistryClientCert: The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL) with the schema registry.
  • RegistryClientCertType: The type of key store used by the TLS/SSL client certificate given in RegistryClientCert.
  • RegistryClientCertPassword: The password for the TLS/SSL client certificate given in RegistryClientCert.
  • RegistryClientCertSubject: The subject of the TLS/SSL client certificate given in RegistryClientCert.
  • RegistryVersion: Version of the schema read from RegistryUrl for the specified topic.
  • RegistryServerCert: The certificate to be accepted from the schema registry when connecting using TLS/SSL.
  • SchemaMergeMode: How the provider exposes schemas with multiple versions.

Firewall


  • FirewallType: Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall.
  • FirewallServer: Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources.
  • FirewallPort: Specifies the TCP port to be used for a proxy-based firewall.
  • FirewallUser: Identifies the user ID of the account authenticating to a proxy-based firewall.
  • FirewallPassword: Specifies the password of the user account authenticating to a proxy-based firewall.

Proxy


  • ProxyAutoDetect: Specifies whether the provider checks your system proxy settings for existing proxy server configurations, rather than using a manually specified proxy server.
  • ProxyServer: The hostname or IP address of the proxy server that you want to route HTTP traffic through.
  • ProxyPort: The TCP port on your specified proxy server (set in the ProxyServer connection property) that has been reserved for routing HTTP traffic to and from the client.
  • ProxyAuthScheme: Specifies the authentication method the provider uses when authenticating to the proxy server specified in the ProxyServer connection property.
  • ProxyUser: The username of a user account registered with the proxy server specified in the ProxyServer connection property.
  • ProxyPassword: The password associated with the user specified in the ProxyUser connection property.
  • ProxySSLType: The SSL type to use when connecting to the proxy server specified in the ProxyServer connection property.
  • ProxyExceptions: A semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the proxy server set in the ProxyServer connection property.

Logging


  • LogModules: Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged.

Schema


  • Location: Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path.
  • BrowsableSchemas: Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
  • Tables: Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC.
  • Views: Optional setting that restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC.

Miscellaneous


  • AWSWorkloadIdentityConfig: Configuration properties to provide when using Workload Identity Federation via AWS.
  • CompressionType: Data compression type. Batches of data will be compressed together.
  • ConsumerProperties: Additional options used to configure Kafka consumers.
  • CreateTablePartitions: The number of partitions assigned to a topic created with CREATE TABLE.
  • CreateTableReplicationFactor: The number of replicas assigned to a topic created with CREATE TABLE.
  • EnableIdempotence: If set to true, Apache Kafka ensures that messages are delivered in the correct order and without duplicates.
  • FlattenArrays: By default, nested arrays are not shown if TypeDetectionScheme is set to SchemaRegistry. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.
  • GenerateSchemaFiles: Indicates the user preference as to when schemas should be generated and saved.
  • MaximumBatchSize: Specifies the maximum batch size to gather before sending a request.
  • MaxRows: Specifies the maximum rows returned for queries without aggregation or GROUP BY.
  • MessageKeyColumn: The name of the column that message key data is stored in.
  • MessageKeyType: The type of data stored in message keys.
  • OffsetResetStrategy: Specifies an offset for the consumer group.
  • Other: Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties.
  • Pagesize: The maximum number of rows to fetch from Kafka at one time.
  • ProducerProperties: Additional options used to configure Kafka producers.
  • PseudoColumns: Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
  • ReadDuration: The duration for which additional messages are allowed.
  • RowScanDepth: The maximum number of messages to scan for the columns available in the topic.
  • SchemaRegistryOnly: Whether to connect only to the schema registry.
  • SerializationFormat: Specifies how to serialize/deserialize message contents.
  • Timeout: Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
  • TypeDetectionScheme: Comma-separated list of options specifying how the provider scans the data to determine the fields and datatypes for the topic.
  • UseConfluentAvroFormat: Specifies how Avro data should be formatted during an INSERT.
  • UserDefinedViews: Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file.
  • ValidateRegistryTopics: Specifies whether or not to validate schema registry topics against the Apache Kafka broker. Only has an effect when TypeDetectionScheme=SchemaRegistry.
  • WorkloadPoolId: The ID of your Workload Identity Federation pool.
  • WorkloadProjectId: The ID of the Google Cloud project that hosts your Workload Identity Federation pool.
  • WorkloadProviderId: The ID of your Workload Identity Federation pool provider.

Authentication

This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.


  • AuthScheme: The scheme used for authentication with the Apache Kafka broker.
  • User: The user who is authenticating to Apache Kafka.
  • Password: The password used to authenticate to Apache Kafka.
  • BootstrapServers: The address of the Apache Kafka BootstrapServers to which you are connecting.
  • UseSSL: This field sets whether SSL is enabled. Automatically enabled if AuthScheme is set to SSL.

AuthScheme

The scheme used for authentication with the Apache Kafka broker.

Remarks

Supported schemes for Apache Kafka:

  • None: Connect to the data source without specifying user credentials. The user is authenticated anonymously.
  • Plain: Authenticate via credentials passed in a plain text login module.
  • SCRAM: Authenticate via a SCRAM login module with SHA-256 hashing.
  • SCRAM-SHA-512: Authenticate via a SCRAM login module with SHA-512 hashing.
  • Kerberos: Use Kerberos authentication. (Requires you to specify the system Kerberos configuration file.)
  • SSLCertificate: Authenticate via an SSL client certificate.
  • AzureAD: Perform Azure Active Directory OAuth authentication.
  • AzureMSI: (Azure VM only) Automatically obtain Managed Service Identity credentials.
  • AzureServicePrincipal: Authenticate as an Azure Service Principal using a client secret.
  • AzureServicePrincipalCert: Authenticate as an Azure Service Principal using a certificate.
  • OAuthJWT: Perform OAuth authentication using an OAuth service account.
  • GCPInstanceAccount: Obtain an access token from a Google Cloud Platform instance.
  • AWSWorkloadIdentity: Authenticate using Workload Identity Federation.

Schemes for authenticating to Azure Event Hubs:

  • AzureAD: Perform Azure Active Directory OAuth authentication.
  • AzureMSI: (Azure VM only) Automatically obtain Managed Service Identity credentials.
  • AzureServicePrincipal: Authenticate as an Azure Service Principal using a client secret.
  • AzureServicePrincipalCert: Authenticate as an Azure Service Principal using a certificate.

Schemes for authenticating to GCP Kafka:

  • OAuthJWT: Perform OAuth authentication using an OAuth service account.
  • GCPInstanceAccount: Obtain an access token from a Google Cloud Platform instance.
  • AWSWorkloadIdentity: Authenticate using Workload Identity Federation. Since GCP Kafka does not support using external principals to authenticate directly, you must delegate to a service account using the RequestingServiceAccount property.

User

The user who is authenticating to Apache Kafka.

Remarks

If not specified, the driver will attempt an unauthorized connection.

Password

The password used to authenticate to Apache Kafka.

Remarks

If not specified, the driver will attempt an unauthorized connection.


BootstrapServers

The address of the Apache Kafka BootstrapServers to which you are connecting.

Remarks

Specify both the server and port. The server may be either a hostname or IP address, for example: 10.1.2.3:9092. Multiple comma-separated addresses may be provided. As long as one of the bootstrap servers in the list responds, the connection will be successful.
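
For example, a hypothetical two-broker list (the hostnames are placeholders):

BootstrapServers=kafka1.example.com:9092,kafka2.example.com:9092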

If you are connecting to Confluent Cloud, you can find this value on the Cluster settings page.


UseSSL

This field sets whether SSL is enabled. Automatically enabled if AuthScheme is set to SSL.

Remarks

This field sets whether the Sync App will attempt to negotiate TLS/SSL connections to the server. By default, the Sync App checks the server's certificate against the system's trusted certificate store. To specify another certificate, set SSLServerCert.


Connection

This section provides a complete list of the Connection properties you can configure in the connection string for this provider.


  • ConsumerGroupId: Specifies which group the consumers created by the driver should belong to.
  • AutoCommit: Specifies if the Apache Kafka consumer should automatically commit read offsets.

ConsumerGroupId

Specifies which group the consumers created by the driver should belong to.

Remarks

If not specified, the driver will assign a random string.


AutoCommit

Specifies if the Apache Kafka consumer should automatically commit read offsets.

Remarks

By default, the Sync App does not commit read offsets unless you invoke CommitOffset. If an offset is committed for a topic, the Sync App starts reading from that offset for future queries (until the next CommitOffset). Otherwise, each query starts reading at the position defined by OffsetResetStrategy.

When AutoCommit is enabled, the provider commits offsets periodically and also at the end of each SELECT query. This means that each SELECT query "consumes" the messages it reads from Kafka, preventing later SELECT queries from returning those messages. You must provide a ConsumerGroupId to enable AutoCommit, as the committed offset is shared across the consumer group, allowing later connections with the same group ID to use the same offsets.

Consider the following when enabling AutoCommit:

  • Queries with OFFSET: Some queries may consume more messages than they return. For example, a SELECT query with an OFFSET clause first consumes messages up to the specified offset and then returns rows after that point. Messages consumed to satisfy the OFFSET are discarded and cannot be read again within the same consumer group.
  • TypeDetectionScheme Options: The None and RowScan options in TypeDetectionScheme perform internal reads on the topic to identify available columns. This can lead to unpredictable results because the Kafka library may auto-commit after these reads if the AutoCommit timer expires. To prevent this, use SchemaRegistry or MessageOnly modes in TypeDetectionScheme to avoid these reads and ensure consistent offsets.
  • Message-Based Offsets: Auto-commit operates in terms of Kafka messages, not rows. Some SerializationFormat options can read multiple rows from a single Kafka message, such as when a Kafka message contains a JSON array. If a SELECT query reads only a subset of rows from that message, Kafka treats the entire message as consumed, meaning any unread rows within that message are discarded.
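
For example, a sketch that enables auto-commit for a named consumer group (the broker address and group ID are placeholders):

BootstrapServers=kafka1.example.com:9092;ConsumerGroupId=my-sync-group;AutoCommit=true;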


Azure Authentication

This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.


  • AzureTenant: Identifies the Apache Kafka tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional.)
  • AzureResource: The Azure Active Directory resource to authenticate to (used during the Azure OAuth exchange).

AzureTenant

Identifies the Apache Kafka tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional.)

Remarks

A tenant is a digital representation of your organization, primarily associated with a domain (for example, microsoft.com). The tenant is managed through a Tenant ID (also known as the directory ID), which is specified whenever you assign users permissions to access or manage Azure resources.

To locate the directory ID in the Azure Portal, navigate to Azure Active Directory > Properties.

Specifying AzureTenant is required when AuthScheme is AzureServicePrincipal or AzureServicePrincipalCert, or when AuthScheme is AzureAD and the user belongs to more than one tenant.


AzureResource

The Azure Active Directory resource to authenticate to (used during the Azure OAuth exchange).

Remarks

The resource must be specified if using Azure OAuth. It should be set to the App Id URI of the web API (secured resource).


OAuth

This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.


  • OAuthClientId: Specifies the client ID that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.
  • OAuthClientSecret: Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.
  • DelegatedServiceAccounts: A space-delimited list of service account emails for delegated requests.
  • RequestingServiceAccount: A service account email to make a delegated request.

OAuthClientId

Specifies the client ID that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.

Remarks

OAuthClientId is one of a handful of connection parameters that need to be set before users can authenticate via OAuth. For details, see Establishing a Connection.


OAuthClientSecret

Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.

Remarks

OAuthClientSecret is one of a handful of connection parameters that need to be set before users can authenticate via OAuth. For details, see Establishing a Connection.


DelegatedServiceAccounts

A space-delimited list of service account emails for delegated requests.

Remarks

The service account emails must be specified in a space-delimited list.

Each service account must be granted the roles/iam.serviceAccountTokenCreator role on its next service account in the chain.

The last service account in the chain must be granted the roles/iam.serviceAccountTokenCreator role on the requesting service account. The requesting service account is the one specified in the RequestingServiceAccount property.

Note that for delegated requests, the requesting service account must have the permission iam.serviceAccounts.getAccessToken, which can also be granted through the serviceAccountTokenCreator role.


RequestingServiceAccount

A service account email to make a delegated request.

Remarks

The service account email of the account for which the credentials are requested in a delegated request. With the list of delegated service accounts in DelegatedServiceAccounts, this property is used to make a delegated request.

You must have the IAM permission iam.serviceAccounts.getAccessToken on this service account.


JWT OAuth

This section provides a complete list of the JWT OAuth properties you can configure in the connection string for this provider.


  • OAuthJWTCert: The JWT Certificate store.
  • OAuthJWTCertType: The type of key store containing the JWT Certificate.
  • OAuthJWTCertPassword: The password for the OAuth JWT certificate, used to access a certificate store that requires a password. If the certificate store does not require a password, leave this property blank.
  • OAuthJWTCertSubject: The subject of the OAuth JWT certificate, used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate.

OAuthJWTCert

The JWT Certificate store.

Remarks

The name of the certificate store for the client certificate.

The OAuthJWTCertType field specifies the type of the certificate store specified by OAuthJWTCert. If the store is password protected, specify the password in OAuthJWTCertPassword.

OAuthJWTCert is used in conjunction with the OAuthJWTCertSubject field in order to specify client certificates. If OAuthJWTCert has a value, and OAuthJWTCertSubject is set, a search for a certificate is initiated. Please refer to the OAuthJWTCertSubject field for details.

Designations of certificate stores are platform-dependent.

The following are designations of the most common User and Machine certificate stores in Windows:

  • MY: A certificate store holding personal certificates with their associated private keys.
  • CA: Certifying authority certificates.
  • ROOT: Root certificates.
  • SPC: Software publisher certificates.

In Java, the certificate store normally is a file containing certificates and optional private keys.

When the certificate store type is PFXFile, this property must be set to the name of the file. When the type is PFXBlob, the property must be set to the binary contents of a PFX file (i.e. PKCS12 certificate store).


OAuthJWTCertType

The type of key store containing the JWT Certificate.

Remarks

This property can take one of the following values:

  • USER: For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note: This store type is not available in Java.
  • MACHINE: For Windows, this specifies that the certificate store is a machine store. Note: This store type is not available in Java.
  • PFXFILE: The certificate store is the name of a PFX (PKCS12) file containing certificates.
  • PFXBLOB: The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format.
  • JKSFILE: The certificate store is the name of a Java key store (JKS) file containing certificates. Note: This store type is only available in Java.
  • JKSBLOB: The certificate store is a string (base-64-encoded) representing a certificate store in Java key store (JKS) format. Note: This store type is only available in Java.
  • PEMKEY_FILE: The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate.
  • PEMKEY_BLOB: The certificate store is a string (base64-encoded) that contains a private key and an optional certificate.
  • PUBLIC_KEY_FILE: The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate.
  • PUBLIC_KEY_BLOB: The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate.
  • SSHPUBLIC_KEY_FILE: The certificate store is the name of a file that contains an SSH-style public key.
  • SSHPUBLIC_KEY_BLOB: The certificate store is a string (base-64-encoded) that contains an SSH-style public key.
  • P7BFILE: The certificate store is the name of a PKCS7 file containing certificates.
  • PPKFILE: The certificate store is the name of a file that contains a PPK (PuTTY Private Key).
  • XMLFILE: The certificate store is the name of a file that contains a certificate in XML format.
  • XMLBLOB: The certificate store is a string that contains a certificate in XML format.
  • BCFKSFILE: The certificate store is the name of a file that contains a Bouncy Castle keystore.
  • BCFKSBLOB: The certificate store is a string (base-64-encoded) that contains a Bouncy Castle keystore.
  • GOOGLEJSON: The certificate store is the name of a JSON file containing the service account information. Only valid when connecting to a Google service.
  • GOOGLEJSONBLOB: The certificate store is a string that contains the service account JSON. Only valid when connecting to a Google service.


OAuthJWTCertPassword

The password for the OAuth JWT certificate used to access a certificate store that requires a password. If the certificate store does not require a password, leave this property blank.

Remarks

This property specifies the password needed to open the certificate store, but only if the store type requires one. To determine if a password is necessary, refer to the documentation or configuration for your specific certificate store.

This is not required when using the GOOGLEJSON OAuthJWTCertType. Google JSON keys are not encrypted.


OAuthJWTCertSubject

The subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate.

Remarks

The value of this property is used to locate a matching certificate in the store. The search process works as follows:

  • If an exact match for the subject is found, the corresponding certificate is selected.
  • If no exact match is found, the store is searched for certificates whose subjects contain the property value.
  • If no match is found, no certificate is selected.

You can set the value to '*' to automatically select the first certificate in the store. The certificate subject is a comma-separated list of distinguished name fields and values. For example: CN=www.server.com, OU=test, C=US, [email protected]. Common fields include:

  • CN: Common Name. This is commonly a host name like www.server.com.
  • O: Organization
  • OU: Organizational Unit
  • L: Locality
  • S: State
  • C: Country
  • E: Email Address

If a field value contains a comma, enclose it in quotes. For example: "O=ACME, Inc.".


Kerberos

This section provides a complete list of the Kerberos properties you can configure in the connection string for this provider.


  • KerberosKeytabFile: The Keytab file containing your pairs of Kerberos principals and encrypted keys.
  • KerberosSPN: The service principal name (SPN) for the Kerberos Domain Controller.
  • KerberosServiceName: The name of the Kerberos service you want to authenticate with.
  • UseKerberosTicketCache: Set this to use a ticket cache with the logged in user instead of a keytab file.

KerberosKeytabFile

The Keytab file containing your pairs of Kerberos principals and encrypted keys.

Remarks

The Keytab file containing your pairs of Kerberos principals and encrypted keys.


KerberosSPN

The service principal name (SPN) for the Kerberos Domain Controller.

Remarks

The service principal name (SPN) for the Kerberos Domain Controller.


KerberosServiceName

The name of the Kerberos service you want to authenticate with.

Remarks

The name of the Kerberos service you want to authenticate with.


UseKerberosTicketCache

Set this to use a ticket cache with the logged in user instead of a keytab file.

Remarks

Set this to use a ticket cache with the logged in user instead of a keytab file.


SSL

This section provides a complete list of the SSL properties you can configure in the connection string for this provider.


  • SSLServerCert: The SSL server certificate used to validate the Apache Kafka broker.
  • SSLServerCertType: The format of the SSL server certificate used to verify the Apache Kafka broker.
  • SSLClientCert: The SSL client certificate used to connect to the Apache Kafka broker.
  • SSLClientCertType: The format of the SSL client certificate used to connect to the Apache Kafka broker.
  • SSLClientCertPassword: The password used to decrypt the certificate in SSLClientCert.
  • SSLIdentificationAlgorithm: The endpoint identification algorithm used by the Apache Kafka data provider client app to validate the server host name.

SSLServerCert

The SSL server certificate used to validate the Apache Kafka broker.

Remarks

The value of this property must be provided in the format described on the SSLServerCertType page. Please refer to it for more details.


SSLServerCertType

The format of the SSL server certificate used to verify the Apache Kafka broker.

Remarks

This property is used to determine what format the SSLServerCert property expects. This property can take one of the following values:

  • PEMKEY_FILE: The certificate store is the name of a PEM-encoded file that contains the server certificate.
  • PEMKEY_BLOB: The certificate store is a string that contains the server certificate.


SSLClientCert

The SSL client certificate used to connect to the Apache Kafka broker.

Remarks

The value of this property must be provided in the format described on the SSLClientCertType page. Please refer to it for more details.


SSLClientCertType

The format of the SSL client certificate used to connect to the Apache Kafka broker.

Remarks

This property is used to determine what format the SSLClientCert property expects. This property can take one of the following values:

  • PEMKEY_FILE: The certificate store is the name of a PEM-encoded file that contains a private key and certificate.
  • PEMKEY_BLOB: The certificate store is a string that contains a private key and certificate, optionally encoded in base64.


SSLClientCertPassword

The password used to decrypt the certificate in SSLClientCert .

Remarks

Leave this blank if the client certificate isn't password protected.


SSLIdentificationAlgorithm

The endpoint identification algorithm used by the Apache Kafka data provider client app to validate the server host name.

Remarks

The default value is 'https', which enables server host name validation. You can disable validation by setting this property to a blank space.


Schema Registry

This section provides a complete list of the Schema Registry properties you can configure in the connection string for this provider.


  • RegistryUrl: The server for the schema registry. When this property is specified, the driver supports reading Avro and JSON schemas from the server.
  • RegistryService: The Schema Registry service used for working with topic schemas.
  • RegistryAuthScheme: The scheme used to authenticate to the schema registry.
  • RegistryUser: Username to authorize with the server specified in RegistryUrl.
  • RegistryPassword: Password to authorize with the server specified in RegistryUrl.
  • RegistryClientCert: The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL) with the schema registry.
  • RegistryClientCertType: The type of key store used by the TLS/SSL client certificate given in RegistryClientCert.
  • RegistryClientCertPassword: The password for the TLS/SSL client certificate given in RegistryClientCert.
  • RegistryClientCertSubject: The subject of the TLS/SSL client certificate given in RegistryClientCert.
  • RegistryVersion: Version of the schema read from RegistryUrl for the specified topic.
  • RegistryServerCert: The certificate to be accepted from the schema registry when connecting using TLS/SSL.
  • SchemaMergeMode: How the provider exposes schemas with multiple versions.

RegistryUrl

The server for the schema registry. When this property is specified, the driver supports reading Avro and JSON schemas from the server.

Remarks

  • If you are connecting to Confluent Cloud, this corresponds to the Schema Registry endpoint value in Schemas > Schema Registry > Instructions.
  • If you are connecting to the AWS Glue service, this corresponds to the ARN value of the AWS registry you want to connect to.
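
For example, a sketch of a Confluent registry connection (the URL and credentials are placeholders):

RegistryUrl=https://myregistry.example.com;RegistryAuthScheme=Basic;RegistryUser=myapikey;RegistryPassword=myapisecret;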


RegistryService

The Schema Registry service used for working with topic schemas.

Remarks

The Schema Registry service used for working with topic schemas.


RegistryAuthScheme

The scheme used to authenticate to the schema registry.

Remarks

The schemes are as follows. Note that some schemes are only available when connecting to a specific RegistryService:

  • None: No authentication will be used.
  • Basic: RegistryUser and RegistryPassword are used. In Confluent these are the API user/password, while in Glue these are the IAM access key/secret key.
  • SSLCertificate: RegistryClientCert is used with SSL client authentication. This is only supported when connecting to a Confluent registry.


RegistryUser

Username to authorize with the server specified in RegistryUrl .

Remarks

If you are connecting to Confluent Cloud, this corresponds to the Access Key value in Schemas > Schema Registry > API access.


RegistryPassword

Password to authorize with the server specified in RegistryUrl .

Remarks

If you are connecting to Confluent Cloud, this corresponds to the Secret Key value in Schemas > Schema Registry > API access.


RegistryClientCert

The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL) with the schema registry.

Remarks

The name of the certificate store for the client certificate.

The RegistryClientCertType field specifies the type of the certificate store specified by RegistryClientCert. If the store is password protected, specify the password in RegistryClientCertPassword.

RegistryClientCert is used in conjunction with the RegistryClientCertSubject field in order to specify client certificates. If RegistryClientCert has a value, and RegistryClientCertSubject is set, a search for a certificate is initiated. See RegistryClientCertSubject for more information.

Designations of certificate stores are platform-dependent.

The following are designations of the most common User and Machine certificate stores in Windows:

  • MY: A certificate store holding personal certificates with their associated private keys.
  • CA: Certifying authority certificates.
  • ROOT: Root certificates.
  • SPC: Software publisher certificates.

In Java, the certificate store normally is a file containing certificates and optional private keys.

When the certificate store type is PFXFile, this property must be set to the name of the file. When the type is PFXBlob, the property must be set to the binary contents of a PFX file (for example, PKCS12 certificate store).


RegistryClientCertType

The type of key store used by the TLS/SSL client certificate given in RegistryClientCert .

Remarks

This property can take one of the following values:

  • USER (default): For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java.
  • MACHINE: For Windows, this specifies that the certificate store is a machine store. Note that this store type is not available in Java.
  • PFXFILE: The certificate store is the name of a PFX (PKCS12) file containing certificates.
  • PFXBLOB: The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format.
  • JKSFILE: The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java.
  • JKSBLOB: The certificate store is a string (base-64-encoded) representing a certificate store in JKS format. Note that this store type is only available in Java.
  • PEMKEY_FILE: The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate.
  • PEMKEY_BLOB: The certificate store is a string (base64-encoded) that contains a private key and an optional certificate.
  • PUBLIC_KEY_FILE: The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate.
  • PUBLIC_KEY_BLOB: The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate.
  • SSHPUBLIC_KEY_FILE: The certificate store is the name of a file that contains an SSH-style public key.
  • SSHPUBLIC_KEY_BLOB: The certificate store is a string (base-64-encoded) that contains an SSH-style public key.
  • P7BFILE: The certificate store is the name of a PKCS7 file containing certificates.
  • PPKFILE: The certificate store is the name of a file that contains a PuTTY Private Key (PPK).
  • XMLFILE: The certificate store is the name of a file that contains a certificate in XML format.
  • XMLBLOB: The certificate store is a string that contains a certificate in XML format.

Apache Kafka Connector for CData Sync

RegistryClientCertPassword

The password for the TLS/SSL client certificate given in RegistryClientCert.

Remarks

If the certificate store is of a type that requires a password, this property is used to specify that password to open the certificate store.

Apache Kafka Connector for CData Sync

RegistryClientCertSubject

The subject of the TLS/SSL client certificate given in RegistryClientCert.

Remarks

When loading a certificate the subject is used to locate the certificate in the store.

If an exact match is not found, the store is searched for subjects containing the value of the property. If a match is still not found, the property is set to an empty string, and no certificate is selected.

The special value "*" picks the first certificate in the certificate store.

The certificate subject is a comma separated list of distinguished name fields and values. For example, "CN=www.server.com, OU=test, C=US, [email protected]". The common fields and their meanings are shown below.

Field Meaning
CN Common Name. This is commonly a host name like www.server.com.
O Organization
OU Organizational Unit
L Locality
S State
C Country
E Email Address

If a field value contains a comma, it must be quoted.
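
For example, the following hypothetical setting (the distinguished name values are placeholders) searches the store for a certificate with a matching subject:

RegistryClientCertSubject="CN=registry.example.com, OU=test, C=US"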

Apache Kafka Connector for CData Sync

RegistryVersion

Version of the schema read from RegistryUrl for the specified topic.

Remarks

Version of the schema read from RegistryUrl for the specified topic. This can be either latest or a specific version number. If you provide a specific version number, that version must be available on every schema within the registry.

Note that SchemaMergeMode=Simple overrides this property. The Sync App combines the schemas of every version for any topics it discovers.

Apache Kafka Connector for CData Sync

RegistryServerCert

The certificate to be accepted from the schema registry when connecting using TLS/SSL.

Remarks

If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.

This property can take the following forms:

Description Example
A full PEM Certificate (example shortened for brevity) -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE-----
A path to a local file containing the certificate C:\cert.cer
The public key (example shortened for brevity) -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY-----
The MD5 Thumbprint (hex values can also be either space or colon separated) ecadbdda5a1529c58a1e9e09828d70e4
The SHA1 Thumbprint (hex values can also be either space or colon separated) 34a929226ae0819f2ec14b4a3d904f801cbb150d

If not specified, any certificate trusted by the machine is accepted.

Use '*' to accept all certificates. Note that this is not recommended due to security concerns.

Apache Kafka Connector for CData Sync

SchemaMergeMode

How the provider exposes schemas with multiple versions.

Remarks

By default the Sync App uses SchemaMergeMode=None.

None

This mode only supports one version for schemas in the registry. It is normally the latest version, but you can change RegistryVersion to use a specific version number. The Sync App ignores the content of any message that does not match the schema for its topic. Reading the topic returns the message, but all of its data fields (fields other than Partition, Offset, and Timestamp) are reported as NULL.

Limitations

This mode supports both SELECT and INSERT queries into each topic. An INSERT always uses the version of the schema specified by RegistryVersion.

This mode supports all options for RegistryService.

Schema Confusion

For compatibility with previous versions, the Sync App does not enforce the schema ID included on messages when using RegistryService=Confluent. With SchemaMergeMode=None this ID is always ignored, but even with SchemaMergeMode=Simple the Sync App ignores the ID if it cannot find a matching schema. This may cause the Sync App to output field values under unexpected columns.

For example, consider the following two Avro schemas that store names and address details. The schemas are binary compatible: even though the field names differ, they have the same number of fields with the same types in the same order.

{
  "type": "record",
  "name": "personname",
  "fields": [
    { "name": "PersonID", "type": "int" },
    { "name": "LastName", "type": "string" },
    { "name": "FirstName", "type": "string" }
  ]
}
{
  "type": "record",
  "name": "personaddress",
  "fields": [
    { "name": "PersonID", "type": "int" },
    { "name": "Address", "type": "string" },
    { "name": "City", "type": "string" }
  ]
}

If you produce these messages to the topic using the personname schema, the Sync App may parse these messages using the personaddress schema. This happens if, for example, personname and personaddress are two versions of the same registry schema. The Sync App sees that personaddress is the latest version and uses it for this topic.

{"PersonID": 1, "LastName": "Smithers", "FirstName": "William"}
{"PersonID": 2, "LastName": "McAllister", "FirstName": "Amy"}

In that scenario, the Sync App outputs these results:

PersonID Address City
1 Smithers William
2 McAllister Amy

Simple

Setting SchemaMergeMode=Simple causes the Sync App to load all versions of each topic schema and merge them according to the following rules. These rules ensure that the Sync App produces NULL or a valid value for each column. If any rule fails, the Sync App fails the schema merge by logging an error and outputting a schema with no data fields.

Limitations

This mode supports only SELECT queries. The Sync App does not have a way to specify a specific version of a schema to use for INSERT queries. If you need to produce messages in this mode, use the ProduceMessage stored procedure.

This mode only supports RegistryService=Confluent. Messages produced with the Confluent libraries include the ID of the schema their data conforms to. The Sync App uses this to determine what schema to parse each message with.

If a message does not have an ID, or if the ID refers to a schema that does not match the topic name, the Sync App defaults to the latest schema. This may cause field values to appear in unexpected columns if the schemas are different but produce compatible output. See the Schema Confusion section above for a more detailed discussion of this issue.

Schema Validation Rules

If all versions of the schema are valid according to these rules, the Sync App includes every field from every version of the schema in the table.

  • Each field must have the same type across all versions where it appears. Fields may appear in some versions and not others; those fields are reported as NULL when they are not present.
  • All versions must use the same format (Avro or JSON).

Remember that these rules are applied transitively. This means that two versions of the schema may be valid in isolation, but not when considering all versions of the schema. For example, consider a schema where v1 contains an integer amount field, v2 removes it, and v3 adds a decimal amount field. v1 and v2 are valid together because removing fields is allowed, and v2 and v3 are valid together because adding fields is allowed. However, all three versions combined violate the rules because the amount field changed type between v1 and v3.

For best results, we recommend enabling one of the transitive schema compatibility modes within the schema registry. The Sync App does not check the compatibility mode as part of its validation rules. However, setting a transitive schema compatibility mode prevents you from creating schemas that the Sync App cannot process.
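
As a sketch, a connection that uses merged schemas against a hypothetical Confluent registry (the URL is a placeholder) might combine these properties:

SchemaMergeMode=Simple;RegistryService=Confluent;RegistryUrl=https://registry.example.com;TypeDetectionScheme=SchemaRegistry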

Apache Kafka Connector for CData Sync

Firewall

This section provides a complete list of the Firewall properties you can configure in the connection string for this provider.


Property Description
FirewallType Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall.
FirewallServer Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources.
FirewallPort Specifies the TCP port to be used for a proxy-based firewall.
FirewallUser Identifies the user ID of the account authenticating to a proxy-based firewall.
FirewallPassword Specifies the password of the user account authenticating to a proxy-based firewall.
Apache Kafka Connector for CData Sync

FirewallType

Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall.

Remarks

A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.

Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.

Note: By default, the Sync App connects to the system proxy. To disable this behavior and connect to one of the following proxy types, set ProxyAutoDetect to false.

The following table provides port number information for each of the supported protocols.

Protocol Default Port Description
TUNNEL 80 The port where the Sync App opens a connection to Apache Kafka. Traffic flows back and forth via the proxy at this location.
SOCKS4 1080 The port where the Sync App opens a connection to Apache Kafka. SOCKS 4 then passes the FirewallUser value to the proxy, which determines whether the connection request should be granted.
SOCKS5 1080 The port where the Sync App sends data to Apache Kafka. If the SOCKS 5 proxy requires authentication, set FirewallUser and FirewallPassword to credentials the proxy recognizes.

To connect to HTTP proxies, use ProxyServer and ProxyPort. To authenticate to HTTP proxies, use ProxyAuthScheme, ProxyUser, and ProxyPassword.
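
For example, the following hypothetical settings (host and credentials are placeholders) route traffic through a SOCKS5 proxy that requires authentication:

ProxyAutoDetect=false;FirewallType=SOCKS5;FirewallServer=192.168.1.100;FirewallPort=1080;FirewallUser=socksUser;FirewallPassword=socksPassword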

Apache Kafka Connector for CData Sync

FirewallServer

Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources.

Remarks

A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.

Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.

Apache Kafka Connector for CData Sync

FirewallPort

Specifies the TCP port to be used for a proxy-based firewall.

Remarks

A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.

Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.

Apache Kafka Connector for CData Sync

FirewallUser

Identifies the user ID of the account authenticating to a proxy-based firewall.

Remarks

A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.

Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.

Apache Kafka Connector for CData Sync

FirewallPassword

Specifies the password of the user account authenticating to a proxy-based firewall.

Remarks

A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.

Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.

Apache Kafka Connector for CData Sync

Proxy

This section provides a complete list of the Proxy properties you can configure in the connection string for this provider.


Property Description
ProxyAutoDetect Specifies whether the provider checks your system proxy settings for existing proxy server configurations, rather than using a manually specified proxy server.
ProxyServer The hostname or IP address of the proxy server that you want to route HTTP traffic through.
ProxyPort The TCP port on your specified proxy server (set in the ProxyServer connection property) that has been reserved for routing HTTP traffic to and from the client.
ProxyAuthScheme Specifies the authentication method the provider uses when authenticating to the proxy server specified in the ProxyServer connection property.
ProxyUser The username of a user account registered with the proxy server specified in the ProxyServer connection property.
ProxyPassword The password associated with the user specified in the ProxyUser connection property.
ProxySSLType The SSL type to use when connecting to the proxy server specified in the ProxyServer connection property.
ProxyExceptions A semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the proxy server set in the ProxyServer connection property.
Apache Kafka Connector for CData Sync

ProxyAutoDetect

Specifies whether the provider checks your system proxy settings for existing proxy server configurations, rather than using a manually specified proxy server.

Remarks

When this connection property is set to True, the Sync App checks your system proxy settings for existing proxy server configurations (no need to manually supply proxy server details).

This connection property takes precedence over other proxy settings. Set to False if you want to manually configure the Sync App to connect to a specific proxy server.

To connect to an HTTP proxy, see ProxyServer. For other proxies, such as SOCKS or tunneling, see FirewallType.

Apache Kafka Connector for CData Sync

ProxyServer

The hostname or IP address of the proxy server that you want to route HTTP traffic through.

Remarks

The Sync App only routes HTTP traffic through the proxy server specified in this connection property when ProxyAutoDetect is set to False. If ProxyAutoDetect is set to True, which is the default, the Sync App instead routes HTTP traffic through the proxy server specified in your system proxy settings.

Apache Kafka Connector for CData Sync

ProxyPort

The TCP port on your specified proxy server (set in the ProxyServer connection property) that has been reserved for routing HTTP traffic to and from the client.

Remarks

The Sync App only routes HTTP traffic through the proxy server port specified in this connection property when ProxyAutoDetect is set to False. If ProxyAutoDetect is set to True, which is the default, the Sync App instead routes HTTP traffic through the proxy server port specified in your system proxy settings.

For other proxy types, see FirewallType.

Apache Kafka Connector for CData Sync

ProxyAuthScheme

Specifies the authentication method the provider uses when authenticating to the proxy server specified in the ProxyServer connection property.

Remarks

The authentication type can be one of the following:

  • BASIC: The Sync App performs HTTP BASIC authentication.
  • DIGEST: The Sync App performs HTTP DIGEST authentication.
  • NTLM: The Sync App retrieves an NTLM token.
  • NEGOTIATE: The Sync App retrieves an NTLM or Kerberos token based on the applicable protocol for authentication.
  • NONE: Set this when the ProxyServer does not require authentication.

For all values other than "NONE", you must also set the ProxyUser and ProxyPassword connection properties.

If you need to use another authentication type, such as SOCKS 5 authentication, see FirewallType.
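
For example, the following hypothetical settings (host and credentials are placeholders) connect through an HTTP proxy using BASIC authentication:

ProxyAutoDetect=false;ProxyServer=proxy.example.com;ProxyPort=8080;ProxyAuthScheme=BASIC;ProxyUser=proxyUser;ProxyPassword=proxyPassword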

Apache Kafka Connector for CData Sync

ProxyUser

The username of a user account registered with the proxy server specified in the ProxyServer connection property.

Remarks

The ProxyUser and ProxyPassword connection properties are used to connect and authenticate against the HTTP proxy specified in ProxyServer.

After selecting one of the available authentication types in ProxyAuthScheme, set this property as follows:

ProxyAuthScheme Value Value to set for ProxyUser
BASIC The user name of a user registered with the proxy server.
DIGEST The user name of a user registered with the proxy server.
NEGOTIATE The username of a Windows user who is a valid user in the domain or trusted domain that the proxy server is part of, in the format user@domain or domain\user.
NTLM The username of a Windows user who is a valid user in the domain or trusted domain that the proxy server is part of, in the format user@domain or domain\user.
NONE Do not set the ProxyUser connection property.

The Sync App only uses this username if ProxyAutoDetect is set to False. If ProxyAutoDetect is set to True, which is the default, the Sync App instead uses the username specified in your system proxy settings.

Apache Kafka Connector for CData Sync

ProxyPassword

The password associated with the user specified in the ProxyUser connection property.

Remarks

The ProxyUser and ProxyPassword connection properties are used to connect and authenticate against the HTTP proxy specified in ProxyServer.

After selecting one of the available authentication types in ProxyAuthScheme, set this property as follows:

ProxyAuthScheme Value Value to set for ProxyPassword
BASIC The password associated with the proxy server user specified in ProxyUser.
DIGEST The password associated with the proxy server user specified in ProxyUser.
NEGOTIATE The password associated with the Windows user account specified in ProxyUser.
NTLM The password associated with the Windows user account specified in ProxyUser.
NONE Do not set the ProxyPassword connection property.

For SOCKS 5 authentication or tunneling, see FirewallType.

The Sync App only uses this password if ProxyAutoDetect is set to False. If ProxyAutoDetect is set to True, which is the default, the Sync App instead uses the password specified in your system proxy settings.

Apache Kafka Connector for CData Sync

ProxySSLType

The SSL type to use when connecting to the proxy server specified in the ProxyServer connection property.

Remarks

This property determines when to use SSL for the connection to the HTTP proxy specified by ProxyServer. You can set this connection property to the following values:

AUTO Default setting. If ProxyServer is set to an HTTPS URL, the Sync App uses the TUNNEL option. If ProxyServer is set to an HTTP URL, the Sync App uses the NEVER option.
ALWAYS The connection is always SSL enabled.
NEVER The connection is not SSL enabled.
TUNNEL The connection is made through a tunneling proxy. The proxy server opens a connection to the remote host and traffic flows back and forth through the proxy.

Apache Kafka Connector for CData Sync

ProxyExceptions

A semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the proxy server set in the ProxyServer connection property.

Remarks

The ProxyServer is used for all addresses, except for addresses defined in this property. Use semicolons to separate entries.

Note that the Sync App uses the system proxy settings by default, without further configuration needed. If you want to explicitly configure proxy exceptions for this connection, set ProxyAutoDetect to False.

Apache Kafka Connector for CData Sync

Logging

This section provides a complete list of the Logging properties you can configure in the connection string for this provider.


Property Description
LogModules Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged.
Apache Kafka Connector for CData Sync

LogModules

Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged.

Remarks

This property lets you customize the log file content by specifying the logging modules to include. Logging modules categorize logged information into distinct areas, such as query execution, metadata, or SSL communication. Each module is represented by a four-character code, with some requiring a trailing space for three-letter names.

For example, EXEC logs query execution, and INFO logs general provider messages. To include multiple modules, separate their names with semicolons as follows: INFO;EXEC;SSL.

The Verbosity connection property takes precedence over the module-based filtering specified by this property. Only log entries that meet the verbosity level and belong to the specified modules are logged. Leave this property blank to include all available modules in the log file.

For a complete list of available modules and detailed guidance on configuring logging, refer to the Advanced Logging section in Logging.

Apache Kafka Connector for CData Sync

Schema

This section provides a complete list of the Schema properties you can configure in the connection string for this provider.


Property Description
Location Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path.
BrowsableSchemas Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
Tables Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC.
Views Optional setting that restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC.
Apache Kafka Connector for CData Sync

Location

Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path.

Remarks

The Location property is only needed if you want to either customize definitions (for example, change a column name, ignore a column, etc.) or extend the data model with new tables, views, or stored procedures.

If left unspecified, the default location is %APPDATA%\CData\ApacheKafka Data Provider\Schema, where %APPDATA% is set to the user's configuration directory:

Platform %APPDATA%
Windows The value of the APPDATA environment variable
Linux ~/.config

Apache Kafka Connector for CData Sync

BrowsableSchemas

Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.

Remarks

Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.

Apache Kafka Connector for CData Sync

Tables

Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC.

Remarks

Listing all available tables from some databases can take extra time, thus degrading performance. Providing a list of tables in the connection string saves time and improves performance.

If there are lots of tables available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those tables. To do this, specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.

Note: If you are connecting to a data source with multiple schemas or catalogs, you must specify each table you want to view by its fully qualified name. This avoids ambiguity between tables that may exist in multiple catalogs or schemas.

Apache Kafka Connector for CData Sync

Views

Optional setting that restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC.

Remarks

Listing all available views from some databases can take extra time, thus degrading performance. Providing a list of views in the connection string saves time and improves performance.

If there are lots of views available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those views. To do this, specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.

Note: If you are connecting to a data source with multiple schemas or catalogs, you must specify each view you want to examine by its fully qualified name. This avoids ambiguity between views that may exist in multiple catalogs or schemas.

Apache Kafka Connector for CData Sync

Miscellaneous

This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.


Property Description
AWSWorkloadIdentityConfig Configuration properties to provide when using Workload Identity Federation via AWS.
CompressionType Data compression type. Batches of data will be compressed together.
ConsumerProperties Additional options used to configure Kafka consumers.
CreateTablePartitions The number of partitions assigned to a topic created with CREATE TABLE.
CreateTableReplicationFactor The number of replicas assigned to a topic created with CREATE TABLE.
EnableIdempotence If set to true, Apache Kafka ensures messages are delivered in the correct order and without duplicates.
FlattenArrays By default, nested arrays are not shown when TypeDetectionScheme is set to SchemaRegistry. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.
GenerateSchemaFiles Indicates the user preference as to when schemas should be generated and saved.
MaximumBatchSize Specifies the maximum batch size, in bytes, to gather before sending a request.
MaxRows Specifies the maximum rows returned for queries without aggregation or GROUP BY.
MessageKeyColumn The name of the column that message key data is stored in.
MessageKeyType The type of data stored in message keys.
OffsetResetStrategy Specifies an offset for the consumer group.
Other Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties.
Pagesize The maximum number of rows to fetch from Kafka at one time.
ProducerProperties Additional options used to configure Kafka producers.
PseudoColumns Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
ReadDuration The duration for which additional messages are allowed.
RowScanDepth The maximum number of messages to scan for the columns available in the topic.
SchemaRegistryOnly Whether to connect only to the schema registry.
SerializationFormat Specifies how to serialize/deserialize message contents.
Timeout Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
TypeDetectionScheme Comma-separated list of options specifying how the provider scans the data to determine the fields and datatypes for the topic.
UseConfluentAvroFormat Specifies how Avro data should be formatted during an INSERT.
UserDefinedViews Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file.
ValidateRegistryTopics Specifies whether or not to validate schema registry topics against the Apache Kafka broker. Only has an effect when TypeDetectionScheme=SchemaRegistry.
WorkloadPoolId The ID of your Workload Identity Federation pool.
WorkloadProjectId The ID of the Google Cloud project that hosts your Workload Identity Federation pool.
WorkloadProviderId The ID of your Workload Identity Federation pool provider.
Apache Kafka Connector for CData Sync

AWSWorkloadIdentityConfig

Configuration properties to provide when using Workload Identity Federation via AWS.

Remarks

The properties are formatted as a semicolon-separated list of Key=Value properties, where the value is optionally quoted. For example, this setting authenticates in AWS using a user's root keys:

AWSWorkloadIdentityConfig="AuhtScheme=AwsRootKeys;AccessKey='AKIAABCDEF123456';SecretKey=...;Region=us-east-1"

Apache Kafka Connector for CData Sync

CompressionType

Data compression type. Batches of data will be compressed together.

Remarks

The following values are supported:

NONE Messages will not be compressed.
GZIP Messages will be compressed using gzip.
SNAPPY Messages will be compressed using snappy.
LZ4 Messages will be compressed using lz4.

Apache Kafka Connector for CData Sync

ConsumerProperties

Additional options used to configure Kafka consumers.

Remarks

The Sync App exposes several Kafka consumer configuration values directly as connection properties. Internally, these are all mapped into properties that are passed to the Kafka client libraries.

If the Sync App does not expose an option for the consumer configuration, it can be set here. This option takes a connection string value and passes all its options directly to the consumer. For example, security.protocol=SASL_SSL;sasl.mechanism=SCRAM-SHA-512 sets the security.protocol and sasl.mechanism consumer properties.
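
As another illustrative sketch, the following setting passes two standard Kafka consumer options that the Sync App does not expose directly (the values shown are placeholders; consult the Kafka consumer documentation for the full option list):

ConsumerProperties="max.poll.records=500;fetch.max.bytes=52428800"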

Apache Kafka Connector for CData Sync

CreateTablePartitions

The number of partitions assigned to a topic created with CREATE TABLE.

Remarks

When executing a CREATE TABLE statement, the Sync App creates a new empty topic. By default, the Sync App creates this new topic with 1 partition.

You can create topics with more partitions by changing this setting. This can be useful if you plan on having multiple consumers process the messages on this topic.

Apache Kafka Connector for CData Sync

CreateTableReplicationFactor

The number of replicas assigned to a topic created with CREATE TABLE.

Remarks

When executing a CREATE TABLE statement, the Sync App creates a new empty topic. By default, the Sync App creates this topic with a replication factor of 3.

You can create topics with a different number of replicas by changing this setting. There are two main cases where this is useful:

  • When your cluster has fewer than 3 nodes. This setting should never be higher than the number of nodes in your cluster. For example, creating topics with a replication factor of 3 on a cluster with 2 nodes will fail.
  • When your cluster has more than 3 nodes and you want more safety in case of failover. Apache Kafka uses replicas to prevent data loss when a node fails, and if all the replicas fail then the topic is unavailable.
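
For example, on a hypothetical two-node cluster you might lower the replication factor while still splitting new topics across several partitions:

CreateTablePartitions=4;CreateTableReplicationFactor=2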

Apache Kafka Connector for CData Sync

EnableIdempotence

If set to true, Apache Kafka ensures messages are delivered in the correct order and without duplicates.

Remarks

When this property is enabled, each message is given a sequence number, which allows the broker to detect duplicates and preserve ordering.

Apache Kafka Connector for CData Sync

FlattenArrays

By default, nested arrays are not shown when TypeDetectionScheme is set to SchemaRegistry. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.

Remarks

Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns.

For example, you can return an arbitrary number of elements from an array of strings:

["FLOW-MATIC","LISP","COBOL"]
When FlattenArrays is set to 1, the preceding array is flattened into the following table:

Column Name Column Value
languages.0 FLOW-MATIC
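
Following the same pattern, setting FlattenArrays to 3 would expose each element of the example array (again assuming the array is stored in a column named languages) as its own column:

Column Name Column Value
languages.0 FLOW-MATIC
languages.1 LISP
languages.2 COBOL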

Apache Kafka Connector for CData Sync

GenerateSchemaFiles

Indicates the user preference as to when schemas should be generated and saved.

Remarks

This property outputs schemas to .rsd files in the path specified by Location.

Available settings are the following:

  • Never: A schema file will never be generated.
  • OnUse: A schema file will be generated the first time a table is referenced, provided the schema file for the table does not already exist.
  • OnStart: A schema file will be generated at connection time for any tables that do not currently have a schema file.
  • OnCreate: A schema file will be generated when running a CREATE TABLE SQL query.

Note that if you want to regenerate a file, you first need to delete it.

Generate Schemas with SQL

When you set GenerateSchemaFiles to OnUse, the Sync App generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.

When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.

Generate Schemas on Connection

Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.
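
For example, the following hypothetical settings (the path is a placeholder) generate a schema file for every topic at connection time, writing the .rsd files to the directory given in Location:

GenerateSchemaFiles=OnStart;Location=C:\MySchemas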

Apache Kafka Connector for CData Sync

MaximumBatchSize

Specifies the maximum batch size, in bytes, to gather before sending a request.

Remarks

A batch can be formed by one or more messages. The size is specified in bytes.

Apache Kafka Connector for CData Sync

MaxRows

Specifies the maximum rows returned for queries without aggregation or GROUP BY.

Remarks

This property sets an upper limit on the number of rows the Sync App returns for queries that do not include aggregation or GROUP BY clauses. This limit ensures that queries do not return excessively large result sets by default.

When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting. If MaxRows is set to "-1", no row limit is enforced unless a LIMIT clause is explicitly included in the query.

This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.

Apache Kafka Connector for CData Sync

MessageKeyColumn

The name of the column that message key data is stored in.

Remarks

See MessageKeyType for a complete description of how the Sync App supports message keys.

Apache Kafka Connector for CData Sync

MessageKeyType

The type of data stored in message keys.

Remarks

By default the Sync App does not report message keys. To enable message keys, this property must be set to a value other than Null and MessageKeyColumn must be set to a valid column name.

See Extracting Metadata From Topics for a description of how this interacts with the TypeDetectionScheme property. SerializationFormat describes how each of these supported formats is encoded. There are three main differences between how that property and this property work:

  • Complex key columns are always prefixed with MessageKeyColumn and a dot, while primitive key columns use MessageKeyColumn as their name. For example, if MessageKeyColumn is Key, an Avro key would expose columns like Key.field1. A string key would expose a single column called Key.
  • SerializationFormat uses NONE for binary fields while this property uses Binary.
  • The Sync App only supports reading key schemas from the registry when connected to a Confluent RegistryService. Confluent registries use a naming convention that allows for both key and value schemas that cover the same topic.
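
For example, assuming your keys are plain strings and that String is a supported key type in your build, settings along these lines would expose the keys in a column named Key:

MessageKeyColumn=Key;MessageKeyType=String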

Apache Kafka Connector for CData Sync

OffsetResetStrategy

Specifies an offset for the consumer group.

Remarks

Select one of the following strategies:

Latest Will only consume messages that are produced after the consumer group is created.
Earliest Will consume any unconsumed messages including any message produced before the lifetime of the consumer group.
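
For example, the following hypothetical settings create a consumer group that starts from the beginning of each topic (the group name is a placeholder):

OffsetResetStrategy=Earliest;ConsumerGroupId=MyGroup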

Apache Kafka Connector for CData Sync

Other

Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties.

Remarks

This property allows advanced users to configure hidden properties for specialized scenarios. These settings are not required for normal use cases but can address unique requirements or provide additional functionality. Multiple properties can be defined in a semicolon-separated list.

Note: It is strongly recommended to set these properties only when advised by the support team to address specific scenarios or issues.

Integration and Formatting

DefaultColumnSize Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000.
ConvertDateTimeToGMT Determines whether to convert date-time values to GMT, instead of the local time of the machine.
RecordToFile=filename Records the underlying socket data transfer to the specified file.

Apache Kafka Connector for CData Sync

Pagesize

The maximum number of rows to fetch from Kafka at one time.

Remarks

The Sync App batches reads from Kafka to reduce overhead. Instead of fetching a single row from the broker every time a query row is read, the Sync App reads multiple rows and saves them to the resultset. This means that only the first row read from the resultset must wait for the broker. Later rows can be read out of this buffer directly.

This option controls the maximum number of rows the Sync App stores on the resultset. Setting this to a higher value will use more memory but requires waiting on the broker less often. Lower values will give lower throughput while using less memory.

Apache Kafka Connector for CData Sync

ProducerProperties

Additional options used to configure Kafka producers.

Remarks

This option is like ConsumerProperties but applies to producers instead. Please refer to that property for more information.

Apache Kafka Connector for CData Sync

PseudoColumns

Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.

Remarks

This property allows you to define which pseudocolumns the Sync App exposes as table columns.

To specify individual pseudocolumns, use the following format: "Table1=Column1;Table1=Column2;Table2=Column3"

To include all pseudocolumns for all tables use: "*=*"

Apache Kafka Connector for CData Sync

ReadDuration

The duration for which additional messages are allowed.

Remarks

A timeout after which the Sync App stops waiting for additional messages to arrive.

Apache Kafka Connector for CData Sync

RowScanDepth

The maximum number of messages to scan for the columns available in the topic.

Remarks

Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly.

Apache Kafka Connector for CData Sync

SchemaRegistryOnly

Whether to connect only to the schema registry.

Remarks

Enable this option to prevent connections to the Kafka broker. In this mode the Sync App does not perform any action that would require a broker connection. Performing any of the following actions in registry-only mode reports an error:

  • SELECT queries (excluding system tables)
  • INSERT statements
  • CREATE TABLE statements
  • CommitOffset
  • ProduceMessage

Note that registry-only mode requires that other properties be set appropriately:

  • TypeDetectionScheme=SchemaRegistry. Any other value requires connecting to the Kafka broker to list tables.
  • ValidateRegistryTopics=false. When this option is enabled, the Sync App compares the topics in the broker to the subjects in the schema registry.
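
Putting this together, a hypothetical registry-only connection might combine the following properties (the URL is a placeholder):

SchemaRegistryOnly=true;TypeDetectionScheme=SchemaRegistry;ValidateRegistryTopics=false;RegistryUrl=https://registry.example.com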

Apache Kafka Connector for CData Sync

SerializationFormat

Specifies how to serialize/deserialize message contents.

Remarks

The Sync App uses this property differently based on the value of TypeDetectionScheme. See Extracting Metadata From Topics for details on how these properties interact.

Primitive and Complex Formats

This section applies only to the SchemaRegistry, None, and RowScan TypeDetectionScheme modes. MessageOnly always reports the message as a single column regardless of the format. The only difference is the column type.

The Sync App supports two different kinds of formats: primitive formats and complex formats. Primitive formats are reported in a single column called Message. The primitive formats use encodings that are compatible with the kafka-clients and Confluent.Kafka libraries.

Avro, CSV, CSV_WITH_HEADERS, XML, and JSON are all complex formats. The Sync App parses these formats into one or more columns, flattening nested Avro, XML, and JSON values as necessary.

Auto is also a complex format, but the exact data format is determined at runtime. The Sync App determines whether a value is Avro, CSV, XML, or JSON by looking for either a specific header (the Avro OBJ header) or specific characters. If none of these methods succeed, the Sync App assumes the value is CSV.

Available formats:

NONE Message is always BASE64 encoded on both the consume and produce operations.
AUTO Attempt to automatically figure out the current topic's serialization format. See Extracting Metadata From Topics for a discussion of how this occurs in different contexts.
JSON Message is serialized using the JSON format.
CSV Message is serialized using the CSV format.
CSV_WITH_HEADERS Message is serialized using the CSV format with a separate header line before the data. Note that this option only applies to messages created using INSERT. It behaves the same as CSV in SELECT.
XML Message is serialized using the XML format.
AVRO Message is serialized using the Avro format.
LONG Message is serialized as a 64-bit big-endian integer.
INTEGER Message is serialized as a 32-bit big-endian integer.
FLOAT Message is serialized as a 32-bit floating-point number.
DOUBLE Message is serialized as a 64-bit floating-point number.
STRING Message is serialized as text. By default the Sync App uses UTF-8, but setting Charset overrides this.

Apache Kafka Connector for CData Sync

Timeout

Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.

Remarks

This property controls the maximum time, in seconds, that the Sync App waits for an operation to complete before canceling it. If the timeout period expires before the operation finishes, the Sync App cancels the operation and throws an exception.

The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.

Setting this property to 0 disables the timeout, allowing operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server. Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.

Apache Kafka Connector for CData Sync

TypeDetectionScheme

Comma-separated list of options specifying how the provider scans the data to determine the fields and datatypes for the topic.

Remarks

See Extracting Metadata From Topics for more information on how this property interacts with different values of SerializationFormat, RegistryService, and MessageKeyType.

Apache Kafka Connector for CData Sync

UseConfluentAvroFormat

Specifies how Avro data should be formatted during an INSERT.

Remarks

By default the Sync App writes out Avro data as a series of file blocks (as defined in the Avro specification). Confluent tools and libraries cannot decode this format and it cannot be used with Confluent schema validation. However, it is more compact because it allows multiple rows of Avro data to be stored in a single message.

Enable this option if you use Confluent schema validation, or otherwise require compatibility with Confluent tools and libraries. Each row inserted into an Avro topic will be a separate message and contain a reference to a schema stored in the registry.

Note that this cannot be enabled if RegistryUrl is not set or points to an AWS Glue schema registry. AWS Glue schemas do not support schema IDs, which are a key part of how Confluent handles Avro data.

Apache Kafka Connector for CData Sync

UserDefinedViews

Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file.

Remarks

This property allows you to define and manage custom views through a JSON-formatted configuration file called UserDefinedViews.json. These views are automatically recognized by the Sync App and enable you to execute custom SQL queries as if they were standard database views. The JSON file defines each view as a root element with a child element called "query", which contains the SQL query for the view. For example:


{
	"MyView": {
		"query": "SELECT * FROM SampleTable_1 WHERE MyColumn = 'value'"
	},
	"MyView2": {
		"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
	}
}

You can define multiple views in a single file and specify the filepath using this property. For example: UserDefinedViews=C:\Path\To\UserDefinedViews.json. When you use this property, only the specified views are seen by the Sync App.

Refer to User Defined Views for more information.

Apache Kafka Connector for CData Sync

ValidateRegistryTopics

Specifies whether or not to validate schema registry topics against the Apache Kafka broker. Only has an effect when TypeDetectionScheme=SchemaRegistry.

Remarks

Schema registries can include metadata for topics that cannot be accessed in Kafka. This can happen because the topic doesn't exist on the broker. It is also possible that the principal the connection is authenticated to does not have access to the topic.

By default, the Sync App will get a list of schemas from the registry and then filter out any that the broker does not report. All the remaining valid topics are exposed as tables. You can disable this behavior by setting this option to false. This will report all schemas in the registry as tables regardless of whether they are accessible on the broker.

Apache Kafka Connector for CData Sync

WorkloadPoolId

The ID of your Workload Identity Federation pool.

Remarks

The ID of your Workload Identity Federation pool.

Apache Kafka Connector for CData Sync

WorkloadProjectId

The ID of the Google Cloud project that hosts your Workload Identity Federation pool.

Remarks

The ID of the Google Cloud project that hosts your Workload Identity Federation pool.

Apache Kafka Connector for CData Sync

WorkloadProviderId

The ID of your Workload Identity Federation pool provider.

Remarks

The ID of your Workload Identity Federation pool provider.

Apache Kafka Connector for CData Sync

Third Party Copyrights

Copyright (c) 2025 CData Software, Inc. - All rights reserved.
Build 24.0.9175