Establishing a Connection
Creating a JDBC Data Source
You can create a JDBC data source to connect from your Java application. Creating a JDBC data source based on the CData JDBC Driver for Apache Kafka consists of three basic steps:
- Add the driver JAR file to the classpath. The JAR file is located in the lib subfolder of the installation directory. Note that the .lic file must be located in the same folder as the JAR file.
- Provide the driver class. For example:
cdata.jdbc.apachekafka.ApacheKafkaDriver
- Provide the JDBC URL. For example:
jdbc:apachekafka:User=admin;Password=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;
or
jdbc:cdata:apachekafka:User=admin;Password=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;
The second format can be used whenever drivers in your application conflict by using the same URL format; it guarantees that the CData driver is used. The URL must start with either "jdbc:apachekafka:" or "jdbc:cdata:apachekafka:" and can include any of the connection properties as name-value pairs separated by semicolons.
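Putting the steps above together, the sketch below assembles a connection URL from name-value pairs in the format the driver expects. The property values are placeholders; with the driver JAR on the classpath, the resulting URL can be passed to java.sql.DriverManager.getConnection.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KafkaJdbcUrlExample {
    // Assemble a JDBC URL in the driver's format: a prefix followed by
    // name-value pairs, each terminated by a semicolon.
    static String buildUrl(String prefix, Map<String, String> props) {
        StringBuilder url = new StringBuilder(prefix);
        for (Map.Entry<String, String> p : props.entrySet()) {
            url.append(p.getKey()).append('=').append(p.getValue()).append(';');
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // Placeholder connection properties.
        Map<String, String> props = new LinkedHashMap<>();
        props.put("User", "admin");
        props.put("Password", "pass");
        props.put("BootstrapServers", "https://localhost:9091");
        props.put("Topic", "MyTopic");

        String url = buildUrl("jdbc:cdata:apachekafka:", props);
        System.out.println(url);
        // With the driver JAR (and its .lic file) in place, connect with:
        // java.sql.Connection conn = java.sql.DriverManager.getConnection(url);
    }
}
```

The "jdbc:cdata:apachekafka:" prefix is used here, but "jdbc:apachekafka:" works identically when no driver conflict exists.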
Connecting to Apache Kafka
Set the BootstrapServers and Topic properties to specify the address of your Apache Kafka server and the topic you would like to interact with.
By default, the driver communicates with the data source in PLAINTEXT, which means that all data is sent in the clear. To encrypt communication, you should configure the driver to use SSL encryption. To do this, set UseSSL to true and configure SSLServerCert and SSLServerCertType to load the server certificates. JKS and PEM files are supported certificate stores.
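For example, an SSL-enabled connection string might look like the following. The server address and certificate path are placeholders, and the JKSFILE type name is an assumption; consult the SSLServerCertType documentation for the exact supported values:

```
jdbc:apachekafka:BootstrapServers=localhost:9093;Topic=MyTopic;UseSSL=true;SSLServerCert=C:\certs\kafka.jks;SSLServerCertType=JKSFILE;
```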
Note that proxy settings like ProxyServer and firewall settings like FirewallServer do not affect the connection to the Apache Kafka broker. Internally, the driver connects to Apache Kafka using the official libraries, which do not support proxies. These options are only used when the driver connects to the schema registry, as described in Extracting Metadata From Topics.
Authenticating to Apache Kafka
The Apache Kafka data source supports the following authentication methods:
- Anonymous
- Plain
- Scram
- Kerberos
Anonymous
Certain on-premises deployments of Apache Kafka allow you to connect without setting any authentication connection properties. To do so, simply set AuthScheme to "None", and you are ready to connect.
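For example, an anonymous connection string might look like the following (the server address and topic are placeholders):

```
jdbc:apachekafka:AuthScheme=None;BootstrapServers=localhost:9092;Topic=MyTopic;
```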
SASL Plain
The User and Password properties should be specified. AuthScheme should be set to Plain.
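For example, a SASL Plain connection string might look like the following (credentials and server address are placeholders):

```
jdbc:apachekafka:AuthScheme=Plain;User=admin;Password=pass;BootstrapServers=localhost:9092;Topic=MyTopic;
```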
SCRAM login module
The User and Password properties should be specified. AuthScheme should be set to "SCRAM" (for SCRAM-SHA-256) or "SCRAM-SHA-512".
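For example, a SCRAM-SHA-256 connection string might look like the following (credentials and server address are placeholders):

```
jdbc:apachekafka:AuthScheme=SCRAM;User=admin;Password=pass;BootstrapServers=localhost:9092;Topic=MyTopic;
```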
SSL client certificates
The SSLClientCert and SSLClientCertType properties should be specified and AuthScheme should be set to SSLCertificate. The JKS certificate format is recommended but both PEM and JKS are supported. Using PEM often requires several conversion steps as Java only supports a subset of the encodings and encryption methods supported by tools like OpenSSL.
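For example, a client-certificate connection string might look like the following. The certificate path and server address are placeholders, and the JKSFILE type name is an assumption; consult the SSLClientCertType documentation for the exact supported values:

```
jdbc:apachekafka:AuthScheme=SSLCertificate;SSLClientCert=C:\certs\client.jks;SSLClientCertType=JKSFILE;BootstrapServers=localhost:9093;Topic=MyTopic;
```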
Kerberos
To authenticate to Apache Kafka using Kerberos, set the following properties:
- AuthScheme: Set this to KERBEROS.
- KerberosServiceName: This should match the principal name of the Kafka brokers. For example, if the principal is "kafka/[email protected]", set KerberosServiceName=kafka.
- KerberosKeytabFile: Set this to the absolute path of the keytab file containing your pairs of Kerberos principals and encrypted keys.
- KerberosSPN: Set this to the service and host portion of the Apache Kafka Kerberos principal, that is, the value before the '@' symbol (for instance, kafka/kafka1.hostname.com in the principal kafka/[email protected]).
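For example, a Kerberos connection string might look like the following (the host names, realm, and keytab path are placeholders):

```
jdbc:apachekafka:AuthScheme=KERBEROS;KerberosServiceName=kafka;KerberosSPN=kafka/kafka1.hostname.com;KerberosKeytabFile=C:\krb5\kafka.keytab;BootstrapServers=kafka1.hostname.com:9092;Topic=MyTopic;
```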
Use Ticket cache
You can set UseKerberosTicketCache to TRUE to use a ticket cache instead of specifying a keytab file. In that case, KerberosKeytabFile is ignored even if it is specified.
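When using a ticket cache, the keytab path can be omitted from the connection string; for example (host names are placeholders):

```
jdbc:apachekafka:AuthScheme=KERBEROS;KerberosServiceName=kafka;KerberosSPN=kafka/kafka1.hostname.com;UseKerberosTicketCache=TRUE;BootstrapServers=kafka1.hostname.com:9092;Topic=MyTopic;
```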