JDBC Driver for Apache Hive

Build 22.0.8462

Establishing a Connection

Creating a JDBC Data Source

You can create a JDBC data source to connect from your Java application. Creating a JDBC data source based on the CData JDBC Driver for Apache Hive consists of three basic steps:

  • Add the driver JAR file to the classpath. The JAR file is located in the lib subfolder of the installation directory. Note that the .lic file must be located in the same folder as the JAR file.
  • Provide the driver class. For example:
    cdata.jdbc.apachehive.ApacheHiveDriver
  • Provide the JDBC URL. For example:
    jdbc:apachehive:Server=127.0.0.1;Port=10000;TransportMode=BINARY
    
    or
    
    jdbc:cdata:apachehive:Server=127.0.0.1;Port=10000;TransportMode=BINARY

    Use the second format whenever multiple drivers in your application share the same URL format; it guarantees that the CData driver handles the connection. The URL must start with either "jdbc:apachehive:" or "jdbc:cdata:apachehive:" and can include any of the connection properties as name-value pairs separated by semicolons.
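As a sketch, the steps above can be combined in Java. The helper below only assembles the JDBC URL from name-value pairs (the class name HiveUrlExample and the buildUrl method are illustrative, not part of the driver API); the commented lines show how the URL would then be passed to DriverManager once the driver JAR is on the classpath:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class HiveUrlExample {
    // Builds a JDBC URL from a prefix and name-value connection properties.
    static String buildUrl(String prefix, Map<String, String> props) {
        return prefix + props.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(";"));
    }

    public static void main(String[] args) {
        // Placeholder values; replace with your own server and port.
        Map<String, String> props = new LinkedHashMap<>();
        props.put("Server", "127.0.0.1");
        props.put("Port", "10000");
        props.put("TransportMode", "BINARY");

        String url = buildUrl("jdbc:apachehive:", props);
        System.out.println(url);

        // With the driver JAR (and .lic file) in place, connect with:
        //   Class.forName("cdata.jdbc.apachehive.ApacheHiveDriver"); // optional on JDBC 4.0+
        //   java.sql.Connection conn = java.sql.DriverManager.getConnection(url);
    }
}
```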

Connecting to Apache Hive

Self-hosted Instances

Specify the following to establish a connection with Apache Hive:

  • TransportMode: The transport mode used to communicate with the Hive server. Accepted values are BINARY and HTTP. BINARY is the default.
  • Server: Set this to the host name or IP address of the server hosting HiveServer2.
  • Port: Set this to the port for the connection to the HiveServer2 instance.
  • UseSSL (optional): Set this to enable TLS/SSL.
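Putting the properties above together, a self-hosted connection URL might look like the following (the host and port are placeholders):

```
jdbc:apachehive:Server=127.0.0.1;Port=10000;TransportMode=BINARY;UseSSL=true;
```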

Amazon EMR Instances

Connections to Amazon EMR must be established through an SSH tunnel.

Use the following procedure to create an SSH tunnel to EMR.

  1. To begin, you will need an active EMR cluster and an EC2 key pair. The key pair can be in .ppk or .pem format.
  2. Next, authorize inbound traffic in your cluster settings.

Set the following to connect (while running an active tunnel session to EMR):

  • Server: Set this to the master node (master-public-dns-name) where the Apache Hive server is running.
  • Port: Set this to the port required to connect to Apache Hive.
  • UseSSH: Set this to true.
  • SSHServer: Set this to the master node (master-public-dns-name).
  • SSHPort: Set this to 22.
  • SSHAuthMode: Set this to PUBLIC_KEY.
  • SSHUser: Set this to hadoop.
  • SSHClientCert: Set this to the full path to the key file.
  • SSHClientCertType: Set this to the type that corresponds to the key file, typically PEMKEY_FILE or PPKFILE.
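Assuming an active tunnel session, a connection string built from the properties above might look like the following (the master-public-dns-name and key path are placeholders for your own values):

```
jdbc:apachehive:Server=master-public-dns-name;Port=10000;UseSSH=true;SSHServer=master-public-dns-name;SSHPort=22;SSHAuthMode=PUBLIC_KEY;SSHUser=hadoop;SSHClientCert=/path/to/key.pem;SSHClientCertType=PEMKEY_FILE;
```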

Hadoop Cluster on Azure HDInsight Instances

You will need to supply the following to establish a connection to a Hadoop cluster hosted on Azure HDInsight:

  • User: Set this to the cluster username that you specified when creating the cluster on Azure HDInsight.
  • Password: Set this to the cluster password that you specified when creating the cluster on Azure HDInsight.
  • Server: The server corresponding to your cluster. For example: myclustername.azurehdinsight.net.
  • Port: Set this to the port running HiveServer2. This will be 443 by default.
  • HTTPPath: Set this to the HTTP path for the hive2 service. This will be hive2 by default.
  • TransportMode: Set this to HTTP.
  • UseSSL: Set this to true.
  • QueryPassthrough (optional): Set this to true to bypass the driver's SQL engine and execute HiveQL queries directly against Apache Hive.
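Combining the properties above, an HDInsight connection string might look like the following (the cluster name, user, and password are placeholders):

```
jdbc:apachehive:Server=myclustername.azurehdinsight.net;Port=443;HTTPPath=hive2;TransportMode=HTTP;UseSSL=true;User=myuser;Password=mypassword;
```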

Google DataProc Instances

Before Connecting

Ensure that the Apache Hive server on DataProc was created with the DataProc Component Gateway enabled.

Next, obtain the external IP address of the Hive cluster. To find it, open the Cloud Shell and list the instances:

gcloud compute instances list

Note the external IP of the relevant machine.

Build an SSH Tunnel to the Hive Cluster Web Interface

Navigate to the Hive cluster on DataProc and select the WEB INTERFACES tab. Select Create an SSH tunnel to connect to a web interface.

A Cloud Console command is shown that can be used to create an SSH key pair. Download the private key from the directory shown in the console.

Configure the SSH tunnel in an SSH utility:

  • Host Name: Set this to the external IP noted above.
  • Port: 22
  • Point the tool to your private SSH key.
  • For the tunnel, map an open local port to localhost:10000. The destination localhost is resolved on the remote server, not on your machine.
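With a command-line client such as OpenSSH, the tunnel described above might be created like this (the key path, user name, and external IP are placeholders; 10000 is the local port chosen for the tunnel):

```
ssh -i /path/to/private-key -L 10000:localhost:10000 username@external-ip
```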

Connecting to Hive on Google DataProc

Specify the following information to connect to Apache Hive:

  • TransportMode: Set this to BINARY.
  • AuthScheme: Set this to Plain.
  • Port: Set this to the chosen SSH Tunnel port on the local machine.
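Assuming the tunnel maps local port 10000 and the driver connects to the local end of the tunnel, a connection string might look like the following (a sketch, not a verbatim value):

```
jdbc:apachehive:Server=localhost;Port=10000;TransportMode=BINARY;AuthScheme=Plain;
```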

Authenticating to Apache Hive

PLAIN

Set AuthScheme to PLAIN when the hive.server2.authentication property is set to None (uses Plain SASL), PAM, or CUSTOM. In addition, set the following connection properties:

  • User: Set this to the user to log in as. If nothing is set, 'anonymous' is sent instead.
  • Password: Set this to the password of the user. If nothing is set, 'anonymous' is sent instead.
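For example, a PLAIN-authenticated connection string might look like the following (server, user, and password are placeholders):

```
jdbc:apachehive:Server=127.0.0.1;Port=10000;AuthScheme=PLAIN;User=myuser;Password=mypassword;
```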

LDAP

Set AuthScheme to LDAP when the hive.server2.authentication property is set to LDAP. In addition, set the following connection properties:

  • User: Set this to the user to log in as.
  • Password: Set this to the password of the user.

NOSASL

Set AuthScheme to NOSASL when the hive.server2.authentication property is set to NOSASL. No user credentials are submitted with this auth scheme.

Kerberos

Set AuthScheme to Kerberos when the hive.server2.authentication property is set to Kerberos. See Using Kerberos for details on authenticating with Kerberos.

Copyright (c) 2023 CData Software, Inc. - All rights reserved.