Cloud

Build 24.0.9175
  • Databricks
    • Getting Started
      • Establishing a Connection
      • SSL Configuration
      • Firewall and Proxy
    • Data Model
      • Stored Procedures
      • System Tables
        • sys_catalogs
        • sys_schemas
        • sys_tables
        • sys_tablecolumns
        • sys_procedures
        • sys_procedureparameters
        • sys_keycolumns
        • sys_foreignkeys
        • sys_primarykeys
        • sys_indexes
        • sys_connection_props
        • sys_sqlinfo
        • sys_identity
        • sys_information
    • Connection String Options
      • Authentication
        • AuthScheme
        • Server
        • User
        • ProtocolVersion
        • Database
        • HTTPPath
        • Token
      • AWS Authentication
        • AWSAccessKey
        • AWSSecretKey
        • AWSRegion
        • AWSS3Bucket
      • Azure Authentication
        • AzureStorageAccount
        • AzureAccessKey
        • AzureTenant
        • AzureBlobContainer
      • AzureServicePrincipal Authentication
        • AzureTenantId
        • AzureClientId
        • AzureClientSecret
      • OAuth
        • OAuthClientId
        • OAuthClientSecret
        • OAuthLevel
        • DatabricksAccountId
      • SSL
        • SSLServerCert
      • Logging
        • Verbosity
      • Schema
        • BrowsableSchemas
        • Catalog
        • PrimaryKeyIdentifiers
      • Databricks
        • CloudStorageType
        • StoreTableInCloud
        • QueryTableDetails
        • UseUploadApi
        • UseCloudFetch
        • UseLegacyDataModel
        • QueryAllMetadata
        • CheckSQLWarehouseAvailability
      • Miscellaneous
        • AllowPreparedStatement
        • ConnectRetryWaitTime
        • ApplicationName
        • AsyncQueryTimeout
        • DefaultColumnSize
        • DescribeCommand
        • DetectView
        • MaxRows
        • PseudoColumns
        • ServerConfigurations
        • ServerTimeZone
        • Timeout
        • UseDescTableQuery
        • UseInsertSelectSyntax

Databricks - CData Cloud

Overview

CData Cloud offers access to Databricks across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a MySQL or SQL Server database can connect to Databricks through CData Cloud.

CData Cloud allows you to standardize and configure connections to Databricks as though it were any other OData endpoint, or standard SQL Server/MySQL database.

Key Features

  • Full SQL Support: Databricks appears as a standard relational database, allowing you to perform operations - Filter, Group, Join, etc. - using standard SQL, regardless of whether these operations are supported by the underlying API.
  • CRUD Support: Both read and write operations are supported, restricted only by security settings that you can configure in Cloud or downstream in the source itself.
  • Secure Access: The administrator can create users and define their access to specific databases and read-only operations or grant full read & write privileges.
  • Comprehensive Data Model & Dynamic Discovery: CData Cloud provides comprehensive access to all of the data exposed in the underlying data source, including full access to dynamic data and easily searchable metadata.

CData Cloud

Getting Started

This page provides a guide to Establishing a Connection to Databricks in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.

Connecting to Databricks

Establishing a Connection shows how to authenticate to Databricks and configure any necessary connection properties to create a database in CData Cloud.

Accessing Data from CData Cloud Services

Accessing data from Databricks through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.

CData Cloud

Establishing a Connection

Connect to Databricks by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.

Connecting to Databricks

To connect to a Databricks cluster, set the following properties:

  • Database: The name of the Databricks database.
  • Server: The Server Hostname of your Databricks cluster.
  • HTTPPath: The HTTP Path of your Databricks cluster.
  • Token: Your personal access token. You can obtain this value by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab.

You can find the required values in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
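For example, these properties can be combined into a connection configuration like the following; the server host name, HTTP path, and token shown here are placeholders rather than working values:

Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;Database=default;HTTPPath=sql/protocolv1/o/0000000000000000/0000-000000-xxxxxxxx;Token=dapi0123456789abcdef;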

Configuring Cloud Storage

The Cloud supports DBFS, Azure Blob Storage, and AWS S3 for uploading CSV files.

DBFS Cloud Storage

To use DBFS for cloud storage, set the CloudStorageType property to DBFS.

Azure Blob Storage

Set the following properties:

  • CloudStorageType: Azure Blob storage.
  • StoreTableInCloud: True to store tables in cloud storage when creating a new table.
  • AzureStorageAccount: The name of your Azure storage account.
  • AzureAccessKey: The storage key associated with your Azure storage account. Find this via the Azure portal (using the root account): select your storage account and click Access Keys to find this value.
  • AzureBlobContainer: Set to the name of your Azure Blob storage container.
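For illustration, a set of cloud storage properties for Azure Blob storage might look like the following; the account, key, and container names are placeholders:

CloudStorageType=Azure Blob storage;StoreTableInCloud=True;AzureStorageAccount=mystorageaccount;AzureAccessKey=myaccesskey;AzureBlobContainer=mycontainer;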

AWS S3 Storage

Set the following properties:

  • CloudStorageType: AWS S3.
  • StoreTableInCloud: True to store tables in cloud storage when creating a new table.
  • AWSAccessKey: The AWS account access key. You can acquire this value from your AWS security credentials page.
  • AWSSecretKey: Your AWS account secret key. You can acquire this value from your AWS security credentials page.
  • AWSS3Bucket: The name of your AWS S3 bucket.
  • AWSRegion: The hosting region for your Amazon Web Services. You can obtain the AWS Region value by navigating to the Buckets List page of your Amazon S3 service, for example, us-east-1.
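Similarly, a hypothetical AWS S3 configuration might look like the following; all values are placeholders:

CloudStorageType=AWS S3;StoreTableInCloud=True;AWSAccessKey=AKIAXXXXXXXXXXXXXXXX;AWSSecretKey=mysecretkey;AWSS3Bucket=my-bucket;AWSRegion=NORTHERNVIRGINIA;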

Authenticating to Databricks

CData supports the following authentication schemes:

  • Basic
  • Personal Access Token
  • Azure Active Directory (AD)
  • Azure Service Principal
  • OAuthU2M
  • OAuthM2M

Basic

Basic authentication requires a username and password. Set the following:

  • AuthScheme: Basic.
  • User: Your username. This overrides the default value ("Token").
  • Token: Your password.
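For example, a Basic authentication configuration might look like the following (placeholder values):

AuthScheme=Basic;Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;HTTPPath=sql/protocolv1/o/0000000000000000/0000-000000-xxxxxxxx;Database=default;User=myuser;Token=mypassword;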

Personal Access Token

To authenticate, set the following:

  • AuthScheme: PersonalAccessToken.
  • Token: The token used to access the Databricks server. It can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab.
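For example, a personal access token configuration might look like the following (placeholder values):

AuthScheme=PersonalAccessToken;Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;HTTPPath=sql/protocolv1/o/0000000000000000/0000-000000-xxxxxxxx;Database=default;Token=dapi0123456789abcdef;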

Azure Active Directory

To authenticate, follow these steps:

  1. Register an application with the AzureAD (now known as Microsoft Entra ID) endpoint in the Azure portal. See Configure an app in Azure portal for information on how to create and register the application. Alternatively, you can use an AzureAD application that is already registered.

  2. Set these properties:

    • AuthScheme: AzureAD.
    • AzureTenant: The "Directory (tenant) ID" in the AzureAD application "Overview" page.
    • OAuthClientId: The "Application (client) ID" in the AzureAD application "Overview" page.
    • CallbackURL: The "Redirect URIs" in the AzureAD application "Authentication" page.

  3. When connecting, a web page opens that prompts you to authenticate. After successful authentication, the connection is established.

Here is an example of the connection string:

"Server=https://adb-8439982502599436.16.azuredatabricks.net;HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;database=default;
AuthScheme=AzureAD;InitiateOAuth=GETANDREFRESH;AzureTenant=94be69e7-edb4-4fda-ab12-95bfc22b232f;OAuthClientId=f544a825-9b69-43d9-bec2-3e99727a1669;CallbackURL=http://localhost;"

Azure AD Service Principal

To authenticate, set the following properties:

  • AuthScheme: AzureServicePrincipal.
  • AzureTenantId: The tenant ID of your Microsoft Azure Active Directory.
  • AzureClientId: The application (client) ID of your Microsoft Azure Active Directory application.
  • AzureClientSecret: The application (client) secret of your Microsoft Azure Active Directory application.
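For example, a service principal configuration might look like the following; the tenant, client, and secret values are placeholders:

AuthScheme=AzureServicePrincipal;Server=adb-0000000000000000.0.azuredatabricks.net;HTTPPath=sql/protocolv1/o/0000000000000000/0000-000000-xxxxxxxx;Database=default;AzureTenantId=00000000-0000-0000-0000-000000000000;AzureClientId=00000000-0000-0000-0000-000000000000;AzureClientSecret=myclientsecret;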

OAuthU2M

OAuthU2M (User-to-Machine) authentication allows users to grant applications, such as a CLI or SDK, access to their workspace. It uses a secure OAuth token, eliminating the need to share the user's password.

The following explains how OAuthU2M works:

After a user signs in and consents to the OAuthU2M authentication request, the tool or SDK receives an OAuth token. This token allows the tool or SDK to authenticate on the user's behalf.

The required settings are:

  • OAuthClientId: Assigned when you register your application with an OAuth authorization server.
  • OAuthClientSecret: Assigned when you register your application with an OAuth authorization server.
  • DatabricksAccountId: Required only when OAuthLevel is set to AccountLevel.
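As an illustration, a workspace-level OAuthU2M configuration might look like the following; all values are placeholders and your workspace may require additional settings:

AuthScheme=OAuthU2M;Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;HTTPPath=sql/1.0/warehouses/xxxxxxxxxxxxxxxx;Database=default;OAuthClientId=myclientid;OAuthClientSecret=myclientsecret;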

OAuthM2M

OAuthM2M (Machine-to-Machine) authentication verifies the identity of devices or applications communicating over a network. It ensures that only authorized machines can securely exchange data and access resources without human intervention.

The following explains how OAuthM2M works:

Register your application with the authorization server to obtain a client ID and secret. When accessing a protected resource, your machine sends a request with these credentials and desired scopes. The server verifies the provided information and, if valid, returns an access token. This token is included in the request header for API calls to access the resource.

The required settings are:

  • OAuthClientId: Assigned when you register your application with an OAuth authorization server.
  • OAuthClientSecret: Assigned when you register your application with an OAuth authorization server.
  • DatabricksAccountId: Required only when OAuthLevel is set to AccountLevel.
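For example, a workspace-level OAuthM2M configuration using a Databricks service principal's credentials might look like this (placeholder values):

AuthScheme=OAuthM2M;Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;HTTPPath=sql/1.0/warehouses/xxxxxxxxxxxxxxxx;Database=default;OAuthClientId=myserviceprincipalid;OAuthClientSecret=myserviceprincipalsecret;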

CData Cloud

SSL Configuration

Customizing the SSL Configuration

By default, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.

To specify another certificate, see the SSLServerCert connection property.

Client SSL Certificates

The Databricks Cloud also supports setting client certificates. Set the following to connect using a client certificate.

  • SSLClientCert: The name of the certificate store for the client certificate.
  • SSLClientCertType: The type of key store containing the TLS/SSL client certificate.
  • SSLClientCertPassword: The password for the TLS/SSL client certificate.
  • SSLClientCertSubject: The subject of the TLS/SSL client certificate.
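For example, a client certificate stored in a PFX file might be configured as follows; the path, store type, and password are placeholders and assumptions, so consult the SSLClientCertType documentation for the values accepted in your environment:

SSLClientCert=C:\certs\client.pfx;SSLClientCertType=PFXFILE;SSLClientCertPassword=mypassword;SSLClientCertSubject=*;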

CData Cloud

Firewall and Proxy

Connecting Through a Firewall or Proxy

HTTP Proxies

To authenticate to an HTTP proxy, set the following:

  • ProxyServer: the hostname or IP address of the proxy server that you want to route HTTP traffic through.
  • ProxyPort: the TCP port that the proxy server is running on.
  • ProxyAuthScheme: the authentication method the Cloud uses when authenticating to the proxy server.
  • ProxyUser: the username of a user account registered with the proxy server.
  • ProxyPassword: the password associated with the ProxyUser.
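For example, routing traffic through an HTTP proxy that uses basic authentication might look like this (placeholder host, port, and credentials):

ProxyServer=192.168.1.100;ProxyPort=8080;ProxyAuthScheme=BASIC;ProxyUser=proxyuser;ProxyPassword=proxypassword;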

Other Proxies

Set the following properties:

  • To use a proxy-based firewall, set FirewallType, FirewallServer, and FirewallPort.
  • To tunnel the connection, set FirewallType to TUNNEL.
  • To authenticate, specify FirewallUser and FirewallPassword.
  • To authenticate to a SOCKS proxy, additionally set FirewallType to SOCKS5.

CData Cloud

Data Model

The Cloud leverages Databricks Thrift to enable bidirectional SQL access to Databricks data. It supports Databricks databases running Databricks Runtime Version 9.1 - 13.X and the Pro and Classic Databricks SQL versions.

Discovering Schemas

The CData Cloud dynamically obtains the Databricks schemas; reconnect to pick up any metadata changes, such as added or removed columns or changes in data type.

CData Cloud

Stored Procedures

Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Databricks.

Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Databricks, along with an indication of whether the procedure succeeded or failed.

CData Cloud - Databricks Stored Procedures

Name Description

CData Cloud

System Tables

You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.

Schema Tables

The following tables return database metadata for Databricks:

  • sys_catalogs: Lists the available databases.
  • sys_schemas: Lists the available schemas.
  • sys_tables: Lists the available tables and views.
  • sys_tablecolumns: Describes the columns of the available tables and views.
  • sys_procedures: Describes the available stored procedures.
  • sys_procedureparameters: Describes stored procedure parameters.
  • sys_keycolumns: Describes the primary and foreign keys.
  • sys_indexes: Describes the available indexes.

Data Source Tables

The following tables return information about how to connect to and query the data source:

  • sys_connection_props: Returns information on the available connection properties.
  • sys_sqlinfo: Describes the SELECT queries that the Cloud can offload to the data source.

Query Information Tables

The following table returns query statistics for data modification queries, including batch operations:

  • sys_identity: Returns information about batch operations or single updates.

CData Cloud

sys_catalogs

Lists the available databases.

The following query retrieves all databases determined by the connection string:

SELECT * FROM sys_catalogs

Columns

Name Type Description
CatalogName String The database name.

CData Cloud

sys_schemas

Lists the available schemas.

The following query retrieves all available schemas:

          SELECT * FROM sys_schemas
          

Columns

Name Type Description
CatalogName String The database name.
SchemaName String The schema name.

CData Cloud

sys_tables

Lists the available tables.

The following query retrieves the available tables and views:

          SELECT * FROM sys_tables
          

Columns

Name Type Description
CatalogName String The database containing the table or view.
SchemaName String The schema containing the table or view.
TableName String The name of the table or view.
TableType String The table type (table or view).
Description String A description of the table or view.
IsUpdateable Boolean Whether the table can be updated.

CData Cloud

sys_tablecolumns

Describes the columns of the available tables and views.

The following query returns the columns and data types for the [CData].[Sample].Customers table:

SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'

Columns

Name Type Description
CatalogName String The name of the database containing the table or view.
SchemaName String The schema containing the table or view.
TableName String The name of the table or view containing the column.
ColumnName String The column name.
DataTypeName String The data type name.
DataType Int32 An integer indicating the data type. This value is determined at run time based on the environment.
Length Int32 The storage size of the column.
DisplaySize Int32 The designated column's normal maximum width in characters.
NumericPrecision Int32 The maximum number of digits in numeric data. The column length in characters for character and date-time data.
NumericScale Int32 The column scale or number of digits to the right of the decimal point.
IsNullable Boolean Whether the column can contain null.
Description String A brief description of the column.
Ordinal Int32 The sequence number of the column.
IsAutoIncrement String Whether the column value is assigned in fixed increments.
IsGeneratedColumn String Whether the column is generated.
IsHidden Boolean Whether the column is hidden.
IsArray Boolean Whether the column is an array.
IsReadOnly Boolean Whether the column is read-only.
IsKey Boolean Indicates whether a field returned from sys_tablecolumns is the primary key of the table.

CData Cloud

sys_procedures

Lists the available stored procedures.

The following query retrieves the available stored procedures:

          SELECT * FROM sys_procedures
          

Columns

Name Type Description
CatalogName String The database containing the stored procedure.
SchemaName String The schema containing the stored procedure.
ProcedureName String The name of the stored procedure.
Description String A description of the stored procedure.
ProcedureType String The type of the procedure, such as PROCEDURE or FUNCTION.

CData Cloud

sys_procedureparameters

Describes stored procedure parameters.

The following query returns information about all of the input parameters for the SearchSuppliers stored procedure:

SELECT * FROM sys_procedureparameters WHERE ProcedureName='SearchSuppliers' AND (Direction=1 OR Direction=2)

Columns

Name Type Description
CatalogName String The name of the database containing the stored procedure.
SchemaName String The name of the schema containing the stored procedure.
ProcedureName String The name of the stored procedure containing the parameter.
ColumnName String The name of the stored procedure parameter.
Direction Int32 An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters.
DataTypeName String The name of the data type.
DataType Int32 An integer indicating the data type. This value is determined at run time based on the environment.
Length Int32 The number of characters allowed for character data. The number of digits allowed for numeric data.
NumericPrecision Int32 The maximum precision for numeric data. The column length in characters for character and date-time data.
NumericScale Int32 The number of digits to the right of the decimal point in numeric data.
IsNullable Boolean Whether the parameter can contain null.
IsRequired Boolean Whether the parameter is required for execution of the procedure.
IsArray Boolean Whether the parameter is an array.
Description String The description of the parameter.
Ordinal Int32 The index of the parameter.

CData Cloud

sys_keycolumns

Describes the primary and foreign keys.

The following query retrieves the primary key for the [CData].[Sample].Customers table:

         SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
          

Columns

Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
IsKey Boolean Whether the column is a primary key in the table referenced in the TableName field.
IsForeignKey Boolean Whether the column is a foreign key referenced in the TableName field.
PrimaryKeyName String The name of the primary key.
ForeignKeyName String The name of the foreign key.
ReferencedCatalogName String The database containing the primary key.
ReferencedSchemaName String The schema containing the primary key.
ReferencedTableName String The table containing the primary key.
ReferencedColumnName String The column name of the primary key.

CData Cloud

sys_foreignkeys

Describes the foreign keys.

The following query retrieves all foreign keys which refer to other tables:

         SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
          

Columns

Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
PrimaryKeyName String The name of the primary key.
ForeignKeyName String The name of the foreign key.
ReferencedCatalogName String The database containing the primary key.
ReferencedSchemaName String The schema containing the primary key.
ReferencedTableName String The table containing the primary key.
ReferencedColumnName String The column name of the primary key.
ForeignKeyType String Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key.

CData Cloud

sys_primarykeys

Describes the primary keys.

The following query retrieves the primary keys from all tables and views:

         SELECT * FROM sys_primarykeys
          

Columns

Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
KeySeq String The sequence number of the primary key.
KeyName String The name of the primary key.

CData Cloud

sys_indexes

Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.

The following query retrieves all indexes that are not primary keys:

          SELECT * FROM sys_indexes WHERE IsPrimary='false'
          

Columns

Name Type Description
CatalogName String The name of the database containing the index.
SchemaName String The name of the schema containing the index.
TableName String The name of the table containing the index.
IndexName String The index name.
ColumnName String The name of the column associated with the index.
IsUnique Boolean True if the index is unique. False otherwise.
IsPrimary Boolean True if the index is a primary key. False otherwise.
Type Int16 An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3).
SortOrder String The sort order: A for ascending or D for descending.
OrdinalPosition Int16 The sequence number of the column in the index.

CData Cloud

sys_connection_props

Returns information on the available connection properties and those set in the connection string.

The following query retrieves all connection properties that have been set in the connection string or set through a default value:

SELECT * FROM sys_connection_props WHERE Value <> ''

Columns

Name Type Description
Name String The name of the connection property.
ShortDescription String A brief description.
Type String The data type of the connection property.
Default String The default value if one is not explicitly set.
Values String A comma-separated list of possible values. A validation error is thrown if another value is specified.
Value String The value you set or a preconfigured default.
Required Boolean Whether the property is required to connect.
Category String The category of the connection property.
IsSessionProperty String Whether the property is a session property, used to save information about the current connection.
Sensitivity String The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms.
PropertyName String A camel-cased truncated form of the connection property name.
Ordinal Int32 The index of the parameter.
CatOrdinal Int32 The index of the parameter category.
Hierarchy String Shows the dependent properties that need to be set alongside this one.
Visible Boolean Informs whether the property is visible in the connection UI.
ETC String Various miscellaneous information about the property.

CData Cloud

sys_sqlinfo

Describes the SELECT query processing that the Cloud can offload to the data source.

See SQL Compliance for SQL syntax details.

Discovering the Data Source's SELECT Capabilities

Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.

Name Description Possible Values
AGGREGATE_FUNCTIONS Supported aggregation functions. AVG, COUNT, MAX, MIN, SUM, DISTINCT
COUNT Whether COUNT function is supported. YES, NO
IDENTIFIER_QUOTE_OPEN_CHAR The opening character used to escape an identifier. [
IDENTIFIER_QUOTE_CLOSE_CHAR The closing character used to escape an identifier. ]
SUPPORTED_OPERATORS A list of supported SQL operators. =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR
GROUP_BY Whether GROUP BY is supported, and, if so, the degree of support. NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE
OJ_CAPABILITIES The supported varieties of outer joins supported. NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS
OUTER_JOINS Whether outer joins are supported. YES, NO
SUBQUERIES Whether subqueries are supported, and, if so, the degree of support. NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED
STRING_FUNCTIONS Supported string functions. LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE
NUMERIC_FUNCTIONS Supported numeric functions. ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE
TIMEDATE_FUNCTIONS Supported date/time functions. NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT
REPLICATION_SKIP_TABLES Indicates tables skipped during replication.
REPLICATION_TIMECHECK_COLUMNS A string array containing a list of columns which will be used to check for (in the given order) to use as a modified column during replication.
IDENTIFIER_PATTERN String value indicating what string is valid for an identifier.
SUPPORT_TRANSACTION Indicates if the provider supports transactions such as commit and rollback. YES, NO
DIALECT Indicates the SQL dialect to use.
KEY_PROPERTIES Indicates the properties which identify the uniform database.
SUPPORTS_MULTIPLE_SCHEMAS Indicates if multiple schemas may exist for the provider. YES, NO
SUPPORTS_MULTIPLE_CATALOGS Indicates if multiple catalogs may exist for the provider. YES, NO
DATASYNCVERSION The CData Data Sync version needed to access this driver. Standard, Starter, Professional, Enterprise
DATASYNCCATEGORY The CData Data Sync category of this driver. Source, Destination, Cloud Destination
SUPPORTSENHANCEDSQL Whether enhanced SQL functionality beyond what is offered by the API is supported. TRUE, FALSE
SUPPORTS_BATCH_OPERATIONS Whether batch operations are supported. YES, NO
SQL_CAP All supported SQL capabilities for this driver. SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX
PREFERRED_CACHE_OPTIONS A string value specifying the preferred cacheOptions.
ENABLE_EF_ADVANCED_QUERY Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. YES, NO
PSEUDO_COLUMNS A string array indicating the available pseudo columns.
MERGE_ALWAYS If the value is true, the Merge Mode is forcibly executed in Data Sync. TRUE, FALSE
REPLICATION_MIN_DATE_QUERY A select query to return the replicate start datetime.
REPLICATION_MIN_FUNCTION Allows a provider to specify the formula name to use for executing a server side min.
REPLICATION_START_DATE Allows a provider to specify a replicate startdate.
REPLICATION_MAX_DATE_QUERY A select query to return the replicate end datetime.
REPLICATION_MAX_FUNCTION Allows a provider to specify the formula name to use for executing a server side max.
IGNORE_INTERVALS_ON_INITIAL_REPLICATE A list of tables which will skip dividing the replicate into chunks on the initial replicate.
CHECKCACHE_USE_PARENTID Indicates whether the CheckCache statement should be done against the parent key column. TRUE, FALSE
CREATE_SCHEMA_PROCEDURES Indicates stored procedures that can be used for generating schema files.

The following query retrieves the operators that can be used in the WHERE clause:

SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'

Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.

Columns

Name Type Description
NAME String A component of SQL syntax, or a capability that can be processed on the server.
VALUE String Detail on the supported SQL or SQL syntax.

CData Cloud

sys_identity

Returns information about attempted modifications.

The following query retrieves the Ids of the modified rows in a batch operation:

         SELECT * FROM sys_identity
          

Columns

Name Type Description
Id String The database-generated Id returned from a data modification operation.
Batch String An identifier for the batch. 1 for a single operation.
Operation String The result of the operation in the batch: INSERTED, UPDATED, or DELETED.
Message String SUCCESS or an error message if the update in the batch failed.

CData Cloud

sys_information

Describes the available system information.

The following query retrieves all columns:

SELECT * FROM sys_information

Columns

Name Type Description
Product String The name of the product.
Version String The version number of the product.
Datasource String The name of the datasource the product connects to.
NodeId String The unique identifier of the machine where the product is installed.
HelpURL String The URL to the product's help documentation.
License String The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.)
Location String The file path location where the product's library is stored.
Environment String The version of the environment or runtime the product is currently running under.
DataSyncVersion String The tier of CData Sync required to use this connector.
DataSyncCategory String The category of CData Sync functionality (e.g., Source, Destination).

CData Cloud

Connection String Options

The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.

For more information on establishing a connection, see Establishing a Connection.

Authentication


Property Description
AuthScheme The authentication scheme used. Accepted entries are PersonalAccessToken, Basic, OAuthU2M, OAuthM2M, AzureServicePrincipal, and AzureAD.
Server The host name or IP address of the server hosting the Databricks database.
User The username used to authenticate with Databricks.
ProtocolVersion The Protocol Version used to authenticate with Databricks.
Database The name of the Databricks database.
HTTPPath The path component of the URL endpoint.
Token The token used to access the Databricks server.

AWS Authentication


Property Description
AWSAccessKey Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
AWSSecretKey Your AWS account secret key. This value is accessible from your AWS security credentials page.
AWSRegion The hosting region for your Amazon Web Services.
AWSS3Bucket The name of your AWS S3 bucket.

Azure Authentication


Property Description
AzureStorageAccount The name of your Azure storage account.
AzureAccessKey The storage key associated with your Azure account.
AzureTenant Identifies the Databricks tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional).
AzureBlobContainer The name of your Azure Blob storage container.

AzureServicePrincipal Authentication


Property Description
AzureTenantId The Tenant id of your Microsoft Azure Active Directory.
AzureClientId The application (client) id of your Microsoft Azure Active Directory application.
AzureClientSecret The application (client) secret of your Microsoft Azure Active Directory application.

OAuth


Property Description
OAuthClientId Specifies the client Id that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.
OAuthClientSecret Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.
OAuthLevel You can generate an access token at either the Databricks account level or workspace level.
DatabricksAccountId The Databricks account ID.

SSL


Property Description
SSLServerCert Specifies the certificate to be accepted from the server when connecting using TLS/SSL.

Logging


Property Description
Verbosity Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.

Schema


Property Description
BrowsableSchemas Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
Catalog The default catalog name.
PrimaryKeyIdentifiers Set this property to define primary keys.

Databricks


Property Description
CloudStorageType Determine which cloud storage service will be used.
StoreTableInCloud This option specifies whether Databricks server will create and save tables in cloud storage.
QueryTableDetails Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. If set to True, the query runs for a long time.
UseUploadApi This option specifies whether the Databricks Upload API will be used when executing Bulk INSERT operations.
UseCloudFetch This option specifies whether to use CloudFetch to improve query efficiency when the data volume of the table is large.
UseLegacyDataModel This option specifies whether to support Unity Catalog.
QueryAllMetadata This option controls whether to query all catalogs and schemas/databases or only specified ones. The default catalog is specified by the property Catalog. The default schema/database is specified by the property Database.
CheckSQLWarehouseAvailability This option specifies whether to check if the Databricks SQL Warehouse is up.

Miscellaneous


Property Description
AllowPreparedStatement Prepare a query statement before its execution.
ConnectRetryWaitTime This property specifies the number of seconds to wait prior to retrying a connection request.
ApplicationName The application name connection string property expresses the HTTP User-Agent.
AsyncQueryTimeout The timeout for asynchronous requests issued by the provider to download large result sets.
DefaultColumnSize Sets the default length of a string field for a provider.
DescribeCommand The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC.
DetectView Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view.
MaxRows Specifies the maximum rows returned for queries without aggregation or GROUP BY.
PseudoColumns Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
ServerConfigurations A name-value list of server configuration variables to override the server defaults.
ServerTimeZone Determine how to interpret datetime values from the server.
Timeout Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
UseDescTableQuery This option specifies whether the columns will be retrieved using a DESC TABLE query or the GetColumns Thrift API. The GetColumns Thrift API works for Apache Spark 3.0.0 or later.
UseInsertSelectSyntax DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release.
CData Cloud

Authentication

This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.


Property Description
AuthScheme The authentication scheme used. Accepted entries are PersonalAccessToken, Basic, OAuthU2M, OAuthM2M, AzureServicePrincipal, and AzureAD.
Server The host name or IP address of the server hosting the Databricks database.
User The username used to authenticate with Databricks.
ProtocolVersion The Protocol Version used to authenticate with Databricks.
Database The name of the Databricks database.
HTTPPath The path component of the URL endpoint.
Token The token used to access the Databricks server.
CData Cloud

AuthScheme

The authentication scheme used. Accepted entries are PersonalAccessToken, Basic, OAuthU2M, OAuthM2M, AzureServicePrincipal, and AzureAD.

Possible Values

PersonalAccessToken, Basic, OAuthU2M, OAuthM2M, AzureServicePrincipal, AzureAD

Data Type

string

Default Value

"PersonalAccessToken"

Remarks

The Cloud supports the following authentication mechanisms. See the Getting Started chapter for authentication guides.

  • PersonalAccessToken: Set this to authenticate with Databricks' access token.
  • Basic: Set this to authenticate with Databricks' user and access token.
  • OAuthU2M: Set this along with OAuthLevel and DatabricksAccountId (optional) to authenticate with Databricks' OAuth user-to-machine (U2M).
  • OAuthM2M: Set this along with OAuthLevel, DatabricksAccountId (optional), OAuthClientId and OAuthClientSecret to authenticate with Databricks' OAuth machine-to-machine (M2M). The OAuthClientId and OAuthClientSecret can be generated by creating a Databricks service principal.
  • AzureServicePrincipal: Set this along with AzureTenantId, AzureClientId and AzureClientSecret to authenticate with the Azure Service Principal. You should follow the instructions in https://docs.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/aad/service-prin-aad-token#--provision-a-service-principal-in-azure-portal to register an AzureAD application (client), and then follow the instructions in https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current to make sure that the service principal is assigned the Contributor or Owner role on the target Databricks workspace resource in Azure.
  • AzureAD: Set this along with AzureTenant, OAuthClientId and CallbackURL to authenticate with the Azure Active Directory OAuth. You should follow the instructions in Configure an app in Azure portal to register an AzureAD application (client).

CData Cloud

Server

The host name or IP address of the server hosting the Databricks database.

Data Type

string

Default Value

""

Remarks

The host name or IP address of the server hosting the Databricks database.

CData Cloud

User

The username used to authenticate with Databricks.

Data Type

string

Default Value

""

Remarks

The username used to authenticate with Databricks.

CData Cloud

ProtocolVersion

The Protocol Version used to authenticate with Databricks.

Possible Values

1, 2, 3, 4, 5, 6, 7, 8

Data Type

string

Default Value

"8"

Remarks

The Protocol Version used to authenticate with Databricks.

CData Cloud

Database

The name of the Databricks database.

Data Type

string

Default Value

""

Remarks

The name of the Databricks database.

CData Cloud

HTTPPath

The path component of the URL endpoint.

Data Type

string

Default Value

""

Remarks

This property is used to specify the path component of the URL endpoint.

This property can be found by following the path: Databricks main page -> Compute (in the left panel) -> {your Cluster} -> Advanced options (in the Configuration tab) -> JDBC/ODBC - HTTP Path

CData Cloud

Token

The token used to access the Databricks server.

Data Type

string

Default Value

""

Remarks

The token can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab.

CData Cloud

AWS Authentication

This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.


Property Description
AWSAccessKey Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
AWSSecretKey Your AWS account secret key. This value is accessible from your AWS security credentials page.
AWSRegion The hosting region for your Amazon Web Services.
AWSS3Bucket The name of your AWS S3 bucket.
CData Cloud

AWSAccessKey

Specifies your AWS account access key. This value is accessible from your AWS security credentials page.

Data Type

string

Default Value

""

Remarks

To find your AWS account access key:

  1. Sign into the AWS Management console with the credentials for your root account.
  2. Select your account name or number.
  3. Select My Security Credentials in the menu.
  4. Click Continue to Security Credentials.
  5. To view or manage root account access keys, expand the Access Keys section.

CData Cloud

AWSSecretKey

Your AWS account secret key. This value is accessible from your AWS security credentials page.

Data Type

string

Default Value

""

Remarks

Your AWS account secret key. This value is accessible from your AWS security credentials page:

  1. Sign into the AWS Management console with the credentials for your root account.
  2. Select your account name or number and select My Security Credentials in the menu that is displayed.
  3. Click Continue to Security Credentials and expand the Access Keys section to manage or create root account access keys.

CData Cloud

AWSRegion

The hosting region for your Amazon Web Services.

Possible Values

OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSWEST, ISOLATEDEUWEST

Data Type

string

Default Value

"NORTHERNVIRGINIA"

Remarks

The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSWEST, and ISOLATEDEUWEST.

CData Cloud

AWSS3Bucket

The name of your AWS S3 bucket.

Data Type

string

Default Value

""

Remarks

The name of your AWS S3 bucket.

CData Cloud

Azure Authentication

This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.


Property Description
AzureStorageAccount The name of your Azure storage account.
AzureAccessKey The storage key associated with your Azure account.
AzureTenant Identifies the Databricks tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional).
AzureBlobContainer The name of your Azure Blob storage container.
CData Cloud

AzureStorageAccount

The name of your Azure storage account.

Data Type

string

Default Value

""

Remarks

The name of your Azure storage account.

CData Cloud

AzureAccessKey

The storage key associated with your Azure account.

Data Type

string

Default Value

""

Remarks

The storage key associated with your Azure storage account. You can retrieve it as follows:

  1. Sign into the Azure portal with the credentials for your root account. (https://portal.azure.com/)
  2. Click on storage accounts and select the storage account you want to use.
  3. Under settings, click Access keys.
  4. Your storage account name and key will be displayed on that page.

CData Cloud

AzureTenant

Identifies the Databricks tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional).

Data Type

string

Default Value

""

Remarks

A tenant is a digital representation of your organization, primarily associated with a domain (for example, microsoft.com). The tenant is managed through a Tenant ID (also known as the directory ID), which is specified whenever you assign users permissions to access or manage Azure resources.

To locate the directory ID in the Azure Portal, navigate to Azure Active Directory > Properties.

Specifying AzureTenant is required when AuthScheme is set to AzureServicePrincipal or AzureServicePrincipalCert, or when AuthScheme is set to AzureAD and the user belongs to more than one tenant.

CData Cloud

AzureBlobContainer

The name of your Azure Blob storage container.

Data Type

string

Default Value

""

Remarks

The name of your Azure Blob storage container.

CData Cloud

AzureServicePrincipal Authentication

This section provides a complete list of the AzureServicePrincipal Authentication properties you can configure in the connection string for this provider.


Property Description
AzureTenantId The Tenant id of your Microsoft Azure Active Directory.
AzureClientId The application (client) id of your Microsoft Azure Active Directory application.
AzureClientSecret The application (client) secret of your Microsoft Azure Active Directory application.
CData Cloud

AzureTenantId

The Tenant id of your Microsoft Azure Active Directory.

Data Type

string

Default Value

""

Remarks

The Tenant id of your Microsoft Azure Active Directory.

CData Cloud

AzureClientId

The application (client) id of your Microsoft Azure Active Directory application.

Data Type

string

Default Value

""

Remarks

The application (client) can be registered by following the instructions under AuthScheme -> AzureServicePrincipal.

CData Cloud

AzureClientSecret

The application (client) secret of your Microsoft Azure Active Directory application.

Data Type

string

Default Value

""

Remarks

The application (client) can be registered by following the instructions under AuthScheme -> AzureServicePrincipal.

CData Cloud

OAuth

This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.


Property Description
OAuthClientId Specifies the client Id that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.
OAuthClientSecret Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.
OAuthLevel You can generate an access token at either the Databricks account level or workspace level.
DatabricksAccountId The Databricks account ID.
CData Cloud

OAuthClientId

Specifies the client Id that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.

Data Type

string

Default Value

""

Remarks

OAuthClientId is one of a handful of connection parameters that need to be set before users can authenticate via OAuth. For details, see Establishing a Connection.

CData Cloud

OAuthClientSecret

Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.

Data Type

string

Default Value

""

Remarks

OAuthClientSecret is one of a handful of connection parameters that need to be set before users can authenticate via OAuth. For details, see Establishing a Connection.

CData Cloud

OAuthLevel

You can generate an access token at either the Databricks account level or workspace level.

Possible Values

WorkspaceLevel, AccountLevel

Data Type

string

Default Value

"WorkspaceLevel"

Remarks

Accepted entries are WorkspaceLevel and AccountLevel.

  • WorkspaceLevel: In Databricks, a workspace is a Databricks deployment in the cloud that functions as an environment for your team to access Databricks assets.
  • AccountLevel: A Databricks account represents a single entity that can include multiple workspaces. Accounts enabled for Unity Catalog can be used to manage users and their access to data centrally across all of the workspaces in the account.

CData Cloud

DatabricksAccountId

The Databricks account ID.

Data Type

string

Default Value

""

Remarks

To retrieve your account ID, go to the account console and click the down arrow next to your username in the upper right corner. In the drop-down menu you can view and copy your Account ID.

You must be in the account console to retrieve the account ID; the ID does not display inside a workspace.

CData Cloud

SSL

This section provides a complete list of the SSL properties you can configure in the connection string for this provider.


Property Description
SSLServerCert Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
CData Cloud

SSLServerCert

Specifies the certificate to be accepted from the server when connecting using TLS/SSL.

Data Type

string

Default Value

""

Remarks

If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.

This property can take the following forms:

Description Example
A full PEM Certificate (example shortened for brevity) -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE-----
A path to a local file containing the certificate C:\cert.cer
The public key (example shortened for brevity) -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY-----
The MD5 Thumbprint (hex values can also be either space or colon separated) ecadbdda5a1529c58a1e9e09828d70e4
The SHA1 Thumbprint (hex values can also be either space or colon separated) 34a929226ae0819f2ec14b4a3d904f801cbb150d

If not specified, any certificate trusted by the machine is accepted.

Use '*' to accept all certificates. Note that this is not recommended due to security concerns.

CData Cloud

Logging

This section provides a complete list of the Logging properties you can configure in the connection string for this provider.


Property Description
Verbosity Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
CData Cloud

Verbosity

Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.

Data Type

string

Default Value

"1"

Remarks

This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.

The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.

When combined with the LogModules property, Verbosity can refine logging to specific categories of information.

CData Cloud

Schema

This section provides a complete list of the Schema properties you can configure in the connection string for this provider.


Property Description
BrowsableSchemas Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
Catalog The default catalog name.
PrimaryKeyIdentifiers Set this property to define primary keys.
CData Cloud

BrowsableSchemas

Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.

Data Type

string

Default Value

""

Remarks

Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.

CData Cloud

Catalog

The default catalog name.

Data Type

string

Default Value

"hive_metastore"

Remarks

When the property UseLegacyDataModel is set to True, this property also needs to be set to specify a default catalog. In most cases this should be "hive_metastore".

CData Cloud

PrimaryKeyIdentifiers

Set this property to define primary keys.

Data Type

string

Default Value

""

Remarks

Databricks does not natively support primary keys, but for certain DML operations or database tools you may need to define them. By default this option is disabled so that no tables have primary keys.

Primary keys are defined using a list of rules that match tables and provide a list of key columns. For example, PrimaryKeyIdentifiers="*=my_key;my_table=my_key2,my_key3;my_nokeys_table=;" has three rules separated by semicolons:

  1. The first rule *=my_key means that every table without a more specific rule contains one primary key column called my_key. Tables without a my_key column do not have any primary keys. Multiple keys are supported; set *=my_key,my_key2 to specify them.
  2. The second rule my_table=my_key2,my_key3 means that the my_table table contains the two primary key columns my_key2 and my_key3. If any of those columns are missing from the table they are ignored.
  3. The third rule my_nokeys_table= means that the my_nokeys_table table has no primary keys. An empty key list is only useful for overriding the default rule; if no default rule is present, only the explicitly listed tables have primary keys.

Note that the table names can include

  • just the table
  • the table and schema
  • the table, schema, and catalog

You can use SQL quotes to specify column and table names:
/* Rules with just table names use the default connection Catalog and Schema. 
   All these rules refer to the same table with a connection where Catalog=someCatalog;Schema=someSchema */

someTable=a,b,c
someSchema.someTable=a,b,c
someCatalog.someSchema.someTable=a,b,c

/* Any table or column name may be quoted */
`someCatalog`."someSchema".[someTable]=`a`,[b],"c"

CData Cloud

Databricks

This section provides a complete list of the Databricks properties you can configure in the connection string for this provider.


Property Description
CloudStorageType Determine which cloud storage service will be used.
StoreTableInCloud This option specifies whether Databricks server will create and save tables in cloud storage.
QueryTableDetails Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. If set to True, the query runs for a long time.
UseUploadApi This option specifies whether the Databricks Upload API will be used when executing Bulk INSERT operations.
UseCloudFetch This option specifies whether to use CloudFetch to improve query efficiency when the data volume of the table is large.
UseLegacyDataModel This option specifies whether to support Unity Catalog.
QueryAllMetadata This option controls whether to query all catalogs and schemas/databases or only specified ones. The default catalog is specified by the property Catalog. The default schema/database is specified by the property Database.
CheckSQLWarehouseAvailability This option specifies whether to check if the Databricks SQL Warehouse is up.
CData Cloud

CloudStorageType

Determine which cloud storage service will be used.

Possible Values

DBFS, Azure Blob storage, AWS S3

Data Type

string

Default Value

"DBFS"

Remarks

By default, the "DBFS" provided by Databricks is used.

If set to "Azure Blob storage", these properties are required: AzureStorageAccount, AzureAccessKey, and AzureBlobContainer.

If set to "AWS S3", these properties are required: AWSAccessKey, AWSSecretKey, AWSS3Bucket, and AWSRegion.

CData Cloud

StoreTableInCloud

This option specifies whether the Databricks server will create and save tables in cloud storage.

Data Type

bool

Default Value

false

Remarks

Setting this property to "True" will create and save tables in cloud storage, in this case the CloudStorageType property cannot be "DBFS".

CData Cloud

QueryTableDetails

Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. Note that when set to True, the query can take a long time to run.

Data Type

bool

Default Value

false

Remarks

Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. Note that when set to True, the query can take a long time to run.

CData Cloud

UseUploadApi

This option specifies whether the Databricks Upload API will be used when executing Bulk INSERT operations.

Data Type

bool

Default Value

false

Remarks

Setting this property to true will improve performance if there is a large amount of data in a Bulk INSERT operation.
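For illustration only (the table and column names are hypothetical), the setting below pairs with a multi-row bulk INSERT issued through the Cloud:

/* Connection string fragment */
UseUploadApi=True;

/* Multi-row INSERT that benefits from the Upload API when the row count is large */
INSERT INTO sales (id, amount) VALUES (1, 10.50), (2, 20.00), (3, 7.25)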

CData Cloud

UseCloudFetch

This option specifies whether to use CloudFetch to improve query efficiency when the data volume of the table is large.

Data Type

bool

Default Value

false

Remarks

This option specifies whether to use CloudFetch to improve query efficiency when the table contains over one million entries.

CData Cloud

UseLegacyDataModel

This option specifies whether to support Unity Catalog.

Data Type

bool

Default Value

true

Remarks

True by default. This enables multi-catalog support for both the Unity Catalog and the single-catalog case. A single catalog is usually named "hive_metastore".

Setting this property to False disables multi-catalog support, in which case there is only one catalog, named "CData".
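As a sketch (the schema and table names are hypothetical), queries under each setting might reference tables as follows:

/* UseLegacyDataModel=True (default): catalogs are exposed, for example a single catalog named hive_metastore */
SELECT * FROM hive_metastore.default.my_table

/* UseLegacyDataModel=False: a single catalog named CData */
SELECT * FROM CData.default.my_table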

CData Cloud

QueryAllMetadata

This option controls whether to query all catalogs and schemas/databases or only specified ones. The default catalog is specified by the property Catalog. The default schema/database is specified by the property Database.

Data Type

bool

Default Value

false

Remarks

When set to True, the driver queries metadata from all catalogs and schemas/databases.

When set to False (see the example fragments after this list):

  • If only Catalog is set, the driver queries metadata from all schemas/databases under the specified catalog.
  • If both Catalog and Database are set, the driver queries metadata only from the specified catalog and schema/database.
  • If neither is set, the driver queries metadata from the default catalog and schema/database.
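The following fragments sketch those three cases; the catalog and schema names are placeholders:

/* All schemas/databases under one catalog */
QueryAllMetadata=False;Catalog=my_catalog;

/* Only the specified catalog and schema/database */
QueryAllMetadata=False;Catalog=my_catalog;Database=my_schema;

/* Neither set: the default catalog and schema/database */
QueryAllMetadata=False;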

CData Cloud

CheckSQLWarehouseAvailability

This option specifies whether to check if the Databricks SQL Warehouse is up.

Data Type

bool

Default Value

true

Remarks

This option specifies whether to check if the Databricks SQL Warehouse is up.

CData Cloud

Miscellaneous

This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.


  • AllowPreparedStatement: Prepare a query statement before its execution.
  • ConnectRetryWaitTime: This property specifies the number of seconds to wait prior to retrying a connection request.
  • ApplicationName: The application name connection string property expresses the HTTP User-Agent.
  • AsyncQueryTimeout: The timeout for asynchronous requests issued by the provider to download large result sets.
  • DefaultColumnSize: Sets the default length of a string field for a provider.
  • DescribeCommand: The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC.
  • DetectView: Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view.
  • MaxRows: Specifies the maximum rows returned for queries without aggregation or GROUP BY.
  • PseudoColumns: Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
  • ServerConfigurations: A name-value list of server configuration variables to override the server defaults.
  • ServerTimeZone: Determines how to interpret datetime values from the server.
  • Timeout: Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
  • UseDescTableQuery: This option specifies whether the columns will be retrieved using a DESC TABLE query or the GetColumns Thrift API. The GetColumns Thrift API works with Apache Spark 3.0.0 or later.
  • UseInsertSelectSyntax: DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release.
CData Cloud

AllowPreparedStatement

Prepare a query statement before its execution.

Data Type

bool

Default Value

true

Remarks

If the AllowPreparedStatement property is set to false, statements are parsed each time they are executed. Setting this property to false can be useful if you are executing many different queries only once.

If you are executing the same query repeatedly, you will generally see better performance by leaving this property at the default, true. Preparing the query avoids recompiling the same query over and over. However, prepared statements also require the Cloud to keep the connection active and open while the statement is prepared.

CData Cloud

ConnectRetryWaitTime

This property specifies the number of seconds to wait prior to retrying a connection request.

Data Type

string

Default Value

"-1"

Remarks

This property only applies to the following case: when attempting to establish a connection to the Databricks cluster, you receive the response 'HTTP response with error code 503: The Cluster is starting'.

Specify a reasonable positive integer value to enable this feature, generally 30-60 (seconds).

The default value of '-1' disables this feature.

Specify the maximum number of retries with MaximumRequestRetries.
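For example, the fragment below enables the feature with a wait time in the suggested range (the value shown is an arbitrary choice):

ConnectRetryWaitTime=30;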

CData Cloud

ApplicationName

The application name connection string property expresses the HTTP User-Agent.

Data Type

string

Default Value

""

Remarks

The format is as follows (an example is shown after the list below):

[isv-name+product-name]/[product-version] [comment]

where

  • [isv-name+product-name] is the name of the application, with no spaces, parentheses, or new lines.
  • [product-version] is the version number of the application, with no spaces, parentheses, or new lines.
  • [comment] is optional, with no comma or new lines. Nested comments are not supported.
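A hypothetical value that follows this format:

ApplicationName=MyCompanyMyApp/1.2.0 (nightly reporting job);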

CData Cloud

AsyncQueryTimeout

The timeout for asynchronous requests issued by the provider to download large result sets.

Data Type

int

Default Value

300

Remarks

If the AsyncQueryTimeout property is set to 0, asynchronous operations will not time out; instead, they will run until they complete successfully or encounter an error condition. This property is distinct from Timeout, which applies to individual operations, while AsyncQueryTimeout applies to the execution time of the operation as a whole.

If AsyncQueryTimeout expires and the asynchronous request has not finished being processed, the Cloud raises an error condition.
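For example, a hypothetical fragment that allows large result-set downloads to run for up to ten minutes while leaving the per-operation Timeout at its default:

AsyncQueryTimeout=600;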

CData Cloud

DefaultColumnSize

Sets the default length of a string field for a provider.

Data Type

string

Default Value

"1048576"

Remarks

Sets the default length of a string field for a provider. If not set by the provider, the value will be 1048576.

CData Cloud

DescribeCommand

The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC.

Possible Values

DESCRIBE, DESC

Data Type

string

Default Value

"DESCRIBE"

Remarks

The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC.

CData Cloud

DetectView

Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view.

Data Type

bool

Default Value

false

Remarks

Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view.

CData Cloud

MaxRows

Specifies the maximum rows returned for queries without aggregation or GROUP BY.

Data Type

int

Default Value

-1

Remarks

This property sets an upper limit on the number of rows the Cloud returns for queries that do not include aggregation or GROUP BY clauses. Setting a limit ensures that such queries do not return excessively large result sets.

When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting. If MaxRows is set to "-1", no row limit is enforced unless a LIMIT clause is explicitly included in the query.

This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
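As an illustration (the table name is hypothetical), with MaxRows=1000 set in the connection string:

/* Returns at most 1000 rows because of MaxRows */
SELECT * FROM my_table

/* Returns at most 10 rows; the LIMIT clause takes precedence over MaxRows */
SELECT * FROM my_table LIMIT 10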

CData Cloud

PseudoColumns

Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.

Data Type

string

Default Value

""

Remarks

This property allows you to define which pseudocolumns the Cloud exposes as table columns.

To specify individual pseudocolumns, use the following format: "Table1=Column1;Table1=Column2;Table2=Column3"

To include all pseudocolumns for all tables use: "*=*"

CData Cloud

ServerConfigurations

A name-value list of server configuration variables to override the server defaults.

Data Type

string

Default Value

""

Remarks

This property takes a comma separated list of configuration variables specified as name-value pairs. Any values specified here will be sent to the Hive server to override the default values.

Example: hive.enforce.bucketing=true,hive.enforce.sorting=true

CData Cloud

ServerTimeZone

Determines how to interpret datetime values from the server.

Possible Values

UTC, LOCAL

Data Type

string

Default Value

"UTC"

Remarks

Databricks uses the UTC time zone by default. The server returns datetime values in UTC, which the driver converts to the local time zone.

If this property is set to LOCAL, the server's time zone is treated as the local time zone and no time zone conversion is performed.

CData Cloud

Timeout

Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.

Data Type

int

Default Value

60

Remarks

This property controls the maximum time, in seconds, that the Cloud waits for an operation to complete before canceling it. If the timeout period expires before the operation finishes, the Cloud cancels the operation and throws an exception.

The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.

Setting this property to 0 disables the timeout, allowing operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server. Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.

CData Cloud

UseDescTableQuery

This option specifies whether the columns will be retrieved using a DESC TABLE query or the GetColumns Thrift API. The GetColumns Thrift API works with Apache Spark 3.0.0 or later.

Data Type

bool

Default Value

true

Remarks

When set to true, a DESC TABLE query will be issued to retrieve the columns for the table.

CData Cloud

UseInsertSelectSyntax

DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release.

Data Type

bool

Default Value

false

Remarks

When set to true, an INSERT INTO SELECT statement will be used when executing insert statements. When set to false, an INSERT INTO VALUES statement will be used.

Unless explicitly specified, this option will be configured accordingly based on the Databricks version.

Copyright (c) 2025 CData Software, Inc. - All rights reserved.
Build 24.0.9175