CData Cloud offers access to Databricks across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a MySQL or SQL Server database can connect to Databricks through CData Cloud.
CData Cloud allows you to standardize and configure connections to Databricks as though it were any other OData endpoint, or standard SQL Server/MySQL database.
This page provides a guide to Establishing a Connection to Databricks in CData Cloud, information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Databricks and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Databricks through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Databricks by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
To connect to a Databricks cluster, set the following properties:
You can find the required values in your Databricks instance by navigating to Clusters and selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
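For example, a personal access token connection to a cluster might look like the following sketch (the Server and HTTPPath values reuse the placeholders from the example later on this page, and the Token value is a placeholder):
Server=https://adb-8439982502599436.16.azuredatabricks.net;HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;Database=default;AuthScheme=PersonalAccessToken;Token=dapi0123456789abcdef;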
The Cloud supports DBFS, Azure Blob Storage, and AWS S3 for uploading CSV files.
To use DBFS for cloud storage, set the CloudStorageType property to DBFS.
Set the following properties:
Set the following properties:
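For instance, a connection that stages uploads in AWS S3 might combine these properties as follows (a sketch; all credential and bucket values are placeholders):
CloudStorageType=AWS S3;AWSAccessKey=AKIAIOSFODNN7EXAMPLE;AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY;AWSS3Bucket=my-staging-bucket;AWSRegion=OHIO;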
To authenticate, set the following:
Here is an example of the connection string:
"Server=https://adb-8439982502599436.16.azuredatabricks.net;HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;database=default; AuthScheme=AzureAD;InitiateOAuth=GETANDREFRESH;AzureTenant=94be69e7-edb4-4fda-ab12-95bfc22b232f;OAuthClientId=f544a825-9b69-43d9-bec2-3e99727a1669;CallbackURL=http://localhost;"
The following explains how OAuthU2M works:
After a user signs in and consents to the OAuthU2M authentication request, the tool or SDK receives an OAuth token. This token allows the tool or SDK to authenticate on the user's behalf.
The required settings are:
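For illustration, a browser-based U2M connection string might look like the following sketch (the Server and HTTPPath values are placeholders reused from the example above):
Server=https://adb-8439982502599436.16.azuredatabricks.net;HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;AuthScheme=OAuthU2M;InitiateOAuth=GETANDREFRESH;CallbackURL=http://localhost;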
The following explains how OAuthM2M works:
Register your application with the authorization server to obtain a client ID and secret. When accessing a protected resource, your machine sends a request with these credentials and desired scopes. The server verifies the provided information and, if valid, returns an access token. This token is included in the request header for API calls to access the resource.
The required settings are:
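A machine-to-machine connection string might then resemble this sketch (the AuthScheme value OAuthM2M and both credential values are assumptions shown for illustration only):
Server=https://adb-8439982502599436.16.azuredatabricks.net;HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;AuthScheme=OAuthM2M;OAuthClientId=my-service-principal-id;OAuthClientSecret=my-service-principal-secret;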
By default, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
The Cloud also supports setting client certificates. Set the following to connect using a client certificate.
To authenticate to an HTTP proxy, set the following:
Set the following properties:
The Cloud leverages Databricks Thrift to enable bidirectional SQL access to Databricks data. It supports Databricks databases running Databricks Runtime Version 9.1 - 13.X and the Pro and Classic Databricks SQL versions.
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Databricks.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Databricks, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
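As an illustrative sketch (assuming SQL Server-style EXECUTE syntax; the parameter name is hypothetical), a stored procedure such as the SearchSuppliers procedure referenced later in this section can be invoked with named parameters:
EXECUTE SearchSuppliers @CompanyName = 'Acme'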
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Databricks:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the [CData].[Sample].Customers table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the SearchSuppliers stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName='SearchSuppliers' AND (Direction=1 OR Direction=2)
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters. |
| DataTypeName | String | The name of the data type. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the [CData].[Sample].Customers table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows the associated dependent properties that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array of columns to check, in the given order, for use as the modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If true, Merge Mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate start date. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. | |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | The authentication scheme used. Accepted entries are PersonalAccessToken, Basic, OAuthU2M, AzureServicePrincipal and AzureAD. |
| Server | The host name or IP address of the server hosting the Databricks database. |
| User | The username used to authenticate with Databricks. |
| ProtocolVersion | The Protocol Version used to authenticate with Databricks. |
| Database | The name of the Databricks database. |
| HTTPPath | The path component of the URL endpoint. |
| Token | The token used to access the Databricks server. |
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSS3Bucket | The name of your AWS S3 bucket. |
| Property | Description |
| AzureStorageAccount | The name of your Azure storage account. |
| AzureAccessKey | The storage key associated with your Azure account. |
| AzureTenant | Identifies the Azure AD tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional). |
| AzureBlobContainer | The name of your Azure Blob storage container. |
AzureServicePrincipal Authentication
| Property | Description |
| AzureTenantId | The tenant ID of your Microsoft Azure Active Directory. |
| AzureClientId | The application (client) ID of your Microsoft Azure Active Directory application. |
| AzureClientSecret | The application (client) secret of your Microsoft Azure Active Directory application. |
| Property | Description |
| OAuthClientId | Specifies the client ID that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server. |
| OAuthClientSecret | Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server. |
| OAuthLevel | You can generate an access token at either the Databricks account level or workspace level. |
| DatabricksAccountId | The Databricks account ID. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Catalog | The default catalog name. |
| PrimaryKeyIdentifiers | Set this property to define primary keys. |
| Property | Description |
| CloudStorageType | Determines which cloud storage service will be used. |
| StoreTableInCloud | This option specifies whether the Databricks server will create and save tables in cloud storage. |
| QueryTableDetails | Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. If set to True, the query can take a long time to run. |
| UseUploadApi | This option specifies whether the Databricks Upload API will be used when executing Bulk INSERT operations. |
| UseCloudFetch | This option specifies whether to use CloudFetch to improve query efficiency when the data volume of the table is large. |
| UseLegacyDataModel | This option specifies whether to support Unity Catalog. |
| QueryAllMetadata | This option controls whether to query all catalogs and schemas/databases or only specified ones. The default catalog is specified by the Catalog property. The default schema/database is specified by the Database property. |
| CheckSQLWarehouseAvailability | This option specifies whether to check if the Databricks SQL Warehouse is up. |
| Property | Description |
| AllowPreparedStatement | Prepare a query statement before its execution. |
| ConnectRetryWaitTime | This property specifies the number of seconds to wait prior to retrying a connection request. |
| ApplicationName | The application name connection string property expresses the HTTP User-Agent. |
| AsyncQueryTimeout | The timeout for asynchronous requests issued by the provider to download large result sets. |
| DefaultColumnSize | Sets the default length of a string field for a provider. |
| DescribeCommand | The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC. |
| DetectView | Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view. |
| MaxRows | Specifies the maximum rows returned for queries without aggregation or GROUP BY. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property. |
| ServerConfigurations | A name-value list of server configuration variables to override the server defaults. |
| ServerTimeZone | Determines how to interpret datetime values from the server. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout. |
| UseDescTableQuery | This option specifies whether the columns will be retrieved using a DESC TABLE query or the GetColumns Thrift API. The GetColumns Thrift API works for Apache Spark 3.0.0 or later. |
| UseInsertSelectSyntax | DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The authentication scheme used. Accepted entries are PersonalAccessToken, Basic, OAuthU2M, AzureServicePrincipal and AzureAD. |
| Server | The host name or IP address of the server hosting the Databricks database. |
| User | The username used to authenticate with Databricks. |
| ProtocolVersion | The Protocol Version used to authenticate with Databricks. |
| Database | The name of the Databricks database. |
| HTTPPath | The path component of the URL endpoint. |
| Token | The token used to access the Databricks server. |
The authentication scheme used. Accepted entries are PersonalAccessToken, Basic, OAuthU2M, AzureServicePrincipal and AzureAD.
string
"PersonalAccessToken"
The Cloud supports the following authentication mechanisms. See the Getting Started chapter for authentication guides.
The host name or IP address of the server hosting the Databricks database.
string
""
The host name or IP address of the server hosting the Databricks database.
The username used to authenticate with Databricks.
string
""
The username used to authenticate with Databricks.
The Protocol Version used to authenticate with Databricks.
string
"8"
The Protocol Version used to authenticate with Databricks.
The name of the Databricks database.
string
""
The name of the Databricks database.
The path component of the URL endpoint.
string
""
This property is used to specify the path component of the URL endpoint.
This property can be found by following the path: Databricks main page -> Compute (in the left panel) -> {your Cluster} -> Advanced options (in the Configuration tab) -> JDBC/ODBC - HTTP Path
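For example, the connection string shown earlier on this page uses the following HTTP path value:
HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;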
The token used to access the Databricks server.
string
""
The token can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSS3Bucket | The name of your AWS S3 bucket. |
Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
string
""
To find your AWS account access key:
Your AWS account secret key. This value is accessible from your AWS security credentials page.
string
""
Your AWS account secret key. This value is accessible from your AWS security credentials page:
The hosting region for your Amazon Web Services.
string
"NORTHERNVIRGINIA"
The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSWEST, and ISOLATEDEUWEST.
The name of your AWS S3 bucket.
string
""
The name of your AWS S3 bucket.
This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AzureStorageAccount | The name of your Azure storage account. |
| AzureAccessKey | The storage key associated with your Azure account. |
| AzureTenant | Identifies the Azure AD tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional). |
| AzureBlobContainer | The name of your Azure Blob storage container. |
The name of your Azure storage account.
string
""
The name of your Azure storage account.
The storage key associated with your Azure account.
string
""
The storage key associated with your Azure storage account. You can retrieve it as follows:
Identifies the Azure AD tenant being used to access data, either by name (for example, contoso.onmicrosoft.com) or ID. (Conditional).
string
""
A tenant is a digital representation of your organization, primarily associated with a domain (for example, microsoft.com). The tenant is managed through a Tenant ID (also known as the directory ID), which is specified whenever you assign users permissions to access or manage Azure resources.
To locate the directory ID in the Azure Portal, navigate to Azure Active Directory > Properties.
Specifying AzureTenant is required when AuthScheme is AzureServicePrincipal or AzureServicePrincipalCert, or when AuthScheme is AzureAD and the user belongs to more than one tenant.
The name of your Azure Blob storage container.
string
""
The name of your Azure Blob storage container.
This section provides a complete list of the AzureServicePrincipal Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AzureTenantId | The tenant ID of your Microsoft Azure Active Directory. |
| AzureClientId | The application (client) ID of your Microsoft Azure Active Directory application. |
| AzureClientSecret | The application (client) secret of your Microsoft Azure Active Directory application. |
The tenant ID of your Microsoft Azure Active Directory.
string
""
The tenant ID of your Microsoft Azure Active Directory.
The application (client) ID of your Microsoft Azure Active Directory application.
string
""
The application (client) can be registered by following the steps in AuthScheme -> AzureServicePrincipal.
The application (client) secret of your Microsoft Azure Active Directory application.
string
""
The application (client) can be registered by following the steps in AuthScheme -> AzureServicePrincipal.
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthClientId | Specifies the client ID that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server. |
| OAuthClientSecret | Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server. |
| OAuthLevel | You can generate an access token at either the Databricks account level or workspace level. |
| DatabricksAccountId | The Databricks account ID. |
Specifies the client ID that was assigned when the custom OAuth application was created. (Also known as the consumer key.) This ID registers the custom application with the OAuth authorization server.
string
""
OAuthClientId is one of a handful of connection parameters that need to be set before users can authenticate via OAuth. For details, see Establishing a Connection.
Specifies the client secret that was assigned when the custom OAuth application was created. (Also known as the consumer secret.) This secret registers the custom application with the OAuth authorization server.
string
""
OAuthClientSecret is one of a handful of connection parameters that need to be set before users can authenticate via OAuth. For details, see Establishing a Connection.
You can generate an access token at either the Databricks account level or workspace level.
string
"WorkspaceLevel"
Accepted entries are WorkspaceLevel and AccountLevel.
The Databricks account ID.
string
""
To retrieve your account ID, go to the account console and click the down arrow next to your username in the upper right corner. In the drop-down menu you can view and copy your Account ID.
You must be in the account console to retrieve the account ID; the ID does not display inside a workspace.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Use '*' to accept all certificates; this is not recommended due to security concerns.
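For instance, to pin the server certificate using the sample SHA1 thumbprint from the table above, set:
SSLServerCert=34a929226ae0819f2ec14b4a3d904f801cbb150d;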
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Catalog | The default catalog name. |
| PrimaryKeyIdentifiers | Set this property to define primary keys. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
The default catalog name.
string
"hive_metastore"
When the property UseLegacyDataModel is set to True, this property also needs to be set to specify a default catalog. In most cases this should be "hive_metastore".
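As a sketch, a connection string that pins the default catalog might include (the Database value is a placeholder):
UseLegacyDataModel=True;Catalog=hive_metastore;Database=default;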
Set this property to define primary keys.
string
""
Databricks does not natively support primary keys, but for certain DML operations or database tools you may need to define them. By default this option is disabled so that no tables have primary keys.
Primary keys are defined using a list of rules that match tables and provide a list of key columns. For example, PrimaryKeyIdentifiers="*=my_key;my_table=my_key2,my_key3;my_nokeys_table=;" has three rules separated by semicolons:
Note that the table names can include catalog and schema qualifiers, and any table or column name may be quoted:
/* Rules with just table names use the default connection Catalog and Schema.
   All these rules refer to the same table with a connection where Catalog=someCatalog;Schema=someSchema */
someTable=a,b,c
someSchema.someTable=a,b,c
someCatalog.someSchema.someTable=a,b,c
/* Any table or column name may be quoted */
`someCatalog`."someSchema".[someTable]=`a`,[b],"c"
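Keys defined this way should then be reflected in the sys_keycolumns system table described earlier on this page; a quick verification query might look like this (my_table is a placeholder):
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='my_table'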
This section provides a complete list of the Databricks properties you can configure in the connection string for this provider.
| Property | Description |
| CloudStorageType | Determines which cloud storage service will be used. |
| StoreTableInCloud | This option specifies whether the Databricks server will create and save tables in cloud storage. |
| QueryTableDetails | Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. If set to True, the query can take a long time to run. |
| UseUploadApi | This option specifies whether the Databricks Upload API will be used when executing Bulk INSERT operations. |
| UseCloudFetch | This option specifies whether to use CloudFetch to improve query efficiency when the data volume of the table is large. |
| UseLegacyDataModel | This option specifies whether to support Unity Catalog. |
| QueryAllMetadata | This option controls whether to query all catalogs and schemas/databases or only specified ones. The default catalog is specified by the Catalog property. The default schema/database is specified by the Database property. |
| CheckSQLWarehouseAvailability | This option specifies whether to check if the Databricks SQL Warehouse is up. |
Determines which cloud storage service will be used.
string
"DBFS"
By default, the "DBFS" provided by Databricks is used. If set to "Azure Blob storage", these properties are required: AzureStorageAccount AzureAccessKey AzureBlobContainer If set to "AWS S3", these properties are required: AWSAccessKey AWSSecretKey AWSS3Bucket AWSRegion
This option specifies whether the Databricks server will create and save tables in cloud storage.
bool
false
Setting this property to "True" creates and saves tables in cloud storage; in this case, the CloudStorageType property cannot be "DBFS".
Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. If set to True, the query can take a long time to run.
bool
false
Specifies whether to use DESCRIBE FORMATTED ... to query detailed table information. If set to True, the query can take a long time to run.
This option specifies whether the Databricks Upload API will be used when executing Bulk INSERT operations.
bool
false
Setting this property to true will improve performance if there is a large amount of data in a Bulk INSERT operation.
This option specifies whether to use CloudFetch to improve query efficiency when the data volume of the table is large.
bool
false
This option specifies whether to use CloudFetch to improve query efficiency when the table contains over one million entries.
This option specifies whether to support Unity Catalog.
bool
true
True by default. This enables multi-catalog support for both the Unity Catalog and the single-catalog case. A single catalog is usually named "hive_metastore".
Setting this property to False disables multi-catalog support, in which case there is only one catalog, named "CData".
This option controls whether to query all catalogs and schemas/databases or only specified ones. The default catalog is specified by the Catalog property. The default schema/database is specified by the Database property.
bool
false
When set to True, the driver queries metadata from all catalogs and schemas/databases.
When set to False:
This option specifies whether to check if the Databricks SQL Warehouse is up.
bool
true
This option specifies whether to check if the Databricks SQL Warehouse is up.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| AllowPreparedStatement | Prepare a query statement before its execution. |
| ConnectRetryWaitTime | This property specifies the number of seconds to wait prior to retrying a connection request. |
| ApplicationName | The application name connection string property expresses the HTTP User-Agent. |
| AsyncQueryTimeout | The timeout for asynchronous requests issued by the provider to download large result sets. |
| DefaultColumnSize | Sets the default length of a string field for a provider. |
| DescribeCommand | The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC. |
| DetectView | Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view. |
| MaxRows | Specifies the maximum rows returned for queries without aggregation or GROUP BY. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property. |
| ServerConfigurations | A name-value list of server configuration variables to override the server defaults. |
| ServerTimeZone | Determines how to interpret datetime values from the server. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout. |
| UseDescTableQuery | This option specifies whether the columns will be retrieved using a DESC TABLE query or the GetColumns Thrift API. The GetColumns Thrift API works for Apache Spark 3.0.0 or later. |
| UseInsertSelectSyntax | DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release. |
Prepare a query statement before its execution.
bool
true
If the AllowPreparedStatement property is set to false, statements are parsed each time they are executed. Setting this property to false can be useful if you are executing many different queries only once.
If you are executing the same query repeatedly, you will generally see better performance by leaving this property at the default, true. Preparing the query avoids recompiling the same query over and over. However, prepared statements also require the Cloud to keep the connection active and open while the statement is prepared.
This property specifies the number of seconds to wait prior to retrying a connection request.
string
"-1"
This property only applies to the following case: when attempting to establish a connection to the Databricks cluster, you receive the response 'HTTP response with error code 503: The Cluster is starting'.
Specify a reasonable positive integer value to enable this feature, generally 30-60 (seconds).
The default value of '-1' disables this feature.
Specify the maximum number of retries with MaximumRequestRetries.
The application name connection string property expresses the HTTP User-Agent.
string
""
The format is:
[isv-name+product-name]/[product-version] [comment]
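A hypothetical value following this format (all names are placeholders):
ApplicationName=AcmeISV+AcmeTool/1.0 (beta)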
The timeout for asynchronous requests issued by the provider to download large result sets.
int
300
If the AsyncQueryTimeout property is set to 0, asynchronous operations do not time out; instead, they run until they complete successfully or encounter an error condition. This property is distinct from Timeout, which applies to individual operations, while AsyncQueryTimeout applies to the execution time of the operation as a whole.
If AsyncQueryTimeout expires and the asynchronous request has not finished being processed, the Cloud raises an error condition.
Sets the default length of a string field for a provider.
string
"1048576"
Sets the default length of a string field for a provider. If not set by the provider, the value will be 1048576.
The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC.
string
"DESCRIBE"
The describe command used to communicate with the Hive server. Accepted entries are DESCRIBE and DESC.
Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view.
bool
false
Specifies whether to use DESCRIBE FORMATTED ... to detect whether the specified table is a view.
Specifies the maximum rows returned for queries without aggregation or GROUP BY.
int
-1
This property sets an upper limit on the number of rows the Cloud returns for queries that do not include aggregation or GROUP BY clauses. This limit ensures that queries do not return excessively large result sets by default.
When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting. If MaxRows is set to "-1", no row limit is enforced unless a LIMIT clause is explicitly included in the query.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format: "Table1=Column1;Table1=Column2;Table2=Column3"
To include all pseudocolumns for all tables use: "*=*"
A name-value list of server configuration variables to override the server defaults.
string
""
This property takes a comma separated list of configuration variables specified as name-value pairs. Any values specified here will be sent to the Hive server to override the default values.
Example: hive.enforce.bucketing=true,hive.enforce.sorting=true
Determines how to interpret datetime values from the server.
string
"UTC"
Databricks uses the UTC time zone by default. The server returns datetime values in UTC, which the driver converts to the local time zone.
If this property is set to LOCAL, the server's time zone is treated as the local time zone and no time zone conversion is performed.
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
int
60
This property controls the maximum time, in seconds, that the Cloud waits for an operation to complete before canceling it. If the timeout period expires before the operation finishes, the Cloud cancels the operation and throws an exception.
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Setting this property to 0 disables the timeout, allowing operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server. Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
This option specifies whether the columns will be retrieved using a DESC TABLE query or the GetColumns Thrift API. The GetColumns Thrift API works for Apache Spark 3.0.0 or later.
bool
true
When set to true, a DESC TABLE query will be issued to retrieve the columns for the table.
DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release.
bool
false
When set to true, an INSERT INTO SELECT statement will be used when executing insert statements. When set to false, an INSERT INTO VALUES statement will be used.
Unless explicitly specified, this option will be configured accordingly based on the Databricks version.