CData Cloud offers access to Databricks across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to Databricks through CData Cloud.
CData Cloud allows you to standardize and configure connections to Databricks as though it were any other OData endpoint or standard SQL Server.
This page provides a guide to Establishing a Connection to Databricks in CData Cloud, information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Databricks and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Databricks through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Databricks by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
To connect to a Databricks cluster, set the following properties:
You can find the required values in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
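For example, a personal access token connection string might look like the following sketch; the server, HTTP path, and token values are placeholders only:
"Server=https://adb-1234567890123456.7.azuredatabricks.net;HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh;Database=default;AuthScheme=PersonalAccessToken;Token=<your personal access token>;"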
The Cloud supports DBFS, Azure Blob Storage, and AWS S3 for uploading CSV files.
To use DBFS for cloud storage, set the CloudStorageType property to DBFS.
Set the following properties:
Set the following properties:
To authenticate, set the following:
Note: Microsoft has rebranded Azure AD as Entra ID. In topics that require the user to interact with the Entra ID Admin site, we use the same names Microsoft does. However, there are still CData connection properties whose names or values reference "Azure AD".
Before you can authenticate using Entra ID, you must first register an application with the Entra ID endpoint in the Azure portal, as described in Creating an Entra ID (Azure AD) Application.
(See also Microsoft's own Configure an app in Azure portal.)
Once the application has been registered, set these properties:
When connecting, a web page opens that prompts you to authenticate. After successful authentication, the connection is established.
Example connection string:
"Server=https://adb-8439982502599436.16.azuredatabricks.net;HTTPPath=sql/protocolv1/o/8439982502599436/0810-011933-odsz4s3r;database=default; AuthScheme=AzureAD;InitiateOAuth=GETANDREFRESH;AzureTenant=94be69e7-edb4-4fda-ab12-95bfc22b232f;OAuthClientId=f544a825-9b69-43d9-bec2-3e99727a1669;CallbackURL=http://localhost;"
The following explains how OAuthU2M works:
After a user signs in and consents to the OAuthU2M authentication request, the tool or SDK receives an OAuth token. This token allows the tool or SDK to authenticate on the user's behalf.
By default, the Cloud uses an embedded OAuth application with a redirect URL of http://localhost:8020, which requires no setup. However, to customize the redirect URL or scopes used during authentication, you can register a custom OAuth application in the Databricks Account Console.
For instructions on registering a custom OAuth application, see Creating a Custom OAuth Application.
The required settings are:
The following explains how OAuthM2M works:
Register your application with the authorization server to obtain a client ID and secret. When accessing a protected resource, your machine sends a request with these credentials and desired scopes. The server verifies the provided information and, if valid, returns an access token. This token is included in the request header for API calls to access the resource.
The required settings are:
By default, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
The Cloud also supports setting client certificates. Set the following properties to connect using a client certificate:
To authenticate to an HTTP proxy, set the following:
Set the following properties:
The CData Cloud leverages Databricks Thrift to enable bidirectional SQL access to Databricks data. The Cloud supports databases that run Databricks Runtime Version 9.1 and later. It also supports the Pro and Classic Databricks SQL versions. The data model is fully dynamic and automatically reflects the data that is available in your Databricks environment. Any updates to tables or schemas are immediately accessible through SQL.
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Databricks.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Databricks, along with an indication of whether the procedure succeeded or failed.
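For illustration, a stored procedure call typically uses SQL Server-style EXECUTE syntax with named parameters; the procedure and parameter names below are hypothetical placeholders rather than procedures guaranteed to exist in this provider:
EXECUTE MyProcedure Param1 = 'value1', Param2 = 'value2'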
| Name | Description |
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Databricks:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the [CData].[Sample].Customers table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
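To list only the primary key columns of the same table, you can additionally filter on the IsKey column (described below), as in this example:
SELECT ColumnName FROM sys_tablecolumns WHERE TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample' AND IsKey='True'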
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the SearchSuppliers stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'SearchSuppliers' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'SearchSuppliers' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native Databricks procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the [CData].[Sample].Customers table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
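Similarly, the following query should return only the connection properties that are required to connect:
SELECT Name, Value FROM sys_connection_props WHERE Required = 'True'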
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows the dependent properties that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether the COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns that are checked, in the given order, for use as the modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is true, merge mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate start date. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. | |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | Specifies the authentication scheme that the provider uses to connect to Databricks. |
| Server | Specifies the host name of your Databricks workspace or SQL warehouse endpoint. |
| ProtocolVersion | Specifies the protocol version used by the provider when authenticating and exchanging data with Databricks. |
| Database | Specifies the name of the database in Databricks that the provider uses for the connection. |
| HTTPPath | Specifies the HTTP path for the compute resource in Databricks that the provider connects to. |
| Token | Specifies the personal access token used by the provider to authenticate to Databricks. |
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSS3Bucket | Specifies the name of the AWS S3 bucket that the provider uses for staging or transferring data when connecting to Databricks. |
| Property | Description |
| AzureStorageAccount | The name of your Azure storage account. |
| AzureAccessKey | The storage key associated with your Azure account. |
| AzureTenant | Identifies the Databricks tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID. |
| AzureBlobContainer | Specifies the name of the Azure Blob Storage container that the provider uses for staging or transferring data when connecting to Databricks. |
AzureServicePrincipal Authentication
| Property | Description |
| AzureTenantId | Specifies the directory (tenant) ID of your Microsoft Entra ID that the provider uses to authenticate to Databricks. |
| AzureClientId | Specifies the application (client) ID of the Microsoft Entra ID application that the provider uses to authenticate to Databricks. |
| AzureClientSecret | Specifies the client secret for the Microsoft Entra ID application that the provider uses to authenticate to Databricks. |
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| OAuthLevel | Specifies whether the provider performs OAuth authentication at the workspace level or at the account level in Databricks. |
| DatabricksAccountId | Specifies the unique account ID for your Databricks account. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Catalog | Specifies the default catalog that the provider uses when connecting to Databricks. |
| PrimaryKeyIdentifiers | Specifies primary keys for tables in the hive_metastore catalog. |
| Property | Description |
| CloudStorageType | Specifies which cloud storage service the provider uses for staging or transferring data. |
| StoreTableInCloud | Specifies whether the Databricks server creates and stores new tables in external cloud storage instead of the Databricks File System (DBFS). |
| QueryTableDetails | Specifies whether the provider uses the DESCRIBE FORMATTED command in Databricks to retrieve detailed table metadata. |
| UseUploadApi | Specifies whether the provider uses the Databricks Upload API to optimize Bulk INSERT operations. |
| UseCloudFetch | Specifies whether the provider uses CloudFetch to optimize data transfer for large query result sets in Databricks. |
| SupportMultiCatalog | Specifies whether the provider enables multi-catalog support for Unity Catalog in Databricks. |
| QueryAllMetadata | Specifies whether the provider retrieves metadata from all available catalogs and schemas in Databricks or only from those specified in the connection. |
| CheckSQLWarehouseAvailability | Specifies whether the provider checks the availability of the SQL Warehouse in Databricks before establishing a connection. |
| Property | Description |
| AllowPreparedStatement | Specifies whether the provider prepares SQL statements before executing them to improve performance on repeated queries. |
| ConnectRetryWaitTime | Specifies the number of seconds the provider waits before retrying a connection request. |
| ApplicationName | Specifies the application name that the provider includes in the HTTP User-Agent header when connecting to Databricks. |
| AsyncQueryTimeout | Specifies the number of seconds the provider waits for asynchronous requests that retrieve large result sets before timing out. |
| DefaultColumnSize | Specifies the default length, in characters, of string fields when the provider cannot determine a size from metadata. |
| DescribeCommand | Specifies which command the provider uses to retrieve metadata from Databricks. |
| DetectView | Specifies whether the provider uses the DESCRIBE FORMATTED command to determine whether an object in Databricks is a table or a view. |
| IncludeSystemSchemas | Specifies whether to include the system catalog 'system' and system schema 'information_schema'. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ServerConfigurations | Specifies configuration variables to override the default settings on the Databricks server. |
| ServerTimeZone | Specifies how the provider interprets datetime values returned from Databricks. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| UseDescTableQuery | Specifies whether the provider retrieves table column metadata using a DESC TABLE query instead of the Thrift API. |
| UseInsertSelectSyntax | DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | Specifies the authentication scheme that the provider uses to connect to Databricks. |
| Server | Specifies the host name of your Databricks workspace or SQL warehouse endpoint. |
| ProtocolVersion | Specifies the protocol version used by the provider when authenticating and exchanging data with Databricks. |
| Database | Specifies the name of the database in Databricks that the provider uses for the connection. |
| HTTPPath | Specifies the HTTP path for the compute resource in Databricks that the provider connects to. |
| Token | Specifies the personal access token used by the provider to authenticate to Databricks. |
Specifies the authentication scheme that the provider uses to connect to Databricks.
string
"PersonalAccessToken"
The AuthScheme property determines which authentication flow the Cloud uses when connecting to Databricks. Each option requires a different set of supporting connection properties.
When set to PersonalAccessToken, the Cloud authenticates by using a personal access token from Databricks.
When set to OAuthU2M, the Cloud uses the OAuth user-to-machine (U2M) flow. Set OAuthLevel, DatabricksAccountId (optional), OAuthClientId, and CallbackURL.
When set to OAuthM2M, the Cloud uses the OAuth machine-to-machine (M2M) flow. Set OAuthLevel, DatabricksAccountId (optional), OAuthClientId, and OAuthClientSecret. The client ID and secret can be generated by creating a Databricks service principal.
When this property is set to AzureServicePrincipal, the Cloud authenticates with an Azure service principal. Set AzureTenantId, AzureClientId, and AzureClientSecret. Follow the instructions in Get Microsoft Entra ID tokens for service principals to register an Azure AD application, and in Assign Azure roles using the Azure portal to assign appropriate roles in Azure.
When set to AzureAD, the Cloud authenticates through Azure Active Directory OAuth. Set AzureTenantId, OAuthClientId, OAuthClientSecret (optional), and CallbackURL. Follow the instructions in Configure an app in Azure portal to register an Azure AD application. The client secret is required only if the platform type in your Azure app is "Web".
When set to AzureMSI, the Cloud automatically obtains Azure Managed Service Identity credentials when running on an Azure VM.
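As a reference sketch, a workspace-level OAuth M2M connection string might resemble the following; all IDs and secrets are placeholders that you generate for your own service principal:
"Server=https://adb-1234567890123456.7.azuredatabricks.net;HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh;Database=default;AuthScheme=OAuthM2M;OAuthLevel=WorkspaceLevel;OAuthClientId=<service principal client ID>;OAuthClientSecret=<service principal secret>;InitiateOAuth=GETANDREFRESH;"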
Specifies the host name of your Databricks workspace or SQL warehouse endpoint.
string
""
The Server property identifies the host name used by the Cloud to connect to Databricks. This is typically the domain portion of your workspace URL, for example: acme.cloud.databricks.com.
You can find this value in your browser’s address bar when signed in to your Databricks workspace.
This property is required for all connection types and determines which workspace or endpoint the Cloud connects to.
Specifies the protocol version used by the provider when authenticating and exchanging data with Databricks.
string
"8"
The ProtocolVersion property defines the protocol version used between the Cloud and Databricks. This setting determines how requests and responses are encoded during authentication and query execution.
The default value of 8 represents the latest supported protocol version in Databricks.
Changing this value is typically only necessary when connecting to legacy environments or troubleshooting protocol compatibility issues.
Specifies the name of the database in Databricks that the provider uses for the connection.
string
""
A database in Databricks is a logical container for schemas and tables within a catalog. In Unity Catalog environments, each database belongs to a specific catalog, such as catalog_name.database_name.
Set this property to the database that you want the Cloud to query by default. If no value is provided, the default database for the configured catalog is used.
This property is useful for targeting a specific database when working with multiple data environments or Unity Catalog configurations in Databricks.
Specifies the HTTP path for the compute resource in Databricks that the provider connects to.
string
""
The HTTPPath property identifies the path component of the JDBC or ODBC endpoint for your compute resource in Databricks. This value directs the Cloud to the correct SQL Warehouse or cluster.
You can find the HTTP path in the Databricks workspace under Compute > your SQL Warehouse or Cluster > Connection details (or Advanced options) > JDBC/ODBC > HTTP Path.
An example format is sql/protocolv1/o/123456789/1234-123456-1ab2cdef.
This property is required for all authenticated connections to Databricks and is useful for ensuring the Cloud targets the correct compute resource.
Specifies the personal access token used by the provider to authenticate to Databricks.
string
""
The Token property provides the authentication token that the Cloud uses to access your Databricks workspace or SQL warehouse when AuthScheme is set to PersonalAccessToken.
You can generate a new token in the Databricks UI by navigating to User Settings > Access Tokens.
This token grants access according to your user permissions in Databricks and should be stored securely.
This property is required when using the PersonalAccessToken authentication scheme.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSS3Bucket | Specifies the name of the AWS S3 bucket that the provider uses for staging or transferring data when connecting to Databricks. |
Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
string
""
To find your AWS account access key:
Your AWS account secret key. This value is accessible from your AWS security credentials page.
string
""
Your AWS account secret key. This value is accessible from your AWS security credentials page:
The hosting region for your Amazon Web Services.
string
"NORTHERNVIRGINIA"
The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, TAIPEI, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, THAILAND, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, MEXICOCENTRAL, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSEASTF, ISOLATEDUSSOUTHF, ISOLATEDUSWEST and ISOLATEDEUWEST.
Specifies the name of the AWS S3 bucket that the provider uses for staging or transferring data when connecting to Databricks.
string
""
An Amazon S3 bucket is a top-level container for storing objects in AWS. The AWSS3Bucket property identifies the specific bucket the Cloud accesses for uploading, downloading, or staging data.
Enter the bucket name exactly as it appears in the AWS Management Console.
This property is useful when your Databricks connection relies on AWS storage for temporary data operations or large result transfers.
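For example, a connection that stages data through S3 might include a fragment like the following; the keys and bucket name are placeholders, and the exact CloudStorageType spelling should be confirmed against that property's allowed values:
CloudStorageType=AWS S3;AWSAccessKey=<your access key>;AWSSecretKey=<your secret key>;AWSRegion=NORTHERNVIRGINIA;AWSS3Bucket=my-staging-bucket;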
This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AzureStorageAccount | The name of your Azure storage account. |
| AzureAccessKey | The storage key associated with your Azure account. |
| AzureTenant | Identifies the Databricks tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID. |
| AzureBlobContainer | Specifies the name of the Azure Blob Storage container that the provider uses for staging or transferring data when connecting to Databricks. |
The name of your Azure storage account.
string
""
The name of your Azure storage account.
The storage key associated with your Azure account.
string
""
The storage key associated with your Azure storage account. You can retrieve it as follows:
Identifies the Databricks tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID.
string
""
A tenant is a digital container for your organization's users and resources, managed through Microsoft Entra ID (formerly Azure AD). Each tenant is associated with a unique directory ID, and often with a custom domain (for example, microsoft.com or contoso.onmicrosoft.com).
To find the directory (tenant) ID in the Microsoft Entra Admin Center, navigate to Microsoft Entra ID > Properties and copy the value labeled "Directory (tenant) ID".
This property is required in the following cases:
You can provide the tenant value in one of two formats:
Specifying the tenant explicitly ensures that the authentication request is routed to the correct directory, which is especially important when a user belongs to multiple tenants or when using service principal–based authentication.
If this value is omitted when required, authentication may fail or connect to the wrong tenant. This can result in errors such as unauthorized or resource not found.
Specifies the name of the Azure Blob Storage container that the provider uses for staging or transferring data when connecting to Databricks.
string
""
An Azure Blob Storage container is a logical grouping of blobs (files) within a storage account. The AzureBlobContainer property identifies the container that the Cloud accesses for uploading, downloading, or staging data.
Enter the container name as it appears in your Azure Storage account.
This property is useful when your Databricks connection relies on Azure storage for temporary data operations or large result transfers.
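As a sketch, an Azure Blob staging configuration might include a fragment like this; the account, key, and container names are placeholders, and the exact CloudStorageType spelling should be confirmed against that property's allowed values:
CloudStorageType=Azure Blob storage;AzureStorageAccount=mystorageaccount;AzureAccessKey=<your storage key>;AzureBlobContainer=my-staging-container;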
This section provides a complete list of the AzureServicePrincipal Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AzureTenantId | Specifies the directory (tenant) ID of your Microsoft Entra ID that the provider uses to authenticate to Databricks. |
| AzureClientId | Specifies the application (client) ID of the Microsoft Entra ID application that the provider uses to authenticate to Databricks. |
| AzureClientSecret | Specifies the client secret for the Microsoft Entra ID application that the provider uses to authenticate to Databricks. |
Specifies the directory (tenant) ID of your Microsoft Entra ID that the provider uses to authenticate to Databricks.
string
""
The AzureTenantId property identifies the Microsoft Entra ID (formerly Azure Active Directory) tenant that owns the registered application used for authentication.
You can find the tenant ID in the Azure portal under Microsoft Entra ID > Overview > Tenant ID.
This property is required when the AuthScheme is set to AzureServicePrincipal and may also be required for AzureAD authentication, depending on your OAuth configuration.
Specifies the application (client) ID of the Microsoft Entra ID application that the provider uses to authenticate to Databricks.
string
""
The AzureClientId property identifies the registered application in Microsoft Entra ID (formerly Azure Active Directory) that represents your service principal.
You can find the application (client) ID in the Azure portal under Microsoft Entra ID > App registrations > Your application.
This property is required when the AuthScheme is set to AzureServicePrincipal or AzureAD.
Specifies the client secret for the Microsoft Entra ID application that the provider uses to authenticate to Databricks.
string
""
The AzureClientSecret property stores the client secret associated with your Microsoft Entra ID (formerly Azure Active Directory) application. The secret acts as a password that the Cloud uses when authenticating through an Azure service principal.
You can create and view client secrets in the Azure portal under Microsoft Entra ID > App registrations > Your application > Certificates & secrets.
This property is required when the AuthScheme is set to AzureServicePrincipal and may also be required for AzureAD if the registered platform type in your Azure app is "Web".
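Putting these three properties together with the connection basics, an Azure service principal connection string might look like the following sketch; the GUIDs and secret shown are placeholders:
"Server=https://adb-1234567890123456.7.azuredatabricks.net;HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh;Database=default;AuthScheme=AzureServicePrincipal;AzureTenantId=00000000-0000-0000-0000-000000000000;AzureClientId=11111111-1111-1111-1111-111111111111;AzureClientSecret=<your client secret>;"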
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| OAuthLevel | Specifies whether the provider performs OAuth authentication at the workspace level or at the account level in Databricks. |
| DatabricksAccountId | Specifies the unique account ID for your Databricks account. |
Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication.
string
""
This property is required in two cases:
(When the driver provides embedded OAuth credentials, this value may already be provided by the Cloud and thus not require manual entry.)
OAuthClientId is generally used alongside other OAuth-related properties such as OAuthClientSecret and OAuthSettingsLocation when configuring an authenticated connection.
OAuthClientId is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can usually find this value in your identity provider’s application registration settings. Look for a field labeled Client ID, Application ID, or Consumer Key.
While the client ID is not considered a confidential value like a client secret, it is still part of your application's identity and should be handled carefully. Avoid exposing it in public repositories or shared configuration files.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.)
string
""
This property (sometimes called the application secret or consumer secret) is required when using a custom OAuth application in any flow that requires secure client authentication, such as web-based OAuth, service-based connections, or certificate-based authorization flows. It is not required when using an embedded OAuth application.
The client secret is used during the token exchange step of the OAuth flow, when the driver requests an access token from the authorization server. If this value is missing or incorrect, authentication fails with either an invalid_client or an unauthorized_client error.
OAuthClientSecret is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can obtain this value from your identity provider when registering the OAuth application.
Notes:
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created.
string
""
Scopes are set to define what kind of access the authenticating user will have; for example, read, read and write, or restricted access to sensitive information. System administrators can use scopes to selectively enable access by functionality or security clearance.
When InitiateOAuth is set to GETANDREFRESH, you must use this property if you want to change which scopes are requested.
When InitiateOAuth is set to either REFRESH or OFF, you can change which scopes are requested using either this property or the Scope input.
Specifies whether the provider performs OAuth authentication at the workspace level or at the account level in Databricks.
string
"WorkspaceLevel"
The OAuthLevel property determines the scope of OAuth authentication when connecting to Databricks. When this property is set to WorkspaceLevel, the Cloud authenticates to a specific workspace and obtains an OAuth token that grants access only within that workspace.
When this property is set to AccountLevel, the Cloud authenticates through the account console and obtains a token that grants access to account-level resources. This level is required for features such as Unity Catalog or user and permission management across multiple workspaces.
This property is useful for choosing the OAuth token scope that matches your intended access level and deployment configuration in Databricks.
Specifies the unique account ID for your Databricks account.
string
""
The DatabricksAccountId property identifies your Databricks account for account-level authentication flows. Each account ID is unique and distinct from workspace identifiers.
You can find the account ID in the Databricks Account Console by selecting your username in the upper-right corner and viewing the Account ID in the drop-down menu.
This value is only visible in the account console and does not appear within individual workspaces.
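For account-level OAuth flows, the relevant portion of the connection string might look like the following; the account ID shown is a placeholder:
OAuthLevel=AccountLevel;DatabricksAccountId=00000000-0000-0000-0000-000000000000;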
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Catalog | Specifies the default catalog that the provider uses when connecting to Databricks. |
| PrimaryKeyIdentifiers | Specifies primary keys for tables in the hive_metastore catalog. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Specifies the default catalog that the provider uses when connecting to Databricks.
string
"hive_metastore"
A catalog in Databricks is a top-level data namespace that organizes schemas and tables. Unity Catalog environments can contain multiple catalogs, while legacy workspaces use the default Hive metastore.
When the SupportMultiCatalog property is set to true, the Catalog property must also be set to specify which catalog to use by default.
In most cases, this value should remain hive_metastore unless your workspace uses Unity Catalog and you want to connect to a different catalog.
This property is useful for defining the default data namespace when working with multiple catalogs or Unity Catalog configurations in Databricks.
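For example, to work against a Unity Catalog namespace other than the default, the connection string might include a fragment like this; the catalog name is hypothetical:
SupportMultiCatalog=true;Catalog=my_unity_catalog;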
Specifies primary keys for tables in the hive_metastore catalog.
string
""
The PrimaryKeyIdentifiers property defines primary keys for tables that do not natively include them in Databricks. This property only applies to the hive_metastore catalog.
Since Databricks does not enforce primary keys, defining them manually can enable DML operations such as UPDATE or DELETE and improve compatibility with tools that expect primary key metadata.
Primary keys are defined as a list of rules, where each rule maps tables to one or more key columns. Multiple rules are separated by semicolons.
For example:
PrimaryKeyIdentifiers="*=my_key;my_table=my_key2,my_key3;my_nokeys_table=;"
This example defines three rules: *=my_key assigns my_key as the primary key for any table not matched by a more specific rule; my_table=my_key2,my_key3 gives my_table a composite primary key of my_key2 and my_key3; and my_nokeys_table= declares that my_nokeys_table has no primary key.
Table identifiers in each rule can include the table name only, the schema and table, or the catalog, schema, and table. Any name can be quoted using SQL-style delimiters:
/* Rules with just table names use the default connection Catalog and Schema.
   All these rules refer to the same table when Catalog=someCatalog and Schema=someSchema */
someTable=a,b,c
someSchema.someTable=a,b,c
someCatalog.someSchema.someTable=a,b,c

/* Any table or column name may be quoted */
`someCatalog`."someSchema".[someTable]=`a`,[b],"c"
This property is useful for defining primary key metadata in Databricks when working with tools or operations that rely on key-based updates or synchronization logic.
This section provides a complete list of the Databricks properties you can configure in the connection string for this provider.
| Property | Description |
| CloudStorageType | Specifies which cloud storage service the provider uses for staging or transferring data. |
| StoreTableInCloud | Specifies whether the Databricks server creates and stores new tables in external cloud storage instead of the Databricks File System (DBFS). |
| QueryTableDetails | Specifies whether the provider uses the DESCRIBE FORMATTED command in Databricks to retrieve detailed table metadata. |
| UseUploadApi | Specifies whether the provider uses the Databricks Upload API to optimize Bulk INSERT operations. |
| UseCloudFetch | Specifies whether the provider uses CloudFetch to optimize data transfer for large query result sets in Databricks. |
| SupportMultiCatalog | Specifies whether the provider enables multi-catalog support for Unity Catalog in Databricks. |
| QueryAllMetadata | Specifies whether the provider retrieves metadata from all available catalogs and schemas in Databricks or only from those specified in the connection. |
| CheckSQLWarehouseAvailability | Specifies whether the provider checks the availability of the SQL Warehouse in Databricks before establishing a connection. |
Specifies which cloud storage service the provider uses for staging or transferring data.
string
"DBFS"
The CloudStorageType property determines the storage system used by the Cloud when reading or writing large result sets or temporary files.
When this property is set to DBFS, the Cloud uses the internal Databricks File System provided by Databricks.
When this property is set to Azure Blob storage, the following properties are required: AzureStorageAccount, AzureAccessKey, and AzureBlobContainer.
When this property is set to AWS S3, the following properties are required: AWSAccessKey, AWSSecretKey, AWSS3Bucket, and AWSRegion.
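For example, a fragment that stages data through AWS S3 might look like the following (all credential, bucket, and region values are placeholders, and the exact value spelling accepted for CloudStorageType and AWSRegion should be confirmed for your provider version):
CloudStorageType=AWS S3;AWSAccessKey=myAccessKey;AWSSecretKey=mySecretKey;AWSS3Bucket=my-staging-bucket;AWSRegion=us-east-1;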
Specifies whether the Databricks server creates and stores new tables in external cloud storage instead of the Databricks File System (DBFS).
bool
false
When this property is set to true, the Databricks server stores created tables in an external cloud storage location based on the value of CloudStorageType. In this case, CloudStorageType cannot be set to DBFS.
When this property is set to false, new tables are stored in the default Databricks File System.
This property is useful for directing table storage to managed cloud environments such as AWS S3 or Azure Blob Storage for scalability, durability, or data governance purposes.
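For example, a fragment like the following (placeholder account, key, and container names) directs new tables to Azure Blob Storage:
StoreTableInCloud=true;CloudStorageType=Azure Blob storage;AzureStorageAccount=mystorageaccount;AzureAccessKey=myAccessKey;AzureBlobContainer=my-container;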
Specifies whether the provider uses the DESCRIBE FORMATTED command in Databricks to retrieve detailed table metadata.
bool
false
When this property is set to true, the Cloud runs a DESCRIBE FORMATTED query for each table to collect detailed metadata such as file format, table location, and partitioning information.
This can provide a more complete view of table structure, but may significantly increase metadata query times, especially in large environments.
When this property is set to false, only standard table metadata is queried which improves performance.
This property is useful when applications or workflows require extended table metadata beyond standard schema information.
Specifies whether the provider uses the Databricks Upload API to optimize Bulk INSERT operations.
bool
false
When this property is set to true, the Cloud uses the Databricks Upload API to stage and load data in bulk, improving performance for large insert operations by reducing network overhead.
When this property is set to false, data is inserted directly over the active connection, which can be slower for large datasets but may be preferable for smaller or transactional workloads.
This property is useful for accelerating high-volume data ingestion scenarios in Databricks.
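For example, to enable bulk staging for large INSERT batches:
UseUploadApi=true;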
Specifies whether the provider uses CloudFetch to optimize data transfer for large query result sets in Databricks.
bool
false
When this property is set to true, the Cloud uses CloudFetch, a mechanism in Databricks that improves performance for large query results by temporarily storing and transferring data through cloud storage rather than the active connection.
When this property is set to false, the Cloud streams query results directly over the connection.
CloudFetch is most beneficial for queries that return very large result sets (for example, more than one million rows). Enabling it can reduce memory usage and improve data retrieval speed for large-scale operations.
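For example, to enable CloudFetch for large result sets:
UseCloudFetch=true;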
Specifies whether the provider enables multi-catalog support for Unity Catalog in Databricks.
bool
true
When this property is set to true, the Cloud supports multiple catalogs through Unity Catalog, allowing access to both Unity-managed and legacy catalogs such as hive_metastore.
When this property is set to false, multi-catalog support is disabled, and only a single catalog named CData is available for all operations.
This property is useful for connecting to environments that use Unity Catalog for cross-workspace data governance or for simplifying metadata discovery in single-catalog configurations.
Specifies whether the provider retrieves metadata from all available catalogs and schemas in Databricks or only from those specified in the connection.
bool
false
When this property is set to true, the Cloud queries metadata from all catalogs and schemas that the connection can access in Databricks.
When this property is set to false, metadata discovery is limited to the catalogs and schemas specified by the Catalog and Database connection properties.
This property is useful for optimizing connection startup time and reducing the amount of metadata retrieved from Databricks when working with large or complex environments.
Specifies whether the provider checks the availability of the SQL Warehouse in Databricks before establishing a connection.
bool
true
When this property is set to true, the Cloud performs a preliminary check to verify that the target SQL Warehouse in Databricks is active before completing the connection. This helps prevent query failures caused by inactive or paused warehouses.
When this property is set to false, the Cloud skips the availability check and attempts to connect immediately. This can shorten connection time but may result in an error if the warehouse is not running.
This property is useful for ensuring connection stability in environments where SQL Warehouses may pause due to idle time or cost-saving configurations.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| AllowPreparedStatement | Specifies whether the provider prepares SQL statements before executing them to improve performance on repeated queries. |
| ConnectRetryWaitTime | Specifies the number of seconds the provider waits before retrying a connection request. |
| ApplicationName | Specifies the application name that the provider includes in the HTTP User-Agent header when connecting to Databricks. |
| AsyncQueryTimeout | Specifies the number of seconds the provider waits for asynchronous requests that retrieve large result sets before timing out. |
| DefaultColumnSize | Specifies the default length, in characters, of string fields when the provider cannot determine a size from metadata. |
| DescribeCommand | Specifies which command the provider uses to retrieve metadata from Databricks. |
| DetectView | Specifies whether the provider uses the DESCRIBE FORMATTED command to determine whether an object in Databricks is a table or a view. |
| IncludeSystemSchemas | Specifies whether to include the system catalog 'system' and system schema 'information_schema'. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ServerConfigurations | Specifies configuration variables to override the default settings on the Databricks server. |
| ServerTimeZone | Specifies how the provider interprets datetime values returned from Databricks. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| UseDescTableQuery | Specifies whether the provider retrieves table column metadata using a DESC TABLE query instead of the Thrift API. |
| UseInsertSelectSyntax | DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release. |
Specifies whether the provider prepares SQL statements before executing them to improve performance on repeated queries.
bool
true
Prepared statements precompile SQL queries so they can be executed multiple times without being re-parsed or recompiled.
When this property is set to true, the Cloud prepares and caches SQL statements before execution. This reduces parsing and compilation overhead for queries that run multiple times.
When this property is set to false, statements are parsed and executed each time without precompilation. This can improve performance when running many unique or one-time queries.
Prepared statements require the Cloud to maintain an active connection while the statement is prepared.
This property is useful for tuning performance based on whether your workloads reuse queries or generate new SQL dynamically.
Specifies the number of seconds the provider waits before retrying a connection request.
string
"-1"
The ConnectRetryWaitTime property controls how long the Cloud waits before retrying when Databricks responds that a cluster is starting or temporarily unavailable.
When this property is set to a positive integer, the Cloud waits that number of seconds before each retry attempt. Typical values range from 30 to 60 seconds.
When this property is set to -1, the retry feature is disabled.
This property is useful for improving connection reliability when clusters may take time to start or resume in Databricks.
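For example, to wait 30 seconds between retries while a cluster starts:
ConnectRetryWaitTime=30;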
Specifies the application name that the provider includes in the HTTP User-Agent header when connecting to Databricks.
string
""
This property identifies the client application in HTTP requests sent by the Cloud. The value appears in the User-Agent header and can help Databricks identify your client in request logs.
For Databricks Runtime (formerly known as Databricks Cluster), use the following format: [isv-name+product-name]/[product-version] [comment]
For Databricks SQL Warehouse, only the optional comment field can be set, using the format: [comment]
Segments must not contain spaces, parentheses, commas, or newlines. Nested comments are not supported.
This property is useful for identifying your application in Databricks logs and API request tracking.
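For example, a hypothetical Databricks Runtime entry (the ISV name, product name, version, and comment are placeholders):
ApplicationName="AcmeData+CloudSync/2.1.0 nightly-load";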
Specifies the number of seconds the provider waits for asynchronous requests that retrieve large result sets before timing out.
int
300
The AsyncQueryTimeout property defines how long the Cloud allows asynchronous operations to run before cancelling them. It applies to the total execution time of the operation, rather than individual requests.
When this property is set to 0, asynchronous operations do not time out. They continue until they complete successfully or encounter an error condition.
When this property is set to any other value, the Cloud raises an error if the asynchronous request has not finished within the specified number of seconds.
This property is distinct from Timeout, which applies to synchronous operations. It is useful for controlling how long large or complex queries can run before timing out.
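For example, to allow asynchronous operations to run for up to ten minutes:
AsyncQueryTimeout=600;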
Specifies the default length, in characters, of string fields when the provider cannot determine a size from metadata.
string
"1048576"
The DefaultColumnSize property defines the length assigned to string-type columns when the Cloud cannot obtain this information from Databricks metadata. When this property is not set, the default value of 1048576 is used.
This property is useful when working with systems or queries that do not return explicit string length information, ensuring consistent column definitions across environments.
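For example, to limit undeclared string columns to 4000 characters:
DefaultColumnSize=4000;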
Specifies which command the provider uses to retrieve metadata from Databricks.
string
"DESCRIBE"
The DescribeCommand property determines whether the Cloud uses the full DESCRIBE keyword or its shorthand DESC when issuing metadata queries to the Hive-compatible endpoint in Databricks.
When this property is set to DESCRIBE, the Cloud uses the full standard Hive SQL syntax.
When this property is set to DESC, the Cloud uses the shorthand form, which may be required for compatibility with certain engines or query parsers.
This property is useful for adjusting SQL compatibility when retrieving schema details from Databricks or Hive-based systems.
Specifies whether the provider uses the DESCRIBE FORMATTED command to determine whether an object in Databricks is a table or a view.
bool
false
When this property is set to true, the Cloud issues a DESCRIBE FORMATTED command to retrieve detailed metadata and identify whether a specified object in Databricks is a table or a view.
When this property is set to false, the Cloud skips this additional check, which can improve performance but may prevent accurate detection of views.
This property is useful when working with mixed environments that contain both tables and views, ensuring that object types are correctly identified during metadata discovery.
Specifies whether to include the system catalog 'system' and system schema 'information_schema'.
bool
false
When this property is set to true, the Cloud queries and exposes system-defined schemas alongside user-defined schemas. These schemas typically contain metadata tables, built-in views, and internal objects managed by Databricks.
When this property is set to false, the Cloud omits these internal schemas, simplifying the schema view for most users.
This property is useful for advanced users who need access to system metadata or diagnostic information in Databricks.
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
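For example, to cap queries at 1000 rows unless a query specifies its own LIMIT clause:
MaxRows=1000;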
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
Specifies configuration variables to override the default settings on the Databricks server.
string
""
The ServerConfigurations property accepts a comma-separated list of configuration variables defined as name-value pairs. Each pair is sent to the Databricks server to override its default values for the current session.
For example: hive.enforce.bucketing=true,hive.enforce.sorting=true
These settings can be used to adjust query behavior, enforce specific runtime constraints, or fine-tune optimization parameters for your Databricks session.
Specifies how the provider interprets datetime values returned from Databricks.
string
"UTC"
The ServerTimeZone property controls how datetime values are interpreted and converted between the Databricks server and the local system.
When this property is set to UTC, the Cloud assumes the server stores datetime values in Coordinated Universal Time and converts them to the local time zone.
When this property is set to LOCAL, datetime values are interpreted as local time with no conversion applied.
This property is useful for ensuring consistent timestamp handling across environments with different regional or daylight-saving settings.
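For example, to interpret returned datetime values as local time with no conversion:
ServerTimeZone=LOCAL;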
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
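For example, to allow each server call up to five minutes before raising a timeout error:
Timeout=300;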
Specifies whether the provider retrieves table column metadata using a DESC TABLE query instead of the Thrift API.
bool
true
When this property is set to true, the Cloud issues a DESC TABLE query in Databricks to obtain column information for each table.
When this property is set to false, the Cloud retrieves column metadata using the Thrift API call GetColumns, which is supported in Apache Spark 3.0.0 and later.
Using the Thrift API can improve metadata query performance, but requires a compatible Spark version. The DESC TABLE query provides broader compatibility across Databricks environments.
DEPRECATED. This property is no longer supported, and should not be used. It will be removed in a future release.
bool
false
When set to true, an INSERT INTO SELECT statement will be used when executing insert statements. When set to false, an INSERT INTO VALUES statement will be used.
Unless explicitly specified, this option is configured automatically based on the Databricks version.
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.
Apache Thrift Client v. 0.10.0
Copyright (c) 2006 - 2019, The Apache Software Foundation
The Apache License
Version 2.0, January 2004
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
-------------------------------------------------- SOFTWARE DISTRIBUTED WITH THRIFT:
The Apache Thrift software includes a number of subcomponents with separate copyright notices and license terms. Your use of the source code for the these subcomponents is subject to the terms and conditions of the following licenses.
-------------------------------------------------- Portions of the following files are licensed under the MIT License:
lib/erl/src/Makefile.am
Please see doc/otp-base-license.txt for the full terms of this license.
-------------------------------------------------- For the aclocal/ax_boost_base.m4 and contrib/fb303/aclocal/ax_boost_base.m4 components:
# Copyright (c) 2007 Thomas Porschberg <[email protected]> # # Copying and distribution of this file, with or without # modification, are permitted in any medium without royalty provided # the copyright notice and this notice are preserved.
-------------------------------------------------- For the lib/nodejs/lib/thrift/json_parse.js:
/* json_parse.js 2015-05-02 Public Domain. NO WARRANTY EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
*/ (By Douglas Crockford <[email protected]>) --------------------------------------------------