CData Cloud offers access to Azure Cosmos DB across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to Azure Cosmos DB through CData Cloud.
CData Cloud allows you to standardize and configure connections to Azure Cosmos DB as though it were any other OData endpoint or standard SQL Server.
You can either create your own custom role definitions, or assign one of the built-in role definitions:
You must also set the scope of the role assignment, where "/" means that the identity has access to all the databases.
For details, see Configure role-based access control for your Azure Cosmos DB account with Azure AD.
This page provides a guide to Establishing a Connection to Azure Cosmos DB in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Azure Cosmos DB and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Azure Cosmos DB through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Azure Cosmos DB by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
Log in to the Azure Portal, select Azure Cosmos DB, and select your account.
Set the following to authenticate:
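At a minimum, account key authentication (the default AuthScheme) uses the AccountEndpoint and AccountKey properties described later in this documentation. A minimal connection sketch, with placeholder values, might look like the following:
AuthScheme=AccountKey;AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=myPrimaryKey;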
Note: Microsoft has rebranded Azure AD as Entra ID. In topics that require the user to interact with the Entra ID Admin site, we use the same names Microsoft does. However, there are still CData connection properties whose names or values reference "Azure AD".
Microsoft Entra ID is a multi-tenant, cloud-based identity and access management platform. It supports OAuth-based authentication flows that enable the driver to access Azure Cosmos DB endpoints securely.
Authentication to Entra ID via a web application always requires that you first create and register a custom OAuth application. This enables your application to define its own redirect URI, manage credential scope, and comply with organization-specific security policies.
For full instructions on how to create and register a custom OAuth application, see Creating an Entra ID (Azure AD) Application.
After setting AuthScheme to AzureAD, the steps to authenticate vary, depending on the environment. For details on how to connect from desktop applications, web-based workflows, or headless systems, see the following sections.
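For instance, a minimal Entra ID (Azure AD) connection sketch might combine the properties below. The endpoint, tenant, and OAuth application values are placeholders, and the exact set of properties you need depends on your chosen OAuth flow:
AuthScheme=AzureAD;AccountEndpoint=https://myaccount.documents.azure.com:443/;AzureTenant=contoso.onmicrosoft.com;OAuthClientId=myClientId;OAuthClientSecret=myClientSecret;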
Note: Microsoft has rebranded Azure AD as Entra ID. In topics that require the user to interact with the Entra ID Admin site, we use the same names Microsoft does. However, there are still CData connection properties whose names or values reference "Azure AD".
Azure Service Principal authentication is role-based and application-based. This means that authentication is done per application, rather than per user.
All tasks taken on by the application are executed without a default user context, but based on the assigned roles.
The application's access to resources is controlled through the permissions of its assigned roles.
For information about how to set up Azure Service Principal authentication, see Creating a Service Principal App in Entra ID (Azure AD).
You can use the following properties to gain greater control over Azure Cosmos DB API features and the strategies the Cloud uses to surface them:
GenerateSchemaFiles: This property enables you to persist table metadata in static schema files that are easy to customize, for example to persist your changes to column data types.
You can set this property to "OnStart" to generate schema files for all tables in your database at connection. Or, you can generate schemas as you execute SELECT queries to tables.
The resulting schemas are based on the connection properties you use to configure Automatic Schema Discovery.
To use the resulting schema files, set the Location property to the folder containing the schemas.
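As an illustration, the following connection property sketch (the folder path is a placeholder) generates schema files for every table at connect time and reads them back from the same folder on subsequent connections:
GenerateSchemaFiles=OnStart;Location=C:\cosmosdb\schemas;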
As described in SQL Compliance, the Cloud supports batch CUD (Create, Update, Delete) operations. Batch processing is achieved by issuing multiple requests simultaneously. Although this method greatly improves the performance of write operations, the cost of these operations is relatively high, so the Request Units (RU) budget per second for a given container or database may be exceeded. Depending on your Azure Cosmos DB service quotas, exceeding the RU budget may incur extra costs, or it may even temporarily throttle or interrupt Azure Cosmos DB usage for other workloads.
To avoid exceeding the RU budget per second, the Cloud dynamically adjusts the number of concurrent requests per second based on the configured WriteThroughputBudget and the continuously adjusted average RU cost per statement. Use the WriteThroughputBudget connection property to define the RU budget per second that batch write operations should not exceed. Another important factor in batch write operations is the MaxThreads connection property, which specifies the maximum number of concurrent requests. If MaxThreads is set too low, the Cloud might not be able to use the available budget efficiently.
Because the request throttling logic is applied client-side, in a few cases the RU/s budget may be exceeded by a relatively small amount. These cases include inserting, updating, and deleting records with a highly variable column count and input value length per column.
Note: By default, the WriteThroughputBudget property is set to 1000 RU/s and the MaxThreads property is set to 200 threads.
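For instance, to cap batch writes at roughly 4000 RU/s while allowing up to 50 concurrent requests, you might set the following properties (the values shown are illustrative only):
WriteThroughputBudget=4000;MaxThreads=50;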
Azure Cosmos DB is a schemaless, document database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92. In this section we will show various schemes that the Cloud offers to bridge the gap with relational SQL and a document database.
The Cloud models the schemaless Azure Cosmos DB objects into relational tables and translates SQL queries into Azure Cosmos DB queries to get the requested data. See Query Mapping (Sql API) for more details on how various Azure Cosmos DB operations are represented as SQL.
The Automatic Schema Discovery scheme automatically finds the data types in an Azure Cosmos DB object by scanning a configured number of rows of the object. You can use RowScanDepth, FlattenArrays, and FlattenObjects to control the relational representation of the collections in Azure Cosmos DB. You can also write Free-Form Queries not tied to the schema.
Optionally, you can use Custom Schema Definitions to project your chosen relational structure on top of an Azure Cosmos DB object. This allows you to define your chosen names of columns, their data types, and the location of their values in the collection.
Set GenerateSchemaFiles to save the detected schemas as simple configuration files that are easy to extend. You can persist schemas for all collections in the database or for the results of SELECT queries.
If the TypeDetectionScheme is set to RawValue, the Cloud pushes each document as a single aggregate value in a column named JsonData, along with its resource identifier in a separate primary key column. The JSON documents are not processed, and as a result, the functionality described below is NOT supported with this configuration.
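With this configuration, each document can only be retrieved as raw JSON. For example, a query like the following sketch returns the unprocessed document in the JsonData column (restaurants is the sample collection used elsewhere in this section):
SELECT JsonData FROM [restaurants]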
The Cloud automatically infers a relational schema by inspecting a series of Azure Cosmos DB documents in a collection. You can use the RowScanDepth property to define the number of documents the Cloud will scan to do so. The columns identified during the discovery process depend on the FlattenArrays and FlattenObjects properties.
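For reference, here is a sketch of the discovery-related properties as they might appear in a connection string; the values are placeholders, not recommendations:
RowScanDepth=100;FlattenObjects=true;FlattenArrays=2;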
If FlattenObjects is set, all nested objects will be flattened into a series of columns. For example, consider the following document:
{
id: 12,
name: "Lohia Manufacturers Inc.",
address: {street: "Main Street", city: "Chapel Hill", state: "NC"},
offices: ["Chapel Hill", "London", "New York"],
annual_revenue: 35,600,000
}
This document will be represented by the following columns:
| Column Name | Data Type | Example Value |
| id | Integer | 12 |
| name | String | Lohia Manufacturers Inc. |
| address.street | String | Main Street |
| address.city | String | Chapel Hill |
| address.state | String | NC |
| offices | String | ["Chapel Hill", "London", "New York"] |
| annual_revenue | Double | 35,600,000 |
If FlattenObjects is not set, then the address.street, address.city, and address.state columns will not be broken apart. The address column of type string will instead represent the entire object. Its value would be {street: "Main Street", city: "Chapel Hill", state: "NC"}. See JSON Functions for more details on working with JSON aggregates.
You can change the separator character in the column name from a dot by setting SeparatorCharacter.
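For instance, setting SeparatorCharacter to an underscore would expose the flattened columns above as address_street, address_city, and address_state:
SeparatorCharacter=_;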
The FlattenArrays property can be used to flatten array values into columns of their own. This is only recommended for arrays that are expected to be short, for example the coordinates below:
"coord": [ -73.856077, 40.848447 ]The FlattenArrays property can be set to 2 to represent the array above as follows:
| Column Name | Data Type | Example Value |
| coord.0 | Float | -73.856077 |
| coord.1 | Float | 40.848447 |
It is best to leave other unbounded arrays as they are and piece out the data for them as needed using JSON Functions.
As discussed in Automatic Schema Discovery, intuited table schemas enable SQL access to unstructured Azure Cosmos DB data. JSON Functions enable you to use standard JSON functions to summarize Azure Cosmos DB data and extract values from any nested structures. Custom Schema Definitions enable you to define static tables and give you more granular control over the relational view of your data; for example, you can write schemas defining parent/child tables or fact/dimension tables. However, you are not limited to these schemes.
After connecting you can query any nested structure without flattening the data. Any relations that you can access with FlattenArrays and FlattenObjects can also be accessed with an ad hoc SQL query.
Let's consider an example document from the following Restaurant data set:
{
"address": {
"building": "1007",
"coord": [
-73.856077,
40.848447
],
"street": "Morris Park Ave",
"zipcode": "10462"
},
"borough": "Bronx",
"cuisine": "Bakery",
"grades": [
{
"grade": "A",
"score": 2,
"date": {
"$date": "1393804800000"
}
},
{
"date": {
"$date": "1378857600000"
},
"grade": "B",
"score": 6
},
{
"score": 10,
"date": {
"$date": "1358985600000"
},
"grade": "C"
}
],
"name": "Morris Park Bake Shop",
"restaurant_id": "30075445"
}
You can access any nested structure in this document as a column. Use the dot notation to drill down to the values you want to access as shown in the query below. Note that arrays have a zero-based index. For example, the following query retrieves the second grade for the restaurant in the example:
SELECT [address.building], [grades.1.grade] FROM restaurants WHERE restaurant_id = '30075445'
The preceding query returns the following results:
| Column Name | Data Type | Example Value |
| address.building | String | 1007 |
| grades.1.grade | String | B |
It is possible to retrieve an array of documents as if it were a separate table. Take the following JSON structure from the restaurants collection for example:
{
"_id" : ObjectId("568c37b748ddf53c5ed98932"),
"address" : {
"building" : "1007",
"coord" : [-73.856077, 40.848447],
"street" : "Morris Park Ave",
"zipcode" : "10462"
},
"borough" : "Bronx",
"cuisine" : "Bakery",
"grades" : [{
"date" : ISODate("2014-03-03T00:00:00Z"),
"grade" : "A",
"score" : 2
}, {
"date" : ISODate("2013-09-11T00:00:00Z"),
"grade" : "A",
"score" : 6
}, {
"date" : ISODate("2013-01-24T00:00:00Z"),
"grade" : "A",
"score" : 10
}, {
"date" : ISODate("2011-11-23T00:00:00Z"),
"grade" : "A",
"score" : 9
}, {
"date" : ISODate("2011-03-10T00:00:00Z"),
"grade" : "B",
"score" : 14
}],
"name" : "Morris Park Bake Shop",
"restaurant_id" : "30075445"
}
Vertical flattening will allow you to retrieve the grades array as a separate table:
SELECT * FROM [restaurants.grades]
This query returns the following data set:
| date | grade | score | P_id | _index |
| 2014-03-03T00:00:00.000Z | A | 2 | 568c37b748ddf53c5ed98932 | 1 |
| 2013-09-11T00:00:00.000Z | A | 6 | 568c37b748ddf53c5ed98932 | 2 |
| 2013-01-24T00:00:00.000Z | A | 10 | 568c37b748ddf53c5ed98932 | 3 |
SELECT [restaurants].[restaurant_id], [restaurants.grades].* FROM [restaurants.grades] JOIN [restaurants] WHERE [restaurants].name = 'Morris Park Bake Shop'
This query returns the following data set:
| restaurant_id | date | grade | score | P_id | _index |
| 30075445 | 2014-03-03T00:00:00.000Z | A | 2 | 568c37b748ddf53c5ed98932 | 1 |
| 30075445 | 2013-09-11T00:00:00.000Z | A | 6 | 568c37b748ddf53c5ed98932 | 2 |
| 30075445 | 2013-01-24T00:00:00.000Z | A | 10 | 568c37b748ddf53c5ed98932 | 3 |
| 30075445 | 2011-11-23T00:00:00.000Z | A | 9 | 568c37b748ddf53c5ed98932 | 4 |
| 30075445 | 2011-03-10T00:00:00.000Z | B | 14 | 568c37b748ddf53c5ed98932 | 5 |
The Cloud can return JSON structures as column values. The Cloud enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
| Column Name | Example Value |
| Grade | A |
| Score | 2 |
SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
| Column Name | Example Value |
| NumberOfGrades | 5 |
SELECT Name, JSON_SUM(grades,'[x].score') AS TotalScore FROM Students;
| Column Name | Example Value |
| TotalScore | 41 |
SELECT Name, JSON_MIN(grades,'[x].score') AS LowestScore FROM Students;
| Column Name | Example Value |
| LowestScore | 2 |
SELECT Name, JSON_MAX(grades,'[x].score') AS HighestScore FROM Students;
| Column Name | Example Value |
| HighestScore | 14 |
The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example:
SELECT DOCUMENT(*) FROM Customers;
The query above will return the entire document as shown.
{ "id": 12, "name": "Lohia Manufacturers Inc.", "address": { "street": "Main Street", "city": "Chapel Hill", "state": "NC"}, "offices": [ "Chapel Hill", "London", "New York" ], "annual_revenue": 35,600,000 }
Cosmos DB also supports a number of built-in functions for common operations that can be used inside queries. Here are some examples of how they can be used in the SELECT list or the WHERE clause:
Use Built-in functions as part of SELECT columns
SELECT IS_NUMBER(user_id) AS ISN_ATTR, IS_NUMBER(id) AS ISN_ID FROM [users]
SELECT POWER(user_id, 2) AS POWERSSS, LENGTH(id) AS LENGTH_ID, PI() AS JustThePI FROM [users]
Use Built-in functions as part of WHERE clause
SELECT * FROM [users] WHERE STARTSWITH(middle_name, 'G')
SELECT * FROM [users] WHERE REPLACE(middle_name, 'Chr', '___') = '___istopher'
| Function group | Operations |
| Mathematical functions | ABS, CEILING, EXP, FLOOR, LOG, LOG10, POWER, ROUND, SIGN, SQRT, SQUARE, TRUNC, ACOS, ASIN, ATAN, ATN2, COS, COT, DEGREES, PI, RADIANS, SIN, and TAN |
| Type checking functions | IS_ARRAY, IS_BOOL, IS_NULL, IS_NUMBER, IS_OBJECT, IS_STRING, IS_DEFINED, and IS_PRIMITIVE |
| String functions | ARRAY, CONCAT, CONTAINS, ENDSWITH, INDEX_OF, LEFT, LENGTH, LOWER, LTRIM, REPLACE, REPLICATE, REVERSE, RIGHT, RTRIM, STARTSWITH, SUBSTRING, and UPPER |
| Array functions | ARRAY_CONCAT, ARRAY_CONTAINS, ARRAY_LENGTH, and ARRAY_SLICE |
The mathematical functions each perform a calculation, based on input values that are provided as arguments, and return a numeric value. Here's a table of supported built-in mathematical functions.
| Usage | Description |
| ABS (num_expr) | Returns the absolute (positive) value of the specified numeric expression. |
| CEILING (num_expr) | Returns the smallest integer value greater than, or equal to, the specified numeric expression. |
| FLOOR (num_expr) | Returns the largest integer less than or equal to the specified numeric expression. |
| EXP (num_expr) | Returns the exponential value of the specified numeric expression. |
| LOG (num_expr [,base]) | Returns the natural logarithm of the specified numeric expression, or the logarithm using the specified base. |
| LOG10 (num_expr) | Returns the base-10 logarithmic value of the specified numeric expression. |
| ROUND (num_expr) | Returns a numeric value, rounded to the closest integer value. |
| TRUNC (num_expr) | Returns a numeric value, truncated to the closest integer value. |
| SQRT (num_expr) | Returns the square root of the specified numeric expression. |
| SQUARE (num_expr) | Returns the square of the specified numeric expression. |
| POWER (num_expr, num_expr) | Returns the power of the specified numeric expression to the value specified. |
| SIGN (num_expr) | Returns the sign value (-1, 0, 1) of the specified numeric expression. |
| ACOS (num_expr) | Returns the angle, in radians, whose cosine is the specified numeric expression; also called arccosine. |
| ASIN (num_expr) | Returns the angle, in radians, whose sine is the specified numeric expression. This is also called arcsine. |
| ATAN (num_expr) | Returns the angle, in radians, whose tangent is the specified numeric expression. This is also called arctangent. |
| ATN2 (num_expr) | Returns the angle, in radians, between the positive x-axis and the ray from the origin to the point (y, x), where x and y are the values of the two specified float expressions. |
| COS (num_expr) | Returns the trigonometric cosine of the specified angle, in radians, in the specified expression. |
| COT (num_expr) | Returns the trigonometric cotangent of the specified angle, in radians, in the specified numeric expression. |
| DEGREES (num_expr) | Returns the corresponding angle in degrees for an angle specified in radians. |
| PI () | Returns the constant value of PI. |
| RADIANS (num_expr) | Returns radians when a numeric expression, in degrees, is entered. |
| SIN (num_expr) | Returns the trigonometric sine of the specified angle, in radians, in the specified expression. |
| TAN (num_expr) | Returns the trigonometric tangent of the specified angle, in radians, in the specified expression. |
The type checking functions allow you to check the type of an expression within SQL queries. Type checking functions can be used to determine the type of properties within documents dynamically when it is variable or unknown. Here's a table of supported built-in type checking functions.
| Usage | Description |
| IS_ARRAY (expr) | Returns a Boolean indicating if the type of the value is an array. |
| IS_BOOL (expr) | Returns a Boolean indicating if the type of the value is a Boolean. |
| IS_NULL (expr) | Returns a Boolean indicating if the type of the value is null. |
| IS_NUMBER (expr) | Returns a Boolean indicating if the type of the value is a number. |
| IS_OBJECT (expr) | Returns a Boolean indicating if the type of the value is a JSON object. |
| IS_STRING (expr) | Returns a Boolean indicating if the type of the value is a string. |
| IS_DEFINED (expr) | Returns a Boolean indicating if the property has been assigned a value. |
| IS_PRIMITIVE (expr) | Returns a Boolean indicating if the type of the value is a string, number, Boolean or null. |
The following scalar functions perform an operation on a string input value and return a string, numeric or Boolean value. Here's a table of built-in string functions:
| Usage | Description |
| ARRAY (str_expr) | Project the results of the specified query as an array. |
| LENGTH (str_expr) | Returns the number of characters of the specified string expression |
| CONCAT (str_expr, str_expr [, str_expr]) | Returns a string that is the result of concatenating two or more string values. |
| SUBSTRING (str_expr, num_expr, num_expr) | Returns part of a string expression. |
| STARTSWITH (str_expr, str_expr, bool_expr) | Returns a Boolean indicating whether the first string expression starts with the second. By default, this is case-insensitive. Setting bool_expr to false makes STARTSWITH case-sensitive. |
| ENDSWITH (str_expr, str_expr, bool_expr) | Returns a Boolean indicating whether the first string expression ends with the second. By default, this is case-insensitive. Setting bool_expr to false makes ENDSWITH case-sensitive. |
| CONTAINS (str_expr, str_expr, bool_expr) | Returns a Boolean indicating whether the first string expression contains the second. By default, this is case-insensitive. Setting bool_expr to false makes CONTAINS case-sensitive. |
| INDEX_OF (str_expr, str_expr) | Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or -1 if the string is not found. |
| LEFT (str_expr, num_expr) | Returns the left part of a string with the specified number of characters. |
| RIGHT (str_expr, num_expr) | Returns the right part of a string with the specified number of characters. |
| LTRIM (str_expr) | Returns a string expression after it removes leading blanks. |
| RTRIM (str_expr) | Returns a string expression after truncating all trailing blanks. |
| LOWER (str_expr) | Returns a string expression after converting uppercase character data to lowercase. |
| UPPER (str_expr) | Returns a string expression after converting lowercase character data to uppercase. |
| REPLACE (str_expr, str_expr, str_expr) | Replaces all occurrences of a specified string value with another string value. |
| REPLICATE (str_expr, num_expr) | Repeats a string value a specified number of times. |
| REVERSE (str_expr) | Returns the reverse order of a string value. |
The following scalar functions perform an operation on an array input value and return a numeric, Boolean, or array value. Here's a table of built-in array functions:
| Usage | Description |
| ARRAY_LENGTH (arr_expr) | Returns the number of elements of the specified array expression. |
| ARRAY_CONCAT (arr_expr, arr_expr [, arr_expr]) | Returns an array that is the result of concatenating two or more array values. |
| ARRAY_CONTAINS (arr_expr, expr [, bool_expr]) | Returns a Boolean indicating whether the array contains the specified value. Can specify if the match is full or partial. |
| ARRAY_SLICE (arr_expr, num_expr [, num_expr]) | Returns part of an array expression. |
You can also nest built-in functions, which are likewise processed server-side:
SELECT TOP 10 CONCAT(SUBSTRING(UPPER(cuisine), 0, 3), '-cuisine') FROM [restaurants]
The GROUP BY clause divides the query's results according to the values of one or more specified properties. Because of API limitations, this operation is only partially performed server-side; the Cloud completes the grouping client-side.
SELECT COUNT(*) AS CNT, gender FROM [users] GROUP BY gender
SELECT COUNT(*) AS CNT, gender, doc_type FROM [users] GROUP BY gender, doc_type
Cosmos DB's SQL API supports a special type of join operation called JOIN IN, which is specifically designed for working with nested arrays within documents. Unlike traditional SQL joins that combine data from separate tables, JOIN IN allows you to "flatten" and query nested array elements within a single document.
{
"id": "3",
"name": "DEV Park Bake Shop",
"cuisine": "Bakery",
"grades": [
{
"date": 1393804800000,
"grade": "D",
"score": 2
},
{
"date": 1378857600000,
"grade": "A",
"score": 6
}
]
}
SELECT c.Id, g.grade, g.score, g.date
FROM restaurants c
JOIN g IN c.grades
WHERE c.[name] = 'DEV Park Bake Shop'
SELECT c["Id"], g["grade"], g["score"], g["date"]
FROM C AS c
JOIN g IN c.grades
WHERE c["name"] = "DEV Park Bake Shop"
The Cloud maps SQL queries into the corresponding Azure Cosmos DB SQL API queries. A detailed description of all the transformations is out of scope, but we will describe some of the common elements that are used. The Cloud takes advantage of SQL API features such as the aggregation framework to compute the desired results.
| SQL Query | Sql API Query |
| SELECT id, name FROM Users | SELECT C.id, C.name FROM C |
| SELECT * FROM Users WHERE name = 'A' | SELECT * FROM C WHERE C.name = 'A' |
| SELECT * FROM Users WHERE name = 'A' OR email = '[email protected]' | SELECT * FROM C WHERE C.name = 'A' OR C.email = '[email protected]' |
| SELECT id, grantamt FROM WorldBank WHERE grantamt IN (4500000, 85400000) OR grantamt = 16200000 | SELECT C.id, C.grantamt FROM C WHERE C.grantamt IN (4500000, 85400000) OR C.grantamt = 16200000 |
| SELECT * FROM WorldBank WHERE CountryCode = 'AL' ORDER BY TotalCommAmt ASC | SELECT * FROM C WHERE C.countrycode = 'AL' ORDER BY C.totalcommamt ASC |
| SELECT * FROM WorldBank WHERE CountryCode = 'AL' ORDER BY TotalCommAmt DESC | SELECT * FROM C WHERE C.countrycode = 'AL' ORDER BY C.totalcommamt DESC |
| SQL Query | Sql API Query |
| SELECT COUNT(grantamt) AS COUNT_GRAMT FROM WorldBank | SELECT COUNT(C.grantamt) AS COUNT_GRAMT FROM C |
| SELECT SUM(grantamt) AS SUM_GRAMT FROM WorldBank | SELECT SUM(C.grantamt) AS SUM_GRAMT FROM C |
| SQL Query | Sql API Query |
| SELECT IS_NUMBER(grantamt) AS ISN_ATTR, IS_NUMBER(id) AS ISN_ID FROM WorldBank | SELECT IS_NUMBER(C.grantamt) AS ISN_ATTR, IS_NUMBER(C.id) AS ISN_ID FROM C |
| SELECT POWER(totalamt, 2) AS POWERS_A, LENGTH(id) AS LENGTH_ID, PI() AS ThePI FROM WorldBank | SELECT POWER(C.totalamt, 2) AS POWERS_A, LENGTH(C.id) AS LENGTH_ID, PI() AS ThePI FROM C |
You can extend the table schemas created with Automatic Schema Discovery by saving them into schema files. The schema files have a simple format that makes them easy to edit.
Set GenerateSchemaFiles to "OnStart" to persist schemas for all tables when you connect. You can also generate table schemas as needed: Set GenerateSchemaFiles to "OnUse" and execute a SELECT query to the table.
For example, consider a schema for the restaurants data set. This is a sample data set provided by Azure Cosmos DB.
Below is an example document from the collection:
{
"address":{
"building":"461",
"coord":[
-74.138492,
40.631136
],
"street":"Port Richmond Ave",
"zipcode":"10302"
},
"borough":"Staten Island",
"cuisine":"Other",
"name":"Indian Oven",
"restaurant_id":"50018994"
}
When GenerateSchemaFiles is set, the Cloud saves schemas into the folder specified by the Location property. You can then change column behavior in the resulting schema.
The following schema uses the other:bsonpath property to define where in the collection to retrieve the data for a particular column. Using this model you can flatten arbitrary levels of hierarchy.
Below are the corresponding column definitions for the restaurants data set. In Custom Schema Example, you will find the complete schema.
<rsb:script xmlns:rsb="http://www.rssbus.com/ns/rsbscript/2">
<rsb:info title="StaticRestaurants" description="Custom Schema for the restaurants data set.">
<!-- Column definitions -->
<attr name="_rid" xs:type="string" key="true" other:collrid="hWdRAKRi3Pg=" other:dbrid="hWdRAA==" other:partitionpath="/name" />
<attr name="borough" xs:type="string" />
<attr name="cuisine" xs:type="string" />
<attr name="address.building" xs:type="string" />
<attr name="address.street" xs:type="string" />
<attr name="address.coord.0" xs:type="double" />
<attr name="address.coord.1" xs:type="double" />
<input name="rows@next" desc="Internal attribute used for paging through data." />
</rsb:info>
<rsb:set attr="collection" value="restaurants"/>
</rsb:script>
This section contains a complete schema. The info section enables a relational view of an Azure Cosmos DB object. For more details, see Custom Schema Definitions. The table below allows the SELECT, INSERT, UPDATE, and DELETE commands as implemented in the GET, POST, MERGE, and DELETE sections of the schema below.
Copy the rows@next input as-is into your schema. The operations, such as cosmosdbadoSysData, are internal implementations and can also be copied as is.
Set the Location property to the file directory that will contain the schema file.
When creating custom schemas, the attr for _rid, shown below, is required.
Also required are three properties for the _rid column definition:
<rsb:script xmlns:rsb="http://www.rssbus.com/ns/rsbscript/2">
<rsb:info title="StaticRestaurants" description="Custom Schema for the restaurants data set.">
<!-- Column definitions -->
<attr name="_rid" xs:type="string" key="true" other:collrid="hWdRAKRi3Pg=" other:dbrid="hWdRAA==" other:partitionpath="/name" />
<attr name="borough" xs:type="string" />
<attr name="cuisine" xs:type="string" />
<attr name="address.building" xs:type="string" />
<attr name="address.street" xs:type="string" />
<attr name="address.coord.0" xs:type="double" />
<attr name="address.coord.1" xs:type="double" />
<input name="rows@next" desc="Internal attribute used for paging through data." />
</rsb:info>
<rsb:script method="GET">
<rsb:call op="cosmosdbadoSysData">
<rsb:push />
</rsb:call>
</rsb:script>
<rsb:script method="POST">
<rsb:call op="cosmosdbadoSysData">
<rsb:push />
</rsb:call>
</rsb:script>
<rsb:script method="MERGE">
<rsb:call op="cosmosdbadoSysData">
<rsb:push />
</rsb:call>
</rsb:script>
<rsb:script method="DELETE">
<rsb:call op="cosmosdbadoSysData">
<rsb:push />
</rsb:call>
</rsb:script>
</rsb:script>
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Azure Cosmos DB:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the [CData].[Entities].Customers table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customers' AND CatalogName='CData' AND SchemaName='Entities'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the EVAL stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'EVAL' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'EVAL' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can be both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native Azure Cosmos DB procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the [CData].[Entities].Customers table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customers' AND CatalogName='CData' AND SchemaName='Entities'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows dependent properties associated that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns which are checked (in the given order) for use as the modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is TRUE, merge mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate startdate. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the NoSQL Database section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Azure Cosmos DB.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Azure Cosmos DB, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| AddDocument | Inserts an entire JSON document, supplied as a JSON string, into Cosmos DB. |
Inserts an entire JSON document, supplied as a JSON string, into Cosmos DB.
| Name | Type | Description |
| Database | String | Name of the database. |
| Table | String | Name of the table. |
| PartitionKey | String | Partition key value of the table. |
| Document | String | The JSON string to be inserted. |
| Name | Type | Description |
| Success | String | Returns true if the operation is successful. |
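A sketch of invoking the procedure with EXECUTE syntax; the database, table, partition key value, and document below are placeholders:
EXECUTE AddDocument Database = 'MyDatabase', Table = 'restaurants', PartitionKey = 'Bakery', Document = '{"id": "r100", "name": "Sample Bakery", "cuisine": "Bakery"}'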
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | The type of authentication to use when connecting to Azure Cosmos DB. |
| AccountEndpoint | The value should be the Cosmos DB account URL from the Keys blade of the Cosmos DB account. |
| AccountKey | A master key token or a resource token for connecting to the Azure Cosmos DB REST API. |
| TokenType | Denotes the type of token: master or resource. |
| Property | Description |
| AzureTenant | Identifies the Azure Cosmos DB tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com ) or its directory (tenant) ID. |
| AzureEnvironment | Specifies the Azure network environment to which you will connect. Must be the same network to which your Azure account was added. |
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| Property | Description |
| OAuthJWTCert | Supplies the name of the client certificate's JWT Certificate store. |
| OAuthJWTCertType | Identifies the type of key store containing the JWT Certificate. |
| OAuthJWTCertPassword | Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank. |
| OAuthJWTCertSubject | Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate. |
| Property | Description |
| SSLClientCert | Specifies the TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). This property works in conjunction with other SSL-related properties to establish a secure connection. |
| SSLClientCertType | Specifies the type of key store containing the TLS/SSL client certificate for SSL Client Authentication. Choose from a variety of key store formats depending on your platform and certificate source. |
| SSLClientCertPassword | Specifies the password required to access the TLS/SSL client certificate store. Use this property if the selected certificate store type requires a password for access. |
| SSLClientCertSubject | Specifies the subject of the TLS/SSL client certificate to locate it in the certificate store. Use a comma-separated list of distinguished name fields, such as CN=www.server.com, C=US. The wildcard * selects the first certificate in the store. |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC . |
| Schema | Specify the Azure Cosmos DB database you want to work with. |
| Property | Description |
| CalculateAggregates | Specifies whether the provider returns the calculated value of the aggregates or the values grouped by partition range. |
| ConsistencyLevel | Specifies the consistency level to use for read requests against Azure Cosmos DB. |
| FlattenArrays | By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| ForceQueryOnNonIndexedContainers | Force the use of an index scan to process the query if indexing is disabled or the right index path is not available. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| MaxThreads | Specifies the maximum number of concurrent requests for Batch CUD (Create, Update, Delete) operations. |
| MultiThreadCount | Aggregate queries in partitioned collections require parallel requests for different partition ranges. Set this to the number of parallel requests to be issued at the same time. |
| Pagesize | Specifies the maximum number of records per page the provider returns when requesting data from Azure Cosmos DB. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| RequestPriorityLevel | Specifies the priority level for requests sent to Azure Cosmos DB when the number of requests exceeds the configured RU/s within a second. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| SeparatorCharacter | The character or characters used to denote hierarchy. |
| SetPartitionKeyAsPK | Whether to use the collection's partition key field as part of a composite primary key for the corresponding exposed table. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TypeDetectionScheme | Comma-separated options for how the provider will scan the data to determine the fields and datatypes in each document collection. |
| UseRidAsPk | Set this property to false to use the id column as the primary key instead of the default _rid. |
| WriteThroughputBudget | Defines the Request Units (RU) budget per second that Batch CUD (Create, Update, Delete) operations should not exceed. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The type of authentication to use when connecting to Azure Cosmos DB. |
| AccountEndpoint | The value should be the Cosmos DB account URL from the Keys blade of the Cosmos DB account. |
| AccountKey | A master key token or a resource token for connecting to the Azure Cosmos DB REST API. |
| TokenType | Denotes the type of token: master or resource. |
The type of authentication to use when connecting to Azure Cosmos DB.
string
"AccountKey"
The value should be the Cosmos DB account URL from the Keys blade of the Cosmos DB account.
string
""
The value should be the Cosmos DB account URL from the Keys blade of the Cosmos DB account.
A master key token or a resource token for connecting to the Azure Cosmos DB REST API.
string
""
In the Azure portal, navigate to the Cosmos DB service and select your Azure Cosmos DB account. From the resource menu, go to the Keys page. Find the PRIMARY KEY value and set AccountKey to this value.
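For example, a minimal connection fragment using master key authentication might look like the following (the account name myaccount and the key value are placeholders; substitute the endpoint and PRIMARY KEY from your own account):
AuthScheme=AccountKey;AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=<PRIMARY KEY>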
Denotes the type of token: master or resource.
string
"master"
The master key is created during the creation of an account. There are two sets of master keys, the primary key and the secondary key. The administrator of the account can then exercise key rotation using the secondary key. In addition, the account administrator can also regenerate the keys as needed.
Resource tokens are created when users in a database are set up with access permissions for precise access control on a resource, also known as a permission resource. A permission resource contains a hash resource token constructed from the resource path and the access type the user is granted. The permission resource token is time-bound, and the validity period can be overridden. When a permission resource is acted upon (POST, GET, PUT), a new resource token is generated.
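To connect with a resource token instead of a master key, a fragment like the following could be used (the endpoint and token values are placeholders):
AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=<resource token>;TokenType=resource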
This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AzureTenant | Identifies the Azure Cosmos DB tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com ) or its directory (tenant) ID. |
| AzureEnvironment | Specifies the Azure network environment to which you will connect. Must be the same network to which your Azure account was added. |
Identifies the Azure Cosmos DB tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com ) or its directory (tenant) ID.
string
""
A tenant is a digital container for your organization's users and resources, managed through Microsoft Entra ID (formerly Azure AD). Each tenant is associated with a unique directory ID, and often with a custom domain (for example, microsoft.com or contoso.onmicrosoft.com).
To find the directory (tenant) ID in the Microsoft Entra Admin Center, navigate to Microsoft Entra ID > Properties and copy the value labeled "Directory (tenant) ID".
This property is required in the following cases:
You can provide the tenant value in one of two formats:
Specifying the tenant explicitly ensures that the authentication request is routed to the correct directory, which is especially important when a user belongs to multiple tenants or when using service principal–based authentication.
If this value is omitted when required, authentication may fail or connect to the wrong tenant. This can result in errors such as unauthorized or resource not found.
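For example, a hypothetical fragment that routes authentication to a specific tenant (the tenant and client ID values are placeholders):
AuthScheme=AzureAD;AzureTenant=contoso.onmicrosoft.com;OAuthClientId=<application (client) ID>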
Specifies the Azure network environment to which you will connect. Must be the same network to which your Azure account was added.
string
"GLOBAL"
Required if your Azure account is part of a different network than the Global network, such as China, USGOVT, or USGOVTDOD.
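For example, to connect to an account hosted in the Azure US Government cloud, you might set:
AzureEnvironment=USGOVT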
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication.
string
""
This property is required in two cases:
(When the driver provides embedded OAuth credentials, this value may already be provided by the Cloud and thus not require manual entry.)
OAuthClientId is generally used alongside other OAuth-related properties such as OAuthClientSecret and OAuthSettingsLocation when configuring an authenticated connection.
OAuthClientId is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can usually find this value in your identity provider’s application registration settings. Look for a field labeled Client ID, Application ID, or Consumer Key.
While the client ID is not considered a confidential value like a client secret, it is still part of your application's identity and should be handled carefully. Avoid exposing it in public repositories or shared configuration files.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
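For example, a hypothetical fragment for a custom OAuth application (both values are placeholders taken from your application registration):
OAuthClientId=<application (client) ID>;OAuthClientSecret=<client secret>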
Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.)
string
""
This property (sometimes called the application secret or consumer secret) is required when using a custom OAuth application in any flow that requires secure client authentication, such as web-based OAuth, service-based connections, or certificate-based authorization flows. It is not required when using an embedded OAuth application.
The client secret is used during the token exchange step of the OAuth flow, when the driver requests an access token from the authorization server. If this value is missing or incorrect, authentication fails with either an invalid_client or an unauthorized_client error.
OAuthClientSecret is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can obtain this value from your identity provider when registering the OAuth application.
Notes:
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created.
string
""
Scopes are set to define what kind of access the authenticating user will have; for example: read, read and write, or restricted access to sensitive information. System administrators can use scopes to selectively enable access by functionality or security clearance.
When InitiateOAuth is set to GETANDREFRESH, you must use this property if you want to change which scopes are requested.
When InitiateOAuth is set to either REFRESH or OFF, you can change which scopes are requested using either this property or the Scope input.
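For example, a hypothetical fragment requesting standard OpenID Connect scopes (the exact scope values you need depend on how your OAuth application was registered):
Scope=openid profile offline_access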
This section provides a complete list of the JWT OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthJWTCert | Supplies the name of the client certificate's JWT Certificate store. |
| OAuthJWTCertType | Identifies the type of key store containing the JWT Certificate. |
| OAuthJWTCertPassword | Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank. |
| OAuthJWTCertSubject | Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate. |
Supplies the name of the client certificate's JWT Certificate store.
string
""
The OAuthJWTCertType field specifies the type of the certificate store specified in OAuthJWTCert. If the store is password-protected, use OAuthJWTCertPassword to supply the password.
OAuthJWTCert is used in conjunction with the OAuthJWTCertSubject field in order to specify client certificates. If OAuthJWTCert has a value, and OAuthJWTCertSubject is set, the CData Cloud initiates a search for a certificate. For further information, see OAuthJWTCertSubject.
Designations of certificate stores are platform-dependent.
Notes
Identifies the type of key store containing the JWT Certificate.
string
"PEMKEY_BLOB"
| Value | Description | Notes |
| USER | A certificate store owned by the current user. | Only available in Windows. |
| MACHINE | A machine store. | Not available in Java or other non-Windows environments. |
| PFXFILE | A PFX (PKCS12) file containing certificates. | |
| PFXBLOB | A string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. | |
| JKSFILE | A Java key store (JKS) file containing certificates. | Only available in Java. |
| JKSBLOB | A string (base-64-encoded) representing a certificate store in Java key store (JKS) format. | Only available in Java. |
| PEMKEY_FILE | A PEM-encoded file that contains a private key and an optional certificate. | |
| PEMKEY_BLOB | A string (base64-encoded) that contains a private key and an optional certificate. | |
| PUBLIC_KEY_FILE | A file that contains a PEM- or DER-encoded public key certificate. | |
| PUBLIC_KEY_BLOB | A string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. | |
| SSHPUBLIC_KEY_FILE | A file that contains an SSH-style public key. | |
| SSHPUBLIC_KEY_BLOB | A string (base-64-encoded) that contains an SSH-style public key. | |
| P7BFILE | A PKCS7 file containing certificates. | |
| PPKFILE | A file that contains a PPK (PuTTY Private Key). | |
| XMLFILE | A file that contains a certificate in XML format. | |
| XMLBLOB | A string that contains a certificate in XML format. | |
| BCFKSFILE | A file that contains a Bouncy Castle keystore. | |
| BCFKSBLOB | A string (base-64-encoded) that contains a Bouncy Castle keystore. |
Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank.
string
""
This property specifies the password needed to open a password-protected certificate store. To determine if a password is necessary, refer to the documentation or configuration for your specific certificate store.
Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate.
string
"*"
The value of this property is used to locate a matching certificate in the store. The search process works as follows:
You can set the value to '*' to automatically select the first certificate in the store. The certificate subject is a comma-separated list of distinguished name fields and values. For example: CN=www.server.com, OU=test, C=US, [email protected].
Common fields include:
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, enclose it in quotes. For example: "O=ACME, Inc.".
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLClientCert | Specifies the TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). This property works in conjunction with other SSL-related properties to establish a secure connection. |
| SSLClientCertType | Specifies the type of key store containing the TLS/SSL client certificate for SSL Client Authentication. Choose from a variety of key store formats depending on your platform and certificate source. |
| SSLClientCertPassword | Specifies the password required to access the TLS/SSL client certificate store. Use this property if the selected certificate store type requires a password for access. |
| SSLClientCertSubject | Specifies the subject of the TLS/SSL client certificate to locate it in the certificate store. Use a comma-separated list of distinguished name fields, such as CN=www.server.com, C=US. The wildcard * selects the first certificate in the store. |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). This property works in conjunction with other SSL-related properties to establish a secure connection.
string
""
This property specifies the client certificate store for SSL Client Authentication. Use this property alongside SSLClientCertType, which defines the type of the certificate store, and SSLClientCertPassword, which specifies the password for password-protected stores. When SSLClientCert is set and SSLClientCertSubject is configured, the driver searches for a certificate matching the specified subject.
Certificate store designations vary by platform. On Windows, certificate stores are identified by names such as MY (personal certificates), while in Java, the certificate store is typically a file containing certificates and optional private keys.
The following are designations of the most common User and Machine certificate stores in Windows:
| MY | A certificate store holding personal certificates with their associated private keys. |
| CA | Certifying authority certificates. |
| ROOT | Root certificates. |
| SPC | Software publisher certificates. |
For PFXFile types, set this property to the filename. For PFXBlob types, set this property to the binary contents of the file in PKCS12 format.
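For example, a hypothetical fragment that loads the client certificate from a PFX file (the file path and password are placeholders):
SSLClientCert=C:\certs\client.pfx;SSLClientCertType=PFXFILE;SSLClientCertPassword=<store password>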
Specifies the type of key store containing the TLS/SSL client certificate for SSL Client Authentication. Choose from a variety of key store formats depending on your platform and certificate source.
string
"PEMKEY_BLOB"
This property determines the format and location of the key store used to provide the client certificate. Supported values include platform-specific and universal key store formats. The available values and their usage are:
| USER | For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java. |
| MACHINE | For Windows, this specifies that the certificate store is a machine store. Note that this store type is not available in Java. |
| PFXFILE | The certificate store is the name of a PFX (PKCS12) file containing certificates. |
| PFXBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. |
| JKSFILE | The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java. |
| JKSBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in JKS format. Note that this store type is only available in Java. |
| PEMKEY_FILE | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
| PEMKEY_BLOB | The certificate store is a string (base64-encoded) that contains a private key and an optional certificate. |
| PUBLIC_KEY_FILE | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
| PUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. |
| SSHPUBLIC_KEY_FILE | The certificate store is the name of a file that contains an SSH-style public key. |
| SSHPUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains an SSH-style public key. |
| P7BFILE | The certificate store is the name of a PKCS7 file containing certificates. |
| PPKFILE | The certificate store is the name of a file that contains a PuTTY Private Key (PPK). |
| XMLFILE | The certificate store is the name of a file that contains a certificate in XML format. |
| XMLBLOB | The certificate store is a string that contains a certificate in XML format. |
| BCFKSFILE | The certificate store is the name of a file that contains a Bouncy Castle keystore. |
| BCFKSBLOB | The certificate store is a string (base-64-encoded) that contains a Bouncy Castle keystore. |
Specifies the password required to access the TLS/SSL client certificate store. Use this property if the selected certificate store type requires a password for access.
string
""
This property provides the password needed to open a password-protected certificate store. This property is necessary when using certificate stores that require a password for decryption, as is often recommended for PFX or JKS type stores.
If the certificate store type does not require a password, for example USER or MACHINE on Windows, this property can be left blank. Ensure that the password matches the one associated with the specified certificate store to avoid authentication errors.
Specifies the subject of the TLS/SSL client certificate to locate it in the certificate store. Use a comma-separated list of distinguished name fields, such as CN=www.server.com, C=US. The wildcard * selects the first certificate in the store.
string
"*"
This property determines which client certificate to load based on its subject. The Cloud searches for a certificate that exactly matches the specified subject. If no exact match is found, the Cloud looks for certificates containing the value of the subject. If no match is found, no certificate is selected.
The subject should follow the standard format of a comma-separated list of distinguished name fields and values. For example, CN=www.server.com, OU=Test, C=US. Common fields include the following:
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
Note: If any field contains special characters, such as commas, the value must be quoted. For example: CN="Example, Inc.", C=US.
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
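For example, to pin the server certificate to a local file (using the illustrative path from the table above):
SSLServerCert=C:\cert.cer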
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC . |
| Schema | Specify the Azure Cosmos DB database you want to work with. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC .
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Specify the Azure Cosmos DB database you want to work with.
string
""
Specify the Azure Cosmos DB database you want to work with.
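For example, a hypothetical fragment that limits the connection to a single Cosmos DB database (the database name is a placeholder):
Schema=SalesDB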
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| CalculateAggregates | Specifies whether the provider returns the calculated value of aggregates or values grouped by partition range. |
| ConsistencyLevel | The consistency level override for read options against documents and attachments. |
| FlattenArrays | By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| ForceQueryOnNonIndexedContainers | Force the use of an index scan to process the query if indexing is disabled or the right index path is not available. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| MaxThreads | Specifies the maximum number of concurrent requests for Batch CUD (Create, Update, Delete) operations. |
| MultiThreadCount | Aggregate queries in partitioned collections require parallel requests for different partition ranges. Set this to the number of parallel requests to be issued at the same time. |
| Pagesize | Specifies the maximum number of records per page the provider returns when requesting data from Azure Cosmos DB. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| RequestPriorityLevel | Specifies the priority level for requests sent to Azure Cosmos DB when the number of requests exceeds the configured RU/s within a second. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| SeparatorCharacter | The character or characters used to denote hierarchy. |
| SetPartitionKeyAsPK | Whether or not to use the collection's Partition Key field as part of a composite Primary Key for the corresponding exposed table. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TypeDetectionScheme | Comma-separated options for how the provider will scan the data to determine the fields and datatypes in each document collection. |
| UseRidAsPk | Set this property to false to use the id column as the primary key instead of the default _rid. |
| WriteThroughputBudget | Defines the Request Units (RU) per second budget that Batch CUD (Create, Update, Delete) operations should not exceed. |
Specifies whether the provider returns the calculated value of aggregates or values grouped by partition range.
bool
true
Specifies whether the provider returns the calculated value of aggregates or values grouped by partition range.
The consistency level override for read options against documents and attachments.
string
"SESSION"
The consistency level override for read options against documents and attachments. The valid values are: Strong, Bounded, Session, or Eventual (in order of strongest to weakest). The override must be the same or weaker than the account's configured consistency level.
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.
string
"0"
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.
Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.
For example, you can return an arbitrary number of elements from an array of strings:
["FLOW-MATIC","LISP","COBOL"]When FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| languages.0 | FLOW-MATIC |
Setting FlattenArrays to -1 will flatten all the elements of nested arrays.
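For example, to flatten every element of nested arrays into its own column, you might set:
FlattenArrays=-1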
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
bool
true
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. The property name is concatenated onto the object name with a dot to generate the column name.
For example, you can flatten the nested objects below at connection time:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
When FlattenObjects is set to true and FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| grades.0.grade | A |
| grades.0.score | 2 |
Force the use of an index scan to process the query if indexing is disabled or the right index path is not available.
bool
false
Queries against containers where indexing is disabled or paths are excluded may fail. Set this property to true to force the use of indexing on the server so the query is processed successfully. By default, queries that require the use of indexing on containers where IndexingMode=None are handled client-side.
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
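For example, to cap result sets at 1000 rows unless a query specifies its own LIMIT, you might set:
MaxRows=1000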
Specifies the maximum number of concurrent requests for Batch CUD (Create, Update, Delete) operations.
int
200
This property should be used in conjunction with the WriteThroughputBudget connection property. The Cloud may execute fewer parallel requests than the configured MaxThreads value, since it always aims not to exceed the WriteThroughputBudget limit. The number of concurrent requests also depends on the running machine's resources.
Note: This property is applicable only when executing batch CUD operations.
Aggregate queries in partitioned collections require parallel requests for different partition ranges. Set this to the number of parallel requests to be issued at the same time.
string
"5"
Aggregate queries in partitioned collections require parallel requests for different partition ranges. Set this to the number of parallel requests to be issued at the same time.
Specifies the maximum number of records per page the provider returns when requesting data from Azure Cosmos DB.
int
1000
When processing a query, instead of requesting all of the queried data at once from Azure Cosmos DB, the Cloud can request the queried data in pieces called pages.
This connection property determines the maximum number of results that the Cloud requests per page.
Note: Setting large page sizes may improve overall query execution time, but doing so causes the Cloud to use more memory when executing queries and risks triggering a timeout.
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
Specifies the priority level for requests sent to Azure Cosmos DB when the number of requests exceeds the configured RU/s within a second.
string
"None"
The maximum number of rows to scan to look for the columns available in a table.
int
100
The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
The character or characters used to denote hierarchy.
string
"."
In order to flatten out hierarchical structures, the Cloud needs some specifier that states the path to a column through the hierarchy. If this value is "." and a column comes back with the name address.city, this indicates that there is a mapped attribute with a child called city. If your data has columns that already use a single period within the attribute name, set the SeparatorCharacter to a different character or characters.
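For example, if your attribute names already contain periods, you might set the separator to an underscore so that a flattened column is reported as address_city instead of address.city:
SeparatorCharacter=_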
Whether or not to use the collection's Partition Key field as part of a composite Primary Key for the corresponding exposed table.
bool
true
By default, this is set to TRUE, and the collection's Partition Key is used as part of the table's composite Primary Key along with the _rid column. If this is set to FALSE, only the _rid column will serve as the Primary Key for the exposed table.
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
Comma-separated options for how the provider will scan the data to determine the fields and datatypes in each document collection.
string
"RowScan,Recent"
| None | Setting TypeDetectionScheme to None will return all columns as a string type. Cannot be combined with other options. |
| RowScan | Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. Can be used with Recent. |
| Recent | Setting TypeDetectionScheme to Recent will determine whether RowScan is executed on the most recent documents in the collection. Can be used with RowScan. |
| RawValue | Setting TypeDetectionScheme to RawValue will push each document as single aggregate on a column named JsonData, along with its resource identifier on the separate Primary Key column. Cannot be combined with other options. |
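For example, a hypothetical fragment that scans the 200 most recent documents in each collection to infer column types:
TypeDetectionScheme=RowScan,Recent;RowScanDepth=200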
Set this property to false to use the id column as the primary key instead of the default _rid.
bool
true
Since Cosmos DB allows you to use both the _rid and id fields as unique values for retrieving resource data, you can set this property to false to use the id column as the primary key instead of the default _rid.
Defines the Request Units (RU) per second budget that Batch CUD (Create, Update, Delete) operations should not exceed.
int
1000
The Cloud dynamically adjusts the maximum number of requests per second depending on the configured RU budget. Although the Cloud always aims not to exceed the RU budget, the request throttling logic is applied client-side, so the budget may be exceeded by a relatively small amount in some cases. These cases include inserting, updating, and deleting records with a highly variable column count and input value length per column.
Note: This property is applicable only when executing batch CUD operations.
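For example, a hypothetical fragment that limits batch writes to roughly 400 RU/s using at most 50 concurrent requests:
WriteThroughputBudget=400;MaxThreads=50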
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors
THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.