CData Cloud offers access to MongoDB across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to MongoDB through CData Cloud.
CData Cloud allows you to standardize and configure connections to MongoDB as though it were any other OData endpoint or standard SQL Server.
This page provides a guide to Establishing a Connection to MongoDB in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to MongoDB and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from MongoDB through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to MongoDB by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
Set the following connection properties to connect to a single MongoDB instance:
To connect to a replica set, set the following in addition to the preceding connection properties:
You can set UseSSL to negotiate SSL/TLS encryption when you connect.
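For example, a minimal sketch of a connection string for a single instance, and one for a replica set, might look like the following (host names, credentials, and database name are placeholders):
Server=localhost;Port=27017;Database=test;User=myUser;Password=myPassword;UseSSL=true;
Server=localhost;Port=27017;Database=test;User=myUser;Password=myPassword;ReplicaSet=localhost:27018,localhost:27019;UseSSL=true;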
The supported AuthScheme values (MONGODB-CR, SCRAM-SHA-1, SCRAM-SHA-256, PLAIN, and GSSAPI) cover challenge-response authentication, LDAP, and Kerberos.
In challenge-response authentication, the User and Password properties correspond to a username and password stored in a MongoDB database. If you want to connect to data from one database and authenticate to another database, set both Database and AuthDatabase.
To use LDAP authentication, set AuthDatabase to "$external" and set AuthScheme to PLAIN. This value specifies the SASL PLAIN mechanism; note that this mechanism transmits credentials over plaintext, so it is not suitable for use without TLS/SSL on untrusted networks.
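As a sketch, an LDAP connection might combine these settings as follows (the server, user DN, and password are placeholders):
Server=localhost;Port=27017;Database=test;AuthScheme=PLAIN;AuthDatabase=$external;User=cn=myUser,cn=Users,dc=mycompany,dc=com;Password=myPassword;UseSSL=true;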
Set AuthScheme to X509 to use X.509 certificate authentication.
Before you can connect to Amazon DocumentDB, ensure that your Amazon DocumentDB cluster and the EC2 instance containing the mongo shell are running.
Next, configure an SSH tunnel to the EC2 instance as follows.
Specify the following to connect to the DocumentDB cluster.
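As an illustrative sketch (all host names and credentials are placeholders), a connection that tunnels to DocumentDB through the EC2 instance might combine the SSH and MongoDB properties like this:
Server=sample-cluster.node.us-east-1.docdb.amazonaws.com;Port=27017;Database=test;User=myUser;Password=myPassword;UseSSL=true;UseSSH=true;SSHServer=ec2-12-345-678-90.compute-1.amazonaws.com;SSHPort=22;SSHUser=ec2-user;SSHPassword=mySSHPassword;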
To obtain the connection string needed to connect to a Cosmos DB account using the MongoDB API, log in to the Azure Portal, select Azure Cosmos DB, and select your account. In the Settings section, click Connection String and set the following values.
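For example (using hypothetical account values), the host, port, username, and primary password shown on the Connection String page typically map to the Cloud properties as follows:
Server=myaccount.mongo.cosmos.azure.com;Port=10255;Database=test;User=myaccount;Password=myAccountKey;UseSSL=true;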
When you connect to Atlas, ObjectRocket, or another database-as-a-service provider, there typically are a few variations on the procedure outlined in Establishing a Connection. The following sections show how to obtain the necessary connection properties for several popular services.
You can authenticate to MongoDB Atlas with a MongoDB user or an LDAP user. The following sections show how to map Atlas connection strings to Cloud connection properties. To obtain the Atlas connection string, follow the steps below:
In addition to creating a MongoDB user and/or setting up LDAP, your Atlas project's IP whitelist must include the IP address of the machine the Cloud is connecting from. To add an IP address to the whitelist, select the Security tab in the Clusters view and then click IP Whitelist -> Add IP Address.
Below is an example connection string providing a MongoDB user's credentials.
mongodb://USERNAME:[email protected]:27017,cluster0-shard-00-01.mongodb.net:27017,cluster0-shard-00-02.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin
Below are the corresponding Cloud connection properties:
Server: cluster0-shard-00-00.mongodb.net
ReplicaSet: mycluster0-shard-00-01.mongodb.net:27017,mycluster0-shard-00-02.mongodb.net:27017
User: The username of a MongoDB user you added to your MongoDB project.
Password: The password of the MongoDB user.
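Putting these together, the Atlas connection string above corresponds to a Cloud connection string along these lines (credentials are placeholders):
Server=cluster0-shard-00-00.mongodb.net;Port=27017;ReplicaSet=cluster0-shard-00-01.mongodb.net:27017,cluster0-shard-00-02.mongodb.net:27017;Database=test;AuthDatabase=admin;User=myUser;Password=myPassword;UseSSL=true;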
The following list shows the MongoDB Atlas requirements for authenticating with an LDAP user.
Below is an example command to connect with the mongo client:
mongo "mongodb://cluster0-shard-00-00.mongodb.net:27017,cluster0-shard-00-01.mongodb.net:27017,cluster0-shard-00-02.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=$external" --authenticationMechanism PLAIN --username cn=rob,cn=Users,dc=atlas-ldaps-01,dc=myteam,dc=com
Server: The first server in the replica set. Or, you can specify another primary or secondary server here (the Cloud queries the servers in Server and ReplicaSet to find the primary).
For example:
cluster0-shard-00-00.mongodb.net
ReplicaSet: The other servers in the replica set. For example:
mycluster0-shard-00-01.mongodb.net:27017,mycluster0-shard-00-02.mongodb.net:27017
AuthScheme: PLAIN for LDAP authentication.
Database: The database you want to read from and write to.
AuthDatabase: "$external" to authenticate with an LDAP user.
User: The full Distinguished Name (DN) of a user in your LDAP server as the Atlas username. For example:
cn=rob,cn=Users,dc=atlas-ldaps-01,dc=myteam,dc=com
Password: The password of the LDAP user.
UseSSL: true. Atlas requires TLS/SSL.
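For example, the preceding settings combine into a connection string like the following (the DN and password are placeholders):
Server=cluster0-shard-00-00.mongodb.net;Port=27017;ReplicaSet=cluster0-shard-00-01.mongodb.net:27017,cluster0-shard-00-02.mongodb.net:27017;Database=test;AuthScheme=PLAIN;AuthDatabase=$external;User=cn=rob,cn=Users,dc=atlas-ldaps-01,dc=myteam,dc=com;Password=myPassword;UseSSL=true;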
To connect to ObjectRocket, you authenticate with the credentials for a database user. You can obtain the necessary connection properties from the control panel: On the Instances page, select your instance and then select the Connect menu to display a MongoDB connection string.
In addition to adding a user for your database, you also need to allow access to the IP address for the machine the Cloud is connecting from. You can configure this by selecting your instance on the Instances page and then clicking Add ACL.
mongodb://YOUR_USERNAME:[email protected]:52826,abc123-d4-2.mongo.objectrocket.com:52826,abc123-d4-1.mongo.objectrocket.com:52826/YOUR_DATABASE_NAME?replicaSet=89c04c5db2cf403097d8f2e8ca871a1c
Below are the corresponding Cloud connection properties:
Server: abc123-d4-0.mongo.objectrocket.com
ReplicaSet: abc123-d4-2.mongo.objectrocket.com:52826,abc123-d4-1.mongo.objectrocket.com:52826
MongoDB is a schemaless document database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92. This section describes the schemes the Cloud offers to bridge the gap between relational SQL and a document database.
The Cloud models the schemaless MongoDB objects into relational tables and translates SQL queries into MongoDB queries to get the requested data. See Query Mapping for more details on how various MongoDB operations are represented as SQL.
The Automatic Schema Discovery scheme automatically finds the data types in a MongoDB object by scanning a configured number of rows of the object. You can use RowScanDepth, FlattenArrays, and FlattenObjects to control the relational representation of the collections in MongoDB. You can also write Free-Form Queries not tied to the schema.
The Cloud automatically infers a relational schema by inspecting a series of MongoDB documents in a collection. You can use the RowScanDepth property to define the number of documents the Cloud will scan to do so. The columns identified during the discovery process depend on the FlattenArrays and FlattenObjects properties.
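For example, a minimal sketch of a connection string that tunes these discovery properties (the values are illustrative, not recommendations) might include:
RowScanDepth=100;FlattenObjects=true;FlattenArrays=2;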
If FlattenObjects is set, all nested objects will be flattened into a series of columns. For example, consider the following document:
{
id: 12,
name: "Lohia Manufacturers Inc.",
address: {street: "Main Street", city: "Chapel Hill", state: "NC"},
offices: ["Chapel Hill", "London", "New York"],
annual_revenue: 35600000
}
This document will be represented by the following columns:
| Column Name | Data Type | Example Value |
| id | Integer | 12 |
| name | String | Lohia Manufacturers Inc. |
| address.street | String | Main Street |
| address.city | String | Chapel Hill |
| address.state | String | NC |
| offices | String | ["Chapel Hill", "London", "New York"] |
| annual_revenue | Double | 35600000 |
If FlattenObjects is not set, then the address.street, address.city, and address.state columns will not be broken apart. The address column of type string will instead represent the entire object. Its value would be {street: "Main Street", city: "Chapel Hill", state: "NC"}. See JSON Functions for more details on working with JSON aggregates.
The FlattenArrays property can be used to flatten array values into columns of their own. This is only recommended for arrays that are expected to be short, for example the coordinates below:
"coord": [ -73.856077, 40.848447 ]The FlattenArrays property can be set to 2 to represent the array above as follows:
| Column Name | Data Type | Example Value |
| coord.0 | Float | -73.856077 |
| coord.1 | Float | 40.848447 |
It is best to leave other unbounded arrays as they are and piece out the data for them as needed using JSON Functions.
As discussed in Automatic Schema Discovery, intuited table schemas enable SQL access to unstructured MongoDB data. JSON Functions enable you to use standard JSON functions to summarize MongoDB data and extract values from any nested structures. Custom Schema Definitions enable you to define static tables and give you more granular control over the relational view of your data; for example, you can write schemas defining parent/child tables or fact/dimension tables. However, you are not limited to these schemes.
After connecting you can query any nested structure without flattening the data. Any relations that you can access with FlattenArrays and FlattenObjects can also be accessed with an ad hoc SQL query.
Let's consider an example document from the following Restaurant data set:
{
"address": {
"building": "1007",
"coord": [
-73.856077,
40.848447
],
"street": "Morris Park Ave",
"zipcode": "10462"
},
"borough": "Bronx",
"cuisine": "Bakery",
"grades": [
{
"grade": "A",
"score": 2,
"date": {
"$date": "1393804800000"
}
},
{
"date": {
"$date": "1378857600000"
},
"grade": "B",
"score": 6
},
{
"score": 10,
"date": {
"$date": "1358985600000"
},
"grade": "C"
}
],
"name": "Morris Park Bake Shop",
"restaurant_id": "30075445"
}
You can access any nested structure in this document as a column. Use the dot notation to drill down to the values you want to access as shown in the query below. Note that arrays have a zero-based index. For example, the following query retrieves the second grade for the restaurant in the example:
SELECT [address.building], [grades.1.grade] FROM restaurants WHERE restaurant_id = '30075445'
The preceding query returns the following results:
| Column Name | Data Type | Example Value |
| address.building | String | 1007 |
| grades.1.grade | String | B |
It is possible to retrieve an array of documents as if it were a separate table. Take the following JSON structure from the restaurants collection for example:
{
"_id" : ObjectId("568c37b748ddf53c5ed98932"),
"address" : {
"building" : "1007",
"coord" : [-73.856077, 40.848447],
"street" : "Morris Park Ave",
"zipcode" : "10462"
},
"borough" : "Bronx",
"cuisine" : "Bakery",
"grades" : [{
"date" : ISODate("2014-03-03T00:00:00Z"),
"grade" : "A",
"score" : 2
}, {
"date" : ISODate("2013-09-11T00:00:00Z"),
"grade" : "A",
"score" : 6
}, {
"date" : ISODate("2013-01-24T00:00:00Z"),
"grade" : "A",
"score" : 10
}, {
"date" : ISODate("2011-11-23T00:00:00Z"),
"grade" : "A",
"score" : 9
}, {
"date" : ISODate("2011-03-10T00:00:00Z"),
"grade" : "B",
"score" : 14
}],
"name" : "Morris Park Bake Shop",
"restaurant_id" : "30075445"
}
Vertical flattening will allow you to retrieve the grades array as a separate table:
SELECT * FROM [restaurants.grades]
This query returns the following data set:
| date | grade | score | P_id | _index |
| 2014-03-03T00:00:00.000Z | A | 2 | 568c37b748ddf53c5ed98932 | 1 |
| 2013-09-11T00:00:00.000Z | A | 6 | 568c37b748ddf53c5ed98932 | 2 |
| 2013-01-24T00:00:00.000Z | A | 10 | 568c37b748ddf53c5ed98932 | 3 |
You can also join the child table with the parent collection:
SELECT [restaurants].[restaurant_id], [restaurants.grades].* FROM [restaurants.grades] JOIN [restaurants] WHERE [restaurants].name = 'Morris Park Bake Shop'
This query returns the following data set:
| restaurant_id | date | grade | score | P_id | _index |
| 30075445 | 2014-03-03T00:00:00.000Z | A | 2 | 568c37b748ddf53c5ed98932 | 1 |
| 30075445 | 2013-09-11T00:00:00.000Z | A | 6 | 568c37b748ddf53c5ed98932 | 2 |
| 30075445 | 2013-01-24T00:00:00.000Z | A | 10 | 568c37b748ddf53c5ed98932 | 3 |
| 30075445 | 2011-11-23T00:00:00.000Z | A | 9 | 568c37b748ddf53c5ed98932 | 4 |
| 30075445 | 2011-03-10T00:00:00.000Z | B | 14 | 568c37b748ddf53c5ed98932 | 5 |
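Because each child row carries the parent document's _id in the P_id column, you can also filter the child table directly; for example:
SELECT * FROM [restaurants.grades] WHERE P_id = '568c37b748ddf53c5ed98932'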
It's also possible to build queries targeting arrays within other arrays.
Consider this sample Inventory collection:
{
"_id": {
"$oid": "xxxxxxxxxxxxxxxxxxxxxx"
},
"Company Branch": "Main Branch",
"ItemList": [
{
"item": "journal",
"instock": [
{
"warehouse": "A",
"qty": 15
},
{
"warehouse": "B",
"qty": 45
}
]
},
{
"item": "paper",
"instock": [
{
"warehouse": "A",
"qty": 50
},
{
"warehouse": "B",
"qty": 5
}
]
}
]
}
Insert data into the nested arrays using the syntax of <parent array>.<index>.<child array>, as follows:
INSERT INTO [Inventory.ItemList] (p_id, item, [instock.0.warehouse], [instock.0.qty], [instock.0.price]) VALUES ('xxxxxxxxxxxxxxxxxxxxxx', 'NoteBook', 'B', 20, '5$')
The Inventory collection after executing the INSERT statement:
{
"_id": {
"$oid": "xxxxxxxxxxxxxxxxxxxxxx"
},
"Company Branch": "Main Branch",
"ItemList": [
{
"item": "journal",
"instock": [
{
"warehouse": "A",
"qty": 15
},
{
"warehouse": "B",
"qty": 45
}
]
},
{
"item": "paper",
"instock": [
{
"warehouse": "A",
"qty": 50
},
{
"warehouse": "B",
"qty": 5
}
]
},
{
"item": "NoteBook",
"instock": [
{
"warehouse": "B",
"qty": 20,
"price": "5$"
}
]
}
]
}
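The same vertical flattening syntax can be used to read the nested values back. As a sketch based on the columns shown above, the following query returns each item with its first in-stock entry:
SELECT [item], [instock.0.warehouse], [instock.0.qty] FROM [Inventory.ItemList]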
The Cloud can return JSON structures as column values. The Cloud enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
The JSON_EXTRACT function extracts individual values from a JSON object or array:
SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
| Column Name | Example Value |
| Grade | A |
| Score | 2 |
The JSON_COUNT function returns the number of elements in a JSON array:
SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
| Column Name | Example Value |
| NumberOfGrades | 5 |
The JSON_SUM function returns the sum of the numeric values in a JSON array:
SELECT Name, JSON_SUM(grades,'[x].score') AS TotalScore FROM Students;
| Column Name | Example Value |
| TotalScore | 41 |
The JSON_MIN function returns the lowest numeric value in a JSON array:
SELECT Name, JSON_MIN(grades,'[x].score') AS LowestScore FROM Students;
| Column Name | Example Value |
| LowestScore | 2 |
The JSON_MAX function returns the highest numeric value in a JSON array:
SELECT Name, JSON_MAX(grades,'[x].score') AS HighestScore FROM Students;
| Column Name | Example Value |
| HighestScore | 14 |
The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example:
SELECT DOCUMENT(*) FROM Customers;
The query above returns the entire document as shown.
{ "id": 12, "name": "Lohia Manufacturers Inc.", "address": { "street": "Main Street", "city": "Chapel Hill", "state": "NC"}, "offices": [ "Chapel Hill", "London", "New York" ], "annual_revenue": 35,600,000 }
The Cloud maps SQL queries into the corresponding MongoDB queries. A detailed description of all the transformations is out of scope, but we will describe some of the common elements that are used. The Cloud takes advantage of MongoDB features such as the aggregation framework to compute the desired results.
| SQL Query | MongoDB Query |
| SELECT * FROM Users | db.users.find() |
| SELECT user_id, status FROM Users | db.users.find( {}, { user_id: 1, status: 1, _id: 0 } ) |
| SELECT * FROM Users WHERE status = 'A' | db.users.find( { status: "A" } ) |
| SELECT * FROM Users WHERE status = 'A' OR age = 50 | db.users.find( { $or: [ { status: "A" }, { age: 50 } ] } ) |
| SELECT * FROM Users WHERE name LIKE 'A%' | db.users.find( { name: /^A/ } ) |
| SELECT * FROM Users WHERE status = 'A' ORDER BY user_id ASC | db.users.find( { status: "A" } ).sort( { user_id: 1 } ) |
| SELECT * FROM Users WHERE status = 'A' ORDER BY user_id DESC | db.users.find( { status: "A" } ).sort( { user_id: -1 } ) |
| SQL Query | MongoDB Query |
| SELECT Count(*) As Count FROM Orders | db.orders.aggregate( [ { $group: { _id: null, count: { $sum: 1 } } } ] ) |
| SELECT Sum(price) As Total FROM Orders | db.orders.aggregate( [ { $group: { _id: null, total: { $sum: "$price" } } } ] ) |
| SELECT cust_id, Sum(price) As total FROM Orders GROUP BY cust_id ORDER BY total | db.orders.aggregate( [ { $group: { _id: "$cust_id", total: { $sum: "$price" } } }, { $sort: { total: 1 } } ] ) |
| SELECT cust_id, ord_date, Sum(price) As total FROM Orders GROUP BY cust_id, ord_date HAVING total > 250 | db.orders.aggregate( [ { $group: { _id: { cust_id: "$cust_id", ord_date: { month: { $month: "$ord_date" }, day: { $dayOfMonth: "$ord_date" }, year: { $year: "$ord_date" } } }, total: { $sum: "$price" } } }, { $match: { total: { $gt: 250 } } } ] ) |
| SQL Query | MongoDB Query |
| INSERT INTO users (user_id, age, status, [address.city], [address.postalcode]) VALUES ('bcd001', 45, 'A', 'Chapel Hill', 27517) | db.users.insert( { user_id: "bcd001", age: 45, status: "A", address: { city: "Chapel Hill", postalCode: 27517 } } ) |
| INSERT INTO t1 ("c1") VALUES (('a1', 'a2', 'a3')) | db.t1.insert( { "c1": ['a1', 'a2', 'a3'] } ) |
| INSERT INTO t1 ("c1") VALUES (()) | db.t1.insert( { "c1": [] } ) |
| INSERT INTO t1 ("a.b.c.c1") VALUES (('a1', 'a2', 'a3')) | db.t1.insert( { "a": { "b": { "c": { "c1": ['a1', 'a2', 'a3'] } } } } ) |
| SQL Query | MongoDB Query |
| UPDATE users SET status = 'C', [address.postalcode] = 90210 WHERE age > 25 | db.users.update( { age: { $gt: 25 } }, { $set: { status: "C", "address.postalCode": 90210 } }, { multi: true } ) |
| SQL Query | MongoDB Query |
| DELETE FROM users WHERE status = 'D' | db.users.remove( { status: "D" } ) |
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for MongoDB:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the [CData].[Sample].Customers table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the AddDocument stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'AddDocument' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'AddDocument' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native MongoDB procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the [CData].[Sample].Customers table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customers' AND CatalogName='CData' AND SchemaName='Sample'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows dependent properties associated that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins supported. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns which will be checked (in the given order) for use as a modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is true, Merge Mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate startdate. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the NoSQL Database section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with MongoDB.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from MongoDB, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| AddDocument | Inserts a JSON document into a MongoDB collection without modification, preserving its original structure. |
| CreateUserTable | Creates a schema definition for a MongoDB collection, mapping document structure to a tabular format. |
| GetDocument | Executes a pass-through query to retrieve specific documents from a MongoDB collection, allowing for advanced filtering and projection. |
| SearchDocument | Retrieves an entire MongoDB document as a JSON-formatted string, maintaining its native structure. |
Inserts a JSON document into a MongoDB collection without modification, preserving its original structure.
| Name | Type | Description |
| Collection | String | The name of the MongoDB collection where the document will be inserted. |
| Name | Type | Description |
| Success | String | Indicates whether the insertion was successful. Returns 'true' if the operation completed without errors; otherwise, an exception is returned. |
Creates a schema definition for a MongoDB collection, mapping document structure to a tabular format.
| Name | Type | Description |
| CatalogName | String | The catalog that contains the MongoDB collection. |
| SchemaName | String | The schema associated with the MongoDB collection. |
| TableName | String | The name of the MongoDB collection for which a schema definition is being created. |
| Location | String | The file path where the generated schema definition will be saved. |
| ColumnNames# | String | A list of column names to be included in the schema. |
| ColumnDataTypes# | String | Specifies the data type for each column in the schema. |
| ColumnSizes# | String | Defines the maximum size allowed for each column where applicable. |
| ColumnScales# | String | Specifies the number of decimal places for numeric columns. |
| ColumnIsKeys# | String | Indicates whether a column is a primary key ('true' for key columns, 'false' otherwise). |
| ColumnIsNulls# | String | Defines whether a column allows null values ('true' for nullable columns, 'false' otherwise). |
| ColumnDefaults# | String | Specifies default values assigned to columns if no value is provided during data insertion. |
| ColumnAutoIncrements# | String | Indicates whether a column uses auto-increment functionality ('true' for auto-increment columns, 'false' otherwise). |
| Name | Type | Description |
| AffectedTables | String | Indicates the number of tables created. Returns '1' if the schema was successfully created, otherwise '0'. |
Executes a pass-through query to retrieve specific documents from a MongoDB collection, allowing for advanced filtering and projection.
| Name | Type | Description |
| Collection | String | The name of the MongoDB collection from which to retrieve documents. |
| Query | String | A JSON-formatted query used to filter documents in the specified collection. Supports MongoDB query syntax. |
| Projection | String | A JSON-formatted projection specifying which fields to include or exclude in the query results. |
| Name | Type | Description |
| * | String | Returns documents that match the query criteria. The structure of the output varies depending on the collection's schema and the fields included in the projection. |
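As an illustrative sketch (the EXEC syntax can vary by client tool, and the collection name and filter here are hypothetical), a pass-through call might look like:
EXEC GetDocument Collection = 'restaurants', Query = '{ "borough": "Bronx" }', Projection = '{ "name": 1, "cuisine": 1 }'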
Retrieves an entire MongoDB document as a JSON-formatted string, maintaining its native structure.
| Name | Type | Description |
| Collection | String | The name of the MongoDB collection to search within. |
| _id | String | The unique identifier (_id) of the document to retrieve from the collection. |
| Name | Type | Description |
| Document | String | Returns the full JSON document as a string, preserving its original structure. |
To enable TLS, set UseSSL to True.
With this configuration, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
The Cloud also supports client certificates. To connect using a client certificate, set the following properties:
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | The authentication mechanism that MongoDB will use to authenticate the connection. |
| Server | The host name or IP address of the server hosting the MongoDB database. |
| Port | The port for the MongoDB database. |
| User | Specifies the authenticating user's user ID. |
| Password | Specifies the authenticating user's password. |
| Database | The name of the MongoDB database. |
| UseSSL | This field sets whether SSL is enabled. |
| AuthDatabase | The name of the MongoDB database for authentication. |
| ReplicaSet | This property allows you to specify multiple servers in addition to the one configured in Server and Port. Specify both a server name and port; separate servers with a comma. |
| DNSServer | Specify the DNS server when resolving MongoDB seed list. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| SSHAuthMode | The authentication method used when establishing an SSH Tunnel to the service. |
| SSHClientCert | A certificate to be used for authenticating the SSHUser. |
| SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
| SSHClientCertSubject | The subject of the SSH client certificate. |
| SSHClientCertType | The type of SSHClientCert private key. |
| SSHServer | The SSH server. |
| SSHPort | The SSH port. |
| SSHUser | The SSH user. |
| SSHPassword | The SSH password. |
| SSHServerFingerprint | The SSH server fingerprint. |
| UseSSH | Whether to tunnel the MongoDB connection over SSH. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Property | Description |
| BuiltInColumnMapping | A comprehensive list detailing the mappings of column names for the built-in fields used in MongoDB. |
| Compression | Specifies the compression method used for network communication between the client and the MongoDB server. |
| DataModel | By default, the provider will not automatically discover the metadata for a child table as its own distinct table. To enable this functionality, set DataModel to Relational. |
| DatetimeFormat | Determines the format of datetime values returned by the Document function. This property only takes effect when StrictMode=true. |
| FlattenArrays | This property specifies whether nested array elements are flattened into individual columns. By default, nested arrays are returned as JSON strings. Set this property to the number of elements to extract from nested arrays. |
| FlattenObjects | This property specifies whether the attributes of objects are flattened into separate columns. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| NoCursorTimeout | The server typically terminates idle cursors after 30 minutes of inactivity to prevent excessive memory usage. Set this option to true to avoid automatic timeouts and keep your cursors active. |
| Pagesize | Specifies the maximum number of records per page the provider returns when requesting data from MongoDB. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ReadPreference | Set this to a strategy for reading from a replica set. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest. |
| ReadPreferenceTags | This property is used to identify and interact with one or more members of a replica set that are linked to specific tags. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| ServiceKind | Specifies the type of service the provider can interact with. |
| SlaveOK | Determines the provider's capability to read data from secondary (slave) servers. It controls whether the provider can access and retrieve information from these backup systems. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TypeDetectionScheme | Specifies the method used for type detection during metadata discovery. |
| UpdateScheme | Specifies the strategy that can be used when executing an update statement. |
| UseFindAPI | Specifies whether queries are issued using the db.collection.find() method, which retrieves documents from a collection based on defined criteria. |
| WriteConcern | Determines the level of acknowledgment requested for write operations in MongoDB, applicable to standalone mongod, replica sets, or sharded clusters. |
| WriteConcernJournaled | Determines whether write operations can be recorded in the on-disk journal before being acknowledged as successful. |
| WriteConcernTimeout | The WriteConcernTimeout property specifies the maximum time (in milliseconds) that the server should wait for a write concern to be acknowledged before returning an error. |
| WriteScheme | Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The authentication mechanism that MongoDB will use to authenticate the connection. |
| Server | The host name or IP address of the server hosting the MongoDB database. |
| Port | The port for the MongoDB database. |
| User | Specifies the authenticating user's user ID. |
| Password | Specifies the authenticating user's password. |
| Database | The name of the MongoDB database. |
| UseSSL | This field sets whether SSL is enabled. |
| AuthDatabase | The name of the MongoDB database for authentication. |
| ReplicaSet | This property allows you to specify multiple servers in addition to the one configured in Server and Port. Specify both a server name and port; separate servers with a comma. |
| DNSServer | Specify the DNS server when resolving MongoDB seed list. |
The authentication mechanism that MongoDB will use to authenticate the connection.
string
"SCRAM-SHA-1"
Accepted values are MONGODB-CR, SCRAM-SHA-1, SCRAM-SHA-256, GSSAPI, PLAIN, and NONE. The authentication types described below correspond to these values.
Generally, this property does not need to be set for this authentication type, as the Cloud uses different challenge-response mechanisms by default to authenticate a user to different versions of MongoDB.
Set AuthScheme to PLAIN to use LDAP authentication. This value specifies the SASL PLAIN mechanism; note that this mechanism transmits credentials over plain-text, so it is not suitable for use without TLS/SSL on untrusted networks.
Set AuthScheme to GSSAPI to use Kerberos authentication. Additionally configure the following properties as configured for the MongoDB environment:
| KerberosKDC | The FQDN of the domain controller. |
| KerberosRealm | The Kerberos Realm (for Windows this will be the AD domain). |
| KerberosSPN | The assigned service principal name for the user. |
| AuthDatabase | This value should be set to '$external'. |
| User | The user created in the $external database. |
| Password | The corresponding User's password. |
Set AuthScheme to X509 to use X.509 certificate authentication.
The host name or IP address of the server hosting the MongoDB database.
string
""
The host name or IP address of the server hosting the MongoDB database. If you choose to connect using DNS seed lists, set this option to "mongodb+srv://" + the name of the server your MongoDB instance is running on.
If connecting through MongoDB Atlas, set the Server connection property to the shard value of the primary cluster (ex: cluster0-shard-00-00-test.mongodb.net). More information about sharding can be found here: MongoDB Sharding.
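For example, either of the following forms might be used (cluster names are placeholders):
Server=mongodb+srv://cluster0.mongodb.net;
Server=cluster0-shard-00-00-test.mongodb.net;Port=27017;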
The port for the MongoDB database.
string
"27017"
The port for the MongoDB database.
Specifies the authenticating user's user ID.
string
""
The authenticating server requires both User and Password to validate the user's identity.
Specifies the authenticating user's password.
string
""
The authenticating server requires both User and Password to validate the user's identity.
The name of the MongoDB database.
string
""
The name of the MongoDB database.
This field sets whether SSL is enabled.
bool
true
This field sets whether the Cloud will attempt to negotiate TLS/SSL connections to the server. By default, the Cloud checks the server's certificate against the system's trusted certificate store. To specify another certificate, set SSLServerCert.
The name of the MongoDB database for authentication.
string
""
The name of the MongoDB database for authentication. Only needed if the authentication database is different from the database to retrieve data from.
This property allows you to specify multiple servers in addition to the one configured in Server and Port. Specify both a server name and port; separate servers with a comma.
string
""
This property allows you to specify the other servers in the replica set in addition to the one configured in Server and Port. You must specify all servers in the replica set using ReplicaSet, Server, and Port.
Specify both a server name and port in ReplicaSet; separate servers with a comma. For example:
Server=localhost;Port=27017;ReplicaSet=localhost:27018,localhost:27019;
To find the primary server, the Cloud queries the servers in ReplicaSet and the server specified by Server and Port.
Note that only the primary server in a replica set is writable. Secondaries can be readable if the SlaveOK setting allows it. To configure a strategy for executing SELECT queries against secondaries, see ReadPreference.
Specify the DNS server when resolving MongoDB seed list.
string
""
Specify the DNS server when resolving MongoDB seed list.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
This section provides a complete list of the SSH properties you can configure in the connection string for this provider.
| Property | Description |
| SSHAuthMode | The authentication method used when establishing an SSH Tunnel to the service. |
| SSHClientCert | A certificate to be used for authenticating the SSHUser. |
| SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
| SSHClientCertSubject | The subject of the SSH client certificate. |
| SSHClientCertType | The type of SSHClientCert private key. |
| SSHServer | The SSH server. |
| SSHPort | The SSH port. |
| SSHUser | The SSH user. |
| SSHPassword | The SSH password. |
| SSHServerFingerprint | The SSH server fingerprint. |
| UseSSH | Whether to tunnel the MongoDB connection over SSH. |
The authentication method used when establishing an SSH Tunnel to the service.
string
"Password"
A certificate to be used for authenticating the SSHUser.
string
""
SSHClientCert must contain a valid private key in order to use public key authentication. A public key is optional; if one is not included, the Cloud generates it from the private key. The Cloud sends the public key to the server, and the connection is allowed if the user has authorized the public key.
The SSHClientCertType field specifies the type of the key store specified by SSHClientCert. If the store is password protected, specify the password in SSHClientCertPassword.
Some types of key stores are containers which may include multiple keys. By default the Cloud will select the first key in the store, but you can specify a specific key using SSHClientCertSubject.
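A minimal sketch, assuming public key authentication with a PEM private key file (the SSHAuthMode value and file path shown are assumptions, not documented values):
UseSSH=true;SSHServer=ssh.example.com;SSHPort=22;SSHUser=sshUser;SSHAuthMode=Public_Key;SSHClientCert=C:\keys\id_rsa.pem;SSHClientCertType=PEMKEY_FILE;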
The password of the SSHClientCert key if it has one.
string
""
This property is required for SSH tunneling when using certificate-based authentication. If the SSH certificate is in a password-protected key store, provide the password using this property to access the certificate.
The subject of the SSH client certificate.
string
"*"
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma separated list of distinguished name fields and values. For instance "CN=www.server.com, OU=test, C=US, [email protected]". Common fields and their meanings are displayed below.
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma it must be quoted.
The type of SSHClientCert private key.
string
"PEMKEY_BLOB"
This property can take one of the following values:
| Types | Description | Allowed Blob Values |
| MACHINE/USER | A Windows certificate store. | Blob values are not supported. |
| JKSFILE/JKSBLOB | A Java key store (JKS) file or blob. | base64-only |
| PFXFILE/PFXBLOB | A PKCS12-format (.pfx) file. Must contain both a certificate and a private key. | base64-only |
| PEMKEY_FILE/PEMKEY_BLOB | A PEM-format file. Must contain an RSA, DSA, or OPENSSH private key. Can optionally contain a certificate matching the private key. | base64 or plain text. |
| PPKFILE/PPKBLOB | A PuTTY-format private key created using the puttygen tool. | base64-only |
| XMLFILE/XMLBLOB | An XML key in the format generated by the .NET RSA class: RSA.ToXmlString(true). | base64 or plain text. |
The SSH server.
string
""
The SSH server.
The SSH port.
string
"22"
The SSH port.
The SSH user.
string
""
The SSH user.
The SSH password.
string
""
The SSH password.
The SSH server fingerprint.
string
""
The SSH server fingerprint.
Whether to tunnel the MongoDB connection over SSH.
bool
false
By default the Cloud will attempt to connect directly to MongoDB. When this option is enabled, the Cloud will instead establish an SSH connection with the SSHServer and tunnel the connection to MongoDB through it.
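As an illustrative sketch only (assuming the standard Server, Port, Database, User, and Password properties covered under Establishing a Connection, with placeholder host names and credentials), a password-authenticated SSH tunnel could be configured as follows:
Server=mongodb.internal.example.com;Port=27017;Database=test;User=myUser;Password=myPassword;UseSSH=true;SSHServer=bastion.example.com;SSHPort=22;SSHAuthMode=Password;SSHUser=sshUser;SSHPassword=sshPassword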
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
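For example, to capture more detailed logs while troubleshooting a connection, the verbosity can be raised temporarily in the connection string (other connection properties omitted for brevity):
Verbosity=3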
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC . |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC .
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| BuiltInColumnMapping | A comprehensive list detailing the mappings of column names for the built-in fields used in MongoDB. |
| Compression | Specifies the compression method used for network communication between the client and the MongoDB server. |
| DataModel | By default, the provider will not automatically discover the metadata for a child table as its own distinct table. To enable this functionality, set DataModel to Relational . |
| DatetimeFormat | Determines the format of datetime values returned by the Document function. This property only takes effect when StrictMode=true. |
| FlattenArrays | This property specifies whether nested array elements are flattened into individual columns. By default, nested arrays are returned as JSON strings. Set this property to the number of elements to extract from nested arrays. |
| FlattenObjects | This property specifies whether the attributes of objects are flattened into separate columns. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| NoCursorTimeout | The server typically terminates idle cursors after 30 minutes of inactivity to prevent excessive memory usage. Set this option to true to avoid automatic timeouts and keep your cursors active. |
| Pagesize | Specifies the maximum number of records per page the provider returns when requesting data from MongoDB. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ReadPreference | Set this to a strategy for reading from a replica set. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest. |
| ReadPreferenceTags | This property is used to identify and interact with one or more members of a replica set that are linked to specific tags. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| ServiceKind | Specifies the type of service the provider can interact with. |
| SlaveOK | Determines the provider's capability to read data from secondary (slave) servers. It controls whether the provider can access and retrieve information from these backup systems. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TypeDetectionScheme | Specifies the method used to discover column metadata. |
| UpdateScheme | Specifies the strategy that can be used when executing an update statement. |
| UseFindAPI | Specifies whether the provider uses the db.collection.find() method to retrieve documents from a collection based on defined criteria. |
| WriteConcern | Determines the level of acknowledgment requested for write operations in MongoDB, applicable to standalone mongod, replica sets, or sharded clusters. |
| WriteConcernJournaled | Determines whether write operations can be recorded in the on-disk journal before being acknowledged as successful. |
| WriteConcernTimeout | The WriteConcernTimeout property specifies the maximum time (in milliseconds) that the server should wait for a write concern to be acknowledged before returning an error. |
| WriteScheme | Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type. |
A comprehensive list detailing the mappings of column names for the built-in fields used in MongoDB.
string
""
This property allows users to input a list of MongoDB column names, separated by commas, and maps these built-in columns to newly defined names. If this property is defined, it directs the Cloud to utilize a predefined set of mappings between MongoDB's document fields and the SQL columns.
The remappable built-in columns are "_index", "P_id", "_id" and "parent_id".
For example:
_index=BuiltInIndex,P_id=Root_Id,_id=My_Id,parent_id=My_Parent_id
Remapping these columns is important, particularly in addressing common issues such as "column names must be unique" errors. These conflicts often occur when the Cloud encounters extra columns labeled "_index", "P_id", "_id" or "parent_id" in addition to the standard built-in columns.
This property is useful for modifying reserved names, offering flexibility in database design, and avoiding conflicts.
Specifies the compression method used for network communication between the client and the MongoDB server.
string
"None"
This property enables compression and decompression of messages between the application and MongoDB, thereby reducing the total amount of data transmitted over the network.
This property helps improve performance when working with large MongoDB documents or tables.
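As a hypothetical illustration only (the exact set of accepted compressor names is not listed here, so treat the value below as an assumption to verify against your server's supported compressors):
Compression=Snappy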
By default, the provider will not automatically discover the metadata for a child table as its own distinct table. To enable this functionality, set DataModel to Relational .
string
"DOCUMENT"
When setting DataModel to Relational, the discovery of child tables extends to root-level elements and those found within top-level array elements. Additionally, the provider exposes _id and parent_id columns to enable JOIN operations between parent and child tables. The _id column acts as a primary key for the flattened table, while the parent_id column identifies the parent document.
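For illustration (the restaurants and restaurants_grades names below are hypothetical; the actual parent and child table names depend on your collections), the relational model allows parent and child documents to be joined through these columns:
DataModel=Relational
SELECT [restaurants].[_id], [restaurants_grades].[grade] FROM [restaurants] INNER JOIN [restaurants_grades] ON [restaurants_grades].[parent_id] = [restaurants].[_id]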
Determines the format of datetime values returned by the Document function. This property only takes effect when StrictMode=true.
string
"Canonical"
This property specifies whether nested array elements are flattened into individual columns. By default, nested arrays are returned as JSON strings. Set this property to the number of elements to extract from nested arrays.
string
""
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.
Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.
For example, you can return an arbitrary number of elements from an array of strings stored in a field named languages:
["FLOW-MATIC","LISP","COBOL"]
When FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| languages.0 | FLOW-MATIC |
Setting FlattenArrays to -1 will flatten all the elements of nested arrays.
This property specifies whether the attributes of objects are flattened into separate columns.
bool
true
Set FlattenObjects to true to flatten the properties of objects into individual columns. If set to false, objects are not flattened and are returned as JSON strings.
The Cloud generates the column name by concatenating the property name with the object name, separated by a dot.
For example, you can flatten the nested objects below at connection time:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
When FlattenObjects is set to true and FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| grades.0.grade | A |
| grades.0.score | 2 |
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
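For example (a sketch using a hypothetical restaurants table), setting MaxRows=1000 caps unaggregated queries at 1,000 rows, while an explicit LIMIT still takes precedence:
MaxRows=1000
SELECT * FROM [restaurants] LIMIT 50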
The server typically terminates idle cursors after 30 minutes of inactivity to prevent excessive memory usage. Set this option to true to avoid automatic timeouts and keep your cursors active.
bool
false
By default, the MongoDB server automatically closes idle cursors associated with the session after 30 minutes of inactivity to free up resources. The session refreshes with each new document batch request, but if processing a batch takes longer than 30 minutes, the session can expire and the cursor close. When NoCursorTimeout is set to true, the cursor does not time out due to inactivity; it remains open until it is explicitly closed by the application or has exhausted all results.
This property is useful in controlling whether a cursor automatically times out after a period of inactivity.
Specifies the maximum number of records per page the provider returns when requesting data from MongoDB.
int
4096
When processing a query, instead of requesting all of the queried data at once from MongoDB, the Cloud can request the queried data in pieces called pages.
This connection property determines the maximum number of results that the Cloud requests per page.
Note: Setting large page sizes may improve overall query execution time, but doing so causes the Cloud to use more memory when executing queries and risks triggering a timeout.
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
Set this to a strategy for reading from a replica set. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
string
"primary"
This property enables you to direct queries to a member of a replica set other than the primary. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
When this property is set, query results may not reflect the latest changes if a write operation has not yet been replicated to a secondary member, so use it only where you can accept the risk of the Cloud returning stale data.
When directing the Cloud to execute SELECT statements against a secondary server, SlaveOK must also be set. Otherwise, the Cloud returns an error response.
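For example, the following connection string fragment (a sketch; other properties omitted) prefers secondary members for reads while still permitting SELECT statements against them:
ReadPreference=secondaryPreferred;SlaveOK=true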
This property is used to identify and interact with one or more members of a replica set that are linked to specific tags.
string
""
To use the ReadPreferenceTags property, it is necessary to configure the ReadPreference to a value other than the default 'primary' value. The required format consists of a list of semicolon-separated tag sets, where each tag set includes key-value pairs separated by commas.
For example:
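(a hypothetical value; the region and disk tag names are illustrative and must match tags actually configured on the replica set members)
region=east,disk=ssd;region=west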
The maximum number of rows to scan to look for the columns available in a table.
int
100
The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
Setting to a value of -1 causes the Cloud to scan an arbitrary number of rows until it reaches the final row.
Specifies the type of service the provider can interact with.
string
"MongoDB"
The ServiceKind property informs the Cloud of the type of MongoDB service it is connecting to, which can affect how the connection operates or which features are accessible. The default value, MongoDB, is used for standard MongoDB deployments.
This property is useful for tools that support various MongoDB services.
Determines the provider's capability to read data from secondary (slave) servers. It controls whether the provider can access and retrieve information from these backup systems.
bool
false
The SlaveOK property allows read operations on secondary servers in a replica set. This connection property is deprecated; for MongoDB 4.2 and later, the recommended option is ReadPreference.
When set to true, it enables reading from secondary replica set members in addition to the primary. Set it together with ReadPreference when directing queries to secondary servers.
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
Specifies the method used to discover column metadata.
string
"RowScan"
This property determines how the provider examines the data to identify the fields and their data types in each document collection. It accepts the following values:
| Value | Description |
| None | Setting TypeDetectionScheme to None will return all columns as a string type. It cannot be combined with other options. |
| RowScan | Setting TypeDetectionScheme to RowScan will scan rows to determine the data type heuristically. The RowScanDepth determines the number of rows to be scanned. It can be used in conjunction with Recent. |
| Recent | Setting TypeDetectionScheme to Recent will instead execute the RowScan on the most recently inserted documents into the collection. This operation is more expensive and may take considerably longer to complete when dealing with large datasets. |
Specifies the strategy that can be used when executing an update statement.
string
"Default"
An update statement either replaces or merges fields in the target document. With the default value, Default, the Cloud replaces the entire original document with a new document constructed from the SET clause. With Merge, only the fields listed in the SET clause are updated, and the remaining fields in the target document are preserved.
This property controls how the Cloud applies those modifications to the documents matched by the statement.
For example, suppose a collection 'classySample' contains the following document:
{
"_id": "1",
"message": {
"component_items": [{"locked": true}],
"id":1
}
}
UPDATE [classySample] SET [message.component_items.0.locked] = false WHERE [message.id] = 1
With the default scheme, the 'message' document in the query above is replaced with a new document constructed from the SET clause. After the update, the collection looks like this:
{
"_id": "1",
"message": {
"component_items": [
{
"locked": false
}
]
}
}
When using Merge, only the 'locked' field in 'component_items' is updated, and the collection becomes:
{
"_id": "1",
"message": {
"component_items": [
{
"locked": false
}
],
"id": 1
}
}
Specifies whether the provider uses the db.collection.find() method to retrieve documents from a collection based on defined criteria.
bool
true
When UseFindAPI is set to true, the Cloud uses the newer Find command API instead of the older OP_QUERY interface; this property must therefore be set to true in order to query DocumentDB clusters with db.collection.find(). If set to false, the Cloud reverts to the legacy OP_QUERY find operation, which may be necessary when working with older versions of MongoDB servers.
This property is useful for filtering, sorting, and manipulating data in MongoDB's flexible document structure.
Determines the level of acknowledgment requested for write operations in MongoDB, applicable to standalone mongod, replica sets, or sharded clusters.
string
"0"
The WriteConcern property defines the acknowledgment level required for write operations, determining how confident MongoDB must be about the success of a write before confirming it. MongoDB's own default write concern is { w: 1 }, meaning the primary node must acknowledge the write operation before success is returned to the client.
This property is useful for balancing data safety and performance.
Determines whether write operations can be recorded in the on-disk journal before being acknowledged as successful.
bool
true
The WriteConcernJournaled property in MongoDB controls whether write operations must be written to the on-disk journal before being acknowledged as successful.
When set to True, MongoDB acknowledges a write operation only after the data has been committed to the on-disk journal. If the option is set to false, a write operation is acknowledged without waiting for journaling.
The WriteConcernTimeout property specifies the maximum time (in milliseconds) that the server should wait for a write concern to be acknowledged before returning an error.
string
"0"
This property applies to write operations such as INSERT, UPDATE, and DELETE. When the configured write concern requires acknowledgment from secondary nodes, the server waits up to the specified number of milliseconds for that acknowledgment. If the timeout elapses, a write concern error is returned, but the write can still succeed on the primary node.
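For example (a sketch), to wait at most five seconds for acknowledgment of the configured write concern:
WriteConcernTimeout=5000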
Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type.
string
"Metadata"
Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type. When the default value Metadata is used, the Cloud uses the data type as determined by the TypeDetectionScheme for objects pushed to MongoDB. When the value is set to RawValue, the type of the object in the INSERT determines what type is used for MongoDB.
For example, suppose a field 'c1' in MongoDB is defined as String, so the metadata also reports the column as String. With WriteScheme=Metadata, the field written by the following query is therefore stored as a String. With WriteScheme=RawValue, the field is stored as a Date instead, because the FROM_UNIXTIME() function returns an actual Date object:
INSERT INTO Table1 (c1) VALUES (FROM_UNIXTIME(1636910867039, 0))
The following statement inserts an empty value list:
INSERT INTO t1 ("c1") VALUES (())
This returns an empty array:
"c1":[]
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.