The CData Sync App provides a straightforward way to continuously pipeline your MongoDB data to any database, data lake, or data warehouse, making it easily available for Analytics, Reporting, AI, and Machine Learning.
The MongoDB connector can be used from the CData Sync application to pull data from MongoDB and move it to any of the supported destinations.
The Sync App models MongoDB instances as relational databases and supports MongoDB versions 2.6 through 7.0. The Sync App leverages the MongoDB API, including the MongoDB aggregation framework, to enable bidirectional SQL access to MongoDB data. See the NoSQL Database chapter for SQL-to-MongoDB query mappings and more information about accessing unstructured data in MongoDB through SQL. See the DBaaS Connections page to connect to popular services such as Atlas and ObjectRocket.
For required properties, see the Settings tab.
For connection properties that are not typically required, see the Advanced tab.
Set the following connection properties to connect to a single MongoDB instance:
To connect to a replica set, set the following in addition to the preceding connection properties:
You can set UseSSL to negotiate SSL/TLS encryption when you connect.
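For reference, a minimal connection string for a single instance, built from the properties above with placeholder values, might look like this:
Server=MyServer;Port=27017;Database=test;User=MyUser;Password=MyPassword;
To reach a replica set over TLS/SSL, you would additionally set ReplicaSet and UseSSL, for example:
Server=localhost;Port=27017;ReplicaSet=localhost:27018,localhost:27019;Database=test;User=MyUser;Password=MyPassword;UseSSL=true;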
The supported AuthScheme values (MONGODB-CR, SCRAM-SHA-1, SCRAM-SHA-256, PLAIN, and GSSAPI) correspond to challenge-response authentication, LDAP, and Kerberos.
In challenge-response authentication, the User and Password properties correspond to a username and password stored in a MongoDB database. If you want to connect to data from one database and authenticate to another database, set both Database and AuthDatabase.
To use LDAP authentication, set AuthDatabase to "$external" and set AuthScheme to PLAIN. This value specifies the SASL PLAIN mechanism; note that this mechanism transmits credentials over plaintext, so it is not suitable for use without TLS/SSL on untrusted networks.
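As a sketch, an LDAP connection string assembled from these properties (the server name and credentials are placeholders) could look like:
AuthScheme=PLAIN;AuthDatabase=$external;Server=MyServer;Port=27017;Database=test;User=MyLdapUser;Password=MyPassword;UseSSL=true;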
Set AuthScheme to X509 to use X.509 certificate authentication.
Before you can connect to Amazon DocumentDB, ensure that your Amazon DocumentDB cluster and the EC2 instance containing the mongo shell are running.
Next, configure an SSH tunnel to the EC2 instance as follows.
Specify the following to connect to the DocumentDB cluster.
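As an illustrative sketch only (the cluster endpoint, EC2 host, and credentials below are placeholders), the combined SSH tunnel and cluster properties might look like:
Server=sample-cluster.node.us-east-1.docdb.amazonaws.com;Port=27017;Database=test;User=MyUser;Password=MyPassword;UseSSL=true;UseSSH=true;SSHServer=ec2-12-345-67-89.compute-1.amazonaws.com;SSHPort=22;SSHUser=ec2-user;SSHPassword=MySSHPassword;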
To obtain the connection string needed to connect to a Cosmos DB account using the MongoDB API, log in to the Azure Portal, select Azure Cosmos DB, and select your account. In the Settings section, click Connection String and set the following values.
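As a hedged sketch, the host, port, username, and password from the Connection String page typically map to the following properties (the account name and key shown are placeholders):
Server=myaccount.mongo.cosmos.azure.com;Port=10255;Database=test;User=myaccount;Password=myAccountKey;UseSSL=true;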
When you connect to Atlas, ObjectRocket, or another database-as-a-service provider, there typically are a few variations on the procedure outlined in Establishing a Connection. The following sections show how to obtain the necessary connection properties for several popular services.
You can authenticate to MongoDB Atlas with a MongoDB user or an LDAP user. The following sections show how to map Atlas connection strings to Sync App connection properties. To obtain the Atlas connection string, follow the steps below:
In addition to creating a MongoDB user and/or setting up LDAP, your Atlas project's IP whitelist must include the IP address of the machine the Sync App is connecting from. To add an IP address to the whitelist, select the Security tab in the Clusters view and then click IP Whitelist -> Add IP Address.
Below is an example connection string providing a MongoDB user's credentials.
mongodb://USERNAME:[email protected]:27017,cluster0-shard-00-01.mongodb.net:27017,cluster0-shard-00-02.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin
Below are the corresponding Sync App connection properties:
Server: cluster0-shard-00-00.mongodb.net
ReplicaSet: mycluster0-shard-00-01.mongodb.net:27017,mycluster0-shard-00-02.mongodb.net:27017
User: The username of a MongoDB user you added to your MongoDB project.
Password: The password of the MongoDB user.
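Putting these together, a sketch of a full connection string for the example cluster (the username and password are placeholders) would be:
Server=cluster0-shard-00-00.mongodb.net;Port=27017;ReplicaSet=mycluster0-shard-00-01.mongodb.net:27017,mycluster0-shard-00-02.mongodb.net:27017;Database=test;AuthDatabase=admin;User=USERNAME;Password=PASSWORD;UseSSL=true;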
The following list shows the MongoDB Atlas requirements for authenticating with an LDAP user.
Below is an example command to connect with the mongo client:
mongo "mongodb://cluster0-shard-00-00.mongodb.net:27017,cluster0-shard-00-01.mongodb.net:27017,cluster0-shard-00-02.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=$external" --authenticationMechanism PLAIN --username cn=rob,cn=Users,dc=atlas-ldaps-01,dc=myteam,dc=com
Server: The first server in the replica set. Or, you can specify another primary or secondary server here (the Sync App queries the servers in Server and ReplicaSet to find the primary).
For example:
cluster0-shard-00-00.mongodb.net
ReplicaSet: mycluster0-shard-00-01.mongodb.net:27017,mycluster0-shard-00-02.mongodb.net:27017
AuthScheme: PLAIN for LDAP authentication.
Database: The database you want to read from and write to.
AuthDatabase: "$external" to authenticate with an LDAP user.
User: The full Distinguished Name (DN) of a user in your LDAP server as the Atlas username. For example:
cn=rob,cn=Users,dc=atlas-ldaps-01,dc=myteam,dc=com
Password: The password of the LDAP user.
UseSSL: true. Atlas requires TLS/SSL.
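Combining these properties, a sketch of the Atlas LDAP connection string (the password is a placeholder; the DN is quoted because it contains commas and equals signs) would be:
Server=cluster0-shard-00-00.mongodb.net;Port=27017;ReplicaSet=mycluster0-shard-00-01.mongodb.net:27017,mycluster0-shard-00-02.mongodb.net:27017;Database=test;AuthScheme=PLAIN;AuthDatabase=$external;User="cn=rob,cn=Users,dc=atlas-ldaps-01,dc=myteam,dc=com";Password=PASSWORD;UseSSL=true;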
To connect to ObjectRocket, you authenticate with the credentials for a database user. You can obtain the necessary connection properties from the control panel: On the Instances page, select your instance and then select the Connect menu to display a MongoDB connection string.
In addition to adding a user for your database, you also need to allow access to the IP address for the machine the Sync App is connecting from. You can configure this by selecting your instance on the Instances page and then clicking Add ACL.
mongodb://YOUR_USERNAME:[email protected]:52826,abc123-d4-2.mongo.objectrocket.com:52826,abc123-d4-1.mongo.objectrocket.com:52826/YOUR_DATABASE_NAME?replicaSet=89c04c5db2cf403097d8f2e8ca871a1c
Below are the corresponding Sync App connection properties:
Server: abc123-d4-0.mongo.objectrocket.com
ReplicaSet: abc123-d4-2.mongo.objectrocket.com:52826,abc123-d4-1.mongo.objectrocket.com:52826
MongoDB is a schemaless document database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92. This section describes the schemes the Sync App offers to bridge the gap between relational SQL and a document database.
The Sync App models the schemaless MongoDB objects into relational tables and translates SQL queries into MongoDB queries to get the requested data. See Query Mapping for more details on how various MongoDB operations are represented as SQL.
The Automatic Schema Discovery scheme automatically finds the data types in a MongoDB object by scanning a configured number of rows of the object. You can use RowScanDepth, FlattenArrays, and FlattenObjects to control the relational representation of the collections in MongoDB. You can also write Free-Form Queries not tied to the schema.
The Sync App automatically infers a relational schema by inspecting a series of MongoDB documents in a collection. You can use the RowScanDepth property to define the number of documents the Sync App will scan to do so. The columns identified during the discovery process depend on the FlattenArrays and FlattenObjects properties.
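As an illustration, the discovery-related properties might appear in a connection string as follows (the values are arbitrary examples):
RowScanDepth=100;FlattenObjects=true;FlattenArrays=2;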
If FlattenObjects is set, all nested objects will be flattened into a series of columns. For example, consider the following document:
{
id: 12,
name: "Lohia Manufacturers Inc.",
address: {street: "Main Street", city: "Chapel Hill", state: "NC"},
offices: ["Chapel Hill", "London", "New York"],
annual_revenue: 35600000
}
This document will be represented by the following columns:
| Column Name | Data Type | Example Value |
| id | Integer | 12 |
| name | String | Lohia Manufacturers Inc. |
| address.street | String | Main Street |
| address.city | String | Chapel Hill |
| address.state | String | NC |
| offices | String | ["Chapel Hill", "London", "New York"] |
| annual_revenue | Double | 35600000 |
If FlattenObjects is not set, then the address.street, address.city, and address.state columns will not be broken apart. The address column of type string will instead represent the entire object. Its value would be {street: "Main Street", city: "Chapel Hill", state: "NC"}. See JSON Functions for more details on working with JSON aggregates.
The FlattenArrays property can be used to flatten array values into columns of their own. This is only recommended for arrays that are expected to be short, for example the coordinates below:
"coord": [ -73.856077, 40.848447 ]
The FlattenArrays property can be set to 2 to represent the array above as follows:
| Column Name | Data Type | Example Value |
| coord.0 | Float | -73.856077 |
| coord.1 | Float | 40.848447 |
It is best to leave other unbounded arrays as they are and piece out the data for them as needed using JSON Functions.
As discussed in Automatic Schema Discovery, intuited table schemas enable SQL access to unstructured MongoDB data. JSON Functions enable you to use standard JSON functions to summarize MongoDB data and extract values from any nested structures. Custom Schema Definitions enable you to define static tables and give you more granular control over the relational view of your data; for example, you can write schemas defining parent/child tables or fact/dimension tables. However, you are not limited to these schemes.
After connecting you can query any nested structure without flattening the data. Any relations that you can access with FlattenArrays and FlattenObjects can also be accessed with an ad hoc SQL query.
Let's consider an example document from the following Restaurant data set:
{
"address": {
"building": "1007",
"coord": [
-73.856077,
40.848447
],
"street": "Morris Park Ave",
"zipcode": "10462"
},
"borough": "Bronx",
"cuisine": "Bakery",
"grades": [
{
"grade": "A",
"score": 2,
"date": {
"$date": "1393804800000"
}
},
{
"date": {
"$date": "1378857600000"
},
"grade": "B",
"score": 6
},
{
"score": 10,
"date": {
"$date": "1358985600000"
},
"grade": "C"
}
],
"name": "Morris Park Bake Shop",
"restaurant_id": "30075445"
}
You can access any nested structure in this document as a column. Use the dot notation to drill down to the values you want to access as shown in the query below. Note that arrays have a zero-based index. For example, the following query retrieves the second grade for the restaurant in the example:
SELECT [address.building], [grades.1.grade] FROM restaurants WHERE restaurant_id = '30075445'
The preceding query returns the following results:
| Column Name | Data Type | Example Value |
| address.building | String | 1007 |
| grades.1.grade | String | B |
It is possible to retrieve an array of documents as if it were a separate table. Take the following JSON structure from the restaurants collection for example:
{
"_id" : ObjectId("568c37b748ddf53c5ed98932"),
"address" : {
"building" : "1007",
"coord" : [-73.856077, 40.848447],
"street" : "Morris Park Ave",
"zipcode" : "10462"
},
"borough" : "Bronx",
"cuisine" : "Bakery",
"grades" : [{
"date" : ISODate("2014-03-03T00:00:00Z"),
"grade" : "A",
"score" : 2
}, {
"date" : ISODate("2013-09-11T00:00:00Z"),
"grade" : "A",
"score" : 6
}, {
"date" : ISODate("2013-01-24T00:00:00Z"),
"grade" : "A",
"score" : 10
}, {
"date" : ISODate("2011-11-23T00:00:00Z"),
"grade" : "A",
"score" : 9
}, {
"date" : ISODate("2011-03-10T00:00:00Z"),
"grade" : "B",
"score" : 14
}],
"name" : "Morris Park Bake Shop",
"restaurant_id" : "30075445"
}
Vertical flattening will allow you to retrieve the grades array as a separate table:
SELECT * FROM [restaurants.grades]
This query returns the following data set:
| date | grade | score | P_id | _index |
| 2014-03-03T00:00:00.000Z | A | 2 | 568c37b748ddf53c5ed98932 | 1 |
| 2013-09-11T00:00:00.000Z | A | 6 | 568c37b748ddf53c5ed98932 | 2 |
| 2013-01-24T00:00:00.000Z | A | 10 | 568c37b748ddf53c5ed98932 | 3 |
SELECT [restaurants].[restaurant_id], [restaurants.grades].* FROM [restaurants.grades] JOIN [restaurants] WHERE [restaurants].name = 'Morris Park Bake Shop'
This query returns the following data set:
| restaurant_id | date | grade | score | P_id | _index |
| 30075445 | 2014-03-03T00:00:00.000Z | A | 2 | 568c37b748ddf53c5ed98932 | 1 |
| 30075445 | 2013-09-11T00:00:00.000Z | A | 6 | 568c37b748ddf53c5ed98932 | 2 |
| 30075445 | 2013-01-24T00:00:00.000Z | A | 10 | 568c37b748ddf53c5ed98932 | 3 |
| 30075445 | 2011-11-23T00:00:00.000Z | A | 9 | 568c37b748ddf53c5ed98932 | 4 |
| 30075445 | 2011-03-10T00:00:00.000Z | B | 14 | 568c37b748ddf53c5ed98932 | 5 |
It's also possible to build queries targeting arrays within other arrays.
Consider this sample Inventory collection:
{
"_id": {
"$oid": "xxxxxxxxxxxxxxxxxxxxxx"
},
"Company Branch": "Main Branch",
"ItemList": [
{
"item": "journal",
"instock": [
{
"warehouse": "A",
"qty": 15
},
{
"warehouse": "B",
"qty": 45
}
]
},
{
"item": "paper",
"instock": [
{
"warehouse": "A",
"qty": 50
},
{
"warehouse": "B",
"qty": 5
}
]
}
]
}
Insert data into the nested arrays using the syntax of <parent array>.<index>.<child array>, as follows:
INSERT INTO [Inventory.ItemList] (p_id, item, [instock.0.warehouse], [instock.0.qty], [instock.0.price]) VALUES ('xxxxxxxxxxxxxxxxxxxxxx', 'NoteBook', 'B', 20, '5$')
The Inventory collection after executing the INSERT statement:
{
"_id": {
"$oid": "xxxxxxxxxxxxxxxxxxxxxx"
},
"Company Branch": "Main Branch",
"ItemList": [
{
"item": "journal",
"instock": [
{
"warehouse": "A",
"qty": 15
},
{
"warehouse": "B",
"qty": 45
}
]
},
{
"item": "paper",
"instock": [
{
"warehouse": "A",
"qty": 50
},
{
"warehouse": "B",
"qty": 5
}
]
},
{
"item": "NoteBook",
"instock": [
{
"warehouse": "B",
"qty": 20,
"price": "5$"
}
]
}
]
}
The Sync App can return JSON structures as column values. The Sync App enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
| Column Name | Example Value |
| Grade | A |
| Score | 2 |
SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
| Column Name | Example Value |
| NumberOfGrades | 5 |
SELECT Name, JSON_SUM(grades,'[x].score') AS TotalScore FROM Students;
| Column Name | Example Value |
| TotalScore | 41 |
SELECT Name, JSON_MIN(grades,'[x].score') AS LowestScore FROM Students;
| Column Name | Example Value |
| LowestScore | 2 |
SELECT Name, JSON_MAX(grades,'[x].score') AS HighestScore FROM Students;
| Column Name | Example Value |
| HighestScore | 14 |
The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example:
SELECT DOCUMENT(*) FROM Customers;
The query above returns the entire document as shown below.
{ "id": 12, "name": "Lohia Manufacturers Inc.", "address": { "street": "Main Street", "city": "Chapel Hill", "state": "NC"}, "offices": [ "Chapel Hill", "London", "New York" ], "annual_revenue": 35600000 }
The Sync App maps SQL queries into the corresponding MongoDB queries. A detailed description of all the transformations is out of scope, but we will describe some of the common elements that are used. The Sync App takes advantage of MongoDB features such as the aggregation framework to compute the desired results.
| SQL Query | MongoDB Query |
SELECT * FROM Users | db.users.find() |
SELECT user_id, status FROM Users | db.users.find(
{},
{ user_id: 1, status: 1, _id: 0 }
) |
SELECT * FROM Users WHERE status = 'A' | db.users.find(
{ status: "A" }
) |
SELECT * FROM Users WHERE status = 'A' OR age=50 | db.users.find(
{ $or: [ { status: "A" },
{ age: 50 } ] }
) |
SELECT * FROM Users WHERE name LIKE 'A%' | db.users.find(
{name: /^a/}
) |
SELECT * FROM Users WHERE status = 'A' ORDER BY user_id ASC | db.users.find( { status: "A" } ).sort( { user_id: 1 } ) |
SELECT * FROM Users WHERE status = 'A' ORDER BY user_id DESC | db.users.find( { status: "A" } ).sort( { user_id: -1 } ) |
| SQL Query | MongoDB Query |
SELECT Count(*) As Count FROM Orders | db.orders.aggregate( [
{
$group: {
_id: null,
count: { $sum: 1 }
}
}
] ) |
SELECT Sum(price) As Total FROM Orders | db.orders.aggregate( [
{
$group: {
_id: null,
total: { $sum: "$price" }
}
}
] ) |
SELECT cust_id, Sum(price) As total FROM Orders GROUP BY cust_id ORDER BY total | db.orders.aggregate( [
{
$group: {
_id: "$cust_id",
total: { $sum: "$price" }
}
} ,
{ $sort: {total: 1 } }
] ) |
SELECT cust_id, ord_date, Sum(price) As total FROM Orders GROUP BY cust_id, ord_date HAVING total > 250 |
db.orders.aggregate( [
{
$group: {
_id: {
cust_id: "$cust_id",
ord_date: {
month: { $month: "$ord_date" },
day: { $dayOfMonth: "$ord_date" },
year: { $year: "$ord_date"}
}
},
total: { $sum: "$price" }
}
},
{ $match: { total: { $gt: 250 } } }
] ) |
| SQL Query | MongoDB Query |
INSERT INTO users (user_id, age, status, [address.city], [address.postalcode])
VALUES ('bcd001', 45, 'A', 'Chapel Hill', 27517) | db.users.insert(
{ user_id: "bcd001", age: 45, status: "A", address: { city: "Chapel Hill", postalCode: 27517 } }
) |
INSERT INTO t1 ("c1") VALUES (('a1', 'a2', 'a3')) | db.t1.insert({"c1": ['a1', 'a2', 'a3']}) |
INSERT INTO t1 ("c1") VALUES (()) | db.t1.insert({"c1": []}) |
INSERT INTO t1 ("a.b.c.c1") VALUES (('a1', 'a2', 'a3')) | db.t1.insert({"a":{"b":{"c":{"c1":['a1','a2','a3']}}}}) |
| SQL Query | MongoDB Query |
UPDATE users SET status = 'C', [address.postalcode] = 90210 WHERE age > 25 | db.users.update(
{ age: { $gt: 25 } },
{ $set: { status: "C", "address.postalCode": 90210 } },
{ multi: true }
) |
| SQL Query | MongoDB Query |
DELETE FROM users WHERE status = 'D' | db.users.remove( { status: "D" } ) |
You can extend the table schemas created with Automatic Schema Discovery by saving them into schema files. The schema files have a simple format that makes them easy to edit.
Set GenerateSchemaFiles to "OnStart" to persist schemas for all tables when you connect. You can also generate table schemas as needed: Set GenerateSchemaFiles to "OnUse" and execute a SELECT query against the table.
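As a sketch, the following properties (the folder path is a placeholder) generate a schema file for a collection the first time it is queried:
Location=C:\MongoDBSchemas;GenerateSchemaFiles=OnUse;
SELECT * FROM restaurants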
For example, consider a schema for the restaurants data set. This is a sample data set provided by MongoDB. To download the data set, follow the Getting Started with MongoDB guide.
Below is an example document from the collection:
{
"address":{
"building":"461",
"coord":[
-74.138492,
40.631136
],
"street":"Port Richmond Ave",
"zipcode":"10302"
},
"borough":"Staten Island",
"cuisine":"Other",
"name":"Indian Oven",
"restaurant_id":"50018994"
}
You can use the mongoimport utility to import the data set:
mongoimport --db test --collection restaurants --drop --file dataset.json
When GenerateSchemaFiles is set, the Sync App saves schemas into the folder specified by the Location property. You can then change column behavior in the resulting schema.
The following schema uses the other:bsonpath property to define where in the collection to retrieve the data for a particular column. Using this model you can flatten arbitrary levels of hierarchy.
The collection attribute specifies the collection to parse. The collection attribute gives you the flexibility to use multiple schemas for the same collection. If collection is not specified, the filename determines the collection that is parsed.
Below are the column definitions and the collection to extract the column values from. In Custom Schema Example, you will find the complete schema.
<rsb:script xmlns:rsb="http://www.rssbus.com/ns/rsbscript/2">
<rsb:info title="StaticRestaurants" description="Custom Schema for the MongoDB restaurants data set.">
<!-- Column definitions -->
<attr name="borough" xs:type="string" other:bsonpath="$.borough" />
<attr name="cuisine" xs:type="string" other:bsonpath="$.cuisine" />
<attr name="building" xs:type="string" other:bsonpath="$.address.building" />
<attr name="street" xs:type="string" other:bsonpath="$.address.street" />
<attr name="latitude" xs:type="double" other:bsonpath="$.address.coord.0" />
<attr name="longitude" xs:type="double" other:bsonpath="$.address.coord.1" />
</rsb:info>
<rsb:set attr="collection" value="restaurants"/>
</rsb:script>
This section contains an example of a complete schema that has been automatically generated by GenerateSchemaFiles. Set the Location property to the file directory that will contain the schema file. The schema consists of the following parts:
<rsb:script xmlns:rsb="http://www.rssbus.com/ns/rsbscript/2">
<rsb:info title="StaticRestaurants" description="Automatic GenerateSchemaFile">
<!-- Column definitions -->
<attr name="borough" xs:type="string" other:bsonpath="$.borough" />
<attr name="cuisine" xs:type="string" other:bsonpath="$.cuisine" />
<attr name="address_building" xs:type="string" other:bsonpath="$.address.building" />
<attr name="address_street" xs:type="string" other:bsonpath="$.address.street" />
<attr name="address_coord_0" xs:type="double" other:bsonpath="$.address.coord.0" />
<attr name="address_coord_1" xs:type="double" other:bsonpath="$.address.coord.1" />
</rsb:info>
<rsb:set attr="collection" value="restaurants"/>
</rsb:script>
The Sync App maps types from the data source to the corresponding data type available in the schema. The table below documents these mappings.
| MongoDB | CData Schema |
| ObjectId | bson:ObjectId |
| Double | double |
| Decimal | decimal |
| String | string |
| Object | string |
| Array | bson:Array |
| Binary | binary |
| Boolean | bool |
| Date | datetime |
| Null | bson:Null |
| Regex | bson:Regex |
| Integer | int |
| Long | long |
| MinKey | bson:MinKey |
| MaxKey | bson:MaxKey |
This section details a selection of advanced features of the MongoDB Sync App.
The Sync App supports user-defined views: virtual tables whose contents are determined by a pre-configured query. These views are useful when you cannot directly control the queries being issued to the driver. For an overview of creating and configuring custom views, see User Defined Views.
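As an illustrative sketch only (confirm the exact file format in User Defined Views), the UserDefinedViews property points to a JSON file that maps view names to the queries that define them, along these lines:
{
"BronxBakeries": {
"query": "SELECT * FROM restaurants WHERE borough = 'Bronx' AND cuisine = 'Bakery'"
}
}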
Use SSL Configuration to adjust how the Sync App handles TLS/SSL certificate negotiation. You can choose from various certificate formats. For further information, see the SSLServerCert property under "Connection String Options".
Configure the Sync App to connect through firewalls and proxies, including Windows proxies. You can also set up tunnel connections.
For further information, see Query Processing.
To enable TLS, set UseSSL to True.
With this configuration, the Sync App attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
The MongoDB Sync App also supports setting client certificates. Set the following to connect using a client certificate.
Set the following properties:
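A hedged example of the client certificate properties, using a PFX file (the path and password are placeholders):
UseSSL=true;SSLClientCert=C:\certs\client.pfx;SSLClientCertType=PFXFILE;SSLClientCertPassword=MyCertPassword;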
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | The authentication mechanism that MongoDB will use to authenticate the connection. |
| Server | The host name or IP address of the server hosting the MongoDB database. |
| Port | The port for the MongoDB database. |
| User | Specifies the user ID of the authenticating MongoDB user account. |
| Password | Specifies the password of the authenticating user account. |
| Database | The name of the MongoDB database. |
| UseSSL | This field sets whether SSL is enabled. |
| AuthDatabase | The name of the MongoDB database for authentication. |
| ReplicaSet | This property allows you to specify multiple servers in addition to the one configured in Server and Port . Specify both a server name and port; separate servers with a comma. |
| DNSServer | Specify the DNS server when resolving MongoDB seed list. |
| Property | Description |
| KerberosKDC | The Kerberos Key Distribution Center (KDC) service used to authenticate the user. |
| KerberosRealm | The Kerberos Realm used to authenticate the user. |
| KerberosSPN | The service principal name (SPN) for the Kerberos Domain Controller. |
| KerberosUser | The principal name for the Kerberos Domain Controller. Used in the format host/user@realm. |
| KerberosKeytabFile | The Keytab file containing your pairs of Kerberos principals and encrypted keys. |
| KerberosServiceRealm | The Kerberos realm of the service. |
| KerberosServiceKDC | The Kerberos KDC of the service. |
| KerberosTicketCache | The full file path to an MIT Kerberos credential cache file. |
| Property | Description |
| SSLClientCert | Specifies the TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). This property works in conjunction with other SSL-related properties to establish a secure connection. |
| SSLClientCertType | Specifies the type of key store containing the TLS/SSL client certificate for SSL Client Authentication. Choose from a variety of key store formats depending on your platform and certificate source. |
| SSLClientCertPassword | Specifies the password required to access the TLS/SSL client certificate store. Use this property if the selected certificate store type requires a password for access. |
| SSLClientCertSubject | Specifies the subject of the TLS/SSL client certificate to locate it in the certificate store. Use a comma-separated list of distinguished name fields, such as CN=www.server.com, C=US. The wildcard * selects the first certificate in the store. |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| SSHAuthMode | The authentication method used when establishing an SSH Tunnel to the service. |
| SSHClientCert | A certificate to be used for authenticating the SSHUser. |
| SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
| SSHClientCertSubject | The subject of the SSH client certificate. |
| SSHClientCertType | The type of SSHClientCert private key. |
| SSHServer | The SSH server. |
| SSHPort | The SSH port. |
| SSHUser | The SSH user. |
| SSHPassword | The SSH password. |
| SSHServerFingerprint | The SSH server fingerprint. |
| UseSSH | Whether to tunnel the MongoDB connection over SSH. |
| Property | Description |
| FirewallType | Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall. |
| FirewallServer | Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources. |
| FirewallPort | Specifies the TCP port to be used for a proxy-based firewall. |
| FirewallUser | Identifies the user ID of the account authenticating to a proxy-based firewall. |
| FirewallPassword | Specifies the password of the user account authenticating to a proxy-based firewall. |
| Property | Description |
| LogModules | Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged. |
| Property | Description |
| Location | Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path. |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC . |
| Tables | Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC . |
| Views | Optional setting that restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC . |
| Property | Description |
| BuiltInColumnMapping | A list of column name mappings for MongoDB's built-in columns. |
| Compression | Specifies the compression method. Compression is disabled when set to None. |
| DataModel | By default, the provider will not automatically discover the metadata for a child table as its own distinct table. To enable this functionality, set DataModel to Relational . |
| FlattenArrays | By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
| MaxRows | Specifies the maximum rows returned for queries without aggregation or GROUP BY. |
| NoCursorTimeout | The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to prevent that. |
| Other | Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties. |
| Pagesize | Specifies the maximum number of results to return from MongoDB, per page. This setting overrides the default page size set by the datasource, which is optimized for most use cases. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property. |
| QueryPassthrough | This option passes the query to MongoDB as-is. |
| ReadPreference | Set this to a strategy for reading from a replica set. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest. |
| ReadPreferenceTags | Use this property to target a replica set member or members that are associated with tags. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| ServiceKind | Specify the kind of service. |
| SlaveOK | This property sets whether the provider is allowed to read from secondary (slave) servers. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout. |
| TypeDetectionScheme | Comma-separated options for how the provider will scan the data to determine the fields and datatypes in each document collection. |
| UpdateScheme | Sets whether an UPDATE statement replaces the target document or merges the updated fields into the existing document. |
| UseFindAPI | Execute MongoDB queries using db.collection.find(). |
| UserDefinedViews | Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file. |
| WriteConcern | Requests acknowledgment that the write operation has propagated to the specified number of mongod instances. |
| WriteConcernJournaled | Requires acknowledgment that the mongod instances, as specified in the WriteConcern property, have written to the on-disk journal. |
| WriteConcernTimeout | This option specifies a time limit, in milliseconds, for the write concern. |
| WriteScheme | Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The authentication mechanism that MongoDB will use to authenticate the connection. |
| Server | The host name or IP address of the server hosting the MongoDB database. |
| Port | The port for the MongoDB database. |
| User | Specifies the user ID of the authenticating MongoDB user account. |
| Password | Specifies the password of the authenticating user account. |
| Database | The name of the MongoDB database. |
| UseSSL | This field sets whether SSL is enabled. |
| AuthDatabase | The name of the MongoDB database for authentication. |
| ReplicaSet | This property allows you to specify multiple servers in addition to the one configured in Server and Port . Specify both a server name and port; separate servers with a comma. |
| DNSServer | Specify the DNS server when resolving MongoDB seed list. |
The authentication mechanism that MongoDB will use to authenticate the connection.
Accepted values are MONGODB-CR, SCRAM-SHA-1, SCRAM-SHA-256, GSSAPI, PLAIN, X509, and NONE. The following describes the authentication types that correspond to these values.
Generally, this property does not need to be set for challenge-response authentication, as the Sync App uses different challenge-response mechanisms by default to authenticate a user to different versions of MongoDB.
Set AuthScheme to PLAIN to use LDAP authentication. This value specifies the SASL PLAIN mechanism; note that this mechanism transmits credentials over plain-text, so it is not suitable for use without TLS/SSL on untrusted networks.
Set AuthScheme to GSSAPI to use Kerberos authentication. Additionally, configure the following properties to match your MongoDB environment:
| KerberosKDC | The FQDN of the domain controller. |
| KerberosRealm | The Kerberos Realm (for Windows this will be the AD domain). |
| KerberosSPN | The assigned service principal name for the user. |
| AuthDatabase | This value should be set to '$external'. |
| User | The user created in the $external database. |
| Password | The corresponding User's password. |
Set AuthScheme to X509 to use X.509 certificate authentication.
The host name or IP address of the server hosting the MongoDB database.
The host name or IP address of the server hosting the MongoDB database. If you choose to connect using DNS seed lists, set this option to "mongodb+srv://" + the name of the server your MongoDB instance is running on.
If connecting through MongoDB Atlas, set the Server connection property to the shard value of the primary cluster (ex: cluster0-shard-00-00-test.mongodb.net). More information about sharding can be found here: MongoDB Sharding.
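For example, a DNS seed list connection (the cluster host is a placeholder) would set:
Server=mongodb+srv://cluster0.mycompany.mongodb.net;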
The port for the MongoDB database.
The port for the MongoDB database.
Specifies the user ID of the authenticating MongoDB user account.
The authenticating server requires both User and Password to validate the user's identity.
Specifies the password of the authenticating user account.
The authenticating server requires both User and Password to validate the user's identity.
The name of the MongoDB database.
The name of the MongoDB database.
This field sets whether SSL is enabled.
This field sets whether the Sync App will attempt to negotiate TLS/SSL connections to the server. By default, the Sync App checks the server's certificate against the system's trusted certificate store. To specify another certificate, set SSLServerCert.
The name of the MongoDB database for authentication.
The name of the MongoDB database for authentication. Only needed if the authentication database is different from the database to retrieve data from.
This property allows you to specify multiple servers in addition to the one configured in Server and Port . Specify both a server name and port; separate servers with a comma.
This property allows you to specify the other servers in the replica set in addition to the one configured in Server and Port. You must specify all servers in the replica set using ReplicaSet, Server, and Port.
Specify both a server name and port in ReplicaSet; separate servers with a comma. For example:
Server=localhost;Port=27017;ReplicaSet=localhost:27018,localhost:27019;
To find the primary server, the Sync App queries the servers in ReplicaSet and the server specified by Server and Port.
Note that only the primary server in a replica set is writable. Secondaries can be readable if the SlaveOK setting allows it. To configure a strategy for executing SELECT queries against secondaries, see ReadPreference.
Specify the DNS server when resolving MongoDB seed list.
Specify the DNS server when resolving MongoDB seed list.
This section provides a complete list of the Kerberos properties you can configure in the connection string for this provider.
| Property | Description |
| KerberosKDC | The Kerberos Key Distribution Center (KDC) service used to authenticate the user. |
| KerberosRealm | The Kerberos Realm used to authenticate the user. |
| KerberosSPN | The service principal name (SPN) for the Kerberos Domain Controller. |
| KerberosUser | The principal name for the Kerberos Domain Controller. Used in the format host/user@realm. |
| KerberosKeytabFile | The Keytab file containing your pairs of Kerberos principals and encrypted keys. |
| KerberosServiceRealm | The Kerberos realm of the service. |
| KerberosServiceKDC | The Kerberos KDC of the service. |
| KerberosTicketCache | The full file path to an MIT Kerberos credential cache file. |
The Kerberos Key Distribution Center (KDC) service used to authenticate the user.
The Kerberos properties are used when using SPNEGO or Windows Authentication. The Sync App will request session tickets and temporary session keys from the Kerberos KDC service. The Kerberos KDC service is conventionally colocated with the domain controller.
If Kerberos KDC is not specified, the Sync App will attempt to detect these properties automatically from the following locations:
The Kerberos Realm used to authenticate the user.
The Kerberos properties are used when using SPNEGO or Windows Authentication. The Kerberos Realm is used to authenticate the user with the Kerberos Key Distribution Service (KDC). The Kerberos Realm can be configured by an administrator to be any string, but conventionally it is based on the domain name.
If Kerberos Realm is not specified, the Sync App will attempt to detect these properties automatically from the following locations:
The service principal name (SPN) for the Kerberos Domain Controller.
If the SPN on the Kerberos Domain Controller is not the same as the URL that you are authenticating to, use this property to set the SPN.
The principal name for the Kerberos Domain Controller. Used in the format host/user@realm.
If the user you are using for the database doesn't match the user that is in the Kerberos database, this should be set to the Kerberos principal name.
The Keytab file containing your pairs of Kerberos principals and encrypted keys.
The Keytab file containing your pairs of Kerberos principals and encrypted keys.
The Kerberos realm of the service.
The KerberosServiceRealm property specifies the service Kerberos realm when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication and this property is not required.
This property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
The Kerberos KDC of the service.
The KerberosServiceKDC is used to specify the service Kerberos KDC when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication and this property is not required.
This property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
The full file path to an MIT Kerberos credential cache file.
This property can be set if you wish to use a credential cache file that was created using the MIT Kerberos Ticket Manager or kinit command.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLClientCert | Specifies the TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). This property works in conjunction with other SSL-related properties to establish a secure connection. |
| SSLClientCertType | Specifies the type of key store containing the TLS/SSL client certificate for SSL Client Authentication. Choose from a variety of key store formats depending on your platform and certificate source. |
| SSLClientCertPassword | Specifies the password required to access the TLS/SSL client certificate store. Use this property if the selected certificate store type requires a password for access. |
| SSLClientCertSubject | Specifies the subject of the TLS/SSL client certificate to locate it in the certificate store. Use a comma-separated list of distinguished name fields, such as CN=www.server.com, C=US. The wildcard * selects the first certificate in the store. |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). This property works in conjunction with other SSL-related properties to establish a secure connection.
This property specifies the client certificate store for SSL Client Authentication. Use this property alongside SSLClientCertType, which defines the type of the certificate store, and SSLClientCertPassword, which specifies the password for password-protected stores. When SSLClientCert is set and SSLClientCertSubject is configured, the driver searches for a certificate matching the specified subject.
Certificate store designations vary by platform. On Windows, certificate stores are identified by names such as MY (personal certificates), while in Java, the certificate store is typically a file containing certificates and optional private keys.
The following are designations of the most common User and Machine certificate stores in Windows:
| MY | A certificate store holding personal certificates with their associated private keys. |
| CA | Certifying authority certificates. |
| ROOT | Root certificates. |
| SPC | Software publisher certificates. |
For PFXFile types, set this property to the filename. For PFXBlob types, set this property to the binary contents of the file in PKCS12 format.
Specifies the type of key store containing the TLS/SSL client certificate for SSL Client Authentication. Choose from a variety of key store formats depending on your platform and certificate source.
This property determines the format and location of the key store used to provide the client certificate. Supported values include platform-specific and universal key store formats. The available values and their usage are:
| USER - default | For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java. |
| MACHINE | For Windows, this specifies that the certificate store is a machine store. Note that this store type is not available in Java. |
| PFXFILE | The certificate store is the name of a PFX (PKCS12) file containing certificates. |
| PFXBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. |
| JKSFILE | The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java. |
| JKSBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in JKS format. Note that this store type is only available in Java. |
| PEMKEY_FILE | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
| PEMKEY_BLOB | The certificate store is a string (base64-encoded) that contains a private key and an optional certificate. |
| PUBLIC_KEY_FILE | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
| PUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. |
| SSHPUBLIC_KEY_FILE | The certificate store is the name of a file that contains an SSH-style public key. |
| SSHPUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains an SSH-style public key. |
| P7BFILE | The certificate store is the name of a PKCS7 file containing certificates. |
| PPKFILE | The certificate store is the name of a file that contains a PuTTY Private Key (PPK). |
| XMLFILE | The certificate store is the name of a file that contains a certificate in XML format. |
| XMLBLOB | The certificate store is a string that contains a certificate in XML format. |
| BCFKSFILE | The certificate store is the name of a file that contains a Bouncy Castle keystore. |
| BCFKSBLOB | The certificate store is a string (base-64-encoded) that contains a Bouncy Castle keystore. |
Specifies the password required to access the TLS/SSL client certificate store. Use this property if the selected certificate store type requires a password for access.
This property provides the password needed to open a password-protected certificate store. This property is necessary when using certificate stores that require a password for decryption, as is often recommended for PFX or JKS type stores.
If the certificate store type does not require a password, for example USER or MACHINE on Windows, this property can be left blank. Ensure that the password matches the one associated with the specified certificate store to avoid authentication errors.
Specifies the subject of the TLS/SSL client certificate to locate it in the certificate store. Use a comma-separated list of distinguished name fields, such as CN=www.server.com, C=US. The wildcard * selects the first certificate in the store.
This property determines which client certificate to load based on its subject. The Sync App searches for a certificate that exactly matches the specified subject. If no exact match is found, the Sync App looks for certificates containing the value of the subject. If no match is found, no certificate is selected.
The subject should follow the standard format of a comma-separated list of distinguished name fields and values. For example, CN=www.server.com, OU=Test, C=US. Common fields include the following:
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
Note: If any field contains special characters, such as commas, the value must be quoted. For example: CN="Example, Inc.", C=US.
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
This section provides a complete list of the SSH properties you can configure in the connection string for this provider.
| Property | Description |
| SSHAuthMode | The authentication method used when establishing an SSH Tunnel to the service. |
| SSHClientCert | A certificate to be used for authenticating the SSHUser. |
| SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
| SSHClientCertSubject | The subject of the SSH client certificate. |
| SSHClientCertType | The type of SSHClientCert private key. |
| SSHServer | The SSH server. |
| SSHPort | The SSH port. |
| SSHUser | The SSH user. |
| SSHPassword | The SSH password. |
| SSHServerFingerprint | The SSH server fingerprint. |
| UseSSH | Whether to tunnel the MongoDB connection over SSH. |
The authentication method used when establishing an SSH Tunnel to the service.
A certificate to be used for authenticating the SSHUser.
SSHClientCert must contain a valid private key in order to use public key authentication. A public key is optional; if one is not included, the Sync App generates it from the private key. The Sync App sends the public key to the server, and the connection is allowed if the user has authorized the public key.
The SSHClientCertType field specifies the type of the key store specified by SSHClientCert. If the store is password protected, specify the password in SSHClientCertPassword.
Some types of key stores are containers which may include multiple keys. By default the Sync App will select the first key in the store, but you can specify a specific key using SSHClientCertSubject.
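For instance, a hedged example that loads a PEM private key file (the path and password are placeholders):
SSHClientCert=C:\keys\id_rsa.pem;SSHClientCertType=PEMKEY_FILE;SSHClientCertPassword=MyKeyPassword;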
The password of the SSHClientCert key if it has one.
This property is required for SSH tunneling when using certificate-based authentication. If the SSH certificate is in a password-protected key store, provide the password using this property to access the certificate.
The subject of the SSH client certificate.
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma separated list of distinguished name fields and values. For instance "CN=www.server.com, OU=test, C=US, [email protected]". Common fields and their meanings are displayed below.
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma it must be quoted.
The type of SSHClientCert private key.
This property can take one of the following values:
| Types | Description | Allowed Blob Values |
| MACHINE/USER | A Windows certificate store (user or machine). Not available in Java. | Blob values are not supported. |
| JKSFILE/JKSBLOB | A Java key store (JKS) file containing certificates. Only available in Java. | base64-only |
| PFXFILE/PFXBLOB | A PKCS12-format (.pfx) file. Must contain both a certificate and a private key. | base64-only |
| PEMKEY_FILE/PEMKEY_BLOB | A PEM-format file. Must contain an RSA, DSA, or OPENSSH private key. Can optionally contain a certificate matching the private key. | base64 or plain text. Newlines may be replaced with spaces when providing the blob as text. |
| PPKFILE/PPKBLOB | A PuTTY-format private key created using the puttygen tool. | base64-only |
| XMLFILE/XMLBLOB | An XML key in the format generated by the .NET RSA class: RSA.ToXmlString(true). | base64 or plain text. |
The SSH server.
The SSH server.
The SSH port.
The SSH port.
The SSH user.
The SSH user.
The SSH password.
The SSH password.
The SSH server fingerprint.
The SSH server fingerprint.
Whether to tunnel the MongoDB connection over SSH.
By default the Sync App will attempt to connect directly to MongoDB. When this option is enabled, the Sync App will instead establish an SSH connection with the SSHServer and tunnel the connection to MongoDB through it.
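For example (host names and credentials are placeholders):
UseSSH=true;SSHServer=ssh.example.com;SSHPort=22;SSHUser=MySSHUser;SSHPassword=MySSHPassword;Server=MyMongoDBServer;Port=27017;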
This section provides a complete list of the Firewall properties you can configure in the connection string for this provider.
| Property | Description |
| FirewallType | Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall. |
| FirewallServer | Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources. |
| FirewallPort | Specifies the TCP port to be used for a proxy-based firewall. |
| FirewallUser | Identifies the user ID of the account authenticating to a proxy-based firewall. |
| FirewallPassword | Specifies the password of the user account authenticating to a proxy-based firewall. |
Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
The following table provides port number information for each of the supported protocols.
| Protocol | Default Port | Description |
| TUNNEL | 80 | The port where the Sync App opens a connection to MongoDB. Traffic flows back and forth via the proxy at this location. |
| SOCKS4 | 1080 | The port where the Sync App opens a connection to MongoDB. SOCKS 4 then passes the FirewallUser value to the proxy, which determines whether the connection request should be granted. |
| SOCKS5 | 1080 | The port where the Sync App sends data to MongoDB. If the SOCKS 5 proxy requires authentication, set FirewallUser and FirewallPassword to credentials the proxy recognizes. |
Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Specifies the TCP port to be used for a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Identifies the user ID of the account authenticating to a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Specifies the password of the user account authenticating to a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| LogModules | Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged. |
Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged.
This property lets you customize the log file content by specifying the logging modules to include. Logging modules categorize logged information into distinct areas, such as query execution, metadata, or SSL communication. Each module is represented by a four-character code, with some requiring a trailing space for three-letter names.
For example, EXEC logs query execution, and INFO logs general provider messages. To include multiple modules, separate their names with semicolons as follows: INFO;EXEC;SSL.
The Verbosity connection property takes precedence over the module-based filtering specified by this property. Only log entries that meet the verbosity level and belong to the specified modules are logged. Leave this property blank to include all available modules in the log file.
For a complete list of available modules and detailed guidance on configuring logging, refer to the Advanced Logging section in Logging.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| Location | Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path. |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Tables | Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC. |
| Views | Optional setting that restricts the views reported to a subset of all available views. For example, Views=ViewA,ViewB,ViewC. |
Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path.
The Location property is only needed if you want to either customize definitions (for example, change a column name, ignore a column, etc.) or extend the data model with new tables, views, or stored procedures.
If left unspecified, the default location is %APPDATA%\CData\MongoDB Data Provider\Schema, where %APPDATA% is set to the user's configuration directory:
| Platform | %APPDATA% |
| Windows | The value of the APPDATA environment variable |
| Linux | ~/.config |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC .
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC .
Listing all available tables from some databases can take extra time, thus degrading performance. Providing a list of tables in the connection string saves time and improves performance.
If there are lots of tables available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those tables. To do this, specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.
Note: If you are connecting to a data source with multiple schemas or catalogs, you must specify each table you want to view by its fully qualified name. This avoids ambiguity between tables that may exist in multiple catalogs or schemas.
Optional setting that restricts the views reported to a subset of all available views. For example, Views=ViewA,ViewB,ViewC.
Listing all available views from some databases can take extra time, thus degrading performance. Providing a list of views in the connection string saves time and improves performance.
If there are lots of views available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those views. To do this, specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.
Note: If you are connecting to a data source with multiple schemas or catalogs, you must specify each view you want to examine by its fully qualified name. This avoids ambiguity between views that may exist in multiple catalogs or schemas.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| BuiltInColumnMapping | A list of column name mappings for MongoDB's built-in columns. |
| Compression | Specifies the compression method. Compression is not enabled when set to None. |
| DataModel | By default, the provider will not automatically discover the metadata for a child table as its own distinct table. To enable this functionality, set DataModel to Relational . |
| FlattenArrays | By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
| MaxRows | Specifies the maximum rows returned for queries without aggregation or GROUP BY. |
| NoCursorTimeout | The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to prevent that. |
| Other | Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties. |
| Pagesize | Specifies the maximum number of results to return from MongoDB, per page. This setting overrides the default page size set by the datasource, which is optimized for most use cases. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property. |
| QueryPassthrough | This option passes the query to MongoDB as-is. |
| ReadPreference | Set this to a strategy for reading from a replica set. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest. |
| ReadPreferenceTags | Use this property to target a replica set member or members that are associated with tags. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| ServiceKind | Specify the kind of service. |
| SlaveOK | This property sets whether the provider is allowed to read from secondary (slave) servers. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout. |
| TypeDetectionScheme | Comma-separated options for how the provider will scan the data to determine the fields and datatypes in each document collection. |
| UpdateScheme | Determines whether an UPDATE statement replaces the target document or merges the updated fields into the existing document. |
| UseFindAPI | Execute MongoDB queries using db.collection.find(). |
| UserDefinedViews | Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file. |
| WriteConcern | Requests acknowledgment that the write operation has propagated to the specified number of mongod instances. |
| WriteConcernJournaled | Requires acknowledgment that the mongod instances, as specified in the WriteConcern property, have written to the on-disk journal. |
| WriteConcernTimeout | This option specifies a time limit, in milliseconds, for the write concern. |
| WriteScheme | Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type. |
A list of column name mappings for MongoDB's built-in columns.
This property takes a comma-separated list of MongoDB column names for built-in columns and maps them to new names.
The remappable built-in columns are "_index", "P_id" and "_id".
For example:
_index=BuiltInIndex,P_id=Parent_Id,_id=My_Id
Remapping these columns is particularly useful for resolving "column names must be unique" errors that can arise when the Sync App finds additional columns named "_index", "P_id" or "_id" other than the built-in columns.
Specifies the compression method. Compression is not enabled when set to None.
Specifies the compression method. Compression is not enabled when set to None.
By default, the provider will not automatically discover the metadata for a child table as its own distinct table. To enable this functionality, set DataModel to Relational .
When setting DataModel to Relational, the discovery of child tables extends to root level elements and those found within top-level array elements. Additionally, the provider exposes _id and parent_id columns to enable JOIN operations between parent and child tables. The _id column acts as a primary key for the flattened table, while the parent_id column identifies the parent document.
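As a sketch of the resulting relational model, assume a hypothetical 'restaurants' collection whose nested 'grades' array is exposed as a child table named 'restaurants_grades' when DataModel is set to Relational (the table and column names here are illustrative assumptions). A parent-child JOIN could then be written as:
SELECT [restaurants].[name], [restaurants_grades].[grade]
FROM [restaurants]
INNER JOIN [restaurants_grades] ON [restaurants_grades].[parent_id] = [restaurants].[_id]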
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.
Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.
For example, you can return an arbitrary number of elements from an array of strings:
["FLOW-MATIC","LISP","COBOL"]When FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| languages.0 | FLOW-MATIC |
Setting FlattenArrays to -1 will flatten all the elements of nested arrays.
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. To generate the column name, the Sync App concatenates the property name onto the object name with a dot.
For example, you can flatten the nested objects below at connection time:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
When FlattenObjects is set to true and FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| grades.0.grade | A |
| grades.0.score | 2 |
Indicates the user preference as to when schemas should be generated and saved.
GenerateSchemaFiles enables you to save the table definitions identified by Automatic Schema Discovery. This property outputs schemas to .rsd files in the path specified by Location.
Available settings are the following:
When you set GenerateSchemaFiles to OnUse, the Sync App generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.
When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.
Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.
If your data structures are volatile, consider setting GenerateSchemaFiles to Never and using dynamic schemas. See Automatic Schema Discovery for more information about dynamic schemas.
Schema files have a simple format that makes them easy to modify. See Custom Schema Definitions for more information.
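For example, a connection string sketch that saves schemas as tables are queried, assuming a hypothetical schema directory:
GenerateSchemaFiles=OnUse;Location=C:\MongoDBSchemas;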
Specifies the maximum rows returned for queries without aggregation or GROUP BY.
This property sets an upper limit on the number of rows the Sync App returns for queries that do not include aggregation or GROUP BY clauses. This limit ensures that queries do not return excessively large result sets by default.
When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting. If MaxRows is set to "-1", no row limit is enforced unless a LIMIT clause is explicitly included in the query.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
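As a sketch, with MaxRows=1000 set in the connection string, a query against a hypothetical 'restaurants' collection that includes an explicit LIMIT still returns only 50 rows, because the LIMIT clause takes precedence:
SELECT * FROM [restaurants] LIMIT 50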
The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to prevent that.
The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to prevent that.
Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties.
This property allows advanced users to configure hidden properties for specialized scenarios. These settings are not required for normal use cases but can address unique requirements or provide additional functionality. Multiple properties can be defined in a semicolon-separated list.
Note: It is strongly recommended to set these properties only when advised by the support team to address specific scenarios or issues.
Specify multiple properties in a semicolon-separated list.
| DefaultColumnSize | Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000. |
| ConvertDateTimeToGMT | Determines whether to convert date-time values to GMT, instead of the local time of the machine. |
| RecordToFile=filename | Records the underlying socket data transfer to the specified file. |
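For example, to set two of the options above, the Other property could be given a semicolon-separated value such as:
DefaultColumnSize=4000;ConvertDateTimeToGMT=true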
Specifies the maximum number of results to return from MongoDB, per page. This setting overrides the default page size set by the datasource, which is optimized for most use cases.
You may want to adjust the default pagesize to optimize results for a particular object or service endpoint you are querying. Be aware that increasing the page size may improve performance, but it could also result in higher memory consumption per page.
Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
This property allows you to define which pseudocolumns the Sync App exposes as table columns.
To specify individual pseudocolumns, use the following format: "Table1=Column1;Table1=Column2;Table2=Column3"
To include all pseudocolumns for all tables use: "*=*"
This option passes the query to MongoDB as-is.
When set to 'True', the specified query will be passed to MongoDB as-is. Currently only these shell commands are supported:
Set this to a strategy for reading from a replica set. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
This property enables you to execute queries against a replica set member other than the primary member. Accepted values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
When this property is set, query results may not reflect the latest changes if a write operation has not yet been replicated to a secondary machine. You can use ReadPreference to accomplish the following, with some risk that the Sync App will return stale data:
When directing the Sync App to execute SELECT statements to a secondary server, SlaveOK must also be set. Otherwise, the Sync App will return an error response.
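For example, a connection string sketch that prefers secondary members for reads (note that SlaveOK must also be set, as described above):
ReadPreference=secondaryPreferred;SlaveOK=true;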
Use this property to target a replica set member or members that are associated with tags.
To make use of ReadPreferenceTags, you must set ReadPreference to a value other than primary (the default value). The required format is a list of semicolon-separated tag sets, where each tag set is a list of key-value pairs separated by commas.
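For example, a hypothetical value consisting of two tag sets, assuming replica set members tagged with 'dc' and 'usage' keys:
dc:east,usage:reporting;dc:west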
The maximum number of rows to scan to look for the columns available in a table.
The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
Setting to a value of -1 causes the Sync App to scan an arbitrary number of rows until it reaches the final row.
Specify the kind of service.
Specify the kind of service.
This property sets whether the provider is allowed to read from secondary (slave) servers.
This property sets whether the Sync App is allowed to read from secondary (slave) servers in a replica set. You can fine-tune how the Sync App queries secondary servers with ReadPreference.
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
This property controls the maximum time, in seconds, that the Sync App waits for an operation to complete before canceling it. If the timeout period expires before the operation finishes, the Sync App cancels the operation and throws an exception.
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Setting this property to 0 disables the timeout, allowing operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server. Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
Comma-separated options for how the provider will scan the data to determine the fields and datatypes in each document collection.
| None | Setting TypeDetectionScheme to None will return all columns as a string type. Cannot be combined with other options. |
| RowScan | Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. Can be used with Recent. |
| Recent | Setting TypeDetectionScheme to Recent will instead execute the row scan on the most recent documents inserted into the collection. This is a more expensive operation that may be significantly slower on large datasets. |
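For example, a connection string sketch that scans the most recent documents, with RowScanDepth controlling how many rows are examined:
TypeDetectionScheme=RowScan,Recent;RowScanDepth=200;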
Determines whether an UPDATE statement replaces the target document or merges the updated fields into the existing document.
Determines whether an UPDATE statement replaces the target document or merges the updated fields into the existing document. When the default value Default is used, the Sync App updates the target document by replacing the whole original document with a new one. When the value is set to Merge, only the specified fields in the target document are updated.
For example, if you have a collection 'classySample' as below.
{
"_id": "1",
"message": {
"component_items": [{"locked": true}],
"id":1
}
}
UPDATE [classySample] SET [message.component_items.0.locked] = false WHERE [message.id] = 1
In the query above, the 'message' document is replaced with a new document constructed from the SET clause. After the update, the collection looks like this:
{
"_id": "1",
"message": {
"component_items": [
{
"locked": false
}
]
}
}
When UpdateScheme is set to Merge, only the 'locked' field in 'component_items' is updated, and the collection becomes:
{
"_id": "1",
"message": {
"component_items": [
{
"locked": false
}
],
"id": 1
}
}
Execute MongoDB queries using db.collection.find().
Amazon DocumentDB doesn't support the legacy OP_QUERY interface, so this must be set to True to query DocumentDB clusters with db.collection.find() instead.
Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file.
This property allows you to define and manage custom views through a JSON-formatted configuration file called UserDefinedViews.json. These views are automatically recognized by the Sync App and enable you to execute custom SQL queries as if they were standard database views. The JSON file defines each view as a root element with a child element called "query", which contains the SQL query for the view. For example:
{
"MyView": {
"query": "SELECT * FROM [CData].[Sample].Customers WHERE MyColumn = 'value'"
},
"MyView2": {
"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
}
}
You can define multiple views in a single file and specify the filepath using this property. For example: UserDefinedViews=C:\Path\To\UserDefinedViews.json. When you use this property, only the specified views are seen by the Sync App.
Refer to User Defined Views for more information.
Requests acknowledgment that the write operation has propagated to the specified number of mongod instances.
Requests acknowledgment that the write operation has propagated to the specified number of mongod instances.
Requires acknowledgment that the mongod instances, as specified in the WriteConcern property, have written to the on-disk journal.
Requires acknowledgment that the mongod instances, as specified in the WriteConcern property, have written to the on-disk journal.
This option specifies a time limit, in milliseconds, for the write concern.
This option specifies a time limit, in milliseconds, for the write concern.
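As a sketch with hypothetical values, the following settings request acknowledgment from two mongod instances, require the write to be journaled, and cap the wait at five seconds:
WriteConcern=2;WriteConcernJournaled=true;WriteConcernTimeout=5000;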
Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type.
Sets whether the object type for inserted or updated objects is determined from the existing column metadata or the input value type. When the default value Metadata is used, the Sync App uses the data type as determined by the TypeDetectionScheme for objects pushed to MongoDB. When the value is set to RawValue, the type of the object in the INSERT determines what type is used for MongoDB.
For example, suppose a field 'c1' in MongoDB is defined as String type, so the metadata also reports the column as String. In the following query, the resulting field in MongoDB is therefore written as a String when WriteScheme=Metadata is used. When RawValue is used instead, the inserted field's type is Date, since the FROM_UNIXTIME() function returns an actual Date object:
INSERT INTO Table1 (c1) VALUES (FROM_UNIXTIME(1636910867039, 0))
As another example, the following query inserts an empty value:
INSERT INTO t1 ("c1") VALUES (())
This results in an empty array:
"c1":[]