CData Cloud offers access to Couchbase across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a MySQL or SQL Server database can connect to Couchbase through CData Cloud.
CData Cloud allows you to standardize and configure connections to Couchbase as though it were any other OData endpoint, or standard SQL Server/MySQL database.
This page provides a guide to Establishing a Connection to Couchbase in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Couchbase and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Couchbase through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Couchbase by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
To connect to data, set the Server property to the hostname or IP address of the Couchbase server(s) you are authenticating to.
If your Couchbase server is configured to use SSL, you can enable it either by using an https URL for Server (like https://couchbase.server), or by setting the UseSSL property to True.
By default, the Cloud connects to the N1QL Query service. To connect to the Couchbase Analytics service instead, set the CouchbaseService property to Analytics.
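For example, an SSL-enabled connection to the Analytics service might combine the properties above as follows (the server name is a placeholder, not a value from this guide):
Server=https://couchbase.server;CouchbaseService=Analytics;UseSSL=True;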
Set the following to connect to Couchbase Cloud:
The Cloud supports several forms of authentication. Couchbase Cloud only accepts Standard authentication, while Couchbase Server accepts Standard authentication, client certificates, and credentials files.
To authenticate with standard authentication, set the following:
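As a minimal sketch, standard authentication supplies a username and password through the User and Password properties (the values shown are placeholders):
User=myuser;Password=mypassword;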
The Cloud supports authenticating with client certificates when SSL is enabled. To use client certificate authentication, set the following properties:
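A sketch of a client-certificate configuration, assuming the standard CData SSL client properties (SSLClientCert, SSLClientCertType, and SSLClientCertPassword; the file path and password are placeholders):
UseSSL=True;SSLClientCert=C:\certs\client.pfx;SSLClientCertType=PFXFILE;SSLClientCertPassword=certpassword;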
You can also authenticate using a credentials file containing multiple logins. This option is included for legacy use and is not recommended when connecting to a Couchbase Server that supports role-based authentication.
Couchbase is a schema-free document database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92.
The Cloud models the schema-free Couchbase objects into relational tables and translates SQL queries into N1QL or SQL++ (Analytics) queries to get the requested data. This section describes the various schemes the Cloud offers to bridge the gap between relational SQL and a document database.
When the Cloud first connects to Couchbase, it opens each bucket and scans a configurable number of rows from that bucket. It uses those rows to determine the columns in that bucket and their data types, as well as how to build flavored and child tables for any arrays within those documents. For Couchbase Enterprise version 4.5.1 and later, the Cloud can also be configured to use the INFER command by setting TypeDetectionScheme to Infer. This allows the Cloud to get a more accurate column listing for the bucket, and to detect more complex flavors.
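For example, the detection behavior might be tuned with properties like the following (RowScanDepth is assumed here to be the property that controls how many rows are scanned; confirm the exact name in the connection property reference):
TypeDetectionScheme=Infer;RowScanDepth=100;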
When using the Analytics service, the Cloud only does column and child table detection. Flavored tables are provided by Couchbase itself using shadow datasets. Also, Analytics mode does not currently have INFER support, so only row scan is supported.
For more details, refer to Automatic Schema Discovery to see how flavored tables and child tables are modelled from Couchbase data. Setting NumericStrings is also recommended as it can avoid type detection issues with certain kinds of text data.
Optionally, you can use Custom Schema Definitions to project your chosen relational structure on top of a Couchbase object. This allows you to define your chosen column names, their data types, and the locations of their values in the Couchbase document.
See Query Mapping for more details on how various N1QL and SQL++ operations are represented as SQL.
See Vertical Flattening for more details on how arrays and objects are mapped into fields.
See JSON Functions for more details on how to extract data from raw JSON strings.
If the documents within a bucket contain fields with arrays, then the Cloud will expose those fields as their own tables in addition to exposing them as JSON aggregates on the main table. The structure of these child tables depends upon whether the array contains objects or primitive values.
If the arrays contain primitive values like numbers or strings, the child table will have only two columns: one called "Document.Id" which is the primary key of the document containing the array, and one called "value" which contains the value within the array. For example, if the bucket "Games" contains these documents:
/* Primary key "1" */
{
"scores": [1,2,3]
}
/* Primary key "2" */
{
"scores": [4,5,6]
}
The Cloud will build a table called "Games_scores" containing these rows:
| Document.Id | value |
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 2 | 4 |
| 2 | 5 |
| 2 | 6 |
If the arrays contain objects, the child table will have a column for each field that occurs within the objects, as well as a "Document.Id" column which contains the primary key of the document containing the array. For example, if the bucket "Games" contains these documents:
/* Primary key "1" */
{
"moves": [
{"piece": "pawn", "square": "c3"},
{"piece": "rook", "square": "d5"}
]
}
/* Primary key "2" */
{
"moves": [
{"piece": "knight", "square": "f1"},
{"piece": "bishop", "square": "e4"}
]
}
The Cloud will build a table called "Games_moves" containing these rows:
| Document.Id | piece | square |
| 1 | pawn | c3 |
| 1 | rook | d5 |
| 2 | knight | f1 |
| 2 | bishop | e4 |
Note that the above data model is not fully relational, which has important limitations for use-cases that involve complex JOINs or DML operations on child tables. The NewChildJoinsMode connection property exposes an alternative data model which avoids these limitations. Please refer to its page in the connection property section of the documentation for more details.
The Cloud can also detect when there are multiple types of documents within the same bucket, as long as TypeDetectionScheme is set to Infer or DocType and CouchbaseService is set to N1QL. These different types of documents are exposed as their own tables containing only the appropriate rows.
For example, the bucket "Games" contains documents which have a "type" value of either "chess" or "football":
/* Primary key "1" */
{
"type": "chess",
"result": "stalemate"
}
/* Primary key "2" */
{
"type": "chess",
"result": "black win"
}
/* Primary key "3" */
{
"type": "football",
"score": 23
}
/* Primary key "4" */
{
"type": "football",
"score": 18
}
The Cloud will create three tables for this bucket: one called "Games" which contains all the documents:
| Document.Id | result | score | type |
| 1 | stalemate | NULL | chess |
| 2 | black win | NULL | chess |
| 3 | NULL | 23 | football |
| 4 | NULL | 18 | football |
One called "Games.chess" which contains only documents where the type is "chess":
| Document.Id | result | type |
| 1 | stalemate | chess |
| 2 | black win | chess |
And one called "Games.football" which contains only documents where the type is "football":
| Document.Id | score | type |
| 3 | 23 | football |
| 4 | 18 | football |
Note that the Cloud will not include columns in a flavored table that are not defined on the documents in that flavor. For example, even though both the "result" and "score" columns are included on the base table, "Games.chess" only includes "result" and "Games.football" only includes "score".
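For example, to read only the chess games, you could query the flavored table directly (an illustrative query against the tables above):
SELECT [Document.Id], result FROM [Games.chess]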
/* Primary key "1" */
{
"type": "chess",
"results": ["stalemate", "white win"]
}
/* Primary key "2" */
{
"type": "chess",
"results": ["black win", "stalemate"]
}
/* Primary key "3" */
{
"type": "football",
"scores": [23, 12]
}
/* Primary key "4" */
{
"type": "football",
"scores": [18, 36]
}
Then the Cloud will generate these tables:
| Table Name | Child Field | Flavor Condition |
| Games | | |
| Games_results | results | |
| Games_scores | scores | |
| Games.chess | | "type" = "chess" |
| Games.chess_results | results | "type" = "chess" |
| Games.football | | "type" = "football" |
| Games.football_scores | scores | "type" = "football" |
The Cloud maps SQL-92-compliant queries into corresponding N1QL or SQL++ queries. Although the mapping below is not complete, it should help you get a sense for the common patterns the Cloud uses during this transformation.
SELECT statements are translated to the appropriate N1QL SELECT query as shown below. Due to the similarities between SQL-92 and N1QL, many queries are simply direct translations.
One major difference is that when the schema for a given Couchbase bucket exists in the Cloud, a SELECT * query will be translated to directly select the individual fields in the bucket. The Cloud will also automatically create a Document.Id column based on the primary key of each document in the bucket.
| SQL Query | N1QL Query |
| SELECT * FROM users | SELECT META(`users`).id AS `id`, ... FROM `users` |
| SELECT [Document.Id], status FROM users | SELECT META(`users`).id AS `Document.Id`, `users`.`status` FROM `users` |
| SELECT * FROM users WHERE status = 'A' OR age = 50 | SELECT META(`users`).id AS `id`, ... FROM `users` WHERE TOSTRING(`users`.`status`) = "A" OR TONUMBER(`users`.`age`) = 50 |
| SELECT * FROM users WHERE name LIKE 'A%' | SELECT META(`users`).id AS `id`, ... FROM `users` WHERE TOSTRING(`users`.`name`) LIKE "A%" |
| SELECT * FROM users WHERE status = 'A' ORDER BY [Document.Id] DESC | SELECT META(`users`).id AS `id`, ... FROM `users` WHERE TOSTRING(`users`.`status`) = "A" ORDER BY META(`users`).id DESC |
| SELECT * FROM users WHERE status IN ('A', 'B') | SELECT META(`users`).id, ... FROM `users` WHERE TOSTRING(`users`.`status`) IN ["A", "B"] |
Note that conditions can include extra type functions if the Cloud detects that a type conversion may be necessary. You can disable these type conversions using the StrictComparison property. For clarity, the rest of the N1QL samples are shown without these extra conversion functions.
When a query has an equals or IN clause that targets the Document.Id column, and there is no OR clause to override it, the Cloud converts the Document.Id filter into a USE KEYS clause. This avoids the overhead of scanning an index because the document keys are already known to the N1QL engine (this optimization does not apply to the Analytics CouchbaseService).
| SQL Query | N1QL Query |
| SELECT * FROM users WHERE [Document.Id] = '1' | SELECT ... FROM `users` USE KEYS ["1"] |
| SELECT * FROM users WHERE [Document.Id] IN ('2', '3') | SELECT ... FROM `users` USE KEYS ["2", "3"] |
| SELECT * FROM users WHERE [Document.Id] = '4' OR [Document.Id] = '5' | SELECT ... FROM `users` USE KEYS ["4", "5"] |
| SELECT * FROM users WHERE [Document.Id] = '6' AND status = 'A' | SELECT ... FROM `users` USE KEYS ["6"] WHERE `status` = "A" |
In addition to being used for SELECT queries, the same optimization is performed for DML operations as shown below.
As long as all the child tables in a query share the same parent, and they are combined using INNER JOINs on their Document.Id columns, the Cloud will combine the JOINs into a single UNNEST expression. Unlike N1QL UNNEST queries, you must explicitly JOIN with the base table if you want to access its fields.
| SQL Query | N1QL Query |
| SELECT * FROM users_posts | SELECT META(`users`).id, `users_posts`.`text`, ... FROM `users` UNNEST `users`.`posts` AS `users_posts` |
| SELECT * FROM users INNER JOIN users_posts ON users.[Document.Id] = users_posts.[Document.Id] | SELECT META(`users`).id, `users`.`name`, ..., `users_posts`.`text`, ... FROM `users` UNNEST `users`.`posts` AS `users_posts` |
| SELECT * FROM users INNER JOIN users_posts ... INNER JOIN users_comments ON ... | SELECT ... FROM `users` UNNEST `users`.`posts` AS `users_posts` UNNEST `users`.`comments` AS `users_comments` |
Flavored tables always have the appropriate condition included when you query, so that only documents from the flavor will be returned:
| SQL Query | N1QL Query |
| SELECT * FROM [users.subscriber] | SELECT ... FROM `users` WHERE `docType` = "subscriber" |
| SELECT * FROM [users.subscriber] WHERE age > 50 | SELECT ... FROM `users` WHERE `docType` = "subscriber" AND `age` > 50 |
N1QL has several built-in aggregate functions, which the Cloud uses extensively for aggregate queries. See some examples below:
| SQL Query | N1QL Query |
| SELECT Count(*) As Count FROM Orders | SELECT Count(*) AS `count` FROM `Orders` |
| SELECT Sum(price) As total FROM Orders | SELECT Sum(`price`) As `total` FROM `Orders` |
| SELECT cust_id, Sum(price) As total FROM Orders GROUP BY cust_id ORDER BY total | SELECT `cust_id`, Sum(`price`) As `total` FROM `Orders` GROUP BY `cust_id` ORDER BY `total` |
| SELECT cust_id, ord_date, Sum(price) As total FROM Orders GROUP BY cust_id, ord_date Having total > 250 | SELECT `cust_id`, `ord_date`, Sum(`price`) As `total` FROM `Orders` GROUP BY `cust_id`, `ord_date` Having `total` > 250 |
The SQL INSERT statement is mapped to the N1QL INSERT statement as shown below. This works the same for both top-level fields as well as fields produced by Vertical Flattening:
| SQL Query | N1QL Query |
| INSERT INTO users([Document.Id], age, status) VALUES ('bcd001', 45, 'A') | INSERT INTO `users`(KEY, VALUE) VALUES ('bcd001', { "age" : 45, "status" : "A" }) |
| INSERT INTO users([Document.Id], [metrics.posts]) VALUES ('bcd002', 0) | INSERT INTO `users`(KEY, VALUE) VALUES ('bcd002', {"metrics": {"posts": 0}}) |
Inserts on child tables are converted internally into N1QL UPDATEs using array operations. Since this does not create the top-level document, the Document.Id provided must refer to a document that already exists.
Another limitation of child table inserts is that multi-valued inserts must all use the same Document.Id. The provider will verify this before modifying any data and raise an error if this constraint is violated.
| SQL Query | N1QL Query |
| INSERT INTO users_ratings([Document.Id], value) VALUES ('bcd001', 4.8), ('bcd001', 3.2) | UPDATE `users` USE KEYS "bcd001" SET `ratings` = ARRAY_PUT(`ratings`, 4.8, 3.2) |
| INSERT INTO users_reviews([Document.Id], score) VALUES ('bcd002', 'Great'), ('bcd002', 'Lacking') | UPDATE `users` USE KEYS "bcd002" SET `reviews` = ARRAY_PUT(`reviews`, {"score": "Great"}, {"score": "Lacking"}) |
Bulk inserts are also supported. A SQL bulk insert is converted as shown below:
INSERT INTO users#Temp([Document.Id], age, status) VALUES ('bcd001', 45, 'A')
INSERT INTO users#Temp([Document.Id], age, status) VALUES ('bcd002', 24, 'B')
INSERT INTO users SELECT * FROM users#Temp
is converted to:
INSERT INTO `users` (KEY, VALUE) VALUES
('bcd001', {"age": 45, "status": "A"}),
('bcd002', {"age": 24, "status": "B"})
Like multi-valued inserts, all the rows in a bulk insert on a child table must have the same Document.Id.
The SQL UPDATE statement is mapped to the N1QL UPDATE statement as shown below:
| SQL Query | N1QL Query |
| UPDATE users SET status = 'C' WHERE [Document.Id] = 'bcd001' | UPDATE `users` USE KEYS ["bcd001"] SET `status` = "C" |
| UPDATE users SET status = 'C' WHERE age > 45 | UPDATE `users` SET `status` = "C" WHERE `age` > 45 |
When updating a child table, the SQL query is converted to an UPDATE query using either a "FOR" expression or an "ARRAY" expression:
| SQL Query | N1QL Query |
| UPDATE users_ratings SET value = 5.0 WHERE value > 5.0 | UPDATE `users` SET `ratings` = ARRAY CASE WHEN `value` > 5.0 THEN 5.0 ELSE `value` END FOR `value` IN `ratings` END |
| UPDATE users_reviews SET score = 'Unknown' WHERE score = '' | UPDATE `users` SET `$child`.`score` = 'Unknown' FOR `$child` IN `reviews` WHEN `$child`.`score` = "" END |
Updates to flavored tables automatically include the flavor condition:
| SQL Query | N1QL Query |
| UPDATE [users.subscriber] SET status = 'C' WHERE age > 45 | UPDATE `users` SET `status` = "C" WHERE `docType` = "subscriber" AND `age` > 45 |
The SQL DELETE statement is mapped to the N1QL DELETE statement as shown below:
| SQL Query | N1QL Query |
| DELETE FROM users WHERE [Document.Id] = 'bcd001' | DELETE FROM `users` USE KEYS ["bcd001"] |
| DELETE FROM users WHERE status = 'inactive' | DELETE FROM `users` WHERE `status` = "inactive" |
When deleting from a child table, the SQL query is converted to an UPDATE query using an "ARRAY" expression:
| SQL Query | N1QL Query |
| DELETE FROM users_ratings WHERE value < 0 | UPDATE `users` SET `ratings` = ARRAY `value` FOR `value` IN `ratings` WHEN NOT (`value` < 0) END |
| DELETE FROM users_reviews WHERE score = '' | UPDATE `users` SET `reviews` = ARRAY `$child` FOR `$child` IN `reviews` WHEN NOT (`$child`.`score` = "") END |
Deletes from flavored tables automatically include the flavor condition:
| SQL Query | N1QL Query |
| DELETE FROM [users.subscriber] WHERE status = 'inactive' | DELETE FROM `users` WHERE `docType` = "subscriber" AND `status` = "inactive" |
/* Primary key "1" */
{
"address" : {
"building" : "1007",
"coord" : [-73.856077, 40.848447],
"street" : "Morris Park Ave",
"zipcode" : "10462"
},
"borough" : "Bronx",
"cuisine" : "Bakery",
"grades" : [{
"date" : "2014-03-03T00:00:00Z",
"grade" : "A",
"score" : 2
}, {
"date" : "2013-09-11T00:00:00Z",
"grade" : "A",
"score" : 6
}, {
"date" : "2013-01-24T00:00:00Z",
"grade" : "A",
"score" : 10
}, {
"date" : "2011-11-23T00:00:00Z",
"grade" : "A",
"score" : 9
}, {
"date" : "2011-03-10T00:00:00Z",
"grade" : "B",
"score" : 14
}],
"name" : "Morris Park Bake Shop",
"restaurant_id" : "30075445"
}
SELECT [address.building], [address.street] FROM restaurants
would return this resultset:
| address.building | address.street |
| 1007 | Morris Park Ave |
SELECT [address.coord.0], [address.coord.1] FROM restaurants
would return this resultset:
| address.coord.0 | address.coord.1 |
| -73.856077 | 40.848447 |
Note that array flattening should only be used in cases where you know the number of array items in advance, such as with "address.coord" which will always contain two items. For arrays like "grades" which can contain arbitrary numbers of items, consider using the child tables described in Automatic Schema Discovery instead, since they will allow you to read all of the values within the array.
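For example, the following illustrative query reads every entry of the "grades" array through its child table, regardless of how many grades each document contains (the table name follows the parent_child naming convention described above):
SELECT [Document.Id], grade, score FROM restaurants_grades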
User-defined functions are a feature provided by Couchbase 7 and up. They can be used with the Cloud like normal functions, but with a special naming convention for scoped functions. The Cloud requires that functions already exist before they are used; to define them, refer to the Couchbase documentation on CREATE FUNCTION queries. These may be run at the Couchbase console or with the Cloud in QueryPassthrough mode.
Couchbase supports both scalar functions and functions that return results from subqueries. The Cloud supports scalar functions within its SQL dialect, but subquery functions can only be used when QueryPassthrough is enabled. The rest of this section covers the Cloud's SQL dialect and assumes that QueryPassthrough is disabled.
In both N1QL and Analytics mode, global user-defined functions can be accessed using either their simple names or their qualified names.
The simple name is just the name of the function:
SELECT ageInYears(birthdate) FROM users
Global functions may also be invoked by qualifying them with the default namespace.
Qualified names are quoted names that contain internal separators. By default the separator is a period, though this can be changed using the DataverseSeparator property.
In both N1QL and Analytics the global namespace is called Default:
SELECT [Default.ageInYears](birthdate) FROM users
Calling global functions using simple names is recommended. While the Default qualifier is supported, its only intended use is for when a UDF clashes with a standard SQL function that the Cloud would otherwise translate.
Both N1QL and Analytics also allow functions to be defined outside of a global context.
In Analytics functions can be attached to both dataverses and scopes which are called using two-part and three-part names respectively.
In N1QL functions may only be attached to scopes so only three-part names may be used.
/* N1QL and Analytics */
SELECT [socialNetwork.accounts.ageInYears](birthdate) FROM [socialNetwork.accounts.users]
/* Analytics only */
SELECT [socialNetwork.ageInYears](birthdate) FROM [socialNetwork.accounts.users]
The Cloud can return JSON structures as column values, and enables you to use standard SQL functions to work with those JSON structures. The examples in this section use the following array:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
| Column Name | Example Value |
| Grade | A |
| Score | 2 |
SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
| Column Name | Example Value |
| NumberOfGrades | 5 |
SELECT Name, JSON_SUM(score,'[x].score') AS TotalScore FROM Students;
| Column Name | Example Value |
| TotalScore | 41 |
SELECT Name, JSON_MIN(score,'[x].score') AS LowestScore FROM Students;
| Column Name | Example Value |
| LowestScore | 2 |
SELECT Name, JSON_MAX(score,'[x].score') AS HighestScore FROM Students;
| Column Name | Example Value |
| HighestScore | 14 |
SELECT [Document.Id], grade, score, DOCUMENT(*) FROM grades
For example, that query would return:
| Document.Id | grade | score | DOCUMENT |
| 1 | A | 6 | {"document.id":1,"grade":"A","score":6} |
| 2 | A | 10 | {"document.id":2,"grade":"A","score":10} |
| 3 | A | 9 | {"document.id":3,"grade":"A","score":9} |
| 4 | B | 14 | {"document.id":4,"grade":"B","score":14} |
SELECT DOCUMENT(*) FROM grades
This query would return:
| DOCUMENT |
| {"grades":{"grade":"A","score":6}} |
| {"grades":{"grade":"A","score":10}} |
| {"grades":{"grade":"A","score":9}} |
| {"grades":{"grade":"B","score":14}} |
In addition to Automatic Schema Discovery the Cloud also allows you to statically define the schema for your Couchbase object. Schemas are defined in text-based configuration files, which makes them easy to extend. You can call the CreateSchema stored procedure to generate a schema file; see Automatic Schema Discovery for more information.
Set the Location property to the file directory that will contain the schema file. The following sections show how to extend the resulting schema or write your own.
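For example (the directory shown is a placeholder):
Location=C:\couchbase\schemas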
Let's consider the document below and extract the nested properties as their own columns:
/* Primary key "1" */
{
"id": 12,
"name": "Lohia Manufacturers Inc.",
"homeaddress": {"street": "Main Street", "city": "Chapel Hill", "state": "NC"},
"workaddress": {"street": "10th Street", "city": "Chapel Hill", "state": "NC"},
"offices": ["Chapel Hill", "London", "New York"],
"annual_revenue": 35600000
}
/* Primary key "2" */
{
"id": 15,
"name": "Piago Industries",
"homeaddress": {"street": "Main Street", "city": "San Francisco", "state": "CA"},
"workaddress": {"street": "10th Street", "city": "San Francisco", "state": "CA"},
"offices": ["Durham", "San Francisco"],
"annual_revenue": 42600000
}
<rsb:info title="Customers" description="Customers" other:dataverse="" other:bucket=customers"" other:flavorexpr="" other:flavorvalue="" other:isarray="false" other:pathspec="" other:childpath="">
<attr name="document.id" xs:type="string" key="true" other:iskey="true" other:pathspec="" />
<attr name="annual_revenue" xs:type="integer" other:iskey="false" other:pathspec="" other:field="annual_revenue" />
<attr name="homeaddress.city" xs:type="string" other:iskey="false" other:pathspec="{" other:field="homeaddress.city" />
<attr name="homeaddress.state" xs:type="string" other:iskey="false" other:pathspec="{" other:field="homeaddress.state" />
<attr name="homeaddress.street" xs:type="string" other:iskey="false" other:pathspec="{" other:field="homeaddress.street" />
<attr name="name" xs:type="string" other:iskey="false" other:pathspec="" other:field="name" />
<attr name="id" xs:type="integer" other:iskey="false" other:pathspec="" other:field="id" />
<attr name="offices" xs:type="string" other:iskey="false" other:pathspec="" other:field="offices" />
<attr name="offices.0" xs:type="string" other:iskey="false" other:pathspec="[" other:field="offices.0" />
<attr name="offices.1" xs:type="string" other:iskey="false" other:pathspec="[" other:field="offices.1" />
<attr name="workaddress.city" xs:type="string" other:iskey="false" other:pathspec="{" other:field="workaddress.city" />
<attr name="workaddress.state" xs:type="string" other:iskey="false" other:pathspec="{" other:field="workaddress.state" />
<attr name="workaddress.street" xs:type="string" other:iskey="false" other:pathspec="{" other:field="workaddress.street" />
</rsb:info>
In Custom Schema Example, you will find the complete schema that contains the example above.
| Property | Meaning |
| other:dataverse | The name of the dataverse the dataset belongs to. Empty if not an Analytics view. |
| other:bucket | The name of the bucket or dataset within Couchbase |
| other:flavorexpr | The URL encoded condition in a flavored table. For example, "%60docType%60%20%3D%20%22chess%22". |
| other:flavorvalue | The name of the flavor in a flavored table. For example, "chess". |
| other:isarray | Whether the table is an array child table. |
| other:pathspec | This is used to interpret the separators within other:childpath. See Column Properties for more details. |
| other:childpath | The path to the attribute that is used to UNNEST the child table. Empty if not a child table. |
| Property | Meaning |
| name | Required. The name of the column, lower-cased. |
| key | Used to mark the primary key. Required for Document.Id but optional for other columns. |
| xs:type | Required. The type of the column within the Cloud. |
| other:iskey | Required. Must be the same value as key, or "false" if key is not included. |
| other:pathspec | Required. This is used to interpret the separators within other:field. |
| other:field | Required. The path to the field in Couchbase. |
For example, in the following document the path "numeric_object.0" refers to an object field named "0", while "array.0" refers to the first element of an array:
{
"numeric_object": {
"0": 0
},
"array": [
0
]
}
To ensure that the Cloud can distinguish between field and array accesses, the pathspec is used to determine whether each "." in the field is an array or an object. Each "{" represents a field access, while each "[" represents an array access.
For example, with a field of "a.0.b.1" and a "pathspec" of "[{[", the N1QL expression "a[0].b[1]" would be generated. If instead the "pathspec" were "{{{", then the N1QL expression "a.`0`.b.`1`" would be generated.
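As a sketch, a column over the "a.0.b.1" path from the previous paragraph would be declared like this in a schema file (the column and field names are illustrative only):
<attr name="a.0.b.1" xs:type="string" other:iskey="false" other:pathspec="[{[" other:field="a.0.b.1" />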
This section contains a complete schema. Set the Location property to the file directory that will contain the schema file. The info section enables a relational view of a Couchbase object. For more details, see Custom Schema Definitions. The table below allows the SELECT, INSERT, UPDATE, and DELETE commands as implemented in the GET, POST, MERGE, and DELETE sections of the schema below. The operations, such as couchbaseadoSysData, are internal implementations.
<rsb:script xmlns:rsb="http://www.rssbus.com/ns/rsbscript/2">
<rsb:info title="Customers" description="Customers" other:dataverse="" other:bucket=customers"" other:flavorexpr="" other:flavorvalue="" other:isarray="false" other:pathspec="" other:childpath="">
<attr name="document.id" xs:type="string" key="true" other:iskey="true" other:pathspec="" />
<attr name="annual_revenue" xs:type="integer" other:iskey="false" other:pathspec="" other:field="annual_revenue" />
<attr name="homeaddress.city" xs:type="string" other:iskey="false" other:pathspec="{" other:field="homeaddress.city" />
<attr name="homeaddress.state" xs:type="string" other:iskey="false" other:pathspec="{" other:field="homeaddress.state" />
<attr name="homeaddress.street" xs:type="string" other:iskey="false" other:pathspec="{" other:field="homeaddress.street" />
<attr name="name" xs:type="string" other:iskey="false" other:pathspec="" other:field="name" />
<attr name="id" xs:type="integer" other:iskey="false" other:pathspec="" other:field="id" />
<attr name="offices" xs:type="string" other:iskey="false" other:pathspec="" other:field="offices" />
<attr name="offices.0" xs:type="string" other:iskey="false" other:pathspec="[" other:field="offices.0" />
<attr name="offices.1" xs:type="string" other:iskey="false" other:pathspec="[" other:field="offices.1" />
<attr name="workaddress.city" xs:type="string" other:iskey="false" other:pathspec="{" other:field="workaddress.city" />
<attr name="workaddress.state" xs:type="string" other:iskey="false" other:pathspec="{" other:field="workaddress.state" />
<attr name="workaddress.street" xs:type="string" other:iskey="false" other:pathspec="{" other:field="workaddress.street" />
</rsb:info>
</rsb:script>
| Date | Build Number | Change Type | Description |
| 02/21/2023 | 8452 | Couchbase | Added |
| 12/14/2022 | 8383 | General | Changed |
| 09/30/2022 | 8308 | General | Changed |
| 08/17/2022 | 8264 | General | Changed |
| 01/10/2022 | 8045 | Couchbase | Added |
| 12/20/2021 | 8024 | Couchbase | Added |
| 10/26/2021 | 7969 | Couchbase | Added |
| 09/13/2021 | 7926 | Couchbase | Added |
| 09/02/2021 | 7915 | General | Added |
| 08/07/2021 | 7889 | General | Changed |
| 08/06/2021 | 7888 | General | Changed |
| 07/23/2021 | 7874 | General | Changed |
| 07/14/2021 | 7865 | Couchbase | Added |
| 07/08/2021 | 7859 | General | Added |
| 05/12/2021 | 7802 | Couchbase | Added |
| 04/23/2021 | 7785 | General | Added |
| 04/23/2021 | 7783 | General | Changed |
| 04/16/2021 | 7776 | General | Added, Changed |
| 04/15/2021 | 7775 | General | Changed |
| 04/12/2021 | 7772 | Couchbase | Added |
| 03/12/2021 | 7741 | Couchbase | Changed |
| 01/19/2021 | 7689 | Couchbase | Added |
This section details a selection of advanced features of the Couchbase Cloud.
The Cloud allows you to define virtual tables, called user defined views, whose contents are decided by a pre-configured query. These views are useful when you cannot directly control queries being issued to the drivers. See User Defined Views for an overview of creating and configuring custom views.
Use SSL Configuration to adjust how the Cloud handles TLS/SSL certificate negotiations. You can choose from various certificate formats; see the SSLServerCert property under "Connection String Options" for more information.
Configure the Cloud for compliance with Firewall and Proxy, including Windows proxies and HTTP proxies. You can also set up tunnel connections.
The Cloud offloads as much of the SELECT statement processing as possible to Couchbase and then processes the rest of the query in memory (client-side).
See Query Processing for more information.
See Logging for an overview of configuration settings that can be used to refine CData logging. For basic logging, you only need to set two connection properties, but there are numerous features that support more refined logging, where you can select subsets of information to be logged using the LogModules connection property.
The CData Cloud allows you to define a virtual table whose contents are decided by a pre-configured query. These are called User Defined Views, which are useful in situations where you cannot directly control the query being issued to the driver, e.g. when using the driver from a tool. The User Defined Views can be used to define predicates that are always applied. If you specify additional predicates in the query to the view, they are combined with the query already defined as part of the view.
There are two ways to create user defined views:
You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the Cloud.
This User Defined View configuration file is formatted as follows:
For example:
{
"MyView": {
"query": "SELECT * FROM Customer WHERE MyColumn = 'value'"
},
"MyView2": {
"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
}
}
Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:
"UserDefinedViews", "C:\\Users\\yourusername\\Desktop\\tmp\\UserDefinedViews.json"
For example, given a view named UserViews.RCustomers defined by this query:
SELECT * FROM Customers WHERE City = 'Raleigh';
An example of a query to the driver:
SELECT * FROM UserViews.RCustomers WHERE Status = 'Active';
Resulting in the effective query to the source:
SELECT * FROM Customers WHERE City = 'Raleigh' AND Status = 'Active';
That is a very simple example of a query to a User Defined View that is effectively a combination of the view query and the view definition. It is possible to compose these queries in much more complex patterns. All SQL operations are allowed in both queries and are combined when appropriate.
By default, the Cloud attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.
To specify another certificate, see the SSLServerCert property for the available formats to do so.
The Couchbase Cloud also supports setting client certificates. Set the following to connect using a client certificate.
To connect through the Windows system proxy, you do not need to set any additional connection properties. To connect to other proxies, set ProxyAutoDetect to false.
To authenticate to an HTTP proxy, set ProxyAuthScheme, ProxyUser, and ProxyPassword, in addition to ProxyServer and ProxyPort.
Set the following properties:
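As an illustrative sketch using the properties named above (the server, port, and credentials are placeholders):
ProxyAutoDetect=false;ProxyServer=proxy.example.com;ProxyPort=8080;ProxyAuthScheme=BASIC;ProxyUser=proxyuser;ProxyPassword=proxypassword;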
For sources that do not support SQL-92, the Cloud offloads as much of the SQL statement processing as possible to Couchbase and then processes the rest of the query in memory (client-side). This results in optimal performance.
For data sources with limited query capabilities, the Cloud handles transformations of the SQL query to make it simpler for the Cloud. The goal is to make smart decisions based on the query capabilities of the data source to push down as much of the computation as possible. The Couchbase Query Evaluation component examines SQL queries and returns information indicating what parts of the query the Cloud is not capable of executing natively.
The Couchbase Query Slicer component is used in more specific cases to separate a single query into multiple independent queries. The client-side Query Engine makes decisions about simplifying queries, breaking queries into multiple queries, and pushing down or computing aggregations on the client-side while minimizing the size of the result set.
There's a significant trade-off in evaluating queries, even partially, client-side. There are always queries that are impossible to execute efficiently in this model, and some can be particularly expensive to compute in this manner. CData always pushes down as much of the query as is feasible for the data source to generate the most efficient query possible and provide the most flexible query capabilities.
Capturing Cloud logging can be very helpful when diagnosing error messages or other unexpected behavior.
You need to set only two connection properties, LogFile and Verbosity, to begin capturing Cloud logging.
Once these properties are set, the Cloud populates the log file as it carries out various tasks, such as when authentication is performed or queries are executed. If the specified file doesn't already exist, it is created.
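For example (the log path is a placeholder):
LogFile=C:\logs\couchbase.log;Verbosity=3;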
The Verbosity level determines the amount of detail that the Cloud reports to the LogFile. Verbosity levels from 1 to 5 are supported. These are described in the following list:
| 1 | Setting Verbosity to 1 will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors. |
| 2 | Setting Verbosity to 2 will log everything included in Verbosity 1 and additional information about the request. |
| 3 | Setting Verbosity to 3 will additionally log HTTP headers, as well as the body of the request and the response. |
| 4 | Setting Verbosity to 4 will additionally log transport-level communication with the data source. This includes SSL negotiation. |
| 5 | Setting Verbosity to 5 will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands. |
The Verbosity should not be set to greater than 1 for normal operation. Substantial amounts of data can be logged at higher verbosities, which can delay execution times.
To refine the logged content further by showing/hiding specific categories of information, see LogModules.
Best Practices for Data Security
Although we mask sensitive values, such as passwords, in the connection string and any request in the log, it is always best practice to review the logs for any sensitive information before sharing outside your organization.
You may want to refine the exact information that is recorded to the log file. This can be accomplished using the LogModules property.
This property allows you to filter the logging using a semicolon-separated list of logging modules.
All modules are four characters long. Please note that modules containing three letters have a required trailing blank space. The available modules are:
LogModules=INFO;EXEC;SSL ;SQL ;META;
Note that these modules refine the information as it is pulled after taking the Verbosity into account.
The CData Cloud supports several operations on data, including querying, deleting, modifying, and inserting.
See SELECT Statements for a syntax reference and examples.
See Data Model for information on the capabilities of the Couchbase API.
See INSERT Statements for a syntax reference and examples, as well as retrieving the new records' Ids.
The primary key Id is required to update a record. See UPDATE Statements for a syntax reference and examples.
An UPSERT updates a record if it exists and inserts the record if it does not. See UPSERT Statements for a syntax reference and examples.
The primary key Id is required to delete a record. See DELETE Statements for a syntax reference and examples.
Use EXECUTE or EXEC statements to execute stored procedures. See EXECUTE Statements for a syntax reference and examples.
Transactions are not currently supported.
Additionally, the Cloud does not support batching of SQL statements. To execute multiple commands, you can create multiple instances and execute each separately.
A SELECT statement can consist of the following basic clauses.
The following syntax diagram outlines the syntax supported by the SQL engine of the Cloud:
SELECT {
[ TOP <numeric_literal> | DISTINCT ]
{
*
| {
<expression> [ [ AS ] <column_reference> ]
| { <table_name> | <correlation_name> } .*
} [ , ... ]
}
[ INTO csv:// [ filename= ] <file_path> [ ;delimiter=tab ] ]
{
FROM <table_reference> [ [ AS ] <identifier> ]
} [ , ... ]
[ [
INNER | { { LEFT | RIGHT | FULL } [ OUTER ] }
] JOIN <table_reference> [ ON <search_condition> ] [ [ AS ] <identifier> ]
] [ ... ]
[ WHERE <search_condition> ]
[ GROUP BY <column_reference> [ , ... ]
[ HAVING <search_condition> ]
[ UNION [ ALL ] <select_statement> ]
[
ORDER BY
<column_reference> [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ]
]
[
LIMIT <expression>
[
{ OFFSET | , }
<expression>
]
]
} | SCOPE_IDENTITY()
<expression> ::=
| <column_reference>
| @ <parameter>
| ?
| COUNT( * | { [ DISTINCT ] <expression> } )
| { AVG | MAX | MIN | SUM | COUNT } ( <expression> )
| NULLIF ( <expression> , <expression> )
| COALESCE ( <expression> , ... )
| CASE <expression>
WHEN { <expression> | <search_condition> } THEN { <expression> | NULL } [ ... ]
[ ELSE { <expression> | NULL } ]
END
| <literal>
| <sql_function>
<search_condition> ::=
{
<expression> { = | > | < | >= | <= | <> | != | LIKE | NOT LIKE | IN | NOT IN | IS NULL | IS NOT NULL | AND | OR | CONTAINS | BETWEEN } [ <expression> ]
} [ { AND | OR } ... ]
SELECT * FROM Customer
SELECT [TotalDue] AS MY_TotalDue FROM Customer
SELECT CAST(AnnualRevenue AS VARCHAR) AS Str_AnnualRevenue FROM Customer
SELECT * FROM Customer WHERE CustomerId = '12345'
SELECT * FROM Customer WHERE CustomerId = '12345';
SELECT COUNT(*) AS MyCount FROM Customer
SELECT COUNT(DISTINCT TotalDue) FROM Customer
SELECT DISTINCT TotalDue FROM Customer
SELECT TotalDue, MAX(AnnualRevenue) FROM Customer GROUP BY TotalDue
See Aggregate Functions for details.
SELECT Customers.ContactName, Orders.OrderDate FROM Customers, Orders WHERE Customers.CustomerId=Orders.CustomerId
See JOIN Queries for details.
SELECT Name, TotalDue FROM Customer ORDER BY TotalDue ASC
SELECT Name, TotalDue FROM Customer LIMIT 10
SELECT * FROM Customer WHERE CustomerId = @param
Some input-only fields are available in SELECT statements. These fields, called pseudo columns, do not
appear as regular columns in the results, yet may be specified as part of the WHERE clause. You can use pseudo columns to access additional features from Couchbase.
SELECT * FROM Customer WHERE PSEUDO = '@PSEUDO'
Returns the number of rows matching the query criteria.
SELECT COUNT(*) FROM Customer WHERE CustomerId = '12345'
Returns the number of distinct, non-null field values matching the query criteria.
SELECT COUNT(DISTINCT Name) AS DistinctValues FROM Customer WHERE CustomerId = '12345'
Returns the average of the column values.
SELECT TotalDue, AVG(AnnualRevenue) FROM Customer WHERE CustomerId = '12345' GROUP BY TotalDue
Returns the minimum column value.
SELECT MIN(AnnualRevenue), TotalDue FROM Customer WHERE CustomerId = '12345' GROUP BY TotalDue
Returns the maximum column value.
SELECT TotalDue, MAX(AnnualRevenue) FROM Customer WHERE CustomerId = '12345' GROUP BY TotalDue
Returns the total sum of the column values.
SELECT SUM(AnnualRevenue) FROM Customer WHERE CustomerId = '12345'
The CData Cloud supports standard SQL joins like the following examples.
An inner join selects only rows from both tables that match the join condition:
SELECT Customers.ContactName, Orders.OrderDate FROM Customers, Orders WHERE Customers.CustomerId=Orders.CustomerId
A left join selects all rows in the FROM table and only matching rows in the JOIN table:
SELECT Customers.ContactName, Orders.OrderDate FROM Customers LEFT OUTER JOIN Orders ON Customers.CustomerId=Orders.CustomerId
The following date literal functions can be used to filter date fields using relative intervals. Note that while the <, >, and = operators are supported for these functions, <= and >= are not.
The current day.
SELECT * FROM MyTable WHERE MyDateField = L_TODAY()
The previous day.
SELECT * FROM MyTable WHERE MyDateField = L_YESTERDAY()
The following day.
SELECT * FROM MyTable WHERE MyDateField = L_TOMORROW()
Every day in the preceding week.
SELECT * FROM MyTable WHERE MyDateField = L_LAST_WEEK()
Every day in the current week.
SELECT * FROM MyTable WHERE MyDateField = L_THIS_WEEK()
Every day in the following week.
SELECT * FROM MyTable WHERE MyDateField = L_NEXT_WEEK()
Also available:
The previous n days, excluding the current day.
SELECT * FROM MyTable WHERE MyDateField = L_LAST_N_DAYS(3)
The following n days, including the current day.
SELECT * FROM MyTable WHERE MyDateField = L_NEXT_N_DAYS(3)
Also available:
Every day in every week, starting n weeks before current week, and ending in the previous week.
SELECT * FROM MyTable WHERE MyDateField = L_LAST_N_WEEKS(3)
Every day in every week, starting the following week, and ending n weeks in the future.
SELECT * FROM MyTable WHERE MyDateField = L_NEXT_N_WEEKS(3)
Returns array of the non-MISSING values in the group, including NULL values.
Returns new array with value appended.
Returns new array with the concatenation of the input arrays.
Returns new array with distinct elements of input array.
Returns the first non-NULL value in the array, or NULL.
Returns new array with value pre-pended.
Returns new array with value appended, if value is not already present, otherwise returns the unmodified input array.
Returns new array with all occurrences of value removed.
Returns new array with all elements in reverse order.
Returns new array with elements sorted in N1QL collation order.
Unmarshals the JSON-encoded string into a N1QL value. The empty string is MISSING.
Marshals the N1QL value into a JSON-encoded string. MISSING becomes the empty string.
Number of bytes in an uncompressed JSON encoding of the value. The exact size is implementation-dependent. Always returns an integer, and never MISSING or NULL. Returns 0 for MISSING.
Returns length of the value after evaluating the expression. The exact meaning of length depends on the type of the value: MISSING: MISSING; NULL: NULL; String: The length of the string.; Array: The number of elements in the array.; Object: The number of name/value pairs in the object; Any other value: NULL.
Returns number of name-value pairs in the object.
Returns array containing the attribute names of the object, in N1QL collation order.
Returns array containing the attribute name and value pairs of the object, in N1QL collation order of the names.
Returns array containing the attribute values of the object, in N1QL collation order of the corresponding names.
Returns arithmetic mean (average) of all the non-NULL number values in the array, or NULL if there are no such values.
Returns true if the array contains value.
Returns count of all the non-NULL values in the array, or zero if there are no such values.
Returns the number of elements in the array.
Returns the largest non-NULL, non-MISSING array element, in N1QL collation order.
Returns smallest non-NULL, non-MISSING array element, in N1QL collation order.
Returns the first position of value within the array, or -1. Array position is zero-based, i.e. the first position is 0.
Sum of all the non-NULL number values in the array, or zero if there are no such values.
Largest non-NULL, non-MISSING value if the values are of the same type; otherwise NULL.
Returns smallest non-NULL, non-MISSING value if the values are of the same type, otherwise returns NULL.
Returns the first non-MISSING value.
Returns first non-NULL, non-MISSING value.
Returns first non-NULL value. Note that this function might return MISSING if there is no non-NULL value.
Returns MISSING if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns NULL if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns first non-MISSING, non-Inf number. Returns MISSING or NULL if a non-number input is encountered first.
Returns first non-MISSING, non-NaN number. Returns MISSING or NULL if a non-number input is encountered first.
Returns first non-MISSING, non-Inf, or non-NaN number. Returns MISSING or NULL if a non-number input is encountered first.
Returns NaN if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns NegInf if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns PosInf if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns system clock at function evaluation time, as UNIX milliseconds. Varies during a query.
Returns system clock at function evaluation time, as a string in a supported format. Varies during a query.
Performs date arithmetic, and returns result of computation. n and part are used to define an interval or duration, which is then added (or subtracted) to the UNIX time stamp, returning the result.
Performs date arithmetic. n and part are used to define an interval or duration, which is then added (or subtracted) to the date string in a supported format, returning the result.
Performs date arithmetic. Returns the elapsed time between two UNIX time stamps as an integer whose unit is part.
Performs date arithmetic. Returns the elapsed time between two date strings in a supported format, as an integer whose unit is part.
Returns date part as an integer. The date expression is a number representing UNIX milliseconds, and part is one of the following date part strings.
Returns date part as an integer. The date expression is a string in a supported format, and part is one of the supported date part strings.
Returns UNIX time stamp that has been truncated so that the given date part string is the least significant.
Returns ISO 8601 time stamp that has been truncated so that the given date part string is the least significant.
Returns date that has been converted in a supported format to UNIX milliseconds.
Returns date that has been converted in a supported format to UNIX milliseconds.
Returns the string in the supported format to which the UNIX milliseconds has been converted.
Returns the UTC string to which the UNIX time stamp has been converted in the supported format.
Converts the UNIX time stamp to a string in the named time zone, and returns the string.
Returns statement time stamp as UNIX milliseconds; does not vary during a query.
Returns statement time stamp as a string in a supported format; does not vary during a query.
Converts the ISO 8601 time stamp to UTC.
Converts the supported time stamp string to the named time zone.
Returns base64 encoding of expression.
Returns absolute value of the number.
Returns arccosine in radians.
Returns arcsine in radians.
Returns arctangent in radians.
Returns arctangent of expression2/expression1.
Returns smallest integer not less than the number.
Returns cosine.
Returns radians to degrees.
Base of natural logarithms.
Returns e^expression.
Returns log base e.
Returns log base 10.
Largest integer not greater than the number.
Returns PI.
Returns expression1^expression2.
Returns degrees to radians.
Returns pseudo-random number with optional seed.
Rounds the value to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits is 0 if not given.
Valid values: -1, 0, or 1 for negative, zero, or positive numbers respectively.
Returns sine.
Returns square root.
Returns tangent.
Truncates the number to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits is 0 if not given.
True if the string contains the substring.
Converts the string so that the first letter of each word is uppercase and every other letter is lowercase.
Returns length of the string value.
Returns lowercase of the string value.
Returns string with all leading chars removed. White space by default.
Returns the first position of the substring within the string, or -1. The position is zero-based, i.e., the first position is 0.
Returns string formed by repeating expression n times.
Returns string with all occurrences of substr replaced with repl. If n is given, at most n replacements are performed.
Returns string with all trailing chars removed. White space by default.
Splits the string into an array of substrings separated by string_sep. If string_sep is not given, any combination of white space characters is used.
Returns substring from the integer position of the given length, or to the end of the string. The position is zero-based, i.e. the first position is 0. If position is negative, it is counted from the end of the string; -1 is the last position in the string.
Returns string with all leading and trailing chars removed. White space by default.
Returns uppercase of the string value.
Returns array as follows: MISSING is MISSING; NULL is NULL; Arrays are themselves; All other values are wrapped in an array.
Returns array as follows: MISSING is MISSING; NULL is NULL; Arrays of length 1 are the result of TOATOM() on their single element; Objects of length 1 are the result of TOATOM() on their single value; Booleans, numbers, and strings are themselves; All other values are NULL.
Returns array as follows: MISSING is MISSING; NULL is NULL; False is false; Numbers +0, -0, and NaN are false; Empty strings, arrays, and objects are false; All other values are true.
Returns array as follows: MISSING is MISSING; NULL is NULL; False is 0; True is 1; Numbers are themselves; Strings that parse as numbers are those numbers; All other values are NULL.
Returns array as follows: MISSING is MISSING; NULL is NULL; Objects are themselves; All other values are the empty object.
Returns array as follows: MISSING is MISSING; NULL is NULL; False is "false"; True is "true"; Numbers are their string representation; Strings are themselves; All other values are NULL.
Returns True if the string value contains the regular expression pattern.
Returns True if the string value matches the regular expression pattern.
Returns first position of the regular expression pattern within the string, or -1.
Returns new string with occurrences of pattern replaced with string_replace. If n is given, at most n replacements are performed.
Returns True if expression is an array, otherwise returns MISSING, NULL or false.
Returns True if expression is a Boolean, number, or string, otherwise returns MISSING, NULL or false.
Returns True if expression is a Boolean, otherwise returns MISSING, NULL or false.
Returns True if expression is a number, otherwise returns MISSING, NULL or false.
Returns True if expression is an object, otherwise returns MISSING, NULL or false.
Returns True if expression is a string, otherwise returns MISSING, NULL or false.
Returns one of the following strings, based on the value of expression: missing, null, boolean, number, string, array, object, or binary.
Returns arithmetic mean (average) of all the non-NULL number values in the array, or NULL if there are no such values.
Returns true if the array contains value.
Returns count of all the non-NULL values in the array, or zero if there are no such values.
Returns the number of elements in the array.
Returns the largest non-NULL, non-MISSING array element, in N1QL collation order.
Returns smallest non-NULL, non-MISSING array element, in N1QL collation order.
Returns the first position of value within the array, or -1. Array position is zero-based, i.e. the first position is 0.
Sum of all the non-NULL number values in the array, or zero if there are no such values.
Largest non-NULL, non-MISSING value if the values are of the same type; otherwise NULL.
Returns smallest non-NULL, non-MISSING value if the values are of the same type, otherwise returns NULL.
Returns the first non-MISSING value.
Returns first non-NULL, non-MISSING value.
Returns first non-NULL value. Note that this function might return MISSING if there is no non-NULL value.
Returns MISSING if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns NULL if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns the first non-MISSING, non-Inf number. Returns MISSING or NULL if a non-number input is encountered first.
Returns the first non-MISSING, non-NaN number. Returns MISSING or NULL if a non-number input is encountered first.
Returns the first number that is neither MISSING, Inf, nor NaN. Returns MISSING or NULL if a non-number input is encountered first.
Returns NaN if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns NegInf if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns PosInf if column1 = column2, otherwise returns column1. Returns MISSING or NULL if either input is MISSING or NULL.
Returns system clock at function evaluation time, as UNIX milliseconds. Varies during a query.
Returns system clock at function evaluation time, as a string in a supported format. Varies during a query.
Performs date arithmetic and returns the result of the computation. n and part define an interval or duration, which is then added to (or subtracted from) the UNIX time stamp, returning the result.
Performs date arithmetic. n and part define an interval or duration, which is then added to (or subtracted from) the date string in a supported format, returning the result.
Performs date arithmetic. Returns the elapsed time between two UNIX time stamps as an integer whose unit is part.
Performs date arithmetic. Returns the elapsed time between two date strings in a supported format, as an integer whose unit is part.
Returns date part as an integer. The date expression is a number representing UNIX milliseconds, and part is one of the supported date part strings.
Returns date part as an integer. The date expression is a string in a supported format, and part is one of the supported date part strings.
Returns UNIX time stamp that has been truncated so that the given date part string is the least significant.
Returns ISO 8601 time stamp that has been truncated so that the given date part string is the least significant.
Returns date that has been converted in a supported format to UNIX milliseconds.
Returns date that has been converted in a supported format to UNIX milliseconds.
Converts the UNIX milliseconds to a string in the supported format, and returns the string.
Converts the UNIX time stamp to a UTC string in the supported format, and returns the string.
Converts the UNIX time stamp to a string in the named time zone, and returns the string.
Returns statement time stamp as UNIX milliseconds; does not vary during a query.
Returns statement time stamp as a string in a supported format; does not vary during a query.
Converts the ISO 8601 time stamp to UTC.
Converts the supported time stamp string to the named time zone.
Returns base64 encoding of expression.
Returns absolute value of the number.
Returns arccosine in radians.
Returns arcsine in radians.
Returns arctangent in radians.
Returns arctangent of expression2/expression1.
Returns smallest integer not less than the number.
Returns cosine.
Returns radians to degrees.
Returns the base of natural logarithms.
Returns e^expression.
Returns log base e.
Returns log base 10.
Returns the largest integer not greater than the number.
Returns PI.
Returns expression1^expression2.
Returns degrees to radians.
Returns pseudo-random number with optional seed.
Rounds the value to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits is 0 if not given.
Returns -1, 0, or 1 for negative, zero, or positive numbers respectively.
Returns sine.
Returns square root.
Returns tangent.
Truncates the number to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits is 0 if not given.
Returns True if the string contains the substring.
Converts the string so that the first letter of each word is uppercase and every other letter is lowercase.
Returns length of the string value.
Returns lowercase of the string value.
Returns string with all leading chars removed. White space by default.
Returns the first position of the substring within the string, or -1. The position is zero-based, i.e., the first position is 0.
Returns string formed by repeating expression n times.
Returns string with all occurrences of substr replaced with repl. If n is given, at most n replacements are performed.
Returns string with all trailing chars removed. White space by default.
Splits the string into an array of substrings separated by string_sep. If string_sep is not given, any combination of white space characters is used.
Returns substring from the integer position of the given length, or to the end of the string. The position is zero-based, i.e. the first position is 0. If position is negative, it is counted from the end of the string; -1 is the last position in the string.
Returns string with all leading and trailing chars removed. White space by default.
Returns uppercase of the string value.
Returns array as follows: MISSING is MISSING; NULL is NULL; Arrays are themselves; All other values are wrapped in an array.
Returns atomic value as follows: MISSING is MISSING; NULL is NULL; Arrays of length 1 are the result of TOATOM() on their single element; Objects of length 1 are the result of TOATOM() on their single value; Booleans, numbers, and strings are themselves; All other values are NULL.
Returns Boolean as follows: MISSING is MISSING; NULL is NULL; False is false; Numbers +0, -0, and NaN are false; Empty strings, arrays, and objects are false; All other values are true.
Returns number as follows: MISSING is MISSING; NULL is NULL; False is 0; True is 1; Numbers are themselves; Strings that parse as numbers are those numbers; All other values are NULL.
Returns object as follows: MISSING is MISSING; NULL is NULL; Objects are themselves; All other values are the empty object.
Returns string as follows: MISSING is MISSING; NULL is NULL; False is "false"; True is "true"; Numbers are their string representation; Strings are themselves; All other values are NULL.
You can use the SELECT INTO statement to export formatted data to a file.
The following query exports data into a file formatted in comma-separated values (CSV):
SELECT Name, TotalDue INTO [csv://Customer.txt] FROM [Customer] WHERE CustomerId = '12345'
You can specify other formats in the file URI. The possible delimiters are tab, semicolon, and comma, with the default being a comma. The following example exports tab-separated values:
SELECT Name, TotalDue INTO [csv://Customer.txt;delimiter=tab] FROM [Customer] WHERE CustomerId = '12345'
The Cloud provides functions that are similar to those that are available with most standard databases. These functions are implemented in the CData provider engine and thus are available across all data sources with the same consistent API. Three categories of functions are available: string, date, and math.
The Cloud interprets all SQL function inputs as either strings or column identifiers, so you need to escape all literals as strings, with single quotes. For example, contrast the SQL Server syntax and Cloud syntax for the DATENAME function:
SELECT DATENAME(yy,GETDATE())
SELECT DATENAME('yy',GETDATE())
These functions perform string manipulations and return a string value. See STRING Functions for more details.
SELECT CONCAT(firstname, space(4), lastname) FROM Customer WHERE CustomerId = '12345'
These functions perform date and date time manipulations. See DATE Functions for more details.
SELECT CURRENT_TIMESTAMP() FROM Customer
These functions provide mathematical operations. See MATH Functions for more details.
SELECT RAND() FROM Customer
SELECT CONCAT('Mr.', SPACE(2), firstname, SPACE(4), lastname) FROM Customer
These functions can be used to specify criteria in the WHERE clause of your SQL query. See Predicate Functions for more details.
SELECT * FROM Customer WHERE CreatedDate = NOW()
Returns the ASCII code value of the left-most character of the character expression.
SELECT ASCII('0');
-- Result: 48
Converts the integer ASCII code to the corresponding character.
SELECT CHAR(48);
-- Result: '0'
Returns the starting position of the specified expression in the character string.
SELECT CHARINDEX('456', '0123456');
-- Result: 4
SELECT CHARINDEX('456', '0123456', 5);
-- Result: -1
Returns the number of UTF-8 characters present in the expression.
SELECT CHAR_LENGTH('sample text') FROM Account LIMIT 1
-- Result: 11
Returns the string that is the concatenation of two or more string values.
SELECT CONCAT('Hello, ', 'world!');
-- Result: 'Hello, world!'
Returns 1 if expressionToFind is found within expressionToSearch; otherwise, 0.
SELECT CONTAINS('0123456', '456');
-- Result: 1
SELECT CONTAINS('0123456', 'Not a number');
-- Result: 0
Returns 1 if character_expression ends with character_suffix; otherwise, 0.
SELECT ENDSWITH('0123456', '456');
-- Result: 1
SELECT ENDSWITH('0123456', '012');
-- Result: 0
Returns the number of bytes present in the file at the specified file path.
SELECT FILESIZE('C:/Users/User1/Desktop/myfile.txt');
-- Result: 23684
Returns the value formatted with the specified format.
SELECT FORMAT(12.34, '#');
-- Result: 12
SELECT FORMAT(12.34, '#.###');
-- Result: 12.34
SELECT FORMAT(1234, '0.000E0');
-- Result: 1.234E3
SELECT FORMAT('2019/01/01', 'yyyy-MM-dd');
-- Result: 2019-01-01
SELECT FORMAT('20190101', 'yyyyMMdd', 'yyyy-MM-dd');
-- Result: '2019-01-01'
Returns a representation of the unix_timestamp argument as a value in YYYY-MM-DD HH:MM:SS format, expressed in the current time zone. The second argument indicates whether the input is in seconds (1) or milliseconds (0).
SELECT FROM_UNIXTIME(1540495231, 1);
-- Result: 2018-10-25 19:20:31
SELECT FROM_UNIXTIME(1540495357385, 0);
-- Result: 2018-10-25 19:22:37
Returns the hash of the input value as a byte array using the given algorithm. The supported algorithms are MD5, SHA1, SHA2_256, SHA2_512, SHA3_224, SHA3_256, SHA3_384, and SHA3_512.
SELECT HASHBYTES('MD5', 'Test');
-- Result (byte array): 0x0CBC6611F5540BD0809A388DC95A615B
Returns the starting position of the specified expression in the character string.
SELECT INDEXOF('0123456', '456');
-- Result: 4
SELECT INDEXOF('0123456', '456', 5);
-- Result: -1
Replaces null with the specified replacement value.
SELECT ISNULL(42, 'Was NULL');
-- Result: 42
SELECT ISNULL(NULL, 'Was NULL');
-- Result: 'Was NULL'
Computes the average value of a JSON array within a JSON object. The path to the array is specified in the jsonpath argument. Return value is numeric or null.
SELECT JSON_AVG('[1,2,3,4,5]', '$[x]');
-- Result: 3
SELECT JSON_AVG('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[x]');
-- Result: 3
SELECT JSON_AVG('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[3..]');
-- Result: 4.5
Returns the number of elements in a JSON array within a JSON object. The path to the array is specified in the jsonpath argument. Return value is numeric or null.
SELECT JSON_COUNT('[1,2,3,4,5]', '$[x]');
-- Result: 5
SELECT JSON_COUNT('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[x]');
-- Result: 5
SELECT JSON_COUNT('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[3..]');
-- Result: 2
Selects any value in a JSON array or object. The path to the value is specified in the jsonpath argument. The return value may be any JSON type or null.
SELECT JSON_EXTRACT('{"test": {"data": 1}}', '$.test');
-- Result: '{"data":1}'
SELECT JSON_EXTRACT('{"test": {"data": 1}}', '$.test.data');
-- Result: 1
SELECT JSON_EXTRACT('{"test": {"data": [1, 2, 3]}}', '$.test.data[1]');
-- Result: 2
Gets the maximum value in a JSON array within a JSON object. The path to the array is specified in the jsonpath argument. Return value is numeric or null.
SELECT JSON_MAX('[1,2,3,4,5]', '$[x]');
-- Result: 5
SELECT JSON_MAX('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[x]');
-- Result: 5
SELECT JSON_MAX('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[..3]');
-- Result: 4
Gets the minimum value in a JSON array within a JSON object. The path to the array is specified in the jsonpath argument. Return value is numeric or null.
SELECT JSON_MIN('[1,2,3,4,5]', '$[x]');
-- Result: 1
SELECT JSON_MIN('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[x]');
-- Result: 1
SELECT JSON_MIN('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[3..]');
-- Result: 4
Computes the sum of a JSON array within a JSON object, according to the JSONPath expression. Return value is numeric or null.
SELECT JSON_SUM('[1,2,3,4,5]', '$[x]');
-- Result: 15
SELECT JSON_SUM('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[x]');
-- Result: 15
SELECT JSON_SUM('{"test": {"data": [1,2,3,4,5]}}', '$.test.data[3..]');
-- Result: 9
Returns the specified number of characters counting from the left of the specified string.
SELECT LEFT('1234567890', 3);
-- Result: '123'
Returns the number of characters of the specified string expression.
SELECT LEN('12345');
-- Result: 5
Returns an integer representing how many characters into the string the substring appears.
SELECT LOCATE('sample','XXXXXsampleXXXXX');
-- Result: 6
Returns the character expression with the uppercase character data converted to lowercase.
SELECT LOWER('MIXED case');
-- Result: 'mixed case'
Returns the character expression with leading blanks removed.
SELECT LTRIM(' trimmed');
-- Result: 'trimmed'
Replaces the characters between start_index and end_index with the mask_character within the string.
SELECT MASK('1234567890','*');
-- Result: '**********'
SELECT MASK('1234567890','*', 4);
-- Result: '1234******'
SELECT MASK('1234567890','*', 4, 2);
-- Result: '1234****90'
Returns the Unicode character with the specified integer code as defined by the Unicode standard.
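For example, assuming this entry describes the NCHAR function (the SQL Server spelling; the function name is not shown in this entry):
SELECT NCHAR(48);
-- Result: '0'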
Returns the number of bytes present in the expression.
SELECT OCTET_LENGTH('text') FROM Account LIMIT 1
-- Result: 4
Returns the starting position of the first occurrence of the pattern in the expression. Returns 0 if the pattern is not found.
SELECT PATINDEX('123%', '1234567890');
-- Result: 1
SELECT PATINDEX('%890', '1234567890');
-- Result: 8
SELECT PATINDEX('%456%', '1234567890');
-- Result: 4
Returns the starting position of the specified expression in the character string.
SELECT POSITION('456' IN '123456');
-- Result: 4
SELECT POSITION('x' IN '123456');
-- Result: 0
Returns a valid SQL Server-delimited identifier by adding the necessary delimiters to the specified Unicode string.
SELECT QUOTENAME('table_name');
-- Result: '[table_name]'
SELECT QUOTENAME('table_name', '"');
-- Result: '"table_name"'
SELECT QUOTENAME('table_name', '[');
-- Result: '[table_name]'
Replaces all occurrences of a string with another string.
SELECT REPLACE('1234567890', '456', '|');
-- Result: '123|7890'
SELECT REPLACE('123123123', '123', '.');
-- Result: '...'
SELECT REPLACE('1234567890', 'a', 'b');
-- Result: '1234567890'
Repeats the string value the specified number of times.
SELECT REPLICATE('x', 5);
-- Result: 'xxxxx'
Returns the reverse order of the string expression.
SELECT REVERSE('1234567890');
-- Result: '0987654321'
Returns the right part of the string with the specified number of characters.
SELECT RIGHT('1234567890', 3);
-- Result: '890'
Returns the character expression after it removes trailing blanks.
SELECT RTRIM('trimmed ');
-- Result: 'trimmed'
Returns the four-character Soundex code, based on how the string sounds when spoken.
SELECT SOUNDEX('smith');
-- Result: 'S530'
Returns the string that consists of repeated spaces.
SELECT SPACE(5);
-- Result: '     '
Returns a section of the string between two delimiters.
SELECT SPLIT('a/b/c/d', '/', 1);
-- Result: 'a'
SELECT SPLIT('a/b/c/d', '/', -2);
-- Result: 'c'
Returns 1 if character_expression starts with character_prefix; otherwise, 0.
SELECT STARTSWITH('0123456', '012');
-- Result: 1
SELECT STARTSWITH('0123456', '456');
-- Result: 0
Returns the character data converted from the numeric data. For example, STR(123.45, 6, 1) returns 123.5.
SELECT STR('123.456');
-- Result: '123'
SELECT STR('123.456', 2);
-- Result: '**'
SELECT STR('123.456', 10, 2);
-- Result: '123.46'
Inserts a string into another string. It deletes the specified length of characters in the first string at the start position and then inserts the second string into the first string at the start position.
SELECT STUFF('1234567890', 3, 2, 'xx');
-- Result: '12xx567890'
Returns the part of the string with the specified length; starts at the specified index.
SELECT SUBSTRING('1234567890' FROM 3 FOR 2);
-- Result: '34'
SELECT SUBSTRING('1234567890' FROM 3);
-- Result: '34567890'
Converts the value of this instance to its equivalent string representation.
SELECT TOSTRING(123);
-- Result: '123'
SELECT TOSTRING(123.456);
-- Result: '123.456'
SELECT TOSTRING(null);
-- Result: ''
Returns the character expression with leading and/or trailing blanks removed.
SELECT TRIM(' trimmed ');
-- Result: 'trimmed'
SELECT TRIM(LEADING FROM ' trimmed ');
-- Result: 'trimmed '
SELECT TRIM('-' FROM '-----trimmed-----');
-- Result: 'trimmed'
SELECT TRIM(BOTH '-' FROM '-----trimmed-----');
-- Result: 'trimmed'
SELECT TRIM(TRAILING '-' FROM '-----trimmed-----');
-- Result: '-----trimmed'
Returns the integer value defined by the Unicode standard of the first character of the input expression.
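For example, assuming this entry describes the UNICODE function (the SQL Server spelling; the function name is not shown in this entry):
SELECT UNICODE('0');
-- Result: 48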
Returns the character expression with lowercase character data converted to uppercase.
SELECT UPPER('MIXED case');
-- Result: 'MIXED CASE'
Extracts an XML document using the specified XPath to flatten the XML. A comma is used to separate the outputs by default, but this can be changed by specifying the third parameter.
SELECT XML_EXTRACT('<vowels><ch>a</ch><ch>e</ch><ch>i</ch><ch>o</ch><ch>u</ch></vowels>', '/vowels/ch');
-- Result: 'a,e,i,o,u'
SELECT XML_EXTRACT('<vowels><ch>a</ch><ch>e</ch><ch>i</ch><ch>o</ch><ch>u</ch></vowels>', '/vowels/ch', ';');
-- Result: 'a;e;i;o;u'
Returns the current date value.
SELECT CURRENT_DATE();
-- Result: 2018-02-01
Returns the current time stamp of the database system as a datetime value. This value is equal to GETDATE and SYSDATETIME, and is always in the local timezone.
SELECT CURRENT_TIMESTAMP();
-- Result: 2018-02-01 03:04:05
Returns the datetime value that results from adding the specified number (a signed integer) to the specified date part of the date.
SELECT DATEADD('d', 5, '2018-02-01');
-- Result: 2018-02-06
SELECT DATEADD('hh', 5, '2018-02-01 00:00:00');
-- Result: 2018-02-01 05:00:00
Returns the difference (a signed integer) of the specified time interval between the specified start date and end date.
SELECT DATEDIFF('d', '2018-02-01', '2018-02-10');
-- Result: 9
SELECT DATEDIFF('hh', '2018-02-01 00:00:00', '2018-02-01 12:00:00');
-- Result: 12
Returns the datetime value for the specified year, month, and day.
SELECT DATEFROMPARTS(2018, 2, 1);
-- Result: 2018-02-01
Returns the character string that represents the specified date part of the specified date.
SELECT DATENAME('yy', '2018-02-01');
-- Result: '2018'
SELECT DATENAME('dw', '2018-02-01');
-- Result: 'Thursday'
Returns an integer that represents the specified date part of the specified date.
SELECT DATEPART('yy', '2018-02-01');
-- Result: 2018
SELECT DATEPART('dw', '2018-02-01');
-- Result: 5
Returns the datetime value for the specified date parts.
SELECT DATETIME2FROMPARTS(2018, 2, 1, 1, 2, 3, 456, 3);
-- Result: 2018-02-01 01:02:03.456
Returns the datetime value for the specified date parts.
SELECT DATETIMEFROMPARTS(2018, 2, 1, 1, 2, 3, 456);
-- Result: 2018-02-01 01:02:03.456
Truncates the date to the precision of the given date part. Modeled after the Oracle TRUNC function.
SELECT DATE_TRUNC('05-04-2005', 'YY');
-- Result: '1/1/2005'
SELECT DATE_TRUNC('05-04-2005', 'MM');
-- Result: '5/1/2005'
Truncates the date to the precision of the given date part. Modeled after the PostgreSQL date_trunc function.
SELECT DATE_TRUNC2('year', '2020-02-04');
-- Result: '2020-01-01'
SELECT DATE_TRUNC2('week', '2020-02-04', 'monday');
-- Result: '2020-02-02', which is the previous Monday
Returns the integer that specifies the day component of the specified date. The related functions DAYOFMONTH, DAYOFWEEK, and DAYOFYEAR return the day of the month, week, and year, respectively.
SELECT DAY('2018-02-01');
-- Result: 1
SELECT DAYOFMONTH('04/15/2000');
-- Result: 15
SELECT DAYOFWEEK('04/15/2000');
-- Result: 7
SELECT DAYOFYEAR('04/15/2000');
-- Result: 106
Returns the last day of the month that contains the specified date with an optional offset.
SELECT EOMONTH('2018-02-01');
-- Result: 2018-02-28
SELECT LAST_DAY('2018-02-01');
-- Result: 2018-02-28
SELECT EOMONTH('2018-02-01', 2);
-- Result: 2018-04-30
Returns the first day of the week, month, or quarter that contains the provided date.
SELECT FDWEEK('02-08-2018');
-- Result: 2/4/2018
SELECT FDMONTH('02-08-2018');
-- Result: 2/1/2018
SELECT FDQUARTER('05-08-2018');
-- Result: 4/1/2018
Returns the time stamp associated with the Date Modified of the relevant file.
SELECT FILEMODIFIEDTIME('C:/Documents/myfile.txt');
-- Result: 6/25/2019 10:06:58 AM
Returns a date derived from the number of days after 1582-10-15 (based upon the Gregorian calendar). This is equivalent to the MySQL FROM_DAYS function.
SELECT FROM_DAYS(736000); -- Result: 2/6/2015
Returns the current time stamp of the database system as a datetime value. This value is equal to CURRENT_TIMESTAMP and SYSDATETIME, and is always in the local timezone.
SELECT GETDATE();
-- Result: 2018-02-01 03:04:05
Returns the current time stamp of the database system formatted as a UTC datetime value. This value is equal to SYSUTCDATETIME.
SELECT GETUTCDATE();
-- For example, if the local timezone is Eastern European Time (GMT+2)
-- Result: 2018-02-01 05:04:05
Returns the hour component from the provided datetime.
SELECT HOUR('02-02-2020 11:30:00');
-- Result: 11
Returns 1 if the value is a valid date, time, or datetime value; otherwise, 0.
SELECT ISDATE('2018-02-01', 'yyyy-MM-dd');
-- Result: 1
SELECT ISDATE('Not a date');
-- Result: 0
Returns a time stamp equivalent to exactly one week before the current date.
SELECT LAST_WEEK(); -- Assuming the current date is 3/17/2020, Result: 3/10/2020
Returns a time stamp equivalent to exactly one month before the current date.
SELECT LAST_MONTH(); -- Assuming the current date is 3/17/2020, Result: 2/17/2020
Returns a time stamp equivalent to exactly one year before the current date.
SELECT LAST_YEAR(); -- Assuming the current date is 3/17/2020, Result: 3/17/2019
Returns the last day of the provided week.
SELECT LDWEEK('02-02-2020');
-- Result: 2/8/2020
Returns the last day of the provided month.
SELECT LDMONTH('02-02-2020');
-- Result: 2/29/2020
Returns the last day of the provided quarter.
SELECT LDQUARTER('02-02-2020');
-- Result: 3/31/2020
Returns a date value from a year and a number of days.
SELECT MAKEDATE(2020, 1);
-- Result: 2020-01-01
Returns the minute component from the provided datetime.
SELECT MINUTE('02-02-2020 11:15:00');
-- Result: 15
Returns the month component from the provided datetime.
SELECT MONTH('02-02-2020');
-- Result: 2
Returns the quarter associated with the provided datetime.
SELECT QUARTER('02-02-2020');
-- Result: 1
Returns the second component from the provided datetime.
SELECT SECOND('02-02-2020 11:15:23');
-- Result: 23
Returns the datetime value for the specified date and time.
SELECT SMALLDATETIMEFROMPARTS(2018, 2, 1, 1, 2);
-- Result: 2018-02-01 01:02:00
Parses the provided string value and returns the corresponding datetime.
SELECT STRTODATE('03*04*2020','dd*MM*yyyy');
-- Result: 4/3/2020
Returns the current time stamp of the database system as a datetime value. It is equal to GETDATE and CURRENT_TIMESTAMP, and is always in the local timezone.
SELECT SYSDATETIME();
-- Result: 2018-02-01 03:04:05
Returns the current system date and time as a UTC datetime value. It is equal to GETUTCDATE.
SELECT SYSUTCDATETIME();
-- For example, if the local timezone is Eastern European Time (GMT+2)
-- Result: 2018-02-01 05:04:05
Returns the time value for the specified time and with the specified precision.
SELECT TIMEFROMPARTS(1, 2, 3, 456, 3);
-- Result: 01:02:03.456
Returns the number of days since year 0. This will only return a value for dates on or after 1582-10-15 (based upon the Gregorian calendar). This is equivalent to the MySQL TO_DAYS function.
SELECT TO_DAYS('02-06-2015');
-- Result: 736000
Returns the week (of the year) associated with the provided datetime.
SELECT WEEK('02-17-2020 11:15:23');
-- Result: 8
Returns the integer that specifies the year of the specified date.
SELECT YEAR('2018-02-01');
-- Result: 2018
Returns the absolute (positive) value of the specified numeric expression.
SELECT ABS(15);
-- Result: 15
SELECT ABS(-15);
-- Result: 15
Returns the arc cosine, the angle in radians whose cosine is the specified float expression.
SELECT ACOS(0.5);
-- Result: 1.0471975511966
Returns the arc sine, the angle in radians whose sine is the specified float expression.
SELECT ASIN(0.5);
-- Result: 0.523598775598299
Returns the arc tangent, the angle in radians whose tangent is the specified float expression.
SELECT ATAN(10);
-- Result: 1.47112767430373
Returns the angle in radians between the positive x-axis and the ray from the origin to the point (y, x) where x and y are the values of the two specified float expressions.
SELECT ATN2(1, 1);
-- Result: 0.785398163397448
Returns the smallest integer greater than or equal to the specified numeric expression.
SELECT CEILING(1.3);
-- Result: 2
SELECT CEILING(1.5);
-- Result: 2
SELECT CEILING(1.7);
-- Result: 2
Returns the trigonometric cosine of the specified angle in radians in the specified expression.
SELECT COS(1);
-- Result: 0.54030230586814
Returns the trigonometric cotangent of the angle in radians specified by float_expression.
SELECT COT(1);
-- Result: 0.642092615934331
Returns the angle in degrees for the angle specified in radians.
SELECT DEGREES(3.1415926);
-- Result: 179.999996929531
Returns the exponential value of the specified float expression. For example, EXP(LOG(20)) is 20.
SELECT EXP(2);
-- Result: 7.38905609893065
Evaluates the expression.
SELECT EXPR('1 + 2 * 3');
-- Result: 7
SELECT EXPR('1 + 2 * 3 == 7');
-- Result: true
Returns the largest integer less than or equal to the numeric expression.
SELECT FLOOR(1.3);
-- Result: 1
SELECT FLOOR(1.5);
-- Result: 1
SELECT FLOOR(1.7);
-- Result: 1
Returns the greatest of the supplied integers.
SELECT GREATEST(3,5,8,10,1) -- Result: 10
Returns the hexadecimal equivalent of the input value.
SELECT HEX(866849198);
-- Result: 33AB11AE
SELECT HEX('Sample Text');
-- Result: 53616D706C652054657874
Returns the least of the supplied integers.
SELECT LEAST(3,5,8,10,1) -- Result: 1
Returns the natural logarithm of the specified float expression.
SELECT LOG(7.3890560);
-- Result: 1.99999998661119
Returns the base-10 logarithm of the specified float expression.
SELECT LOG10(10000);
-- Result: 4
Returns the integer value associated with the remainder when dividing the dividend by the divisor.
SELECT MOD(10,3); -- Result: 1
Returns the negation of the real number input.
SELECT NEGATE(10); -- Result: -10
SELECT NEGATE(-12.4); -- Result: 12.4
Returns the constant value of pi.
SELECT PI()
-- Result: 3.14159265358979
Returns the value of the specified expression raised to the specified power.
SELECT POWER(2, 10);
-- Result: 1024
SELECT POWER(2, -2);
-- Result: 0.25
Returns the angle in radians of the angle in degrees.
SELECT RADIANS(180);
-- Result: 3.14159265358979
Returns a pseudorandom float value from 0 through 1, exclusive.
SELECT RAND();
-- This result may be different, since the seed is randomized
-- Result: 0.873159630165044
SELECT RAND(1);
-- This result will always be the same, since the seed is constant
-- Result: 0.248668584157093
Returns the numeric value rounded to the specified length or precision.
SELECT ROUND(1.3, 0);
-- Result: 1
SELECT ROUND(1.55, 1);
-- Result: 1.6
SELECT ROUND(1.7, 0, 0);
-- Result: 2
SELECT ROUND(1.7, 0, 1);
-- Result: 1
SELECT ROUND(1.24);
-- Result: 1.0
Returns the positive sign (1), 0, or negative sign (-1) of the specified expression.
SELECT SIGN(0);
-- Result: 0
SELECT SIGN(10);
-- Result: 1
SELECT SIGN(-10);
-- Result: -1
Returns the trigonometric sine of the angle in radians.
SELECT SIN(1);
-- Result: 0.841470984807897
Returns the square root of the specified float value.
SELECT SQRT(100);
-- Result: 10
Returns the square of the specified float value.
SELECT SQUARE(10);
-- Result: 100
SELECT SQUARE(-10);
-- Result: 100
Returns the tangent of the input expression.
SELECT TAN(1);
-- Result: 1.5574077246549
Returns the supplied decimal number truncated to have the supplied decimal precision.
SELECT TRUNC(10.3423,2); -- Result: 10.34
To create new records, use INSERT statements.
The INSERT statement specifies the columns to be inserted and the new column values. You can specify the column values in a comma-separated list in the VALUES clause, as shown in the following example:
INSERT INTO <table_name>
( <column_reference> [ , ... ] )
VALUES
( { <expression> | NULL } [ , ... ] )
<expression> ::=
| @ <parameter>
| ?
| <literal>
The following is an example query:
INSERT INTO Customer (Name) VALUES ('John')
To modify existing records, use UPDATE statements.
The UPDATE statement takes as input a comma-separated list of columns and new column values as name-value pairs in the SET clause, as shown in the following example:
UPDATE <table_name> SET { <column_reference> = <expression> } [ , ... ] WHERE { Id = <expression> } [ { AND | OR } ... ]
<expression> ::=
| @ <parameter>
| ?
| <literal>
The following is an example query:
UPDATE Customer SET Name='John' WHERE Id = @myId
To delete information from a table, use DELETE statements.
The DELETE statement requires the table name in the FROM clause and the row's primary key in the WHERE clause, as shown in the following example:
<delete_statement> ::= DELETE FROM <table_name> WHERE { Id = <expression> } [ { AND | OR } ... ]
<expression> ::=
| @ <parameter>
| ?
| <literal>
The following is an example query:
DELETE FROM Customer WHERE Id = @myId
To execute stored procedures, you can use EXECUTE or EXEC statements.
EXEC and EXECUTE assign stored procedure inputs, referenced by name, to values or parameter names.
To execute a stored procedure as an SQL statement, use the following syntax:
{ EXECUTE | EXEC } <stored_proc_name>
{
[ @ ] <input_name> = <expression>
} [ , ... ]
<expression> ::=
| @ <parameter>
| ?
| <literal>
Reference stored procedure inputs by name:
EXECUTE my_proc @second = 2, @first = 1, @third = 3;
Execute a parameterized stored procedure statement:
EXECUTE my_proc second = @p1, first = @p2, third = @p3;
PIVOT and UNPIVOT can be used to change a table-valued expression into another table.
"SELECT 'AverageCost' AS Cost_Sorted_By_Production_Days, [0], [1], [2], [3], [4]
FROM
(
SELECT DaysToManufacture, StandardCost
FROM Production.Product
) AS SourceTable
PIVOT
(
AVG(StandardCost)
FOR DaysToManufacture IN ([0], [1], [2], [3], [4])
) AS PivotTable;"
"SELECT VendorID, Employee, Orders
FROM
(SELECT VendorID, Emp1, Emp2, Emp3, Emp4, Emp5
FROM pvt) p
UNPIVOT
(Orders FOR Employee IN
(Emp1, Emp2, Emp3, Emp4, Emp5)
)AS unpvt;"
For further information on PIVOT and UNPIVOT, see FROM clause plus JOIN, APPLY, PIVOT (Transact-SQL)
Depending upon the connection settings being used, the Cloud can present several different mappings between Couchbase entities and relational tables and views. For more details on each of these capabilities, refer to the NoSQL portion of this documentation.
Please see the Automatic Schema Discovery section for more details on how flavor and child tables are exposed. In addition, the NewChildJoinsMode connection property is recommended for workflows that make heavy use of child tables. The documentation for that connection property details the improvements it makes to the Cloud data model.
Couchbase has different ways of grouping buckets and datasets, depending on the CouchbaseService and the version of Couchbase you are connecting to.
All of the schemas provided by the Cloud are dynamically retrieved from Couchbase, so any changes in the buckets or fields within Couchbase will be reflected in the Cloud the next time you connect.
You may also issue a reset query to refresh schemas without having to close the connection:
RESET SCHEMA CACHE
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Couchbase.
Stored procedures accept a list of parameters, perform their intended function, and then return, if applicable, any relevant response data from Couchbase, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| AddDocument | Upsert entire JSON documents to Couchbase as-is. |
| CreateBucket | Creates a new bucket in Couchbase. |
| CreateCollection | Creates a collection under an existing scope. |
| CreateScope | Creates a scope under an existing bucket. |
| CreateUserTable | An internal operation used when GenerateSchemaFiles=OnCreate. |
| DeleteBucket | Deletes a bucket (and all its collections and scopes, where supported). |
| DeleteCollection | Deletes a collection (Couchbase 7 and up). |
| DeleteScope | Deletes a scope and all its collections (Couchbase 7 and up). |
| FlushBucket | Removes all documents from a bucket in Couchbase. |
| ListIndices | Lists all indices available in Couchbase. |
| ManageIndices | Creates/Drops an index in a target bucket in Couchbase. |
Upsert entire JSON documents to Couchbase as-is.
| Name | Type | Required | Description |
| BucketName | String | True | The bucket to insert the document into. |
| SourceTable | String | False | The name of the temp table containing ID and Document columns. Required if no ID is specified. |
| ID | String | False | The primary key to insert the document under. Required if no SourceTable is specified. |
| Document | String | False | The JSON text of the document to insert. Required if no SourceTable is specified. |
| Name | Type | Description |
| RowsAffected | String | The number of rows successfully updated. |
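For example, the following call upserts a single document (the bucket name, key, and JSON document are illustrative):
EXECUTE AddDocument @BucketName = 'Players', @ID = 'player-1', @Document = '{"score": 100}'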
Creates a new bucket in Couchbase.
Buckets using @AuthType 'none' can be created by specifying only the @Name, @AuthType, @BucketType, and @RamQuotaMB. The @ProxyPort option may also be required, depending upon what version of Couchbase you are connecting to.
EXECUTE CreateBucket @Name = 'Players', @AuthType = 'NONE', @BucketType = 'COUCHBASE', @RamQuotaMB = 100, @ProxyPort = 1234
When creating a bucket with @AuthType 'sasl', the @ProxyPort must not be provided, and the @SaslPassword is optional:
EXECUTE CreateBucket @Name = 'Players', @AuthType = 'SASL', @BucketType = 'COUCHBASE', @RamQuotaMB = 100
All other parameters can be used regardless of what @AuthType you provide.
| Name | Type | Required | Description |
| Name | String | True | The name of the bucket to create. |
| AuthType | String | True | The type of authentication to use; can be sasl or none. |
| BucketType | String | True | The type of the bucket; can be memcached or couchbase. |
| EvictionPolicy | String | False | What to evict from the cache if the bucket is full; can be fullEviction or valueOnly. |
| FlushEnabled | String | False | Enables or disables flush all support; can be 0 or 1. |
| ParallelDBAndViewCompaction | String | False | Enables simultaneous compactions of the database and the views; can be true or false. |
| ProxyPort | String | False | The proxy port; must be unused. Required if the authentication type is not SASL. |
| RamQuotaMB | String | True | The amount of RAM to allocate to the bucket, in megabytes. |
| ReplicaIndex | String | False | Enables or disables replica indexes; can be 1 or 0. |
| ReplicaNumber | String | False | A number between 0 and 3; specifies the number of replicas. |
| SaslPassword | String | False | The SASL password; may be provided if the authentication type is SASL. |
| ThreadsNumber | String | False | A number between 2 and 8; specifies the number of concurrent readers/writers. |
| CompressionMode | String | False | Either Off (no compression), Passive (documents inserted compressed stay compressed), or Active (server can compress any document). On Couchbase Enterprise, Passive is the default. |
| ConflictResolutionType | String | False | How the server will resolve conflicts between cluster nodes. Either lww (timestamp-based resolution) or seqno (revision ID-based resolution). Defaults to seqno on Couchbase Enterprise. |
| Name | Type | Description |
| Success | String | Whether or not the bucket was successfully created. |
Creates a collection under an existing scope.
| Name | Type | Required | Description |
| Bucket | String | True | The name of the bucket containing the collection. |
| Scope | String | True | The name of the scope containing the collection. |
| Name | String | True | The name of the collection to create. |
| Name | Type | Description |
| Success | Bool | Whether or not the collection was successfully created. |
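For example (the bucket, scope, and collection names are illustrative):
EXECUTE CreateCollection @Bucket = 'Players', @Scope = 'game', @Name = 'stats'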
Creates a schema definition of a table in Couchbase. Results may change depending on the values of FlattenObjects, FlattenArrays, and TypeDetectionScheme.
| Name | Type | Required | Accepts Output Streams | Description |
| TableName | String | True | False | The name of the table. |
| FileName | String | False | False | The full file path and name of the schema to generate. Ex: 'C:\\Users\\User\\Desktop\\Couchbase\\sheet.rsd' |
| Overwrite | String | False | False | Will delete any existing schema file for this table. |
| FileStream | String | False | True | Stream to write the schema to. Only used if FileName is not provided. |
| Name | Type | Description |
| Result | String | Whether or not the schema was successfully built. |
| FileData | String | The content of the schema encoded as base64. Only returned if the FileName and FileStream are not provided. |
Creates a scope under an existing bucket.
| Name | Type | Required | Description |
| Bucket | String | True | The name of the bucket containing the scope. |
| Name | String | True | The name of the scope to create. |
| Name | Type | Description |
| Success | Bool | Whether or not the scope was successfully created. |
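For example (the bucket and scope names are illustrative):
EXECUTE CreateScope @Bucket = 'Players', @Name = 'game'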
An internal operation used when GenerateSchemaFiles=OnCreate.
Note: This procedure makes use of indexed parameters. These input parameters are denoted with a '#' character at the end of their names.
Indexed parameters facilitate providing multiple instances of a single parameter as inputs for the procedure.
Suppose there is an input parameter named Param#. Input multiple instances of an indexed parameter like this:
EXEC ProcedureName Param#1 = "value1", Param#2 = "value2", Param#3 = "value3"
| Name | Type | Required | Description |
| CreateNotExist | String | False | Whether an existing table is an error or not |
| TableName | String | False | The name of the table to create |
| ColumnNames# | String | False | For each column, its name |
| ColumnDataTypes# | String | False | For each column, its type |
| ColumnSizes# | String | False | For each column, its size (ignored) |
| ColumnScales# | String | False | For each column, its scale (ignored) |
| ColumnIsNulls# | String | False | For each column, whether it allows NULLs (ignored) |
| ColumnDefaults# | String | False | For each column, its default value (ignored) |
| Location | String | False | Where the schema file is generated |
| Name | Type | Description |
| AffectedTables | String | The number of tables created, either 0 or 1 |
Deletes a bucket (and all its collections and scopes, where supported).
| Name | Type | Required | Description |
| Name | String | True | The name of the bucket to delete. |
| Name | Type | Description |
| Success | Bool | Whether or not the bucket was successfully deleted. |
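For example (the bucket name is illustrative):
EXECUTE DeleteBucket @Name = 'Players'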
Deletes a collection (Couchbase 7 and up).
| Name | Type | Required | Description |
| Bucket | String | True | The name of the bucket containing the collection. |
| Scope | String | True | The name of the scope containing the collection. |
| Name | String | True | The name of the collection to delete. |
| Name | Type | Description |
| Success | Bool | Whether or not the collection was successfully deleted. |
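For example (the names are illustrative):
EXECUTE DeleteCollection @Bucket = 'Players', @Scope = 'game', @Name = 'stats'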
Deletes a scope and all its collections (Couchbase 7 and up).
| Name | Type | Required | Description |
| Bucket | String | True | The name of the bucket containing the scope. |
| Name | String | True | The name of the scope to delete. |
| Name | Type | Description |
| Success | Bool | Whether or not the scope was successfully deleted. |
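For example (the names are illustrative):
EXECUTE DeleteScope @Bucket = 'Players', @Name = 'game'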
Removes all documents from a bucket in Couchbase.
| Name | Type | Required | Description |
| Name | String | True | The name of the bucket to flush. Flush must be enabled on this bucket. |
| Name | Type | Description |
| Success | Bool | Whether or not the bucket was successfully flushed. |
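For example, assuming flush is enabled on the bucket (the name is illustrative):
EXECUTE FlushBucket @Name = 'Players'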
Lists all indices available in Couchbase.
| Name | Type | Description |
| Id | String | The unique index ID |
| Datastore_id | String | The server hosting the indexed bucket |
| Namespace_id | String | The pool hosting the indexed bucket |
| Bucket_id | String | The bucket the index applies to if the index applies to a collection (Couchbase 7 and up). NULL otherwise. |
| Scope_id | String | The scope the index applies to if the index applies to a collection (Couchbase 7 and up). NULL otherwise. |
| Keyspace_id | String | The collection the index applies to, if the index applies to a collection (Couchbase 7 and up). The bucket the index applies to otherwise. |
| Index_key | String | A list of keys participating in the index |
| Condition | String | The N1QL filter that the index applies to |
| Is_primary | String | Whether the index is on the primary key |
| Name | String | The name of the index |
| State | String | Whether the index is available |
| Using | String | Whether the index is backed by GSI or a view |
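ListIndices takes no input parameters; for example:
EXECUTE ListIndices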
Creates/Drops an index in a target bucket in Couchbase.
An anonymous primary index can be created with these parameters:
EXECUTE ManageIndices @BucketName = 'Players', @Action = 'CREATE', @IsPrimary = 'true', @IndexType = 'VIEW'
This is the same as executing this N1QL:
CREATE PRIMARY INDEX ON `Players` USING VIEW
A named primary index can be created by specifying a @Name, in addition to the parameters listed above:
EXECUTE ManageIndices @BucketName = 'Players', @Action = 'CREATE', @IsPrimary = 'true', @Name = 'Players_primary', @IndexType = 'VIEW'
A secondary index can be created by setting @IsPrimary to false and providing at least one expression.
EXECUTE ManageIndices @BucketName = 'Players', @Action = 'CREATE', @IsPrimary = 'false', @Name = 'Players_playtime_score', @Expressions = '["score", "playtime"]'
This is the same as running the following N1QL:
CREATE INDEX `Players_playtime_score` ON `Players`(score, playtime) USING GSI;
Multiple nodes and filters can also be provided to generate more complex indices. They must be provided as JSON lists:
EXECUTE ManageIndices @BucketName = 'Players', @Name = 'TopPlayers', @Expressions = '["score", "playtime"]', @Filter = '["topscore > 1000", "playtime > 600"]', @Nodes = '["127.0.0.1:8091", "192.168.0.100:8091"]'
This is the same as running the following N1QL:
CREATE INDEX `TopPlayers` ON `Players`(score, playtime) WHERE topscore > 1000 AND playtime > 600 USING GSI WITH { "nodes": ["127.0.0.1:8091", "192.168.0.100:8091"]};
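An existing index can also be dropped by name. A minimal sketch, assuming the Drop action takes the same parameters (the index and bucket names are illustrative):
EXECUTE ManageIndices @BucketName = 'Players', @Action = 'DROP', @Name = 'Players_playtime_score', @IsPrimary = 'false'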
| Name | Type | Required | Description |
| BucketName | String | True | The target bucket to create or drop the index from. |
| ScopeName | String | False | The target scope to create or drop the index from (Couchbase 7 and up) |
| CollectionName | String | False | The target collection to create or drop the index from (Couchbase 7 and up) |
| Action | String | True | Specifies which action to perform on the index, can be Create or Drop. |
| Expressions | String | False | A list of expressions or functions, encoded as JSON, that the index will be based on. At least one is required if IsPrimary is set to false and the action is Create. |
| Name | String | False | The name of the index to create or drop, required if IsPrimary is set to false. |
| IsPrimary | String | False | Specifies whether the index should be a primary index.
The default value is true. |
| Filters | String | False | A list of filters, encoded as JSON, to apply on the index. |
| IndexType | String | False | The type of index to create, can be GSI or View, only used if the action is Create.
The default value is GSI. |
| ViewName | String | False | Deprecated, included for compatibility only. Does nothing. |
| Nodes | String | False | A list, encoded as JSON, of nodes to contain the index, must contain the port. Only used if the action is Create. |
| NumReplica | String | False | How many replicas to create among the index nodes in the cluster. |
| Name | Type | Description |
| Success | String | Whether or not the index was successfully created or dropped. |
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Couchbase: sys_catalogs, sys_schemas, sys_tables, sys_tablecolumns, sys_procedures, sys_procedureparameters, sys_keycolumns, sys_foreignkeys, sys_primarykeys, and sys_indexes.
The following tables return information about how to connect to and query the data source: sys_connection_props and sys_sqlinfo.
The following table returns query statistics for data modification queries: sys_identity.
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the Customer table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customer'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the SelectEntries stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName='SelectEntries' AND (Direction=1 OR Direction=2)
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output type parameters can be both input and output parameters. |
| DataTypeName | String | The name of the data type. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the Customer table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customer'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
When querying this table, the config connection string should be used:
jdbc:cdata:couchbase:config:
This connection string enables you to query this table without a valid connection.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows the dependent properties that must be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. The following result set indicates the SELECT functionality that the Cloud can offload to the data source or process client side. Your data source may support additional SQL syntax. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns which will be checked, in the given order, for use as a modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is true, merge mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate startdate. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. | |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name='SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
| Property | Description |
| AuthScheme | The type of authentication to use when connecting to Couchbase. |
| User | The Couchbase user account used to authenticate. |
| Password | The password used to authenticate the user. |
| CredentialsFile | Use this property if you need to provide credentials for multiple users or buckets. This file takes priority over other forms of authentication. |
| Server | The address of the Couchbase server or servers to which you are connecting. |
| CouchbaseService | Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics. |
| ConnectionMode | Determines how to connect to the Couchbase server. Must be either Direct or Cloud. |
| DNSServer | Determines what DNS server to use when retrieving Couchbase Capella information. |
| N1QLPort | The port for connecting to the Couchbase N1QL Endpoint. |
| AnalyticsPort | The port for connecting to the Couchbase Analytics Endpoint. |
| WebConsolePort | The port for connecting to the Couchbase Web Console. |
| Property | Description |
| SSLClientCert | The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). |
| SSLClientCertType | The type of key store containing the TLS/SSL client certificate. |
| SSLClientCertPassword | The password for the TLS/SSL client certificate. |
| SSLClientCertSubject | The subject of the TLS/SSL client certificate. |
| UseSSL | Whether to negotiate TLS/SSL when connecting to the Couchbase server. |
| SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| FirewallType | The protocol used by a proxy-based firewall. |
| FirewallServer | The name or IP address of a proxy-based firewall. |
| FirewallPort | The TCP port for a proxy-based firewall. |
| FirewallUser | The user name to use to authenticate with a proxy-based firewall. |
| FirewallPassword | A password used to authenticate to a proxy-based firewall. |
| Property | Description |
| ProxyAutoDetect | This indicates whether to use the system proxy settings or not. This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings. |
| ProxyServer | The hostname or IP address of a proxy to route HTTP traffic through. |
| ProxyPort | The TCP port the ProxyServer proxy is running on. |
| ProxyAuthScheme | The authentication type to use to authenticate to the ProxyServer proxy. |
| ProxyUser | A user name to be used to authenticate to the ProxyServer proxy. |
| ProxyPassword | A password to be used to authenticate to the ProxyServer proxy. |
| ProxySSLType | The SSL type to use when connecting to the ProxyServer proxy. |
| ProxyExceptions | A semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the ProxyServer. |
| Property | Description |
| Logfile | A filepath which designates the name and location of the log file. |
| Verbosity | The verbosity level that determines the amount of detail included in the log file. |
| LogModules | Core modules to be included in the log file. |
| MaxLogFileSize | A string specifying the maximum size in bytes for a log file (for example, 10 MB). |
| MaxLogFileCount | A string specifying the maximum file count of log files. |
| Property | Description |
| Location | A path to the directory that contains the schema files defining tables, views, and stored procedures. |
| BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Tables | This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC. |
| Views | Restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC. |
| Dataverse | Which Analytics dataverse to scan when discovering tables. |
| TypeDetectionScheme | Determines how the provider builds tables and columns from the buckets found in Couchbase. |
| InferNumSampleValues | The maximum number of values for every field to scan before determining its data type. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER. |
| InferSampleSize | The maximum number of documents to scan for the columns available in the bucket. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER. |
| InferSimilarityMetric | Specifies the similarity degree where different schemas will be considered to be the same flavor. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER. |
| FlexibleSchemas | Whether the provider allows queries to use columns that it has not discovered. |
| ExposeTTL | Specifies whether document TTL information should be exposed. |
| NumericStrings | Whether to allow string values to be treated as numbers. |
| IgnoreChildAggregates | Whether the provider exposes aggregate columns that are also available as child tables. Ignored if TableSupport is not set to Full. |
| TableSupport | How much effort the provider will put into discovering tables on the Couchbase server. |
| NewChildJoinsMode | Determines the kind of child table model the provider exposes. |
| Property | Description |
| AutoCache | Automatically caches the results of SELECT queries into a cache database specified by either CacheLocation or both of CacheConnection and CacheProvider. |
| CacheLocation | Specifies the path to the cache when caching to a file. |
| CacheTolerance | The tolerance for stale data in the cache specified in seconds when using AutoCache. |
| Offline | Use offline mode to get the data from the cache instead of the live source. |
| CacheMetadata | This property determines whether or not to cache the table metadata to a file store. |
| Property | Description |
| AllowJSONParameters | Allows raw JSON to be used in parameters when QueryPassthrough is enabled. |
| ChildSeparator | The character or characters used to denote child tables. |
| CreateTableRamQuota | The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax. |
| DataverseSeparator | The character or characters used to denote Analytics dataverses and scopes/collections. |
| FlattenArrays | The number of elements to expose as columns from nested arrays. Ignored if IgnoreChildAggregates is enabled. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| FlavorSeparator | The character or characters used to denote flavors. |
| GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
| InsertNullValues | Determines whether an INSERT should include fields that have NULL values. |
| MaxRows | Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This helps avoid performance issues at design time. |
| Other | These hidden properties are used only in specific use cases. |
| Pagesize | The maximum number of results to return per page from Couchbase. |
| PeriodsSeparator | The character or characters used to denote hierarchy. |
| PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
| QueryExecutionTimeout | This sets the server-side timeout for the query, which governs how long Couchbase will execute the query before returning a timeout error. |
| QueryPassthrough | This option passes the query to the Couchbase server as is. |
| Readonly | You can use this property to enforce read-only access to Couchbase from the provider. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| RTK | The runtime key used for licensing. |
| StrictComparison | Adjusts how precisely to translate filters on SQL input queries into Couchbase queries. This can be set to a comma-separated list of values, where each value can be one of: date, number, boolean, or string. |
| Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
| TransactionDurability | Specifies how a document must be stored for a transaction to succeed. |
| TransactionTimeout | This sets the amount of time a transaction may execute before it is timed out by Couchbase. |
| UpdateNullValues | Determines whether an UPDATE writes NULL values as NULL, or removes them. |
| UseCollectionsForDDL | Whether to assume that CREATE TABLE statements use collections instead of flavors. Only takes effect when connecting to Couchbase v7+ and GenerateSchemaFiles is set to OnCreate. |
| UserDefinedViews | A filepath pointing to the JSON configuration file containing your custom views. |
| UseTransactions | Specifies whether to use N1QL transactions when executing queries. |
| ValidateJSONParameters | Allows the provider to validate that string parameters are valid JSON before sending the query to Couchbase. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The type of authentication to use when connecting to Couchbase. |
| User | The Couchbase user account used to authenticate. |
| Password | The password used to authenticate the user. |
| CredentialsFile | Use this property if you need to provide credentials for multiple users or buckets. This file takes priority over other forms of authentication. |
| Server | The address of the Couchbase server or servers to which you are connecting. |
| CouchbaseService | Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics. |
| ConnectionMode | Determines how to connect to the Couchbase server. Must be either Direct or Cloud. |
| DNSServer | Determines what DNS server to use when retrieving Couchbase Capella information. |
| N1QLPort | The port for connecting to the Couchbase N1QL Endpoint. |
| AnalyticsPort | The port for connecting to the Couchbase Analytics Endpoint. |
| WebConsolePort | The port for connecting to the Couchbase Web Console. |
The type of authentication to use when connecting to Couchbase.
string
"Auto"
Note that only Basic authentication is supported when using the "Cloud" ConnectionMode.
The Couchbase user account used to authenticate.
string
""
Together with Password, this field is used to authenticate against the Couchbase server.
The password used to authenticate the user.
string
""
The User and Password are together used to authenticate with the server.
Use this property if you need to provide credentials for multiple users or buckets. This file takes priority over other forms of authentication.
string
""
Use this property if you need to provide credentials for multiple users or buckets. This takes priority over other forms of authentication.
Set CredentialsFile to the path to a file that has the same markup as below:
[{"user": "YourUserName1", "pass":"YourPassword1"},
{"user": "YourUserName2", "pass":"YourPassword2"}]
The address of the Couchbase server or servers to which you are connecting.
string
""
This value can be set to a hostname or an IP address, like "couchbase-server.com" or "1.2.3.4". It can also be set to an HTTP or HTTPS URL, such as "https://couchbase-server.com" or "http://1.2.3.4". If ConnectionMode is set to Cloud then this should be the hostname of the Couchbase Cloud instance as reported in the control panel.
If the URL form is used, then setting this option will also set the UseSSL option: if the URL scheme is "https://", then UseSSL will be set to true, and a URL with "http://" will set UseSSL to false.
A port value cannot be used as part of this option, so values like "http://couchbase-server.com:8093" are not allowed. Please use WebConsolePort, N1QLPort and AnalyticsPort.
This value can also accept multiple servers in the above format separated by commas, such as "1.2.3.4, couchbase-server.com". This will allow the Cloud to recover the connection in case some of the servers listed are inaccessible.
Note that while the Cloud will try to recover the connection as a whole, it may lose individual operations. For example, a long-running query will fail if the server becomes inaccessible while that query is running, but the query can be retried on the same connection and the Cloud will execute it on the next active server.
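As an illustration, a connection string using the multi-server form might look like the following sketch (the hostnames and credentials are placeholders, not values from your deployment):
Server="couchbase1.example.com, couchbase2.example.com"; User=myuser; Password=mypassword;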
Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics.
string
"N1QL"
Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics.
Determines how to connect to the Couchbase server. Must be either Direct or Cloud.
string
"Direct"
By default the Cloud connects to Couchbase directly using the address given in the Server option. The Server must be running the appropriate CouchbaseService to accept the connection. This will work in most on-premise or basic cloud deployments.
This should be set to Cloud when connecting to Couchbase Capella or a custom deployment that uses service records. These records will allow the Cloud to determine the exact Couchbase servers that provide the appropriate CouchbaseService. You must also set the DNSServer property so that the Cloud is able to fetch these service records.
Note that enabling Cloud mode will override these connection properties with the values discovered by contacting the cluster:
Determines what DNS server to use when retrieving Couchbase Capella information.
string
""
In most cases, any public DNS server can be provided here, such as the ones offered by OpenDNS, Cloudflare, or Google.
If these are not accessible then you will need to use the DNS server configured by your network administrator.
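For example, a hypothetical Capella-style configuration (the hostname, DNS server address, and credentials below are placeholders) might combine these options:
ConnectionMode=Cloud; DNSServer=8.8.8.8; Server=your-cluster.cloud.couchbase.com; User=myuser; Password=mypassword;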
The port for connecting to the Couchbase N1QL Endpoint.
string
""
This defaults to 8093 when not using SSL, and 18093 when using SSL. See UseSSL.
This port is used for submitting queries when CouchbaseService is set to N1QL. Any requests to manage indices will also go through this port.
The port for connecting to the Couchbase Analytics Endpoint.
string
""
This defaults to 8095 when not using SSL, and 18095 when using SSL. See UseSSL.
This port is used for submitting queries when CouchbaseService is set to Analytics.
The port for connecting to the Couchbase Web Console.
string
""
This defaults to 8091 when not using SSL, and 18091 when using SSL. See UseSSL.
This port is used for API operations like managing buckets.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLClientCert | The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). |
| SSLClientCertType | The type of key store containing the TLS/SSL client certificate. |
| SSLClientCertPassword | The password for the TLS/SSL client certificate. |
| SSLClientCertSubject | The subject of the TLS/SSL client certificate. |
| UseSSL | Whether to negotiate TLS/SSL when connecting to the Couchbase server. |
| SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL).
string
""
The name of the certificate store for the client certificate.
The SSLClientCertType field specifies the type of the certificate store specified by SSLClientCert. If the store is password protected, specify the password in SSLClientCertPassword.
SSLClientCert is used in conjunction with the SSLClientCertSubject field in order to specify client certificates. If SSLClientCert has a value, and SSLClientCertSubject is set, a search for a certificate is initiated. See SSLClientCertSubject for more information.
Designations of certificate stores are platform-dependent.
The following are designations of the most common User and Machine certificate stores in Windows:
| MY | A certificate store holding personal certificates with their associated private keys. |
| CA | Certifying authority certificates. |
| ROOT | Root certificates. |
| SPC | Software publisher certificates. |
In Java, the certificate store normally is a file containing certificates and optional private keys.
When the certificate store type is PFXFile, this property must be set to the name of the file. When the type is PFXBlob, the property must be set to the binary contents of a PFX file (for example, PKCS12 certificate store).
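A minimal sketch of enabling client certificate authentication with a PFX store (the file path and password are hypothetical):
UseSSL=true; SSLClientCert=C:\certs\client.pfx; SSLClientCertType=PFXFILE; SSLClientCertPassword=certpassword;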
The type of key store containing the TLS/SSL client certificate.
string
"USER"
This property can take one of the following values:
| USER - default | For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java. |
| MACHINE | For Windows, this specifies that the certificate store is a machine store. Note that this store type is not available in Java. |
| PFXFILE | The certificate store is the name of a PFX (PKCS12) file containing certificates. |
| PFXBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. |
| JKSFILE | The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java. |
| JKSBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in JKS format. Note that this store type is only available in Java. |
| PEMKEY_FILE | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
| PEMKEY_BLOB | The certificate store is a string (base64-encoded) that contains a private key and an optional certificate. |
| PUBLIC_KEY_FILE | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
| PUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. |
| SSHPUBLIC_KEY_FILE | The certificate store is the name of a file that contains an SSH-style public key. |
| SSHPUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains an SSH-style public key. |
| P7BFILE | The certificate store is the name of a PKCS7 file containing certificates. |
| PPKFILE | The certificate store is the name of a file that contains a PuTTY Private Key (PPK). |
| XMLFILE | The certificate store is the name of a file that contains a certificate in XML format. |
| XMLBLOB | The certificate store is a string that contains a certificate in XML format. |
The password for the TLS/SSL client certificate.
string
""
If the certificate store is of a type that requires a password, this property is used to specify that password to open the certificate store.
The subject of the TLS/SSL client certificate.
string
"*"
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property. If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma separated list of distinguished name fields and values. For example, "CN=www.server.com, OU=test, C=US, [email protected]". The common fields and their meanings are shown below.
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, it must be quoted.
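For instance, if the Organization field itself contains a comma, that value must be quoted, as in this hypothetical subject:
CN=client.example.com, O="Example, Inc.", C=US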
Whether to negotiate TLS/SSL when connecting to the Couchbase server.
bool
false
When this is set to true, the defaults for the following options change:
| Property | Plaintext Default | SSL Default |
| AnalyticsPort | 8095 | 18095 |
| N1QLPort | 8093 | 18093 |
| WebConsolePort | 8091 | 18091 |
This option should be enabled when connecting to Couchbase Capella because all Capella deployments use SSL by default.
The certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
This section provides a complete list of the Firewall properties you can configure in the connection string for this provider.
| Property | Description |
| FirewallType | The protocol used by a proxy-based firewall. |
| FirewallServer | The name or IP address of a proxy-based firewall. |
| FirewallPort | The TCP port for a proxy-based firewall. |
| FirewallUser | The user name to use to authenticate with a proxy-based firewall. |
| FirewallPassword | A password used to authenticate to a proxy-based firewall. |
The protocol used by a proxy-based firewall.
string
"NONE"
This property specifies the protocol that the Cloud will use to tunnel traffic through the FirewallServer proxy. Note that by default, the Cloud connects to the system proxy; to disable this behavior and connect to one of the following proxy types, set ProxyAutoDetect to false.
| Type | Default Port | Description |
| TUNNEL | 80 | When this is set, the Cloud opens a connection to Couchbase and traffic flows back and forth through the proxy. |
| SOCKS4 | 1080 | When this is set, the Cloud sends data through the SOCKS 4 proxy specified by FirewallServer and FirewallPort and passes the FirewallUser value to the proxy, which determines if the connection request should be granted. |
| SOCKS5 | 1080 | When this is set, the Cloud sends data through the SOCKS 5 proxy specified by FirewallServer and FirewallPort. If your proxy requires authentication, set FirewallUser and FirewallPassword to credentials the proxy recognizes. |
To connect to HTTP proxies, use ProxyServer and ProxyPort. To authenticate to HTTP proxies, use ProxyAuthScheme, ProxyUser, and ProxyPassword.
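As a sketch, a SOCKS5 firewall configuration (the server address and credentials are placeholders) might look like:
ProxyAutoDetect=false; FirewallType=SOCKS5; FirewallServer=192.0.2.10; FirewallPort=1080; FirewallUser=fwuser; FirewallPassword=fwpassword;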
The name or IP address of a proxy-based firewall.
string
""
This property specifies the IP address, DNS name, or host name of a proxy allowing traversal of a firewall. The protocol is specified by FirewallType: use this property with FirewallType to connect through SOCKS or to do tunneling. Use ProxyServer to connect to an HTTP proxy.
Note that the Cloud uses the system proxy by default. To use a different proxy, set ProxyAutoDetect to false.
The TCP port for a proxy-based firewall.
int
0
This specifies the TCP port for a proxy allowing traversal of a firewall. Use FirewallServer to specify the name or IP address. Specify the protocol with FirewallType.
The user name to use to authenticate with a proxy-based firewall.
string
""
The FirewallUser and FirewallPassword properties are used to authenticate against the proxy specified in FirewallServer and FirewallPort, following the authentication method specified in FirewallType.
A password used to authenticate to a proxy-based firewall.
string
""
This property is passed to the proxy specified by FirewallServer and FirewallPort, following the authentication method specified by FirewallType.
This section provides a complete list of the Proxy properties you can configure in the connection string for this provider.
| Property | Description |
| ProxyAutoDetect | This indicates whether to use the system proxy settings or not. This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings. |
| ProxyServer | The hostname or IP address of a proxy to route HTTP traffic through. |
| ProxyPort | The TCP port the ProxyServer proxy is running on. |
| ProxyAuthScheme | The authentication type to use to authenticate to the ProxyServer proxy. |
| ProxyUser | A user name to be used to authenticate to the ProxyServer proxy. |
| ProxyPassword | A password to be used to authenticate to the ProxyServer proxy. |
| ProxySSLType | The SSL type to use when connecting to the ProxyServer proxy. |
| ProxyExceptions | A semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the ProxyServer. |
This indicates whether to use the system proxy settings or not. This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings.
bool
true
This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings.
To connect to an HTTP proxy, see ProxyServer. For other proxies, such as SOCKS or tunneling, see FirewallType.
The hostname or IP address of a proxy to route HTTP traffic through.
string
""
The hostname or IP address of a proxy to route HTTP traffic through. The Cloud can use the HTTP, Windows (NTLM), or Kerberos authentication types to authenticate to an HTTP proxy.
If you need to connect through a SOCKS proxy or tunnel the connection, see FirewallType.
By default, the Cloud uses the system proxy. If you need to use another proxy, set ProxyAutoDetect to false.
The TCP port the ProxyServer proxy is running on.
int
80
The port the HTTP proxy is running on that you want to redirect HTTP traffic through. Specify the HTTP proxy in ProxyServer. For other proxy types, see FirewallType.
The authentication type to use to authenticate to the ProxyServer proxy.
string
"BASIC"
This value specifies the authentication type to use to authenticate to the HTTP proxy specified by ProxyServer and ProxyPort.
Note that the Cloud will use the system proxy settings by default, without further configuration needed; if you want to connect to another proxy, you will need to set ProxyAutoDetect to false, in addition to ProxyServer and ProxyPort. To authenticate, set ProxyAuthScheme and set ProxyUser and ProxyPassword, if needed.
The authentication type can be one of the following:
If you need to use another authentication type, such as SOCKS 5 authentication, see FirewallType.
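For example, a hypothetical HTTP proxy configuration using BASIC authentication (the hostname and credentials are placeholders) might be:
ProxyAutoDetect=false; ProxyServer=proxy.example.com; ProxyPort=8080; ProxyAuthScheme=BASIC; ProxyUser=proxyuser; ProxyPassword=proxypassword;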
A user name to be used to authenticate to the ProxyServer proxy.
string
""
The ProxyUser and ProxyPassword options are used to connect and authenticate against the HTTP proxy specified in ProxyServer.
You can select one of the available authentication types in ProxyAuthScheme. If you are using HTTP authentication, set this to the user name of a user recognized by the HTTP proxy. If you are using Windows or Kerberos authentication, set this property to a user name in one of the following formats:
user@domain or domain\user
A password to be used to authenticate to the ProxyServer proxy.
string
""
This property is used to authenticate to an HTTP proxy server that supports NTLM (Windows), Kerberos, or HTTP authentication. To specify the HTTP proxy, you can set ProxyServer and ProxyPort. To specify the authentication type, set ProxyAuthScheme.
If you are using HTTP authentication, additionally set ProxyUser and ProxyPassword to a user and password recognized by the HTTP proxy.
If you are using NTLM authentication, set ProxyUser and ProxyPassword to your Windows user name and password. You may also need these to complete Kerberos authentication.
For SOCKS 5 authentication or tunneling, see FirewallType.
By default, the Cloud uses the system proxy. If you want to connect to another proxy, set ProxyAutoDetect to false.
The SSL type to use when connecting to the ProxyServer proxy.
string
"AUTO"
This property determines when to use SSL for the connection to an HTTP proxy specified by ProxyServer. The applicable values are the following:
| AUTO | Default setting. If the URL is an HTTPS URL, the Cloud will use the TUNNEL option. If the URL is an HTTP URL, the component will use the NEVER option. |
| ALWAYS | The connection is always SSL enabled. |
| NEVER | The connection is not SSL enabled. |
| TUNNEL | The connection is through a tunneling proxy. The proxy server opens a connection to the remote host and traffic flows back and forth through the proxy. |
A semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the ProxyServer.
string
""
The ProxyServer is used for all addresses, except for addresses defined in this property. Use semicolons to separate entries.
Note that the Cloud uses the system proxy settings by default, without further configuration needed; if you want to explicitly configure proxy exceptions for this connection, you need to set ProxyAutoDetect = false, and configure ProxyServer and ProxyPort. To authenticate, set ProxyAuthScheme and set ProxyUser and ProxyPassword, if needed.
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Logfile | A filepath which designates the name and location of the log file. |
| Verbosity | The verbosity level that determines the amount of detail included in the log file. |
| LogModules | Core modules to be included in the log file. |
| MaxLogFileSize | A string specifying the maximum size in bytes for a log file (for example, 10 MB). |
| MaxLogFileCount | A string specifying the maximum file count of log files. |
A filepath which designates the name and location of the log file.
string
""
Once this property is set, the Cloud will populate the log file as it carries out various tasks, such as when authentication is performed or queries are executed. If the specified file doesn't already exist, it will be created.
Connection strings and version information are also logged, though connection properties containing sensitive information are masked automatically.
If a relative filepath is supplied, the location of the log file will be resolved based on the path found in the Location connection property.
For more control over what is written to the log file, you can adjust the Verbosity property.
Log contents are categorized into several modules. You can show/hide individual modules using the LogModules property.
To edit the maximum size of a single logfile before a new one is created, see MaxLogFileSize.
If you would like to place a cap on the number of logfiles generated, use MaxLogFileCount.
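Putting these options together, a hypothetical logging configuration (the file path is a placeholder) might be:
Logfile=C:\logs\couchbase.log; Verbosity=3; MaxLogFileSize=20MB; MaxLogFileCount=5;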
The verbosity level that determines the amount of detail included in the log file.
string
"1"
The verbosity level determines the amount of detail that the Cloud reports to the Logfile. Verbosity levels from 1 to 5 are supported. These are detailed in the Logging page.
Core modules to be included in the log file.
string
""
Only the modules specified (separated by ';') will be included in the log file. By default all modules are included.
See the Logging page for an overview.
A string specifying the maximum size in bytes for a log file (for example, 10 MB).
string
"100MB"
When the limit is hit, a new log is created in the same folder with the date and time appended to the end. The default limit is 100 MB. Values lower than 100 kB will use 100 kB as the value instead.
Adjust the maximum number of logfiles generated with MaxLogFileCount.
A string specifying the maximum file count of log files.
int
-1
When the limit is hit, a new log is created in the same folder with the date and time appended to the end and the oldest log file will be deleted.
The minimum supported value is 2. A value of 0 or a negative value indicates no limit on the count.
Adjust the maximum size of the logfiles generated with MaxLogFileSize.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| Location | A path to the directory that contains the schema files defining tables, views, and stored procedures. |
| BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Tables | This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC. |
| Views | Restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC. |
| Dataverse | Which Analytics dataverse to scan when discovering tables. |
| TypeDetectionScheme | Determines how the provider builds tables and columns from the buckets found in Couchbase. |
| InferNumSampleValues | The maximum number of values for every field to scan before determining its data type. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER. |
| InferSampleSize | The maximum number of documents to scan for the columns available in the bucket. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER. |
| InferSimilarityMetric | Specifies the similarity degree where different schemas will be considered to be the same flavor. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER. |
| FlexibleSchemas | Whether the provider allows queries to use columns that it has not discovered. |
| ExposeTTL | Specifies whether document TTL information should be exposed. |
| NumericStrings | Whether to allow string values to be treated as numbers. |
| IgnoreChildAggregates | Whether the provider exposes aggregate columns that are also available as child tables. Ignored if TableSupport is not set to Full. |
| TableSupport | How much effort the provider will put into discovering tables on the Couchbase server. |
| NewChildJoinsMode | Determines the kind of child table model the provider exposes. |
A path to the directory that contains the schema files defining tables, views, and stored procedures.
string
"%APPDATA%\\CData\\Couchbase Data Provider\\Schema"
The path to a directory which contains the schema files for the Cloud (.rsd files for tables and views, .rsb files for stored procedures). The folder location can be a relative path from the location of the executable. The Location property is only needed if you want to customize definitions (for example, change a column name, ignore a column, and so on) or extend the data model with new tables, views, or stored procedures.
If left unspecified, the default location is "%APPDATA%\\CData\\Couchbase Data Provider\\Schema" with %APPDATA% being set to the user's configuration directory:
This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing the schemas from databases can be expensive. Providing a list of schemas in the connection string improves the performance.
This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC.
string
""
Listing the tables from some databases can be expensive. Providing a list of tables in the connection string improves the performance of the Cloud.
This property can also be used as an alternative to automatically listing tables if you already know which ones you want to work with and there would otherwise be too many.
Specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.
Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the table in this property, as in the last example here, to avoid ambiguity between tables that exist in multiple catalogs or schemas.
Restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC.
string
""
Listing the views from some databases can be expensive. Providing a list of views in the connection string improves the performance of the Cloud.
This property can also be used as an alternative to automatically listing views if you already know which ones you want to work with and there would otherwise be too many.
Specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.
Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the table in this property, as in the last example here, to avoid ambiguity between tables that exist in multiple catalogs or schemas.
Which Analytics dataverse to scan when discovering tables.
string
""
This property is empty by default, which means that all dataverses will be scanned and table names will be generated as described in DataverseSeparator.
If you assign this property to a non-blank value, then the Cloud will scan only the corresponding dataverse (for example, setting this to "Default" scans the Default dataverse). Since only one dataverse is being scanned, table names will not be prefixed with the dataverse name. It is recommended to set this property to "Default" if you are coming from a previous version of the Cloud and need backwards compatibility.
If you are connecting to Couchbase 7.0 or later, this option will be treated as a compound name containing both a dataset and a scope.
For example, if you have previously created collections like these:
CREATE ANALYTICS SCOPE websites.exampledotcom
CREATE ANALYTICS COLLECTION websites.exampledotcom.traffic ON examplecom_traffic_bucket
CREATE ANALYTICS COLLECTION websites.exampledotcom.ads ON examplecom_ads_bucket
You would set this option to "websites.exampledotcom".
Determines how the provider builds tables and columns from the buckets found in Couchbase.
string
"DocType"
A comma-separated list of the following options:
| DocType | This discovers tables by checking each bucket and looking for different values of the "docType" field in the documents. For example, if the bucket beer-sample contains documents with "docType" = 'brewery' and "docType" = 'beer', this will generate three tables: beer-sample (containing all documents), beer-sample.brewery (containing just breweries) and beer-sample.beer (containing just beers). Like RowScan, this will scan a sample of the documents in each flavor and determine the data type for each field. RowScanDepth determines how many documents are scanned from each flavor. |
| DocType=fieldName | Like DocType, but this scans based off of a field called "fieldName" rather than "docType". "fieldName" must match the field name in Couchbase exactly, including case. |
| Infer | This uses the N1QL INFER statement to determine what tables and columns exist. This does more flexible flavor detection than DocType, but is only available for Couchbase Enterprise. |
| RowScan | This reads a sample of documents from a bucket, and heuristically determines the data type. RowScanDepth determines how many documents are scanned. It does not do any flavor detection. |
| None | This is like RowScan, but will always return columns that have string types instead of the detected type. |
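For instance, to detect flavors using a hypothetical custom field named "kind" instead of "docType", while scanning 200 documents per flavor, you might set:
TypeDetectionScheme=DocType=kind; RowScanDepth=200;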
The maximum number of values for every field to scan before determining its data type. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
string
"10"
The maximum number of values to scan from every field of the sampled documents before determining the field's data type. This property enables additional configuration of Automatic Schema Discovery when you are using the Couchbase INFER command; TypeDetectionScheme must also be set to Infer to use this property.
The maximum number of documents to scan for the columns available in the bucket. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
string
"100"
The maximum number of documents to scan for the columns available in the bucket. The Infer command will return column metadata by scanning a random sample of documents of the size specified here.
Setting a high value may decrease performance. Setting a low value may prevent the column and data type from being determined properly, especially when there is null data.
This property enables additional configuration of Automatic Schema Discovery when you are using the Couchbase INFER command; TypeDetectionScheme must also be set to Infer to use this property.
Specifies the similarity degree where different schemas will be considered to be the same flavor. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
string
"0.7"
This property specifies how similar two schemas must be to be considered the same flavor. As an example, consider the following rows:
Row 1: ColA, ColB, ColC, ColD
Row 2: ColA, ColB, ColE, ColF
Row 3: ColB, ColF, ColX, ColY
You can configure which rows are grouped into the same flavor by adjusting InferSimilarityMetric: the higher the value, the more columns two schemas must share before they are merged into a single flavor.
You can then query document flavors using dot notation, as in the following statement:
SELECT * FROM [Items.Technology]
This property enables additional configuration of Automatic Schema Discovery when you are using the Couchbase INFER command; TypeDetectionScheme must also be set to Infer to use this property.
Whether the provider allows queries to use columns that it has not discovered.
bool
false
By default the Cloud will only allow queries to use columns that it has found during the metadata discovery process (see TypeDetectionScheme for details). This means that the Cloud has full information for each column it presents, but it also means that fields set on only a few documents may not be exposed. Enabling this option means that the Cloud will allow you to write a query with any columns you want. If you use columns in a query that have not been discovered, the Cloud will assume that they are simple strings.
For example, the Cloud uses column type information to automatically convert dates for comparison, since Couchbase cannot natively compare dates directly.
If the Cloud detects that datecol is a date field, it can apply the STR_TO_MILLIS conversion automatically:
/* SQL */
WHERE datecol < '2020-06-12';
/* N1QL */
WHERE STR_TO_MILLIS(datecol) < STR_TO_MILLIS('2020-06-12');
When using undiscovered columns the Cloud cannot make this type of conversion for you. You must apply any needed conversions manually to ensure that operations behave the way you want them to.
Specifies whether document TTL information should be exposed.
bool
false
By default the Cloud does not expose TTL values or consider document TTLs when performing DML operations. Enabling this option exposes TTL values in two ways:
Note that enabling this feature requires that your server be version 6.5.1 or later and that CouchbaseService is set to N1QL. If either of these is not the case, the Cloud will not connect.
Whether to allow string values to be treated as numbers.
bool
true
By default this property is enabled, and the Cloud will treat string values as numeric if all of the values it samples during schema detection are numeric. This can cause type errors later on if the field contains non-numeric values in other documents. If this property is disabled, numeric strings are left as strings, although other string-based data types like timestamps will still be detected.
For example, the "code" field in the below bucket would be affected by this
setting. By default it would be considered an integer but if this property
were enabled it would be treated as a string.
{ "code": "123", "message": "Please restart your computer" }
{ "code": "456", "message": "Urgent update must be applied" }
Whether the provider exposes aggregate columns that are also available as child tables. Ignored if TableSupport is not set to Full.
bool
false
The Cloud will expose array fields within a bucket as a separate child table, such as in the Games_scores example described in Automatic Schema Discovery.
By default the Cloud will also expose these array fields as JSON aggregates on the base table.
For example, either of these queries would return information on game scores:
/* Return each score as an individual row */
SELECT value FROM Games_scores;
/* Return all scores for each Game as a JSON string */
SELECT scores FROM Games;
Since these aggregates are exposed on the base table, they will be generated even when the information they contain is redundant.
For example, when performing this join the scores aggregate on Games is populated as well as the value column on Games_scores.
Internally this causes two copies of the scores data to be transferred from Couchbase.
/* Retrieves score data twice, once for Games.scores and once for Games_scores.value */
SELECT * FROM Games INNER JOIN Games_scores ON Games.[Document.Id] = Games_scores.[Document.Id]
This option can be used to prevent the aggregate field from being exposed when the same information is also available from a child table.
In the games example, setting this option to true means that the Games table would only expose a primary key column.
The only way to retrieve information about scores would be the child table, so score data would only be read once from Couchbase.
/* Only exposes Document.Id, not scores */
SELECT * FROM Games;
/* Only retrieves score data once for Games_scores.value */
SELECT * FROM Games INNER JOIN Games_scores ON Games.[Document.Id] = Games_scores.[Document.Id]
Note that this option overrides FlattenArrays, since all data from flattened arrays is also available as child tables. If this option is set, then no array flattening is performed, even if FlattenArrays is set to a value over 0.
How much effort the provider will put into discovering tables on the Couchbase server.
string
"Full"
The available options are:
| Full | The Cloud will discover the available buckets, and look inside of each of those buckets for child tables. This provides the most flexible way to access nested data, but requires that each bucket on your server have primary indexes. |
| Basic | The Cloud will discover the available buckets, but will not look inside of them for child tables. This is recommended for cases where you either want to reduce the time that schema detection takes, or if your buckets do not have primary indexes. |
| None | The Cloud will only use the schema files found in the Location directory, and will not discover buckets on the server. This option should only be used after you have already created schema files. Using this option without schema files will result in no tables being available. |
Determines the kind of child table model the provider exposes.
string
"false"
By default the Cloud exposes a backwards-compatible data model that is not fully relational. In this mode non-child tables have a primary key called Document.Id, but child tables do not have a primary key. Instead they have a column called Document.Id which has the same value as the Document.Id of the parent row that contains the child row.
For example, a parent table invoices containing invoice records may look like this:
| Document.Id | customer |
| 1 | Adam |
| 2 | Beatrice |
| 3 | Charlie |
And its child invoices_lineitems containing line items may look like this:
| Document.Id | item |
| 1 | laptop |
| 1 | keyboard |
| 2 | stapler |
| 3 | whiteboard |
| 3 | markers |
This model has several limitations:
The NewChildJoins data model is fully relational. In this mode non-child tables have the same Document.Id as before, but child tables are extended to have both a foreign key and a primary key. The foreign key is called Document.Parent and it refers to the Document.Id of the row in the parent table that contains the child row. The primary key is called Document.Id and it contains a path which uniquely refers to that child row.
For example, the same tables as above would look like this in the NewChildJoins model. invoices would be the same:
| Document.Id | customer |
| 1 | Adam |
| 2 | Beatrice |
| 3 | Charlie |
However, invoices_lineitems would have both a primary and foreign key. The primary key contains the ID of the parent row as well as the child row's position in the parent.
| Document.Id | Document.Parent | item |
| 1$1 | 1 | laptop |
| 1$2 | 1 | keyboard |
| 2$1 | 2 | stapler |
| 3$1 | 3 | whiteboard |
| 3$2 | 3 | markers |
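Under this model, a join can use the new keys directly. The following is a sketch based on the example tables above:
SELECT invoices.customer, invoices_lineitems.item FROM invoices INNER JOIN invoices_lineitems ON invoices.[Document.Id] = invoices_lineitems.[Document.Parent]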
This fixes the limitations of the old data model:
This section provides a complete list of the Caching properties you can configure in the connection string for this provider.
| Property | Description |
| AutoCache | Automatically caches the results of SELECT queries into a cache database specified by either CacheLocation or both of CacheConnection and CacheProvider. |
| CacheLocation | Specifies the path to the cache when caching to a file. |
| CacheTolerance | The tolerance for stale data in the cache specified in seconds when using AutoCache. |
| Offline | Use offline mode to get the data from the cache instead of the live source. |
| CacheMetadata | This property determines whether or not to cache the table metadata to a file store. |
Automatically caches the results of SELECT queries into a cache database specified by either CacheLocation or both of CacheConnection and CacheProvider.
bool
false
When AutoCache = true, the Cloud automatically maintains a cache of your table's data in the database of your choice.
When AutoCache = true, the Cloud caches to a simple, file-based cache. You can configure its location or cache to a different database with the following properties:
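For instance, a minimal sketch of a file-based caching setup (the cache path is a placeholder):
AutoCache=true; CacheLocation=C:\cache\couchbase; CacheTolerance=600;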
Specifies the path to the cache when caching to a file.
string
"%APPDATA%\\CData\\Couchbase Data Provider"
The CacheLocation is a simple, file-based cache.
If left unspecified, the default location is "%APPDATA%\\CData\\Couchbase Data Provider" with %APPDATA% being set to the user's configuration directory:
The tolerance for stale data in the cache specified in seconds when using AutoCache.
int
600
The tolerance for stale data in the cache specified in seconds. This only applies when AutoCache is used. The Cloud checks with the data source for newer records after the tolerance interval has expired. Otherwise, it returns the data directly from the cache.
Use offline mode to get the data from the cache instead of the live source.
bool
false
When Offline = true, all queries execute against the cache as opposed to the live data source. In this mode, certain queries like INSERT, UPDATE, DELETE, and CACHE are not allowed.
This property determines whether or not to cache the table metadata to a file store.
bool
false
As you execute queries with this property set, table metadata in the Couchbase catalog is cached to the file store specified by CacheLocation if set, or to the user's home directory otherwise. A table's metadata is retrieved only once, when the table is queried for the first time.
The Cloud automatically persists metadata in memory for up to two hours when you first discover the metadata for a table or view, so CacheMetadata is generally not required. CacheMetadata becomes useful when metadata operations are expensive, such as when you are working with large amounts of metadata or when you have many short-lived connections.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| AllowJSONParameters | Allows raw JSON to be used in parameters when QueryPassthrough is enabled. |
| ChildSeparator | The character or characters used to denote child tables. |
| CreateTableRamQuota | The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax. |
| DataverseSeparator | The character or characters used to denote Analytics dataverses and scopes/collections. |
| FlattenArrays | The number of elements to expose as columns from nested arrays. Ignored if IgnoreChildAggregates is enabled. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| FlavorSeparator | The character or characters used to denote flavors. |
| GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
| InsertNullValues | Determines whether an INSERT should include fields that have NULL values. |
| MaxRows | Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This helps avoid performance issues at design time. |
| Other | These hidden properties are used only in specific use cases. |
| Pagesize | The maximum number of results to return per page from Couchbase. |
| PeriodsSeparator | The character or characters used to denote hierarchy. |
| PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
| QueryExecutionTimeout | This sets the server-side timeout for the query, which governs how long Couchbase will execute the query before returning a timeout error. |
| QueryPassthrough | This option passes the query to the Couchbase server as is. |
| Readonly | You can use this property to enforce read-only access to Couchbase from the provider. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| RTK | The runtime key used for licensing. |
| StrictComparison | Adjusts how precisely to translate filters on SQL input queries into Couchbase queries. This can be set to a comma-separated list of values, where each value can be one of: date, number, boolean, or string. |
| Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
| TransactionDurability | Specifies how a document must be stored for a transaction to succeed. |
| TransactionTimeout | This sets the amount of time a transaction may execute before it is timed out by Couchbase. |
| UpdateNullValues | Determines whether an UPDATE writes NULL values as NULL, or removes them. |
| UseCollectionsForDDL | Whether to assume that CREATE TABLE statements use collections instead of flavors. Only takes effect when connecting to Couchbase v7+ and GenerateSchemaFiles is set to OnCreate. |
| UserDefinedViews | A filepath pointing to the JSON configuration file containing your custom views. |
| UseTransactions | Specifies whether to use N1QL transactions when executing queries. |
| ValidateJSONParameters | Allows the provider to validate that string parameters are valid JSON before sending the query to Couchbase. |
Allows raw JSON to be used in parameters when QueryPassthrough is enabled.
bool
false
This option affects how string parameters are handled when using direct N1QL and SQL++ queries through QueryPassthrough. For example, consider this query:
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", @x)
By default, this option is disabled and string parameters are quoted and escaped into JSON strings. That means that any value can be safely used as a string parameter, but it also means that parameters cannot be used as raw JSON documents:
/*
* If @x is set to: test value " contains quote
*
* Result is a valid query
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", "test value \" contains quote")
/*
* If @x is set to: {"a": ["valid", "JSON", "value"]}
*
* Result contains string instead of JSON document
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", "{\"a\": [\"valid\", \"JSON\", \"value\"]}")
When this option is enabled, string parameters are assumed to be valid JSON. This means that raw JSON documents can be used as parameters, but it also means that all simple strings must be escaped:
/*
* If @x is set to: test value " contains quote
*
* Result is an invalid query
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", test value " contains quote)
/*
* If @x is set to: {"a": ["valid", "JSON", "value"]}
*
* Result is a JSON document
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", {"a": ["valid", "JSON", "value"]})
Please refer to ValidateJSONParameters for more details on how parameters are validated when this option is enabled.
The character or characters used to denote child tables.
string
"_"
When creating a child table for an array underneath a bucket, the Cloud generates the name of the child table by concatenating the name of the base table, this separator, and each path element.
For example, if this document were in the bucket "customers", then the child table for the addresses field would be called "customers_addresses".
{
"addresses": [
{"street": "123 Main St"},
{"street": "424 Pleasant Ct"},
{"street": "719 Blue Way"}
]
}
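The generated child table can then be queried under its concatenated name. A minimal sketch based on the example above:
SELECT [Document.Id], street FROM [customers_addresses]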
The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax.
string
"250"
The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax.
The character or characters used to denote Analytics dataverses and scopes/collections.
string
"."
When using the Analytics service, the Cloud will scan all datasets from all available dataverses. To avoid potential name conflicts, it includes the dataverse name and the dataset name in the generated table name.
By default this is set to ".", so that if there is a dataset called "users" on the "Default" dataverse, then the table generated will be "Default.users".
This property is also used when generating table names for collections (on both N1QL and Analytics) on Couchbase 7 and later. For example, a bucket called "users" that has two collections called "active" and "inactive" under the "status" scope would be detected as the tables "users.status.active" and "users.status.inactive".
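Tables generated this way are queried by their dotted names like any other table; for example (a brief sketch using the names above):
SELECT * FROM [users.status.active]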
The number of elements to expose as columns from nested arrays. Ignored if IgnoreChildAggregates is enabled.
string
"0"
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.
Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.
For example, you can return an arbitrary number of elements from an array of strings stored in a field called languages:
["FLOW-MATIC","LISP","COBOL"]
When FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| languages.0 | FLOW-MATIC |
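The flattened elements can then be referenced as ordinary columns by their indexed names. A minimal sketch (the bucket name is hypothetical):
SELECT [languages.0] FROM [myBucket]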
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
bool
true
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. The property name is concatenated onto the object name with the PeriodsSeparator character (a period by default) to generate the column name.
For example, you can flatten the nested objects below at connection time:
address : {
"street" : "123 Main St.",
"city" : "Nowhere",
"state" : "NY",
"zip" : "12345"
}
When FlattenObjects is set to true, the preceding object is flattened into the following table:
| Column Name | Column Value |
| address.street | 123 Main St. |
| address.city | Nowhere |
| address.state | NY |
| address.zip | 12345 |
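Flattened properties can be selected and filtered like any other column. A brief sketch using the address example above (the bucket name is hypothetical):
SELECT [address.street], [address.city] FROM [myBucket] WHERE [address.zip] = '12345'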
The character or characters used to denote flavors.
string
"."
When the Cloud detects a flavored table, using either the DocType or Infer TypeDetectionScheme, it names flavored tables by concatenating the underlying bucket name, this separator, and the value of the bucket's primary flavor.
For example, if the Cloud detects the flavor "docType = 'beer'" on the "beer-sample" bucket, then it will generate the table "beer-sample.beer" which contains only documents in "beer-sample" which have the "beer" doctype.
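The flavored table is then queried by its generated name. A minimal sketch based on the beer-sample example above:
/* Returns only the documents in beer-sample that have the beer doctype */
SELECT * FROM [beer-sample.beer]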
Indicates the user preference as to when schemas should be generated and saved.
string
"Never"
GenerateSchemaFiles enables you to save the table definitions identified by Automatic Schema Discovery. This property outputs schemas to .rsd files in the path specified by Location.
Available settings are the following:
When you set GenerateSchemaFiles to OnUse, the Cloud generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.
When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.
Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.
If your data structures are volatile, consider setting GenerateSchemaFiles to Never and using dynamic schemas. See Automatic Schema Discovery for more information about dynamic schemas.
Schema files have a simple format that makes them easy to modify. See Custom Schema Definitions for more information.
Determines whether an INSERT should include fields that have NULL values.
bool
true
By default the Cloud uses NULL values provided in an INSERT statement and inserts them as JSON null values.
If this option is disabled, SQL NULL values are ignored during an INSERT. In the case of array columns (FlattenArrays must be set to retrieve these), this means that array indices are shifted over to compensate for the values that have been removed.
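For illustration, consider the following INSERT (a sketch; the bucket and columns are hypothetical):
INSERT INTO [myBucket] ([Document.Id], name, nickname) VALUES ('1', 'Adam', NULL)
With InsertNullValues enabled (the default), the stored document contains "nickname": null. With it disabled, the nickname field is omitted from the document entirely.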
Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This helps avoid performance issues at design time.
int
-1
Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This helps avoid performance issues at design time.
These hidden properties are used only in specific use cases.
string
""
The properties listed below are available for specific use cases. Normal driver use cases and functionality should not require these properties.
Specify multiple properties in a semicolon-separated list.
| DefaultColumnSize | Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000. |
| ConvertDateTimeToGMT | Determines whether to convert date-time values to GMT, instead of the local time of the machine. |
| RecordToFile=filename | Records the underlying socket data transfer to the specified file. |
The maximum number of results to return per page from Couchbase.
int
1000
The Pagesize property affects the maximum number of results to return per page from Couchbase. Setting a higher value may result in better performance, at the cost of additional memory allocated per page.
The character or characters used to denote hierarchy.
string
"."
When flattening objects and arrays, the Cloud uses this value to separate the levels of the hierarchy. For example, if your Couchbase server returns a document like the following (and FlattenObjects is enabled), the Cloud returns the columns "geo.latitude" and "geo.longitude" when PeriodsSeparator is set to ".":
{
"geo": {
"latitude": 35.9132,
"longitude": -79.0558
}
}
This property indicates whether or not to include pseudo columns as columns to the table.
string
""
This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; for example, "*=*".
This sets the server-side timeout for the query, which governs how long Couchbase will execute the query before returning a timeout error.
string
"-1"
The default is -1, which disables the timeout. When enabling the timeout, the value must include both an amount and a unit, which can be one of: "ns" (nanoseconds), "us" (microseconds), "ms" (milliseconds), "s" (seconds), "m" (minutes), or "h" (hours). For example, "5m" and "300s" both set timeouts of 5 minutes.
There is also a server-side "index scan timeout", which overrides this one if it is lower. By default the index scan timeout is 2 minutes, but it can be changed by setting the "indexer.settings.scan_timeout" property on your Couchbase server.
This option passes the query to the Couchbase server as is.
bool
false
When this is set, queries are passed through directly to Couchbase.
You can use this property to enforce read-only access to Couchbase from the provider.
bool
false
If this property is set to true, the Cloud will allow only SELECT queries. INSERT, UPDATE, DELETE, and stored procedure queries will cause an error to be thrown.
The maximum number of rows to scan to look for the columns available in a table.
int
100
The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
The runtime key used for licensing.
string
""
The RTK property may be used to license a build.
Adjusts how precisely to translate filters on SQL input queries into Couchbase queries. This can be set to a comma-separated list of values, where each value can be one of: date, number, boolean, or string.
string
""
This option is empty by default, which means that WHERE clauses sent to Couchbase will include extra functions that convert values so that more comparisons work.
For example, leaving the "string" setting out of the list causes arrays to be converted, so that they can be compared with strings:
SELECT * FROM Bucket WHERE MyArrayColumn = '[1,2,3]'
If set to a value, queries including the relevant types of comparisons will be translated literally. This makes better use of Couchbase's indexes, but means that the types of comparisons must be in a format Couchbase can compare directly.
For example, if "date" is provided as one of the options, then dates must match
the format they are stored as in Couchbase since they will not be converted
automatically:
SELECT * FROM Bucket WHERE MyDateColumn = '2018-10-31T10:00:00';
The value in seconds until the timeout error is thrown, canceling the operation.
int
60
If Timeout = 0, operations do not time out. The operations run until they complete successfully or until they encounter an error condition.
If Timeout expires and the operation is not yet complete, the Cloud throws an exception.
Specifies how a document must be stored for a transaction to succeed.
string
"Majority"
If UseTransactions is enabled, then this option can be set to determine when Couchbase will allow writes in transactions to commit. The Couchbase documentation on Durability and Transactions contains the full details; below is a high-level summary.
This option controls requirements on both quorum and persistence. The quorum may either require no bucket replicas to receive the document (None), or a majority of replicas to have the document (all other settings). The persistence level requires either that the document be stored in replica memory (Majority) or on replica disk (MajorityAndPersistActive, PersistToMajority).
None is only useful if the bucket you are using is not configured for replicas. The other options can be used depending on the required performance and durability tradeoffs. Persisting to more replicas is slower but provides greater resilience against a node crashing.
This sets the amount of time a transaction may execute before it is timed out by Couchbase.
string
""
If transactions are enabled and this property is left empty, the Cloud uses the server's default transaction timeout setting.
When enabling the timeout, the value must include both an amount and a unit, which can be one of: "ns" (nanoseconds), "us" (microseconds), "ms" (milliseconds), "s" (seconds), "m" (minutes) or "h" (hours). For example, "5m" and "300s" both set timeouts of 5 minutes.
There are also cluster-level and node-level transaction timeouts which override this one if they are smaller. For example, if the node-level timeout is set to a minute then setting this option to "5m" will have no effect.
Determines whether an UPDATE writes NULL values as NULL, or removes them.
bool
true
By default the Cloud will use NULL values provided in an UPDATE statement and set the field in Couchbase to NULL.
If this option is disabled, SQL NULL values in an UPDATE cause the Cloud to mark the field as MISSING. This removes the field from the object containing it; if the field is contained in an array (per FlattenArrays), that element is set to NULL instead.
This option should be used with care as the Cloud may not detect that the field exists if it is removed from enough documents within a bucket.
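For illustration, consider the following UPDATE (a sketch; the bucket and column are hypothetical):
UPDATE [myBucket] SET nickname = NULL WHERE [Document.Id] = '1'
With UpdateNullValues enabled (the default), nickname is set to JSON null. With it disabled, the nickname field is marked MISSING and removed from the document.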
Whether to assume that CREATE TABLE statements use collections instead of flavors. Only takes effect when connecting to Couchbase v7+ and GenerateSchemaFiles is set to OnCreate.
bool
false
Normally the Cloud will assume that compound table names referenced in a CREATE TABLE statement are flavors.
For compatibility, this is still the default with Couchbase v7+ even though flavors are not recommended there.
CREATE TABLE [myBucket.myFlavor]( [Document.Id] VARCHAR PRIMARY KEY, docType VARCHAR, sometext VARCHAR, somenum INT )
Enable this option to assume that CREATE TABLE statements refer to collections instead.
In that scenario, this query will create the bucket and scope if necessary, before creating the collection and setting a primary index:
CREATE TABLE [myBucket.myScope.myCollection]( [Document.Id] VARCHAR PRIMARY KEY, sometext VARCHAR, somenum INT )
A filepath pointing to the JSON configuration file containing your custom views.
string
""
User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The Cloud automatically detects the views specified in this file.
You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the Cloud.
The User Defined View configuration file is formatted as follows: each root element defines the name of a view, and each view element contains a query element with the SQL query for that view. For example:
{
"MyView": {
"query": "SELECT * FROM Customer WHERE MyColumn = 'value'"
},
"MyView2": {
"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
}
}
Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:
"UserDefinedViews", "C:\\Users\\yourusername\\Desktop\\tmp\\UserDefinedViews.json"
Specifies whether to use N1QL transactions when executing queries.
string
"Never"
By default the Cloud does not use transactions for compatibility with older versions of Couchbase. All of the other options require a connection to Couchbase 7 or above. The N1QL service must also be enabled using CouchbaseService.
Setting this to Always means that all queries will use transactions. An explicit transaction may be created on the connection and queries will use that transaction while it is active. If there is no explicit transaction then queries will use implicit transactions instead.
Setting this to Explicit enables support for explicit transactions only. Explicit transactions may be created but if one is not currently active, then statements will not create an implicit transaction.
Allows the provider to validate that string parameters are valid JSON before sending the query to Couchbase.
bool
true
When AllowJSONParameters and QueryPassthrough are enabled, the query parameters given to the Cloud will be treated as raw JSON documents instead of arbitrary string values. This option controls what happens when invalid JSON is given to the Cloud in this mode.
When this option is enabled, the Cloud will check that all string parameters can be parsed as valid JSON. If any cannot be, an error will be raised and the query will not be run.
When this option is disabled, no check is performed and all string parameter values are substituted into the query directly. This makes executing prepared statements faster, but less safe, since invalid N1QL or SQL++ may be sent to Couchbase.