CData Cloud offers access to Elasticsearch across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to Elasticsearch through CData Cloud.
CData Cloud allows you to standardize and configure connections to Elasticsearch as though it were any other OData endpoint or standard SQL Server.
This page provides a guide to Establishing a Connection to Elasticsearch in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Elasticsearch and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Elasticsearch through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Elasticsearch by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
Set the following to connect to data:
Server=01.02.03.04
OR
Server=01.01.01.01:1234,02.02.02.02:5678
The Cloud uses X-Pack Security for authentication and TLS/SSL encryption. You can prefix the server value with "https://" to connect using TLS/SSL.
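For example, to connect to a single node over TLS/SSL (the address is illustrative):
Server=https://01.02.03.04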
Set the following to connect to data:
The Cloud uses X-Pack Security for authentication and TLS/SSL encryption.
Note: Requests are signed using AWS Signature Version 4.
Set the AuthScheme to Basic, and set User and Password properties and/or use PKI (public key infrastructure) to authenticate. Once the Cloud is connected, X-Pack performs user authentication and grants role permissions based on the realms you have configured.
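As an illustrative sketch, a minimal Basic authentication configuration might set the following (the server address and credentials are placeholder values):
AuthScheme=Basic
User=admin
Password=changeme
Server=https://01.02.03.04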
To use PKI, set the SSLClientCert, SSLClientCertType, SSLClientCertSubject, and SSLClientCertPassword properties.
Note: TLS/SSL and client authentication must be enabled on X-Pack to use PKI.
To enable TLS/SSL in the Cloud, set UseSSL to true.
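Putting the PKI-related properties together, a connection might be configured along these lines (the certificate path, type, and password are illustrative assumptions; consult the connection properties reference for the certificate types your deployment supports):
UseSSL=true
Server=https://01.02.03.04
SSLClientCert=C:\certs\client.pfx
SSLClientCertType=PFXFILE
SSLClientCertPassword=changeme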
To authenticate using account root credentials, set these parameters:
Note: Amazon discourages the use of this authentication scheme for anything but simple tests. The account root credentials carry the full permissions of the AWS account, making this the least secure authentication method.
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
To authenticate using temporary credentials, specify the following:
The Cloud can now request resources using the same permissions provided by long-term credentials (such as IAM user credentials) for the lifespan of the temporary credentials.
To authenticate using both temporary credentials and an IAM role, set all the parameters described above, and specify these additional parameters:
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
Set AuthScheme to AwsEC2Roles.
If you are using the Cloud from an EC2 Instance and have an IAM Role assigned to the instance, you can use the IAM Role to authenticate. Since the Cloud automatically obtains your IAM Role credentials and authenticates with them, it is not necessary to specify AWSAccessKey and AWSSecretKey.
If you are also using an IAM role to authenticate, you must additionally specify the following:
The Elasticsearch Cloud now supports IMDSv2. Unlike IMDSv1, the new version requires an authentication token. Endpoints and responses are the same in both versions.
In IMDSv2, the Elasticsearch Cloud first attempts to retrieve the IMDSv2 metadata token and then uses it to call AWS metadata endpoints. If it is unable to retrieve the token, the Cloud reverts to IMDSv1.
Note that this method of authentication is only possible with OpenSearch Service, and not with Elasticsearch.
To authenticate as an AWS role, set these properties:
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
Note: In some circumstances it might be preferable to use an IAM role for authentication, rather than the direct security credentials of an AWS root user. If you are specifying the AWSAccessKey and AWSSecretKey of an AWS root user, you cannot use roles.
Please see Using Kerberos for details on how to authenticate with Kerberos.
To authenticate using APIKey, set the following:
You can use the following properties to gain greater control over Elasticsearch API features and the strategies the Cloud uses to surface them:
Multiple indices can be queried by executing a query using one of the following formats:
Query all indices via the _all view: SELECT * FROM [_all]
Query a list of indices: SELECT * FROM [index1,index2,index3]
Query indices matching a wildcard pattern: SELECT * FROM [index*]
Note: Index lists can contain wildcards, and indices can be excluded by prefixing an index with '-'. For example: SELECT * FROM [index*,-index3]
If you are using the Scroll API, set ScrollDuration instead.
Elasticsearch is a document-oriented database that provides high performance searching, flexibility, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92. In this section we will show various schemes that the Cloud offers to bridge the gap with relational SQL and an Elasticsearch database.
The Cloud models Elasticsearch objects into relational tables and translates SQL queries into Elasticsearch queries to get the requested data. See Schema Mapping for more details on how Elasticsearch objects are mapped to tables to generate schemas. See Query Mapping for more details on how various Elasticsearch operations are represented as SQL.
The Automatic Schema Discovery scheme automatically finds the data types by retrieving the mapping for the Elasticsearch type. You can use RowScanDepth, FlattenArrays, and FlattenObjects to control the relational representation of the collections in Elasticsearch.
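For example, schema discovery can be tuned by combining these properties (the values shown are illustrative, not recommendations):
RowScanDepth=200
FlattenObjects=true
FlattenArrays=3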
The CData Cloud models the Elasticsearch REST APIs as relational tables and stored procedures that can be accessed with standard SQL. This enables access from standards-based tools.
The table definitions are dynamically retrieved. When you connect, the Cloud connects to Elasticsearch and retrieves the schemas, list of tables, and the metadata for the tables by querying the Elasticsearch REST server. Any changes to the remote data are immediately reflected in your queries.
The following table maps Elasticsearch concepts to relational ones:
Elasticsearch Versions 6 and Above:
| Elasticsearch Concept | SQL Concept |
| Index | Table |
| Alias | View |
| Document | Row (each document is a row and the document's JSON structure is represented as columns) |
| Field | Column |
Note: Starting in Elasticsearch 6, indices are limited to a single type. Therefore the type is no longer treated as a table, since an index and type have a one-to-one relation. Types are hidden and used internally where necessary to issue the proper request to Elasticsearch.
Elasticsearch Versions Prior to Version 6:
| Elasticsearch Concept | SQL Concept |
| Index | Schema |
| Type | Table |
| Alias | View |
| Document | Row (each document is a row and the document's JSON structure is represented as columns) |
| Field | Column |
Elasticsearch provides the ability to establish parent-child relationships, which map closely to SQL JOIN functionality. The Cloud models these parent-child relationships in a way that enables JOIN queries.
Elasticsearch Versions 6 and Above:
In version 6 and above of Elasticsearch, relationships are established by using the join datatype. Included in this functionality is the ability to define multiple children for a single parent and to create multiple levels of relations.
The Cloud supports all of these relationships and will generate a separate table for each relation in Elasticsearch. The table name will be in the form: [index]_[relation].
All child tables will have an additional column containing the parent table id. The column name will be in the form: _[parent_table]_id. This column is a foreign key to the _id column of the parent table and can be used to perform SQL JOIN queries.
When querying these tables individually, filtering logic is pushed to the server to improve performance by only returning the data relevant to the table selected.
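As a sketch, suppose an index named company defines a join relation with a parent relation branch and a child relation employee (hypothetical names). The Cloud would expose [company_branch] and [company_employee] tables, and a query along these lines could JOIN them through the generated foreign key column:
SELECT
[company_branch].[_id],
[company_employee].[_id]
FROM
[company_branch]
JOIN
[company_employee]
ON
[company_branch].[_id] = [company_employee].[_company_branch_id]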
Elasticsearch Versions Prior to Version 6:
In versions prior to 6, a relationship is established between two types via a _parent field. This creates a single parent-child relationship.
The tables identified in this parent-child relationship do not change (they are still based on the Elasticsearch type). However the child table will have an additional column containing the parent id. The column name will be in the form: _[parent_table]_id. This column is a foreign key to the _id column of the parent table and can be used to perform SQL JOIN queries.
Below is the raw data used throughout this chapter. Following is the mapping for the "insured" table (index):
{
  "insured": {
    "mappings": {
      "properties": {
        "name": { "type": "string" },
        "address": {
          "properties": {
            "street": { "type": "string" },
            "city": { "type": "string" },
            "state": { "type": "string" }
          }
        },
        "insured_ages": { "type": "integer" },
        "vehicles": {
          "type": "nested",
          "properties": {
            "year": { "type": "integer" },
            "make": { "type": "string" },
            "model": { "type": "string" },
            "body_style": { "type": "string" }
          }
        }
      }
    }
  }
}
The following is the sample data set for the "insured" table (index):
{
  "hits": {
    "total": 2,
    "max_score": 1,
    "hits": [
      {
        "_index": "insured",
        "_type": "_doc",
        "_id": "1",
        "_score": 1,
        "_source": {
          "name": "John Smith",
          "address": {
            "street": "Main Street",
            "city": "Chapel Hill",
            "state": "NC"
          },
          "insured_ages": [ 17, 43, 45 ],
          "vehicles": [
            {
              "year": 2015,
              "make": "Dodge",
              "model": "RAM 1500",
              "body_style": "TK"
            },
            {
              "year": 2015,
              "make": "Suzuki",
              "model": "V-Strom 650 XT",
              "body_style": "MC"
            },
            {
              "year": 1992,
              "make": "Harley Davidson",
              "model": "FXR",
              "body_style": "MC"
            }
          ]
        }
      },
      {
        "_index": "insured",
        "_type": "_doc",
        "_id": "2",
        "_score": 1,
        "_source": {
          "name": "Joseph Newman",
          "address": {
            "street": "Oak Street",
            "city": "Raleigh",
            "state": "NC"
          },
          "insured_ages": [ 23, 25 ],
          "vehicles": [
            {
              "year": 2010,
              "make": "Honda",
              "model": "Accord",
              "body_style": "SD"
            },
            {
              "year": 2008,
              "make": "Honda",
              "model": "Civic",
              "body_style": "CP"
            }
          ]
        }
      }
    ]
  }
}
The Cloud automatically infers a relational schema by retrieving the mapping of the Elasticsearch type. The columns and data types are generated from the retrieved mapping.
Any field within Elasticsearch can be an array of values, but this is not explicitly defined within the mapping. To account for this, the Cloud will query the data to detect if any fields contain arrays. The number of Elasticsearch documents retrieved during this array scanning is based on the RowScanDepth property.
Elasticsearch nested types are special types that denote an array of objects and thus will always be treated as such when generating the metadata.
The columns identified during the discovery process depend on the FlattenArrays and FlattenObjects properties.
To provide an example of how these options work, consider the following mapping (where 'insured' is the name of the table):
{
  "insured": {
    "properties": {
      "name": { "type": "string" },
      "address": {
        "properties": {
          "street": { "type": "string" },
          "city": { "type": "string" },
          "state": { "type": "string" }
        }
      },
      "insured_ages": { "type": "integer" },
      "vehicles": {
        "type": "nested",
        "properties": {
          "year": { "type": "integer" },
          "make": { "type": "string" },
          "model": { "type": "string" },
          "body_style": { "type": "string" }
        }
      }
    }
  }
}
Also consider the following example data for the above mapping:
{
  "_source": {
    "name": "John Smith",
    "address": {
      "street": "Main Street",
      "city": "Chapel Hill",
      "state": "NC"
    },
    "insured_ages": [ 17, 43, 45 ],
    "vehicles": [
      {
        "year": 2015,
        "make": "Dodge",
        "model": "RAM 1500",
        "body_style": "TK"
      },
      {
        "year": 2015,
        "make": "Suzuki",
        "model": "V-Strom 650 XT",
        "body_style": "MC"
      },
      {
        "year": 2012,
        "make": "Honda",
        "model": "Accord",
        "body_style": "4D"
      }
    ]
  }
}
If FlattenObjects is set, all nested objects will be flattened into a series of columns. The above example will be represented by the following columns:
| Column Name | Data Type | Example Value |
| name | String | John Smith |
| address.street | String | Main Street |
| address.city | String | Chapel Hill |
| address.state | String | NC |
| insured_ages | String | [ 17, 43, 45 ] |
| vehicles | String | [ { "year": "2015", "make": "Dodge", ... }, { "year": "2015", "make": "Suzuki", ... }, { "year": "2012", "make": "Honda", ... } ] |
If FlattenObjects is not set, then the address.street, address.city, and address.state columns will not be broken apart. The address column of type string will instead represent the entire object. Its value would be the following:
{ "street": "Main Street", "city": "Chapel Hill", "state": "NC" }
See JSON Functions for more details on working with JSON aggregates.
The FlattenArrays property can be used to flatten array values into columns of their own. This is only recommended for arrays that are expected to be short. It is best to leave unbounded arrays as they are and piece out the data for them as needed using JSON Functions.
Note: Only the top-most array will be flattened. Any subarrays will be represented as the entire array.
The FlattenArrays property can be set to 3 to represent the arrays in the example above as follows (this example is with FlattenObjects not set):
| Column Name | Data Type | Example Value |
| insured_ages | String | [ 17, 43, 45 ] |
| insured_ages.0 | Integer | 17 |
| insured_ages.1 | Integer | 43 |
| insured_ages.2 | Integer | 45 |
| vehicles | String | [ { "year": "2015", "make": "Dodge", ... }, { "year": "2015", "make": "Suzuki", ... }, { "year": "2012", "make": "Honda", ... } ] |
| vehicles.0 | String | { "year": "2015", "make": "Dodge", "model": "RAM 1500", "body_style": "TK" } |
| vehicles.1 | String | { "year": "2015", "make": "Suzuki", "model": "V-Strom 650 XT", "body_style": "MC" } |
| vehicles.2 | String | { "year": "2012", "make": "Honda", "model": "Accord", "body_style": "4D" } |
If FlattenObjects is set along with FlattenArrays (set to 1 for brevity), the vehicles field will be represented as follows:
| Column Name | Data Type | Example Value |
| vehicles | String | [ { "year": "2015", "make": "Dodge", ... }, { "year": "2015", "make": "Suzuki", ... }, { "year": "2012", "make": "Honda", ... } ] |
| vehicles.0.year | String | 2015 |
| vehicles.0.make | String | Dodge |
| vehicles.0.model | String | RAM 1500 |
| vehicles.0.body_style | String | TK |
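For example, with FlattenObjects set and FlattenArrays set to 1, a query against the sample data can reference the flattened vehicle columns directly:
SELECT
[name],
[vehicles.0.year],
[vehicles.0.make],
[vehicles.0.model]
FROM
[insured]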
The Cloud offers three basic configurations to model documents as tables, described in the following sections. The Cloud will parse the Elasticsearch document and identify the nested documents.
For users who need access to the entirety of their nested Elasticsearch data, flattening the data into a single table is the best option. In this mode, the Cloud uses streaming and parses the Elasticsearch data only once per query.
With DataModel set to "FlattenedDocuments", nested documents will behave as separate tables and act in the same manner as a SQL JOIN. Any nested documents, at the same height (e.g. sibling documents), will be treated as a SQL CROSS JOIN.
Below is a sample query and the results, based on the sample document in Raw Data. This implicitly JOINs the insured document with the nested vehicles document.
The following query drills into the nested documents in each insured document.
SELECT
[_id],
[name],
[address.street] AS address_street,
[address.city] AS address_city,
[address.state] AS address_state,
[insured_ages],
[year],
[make],
[model],
[body_style],
[_insured_id],
[_vehicles_c_id]
FROM
[insured]
| _id | name | address_street | address_city | address_state | insured_ages | year | make | model | body_style | _insured_id | _vehicles_c_id |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | 2015 | Dodge | RAM 1500 | TK | 1 | 1 |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | 2015 | Suzuki | V-Strom 650 XT | MC | 1 | 2 |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | 1992 | Harley Davidson | FXR | MC | 1 | 3 |
| 2 | Joseph Newman | Oak Street | Raleigh | NC | [ 23, 25 ] | 2010 | Honda | Accord | SD | 2 | 4 |
| 2 | Joseph Newman | Oak Street | Raleigh | NC | [ 23, 25 ] | 2008 | Honda | Civic | CP | 2 | 5 |
Using a top-level document view of the Elasticsearch data provides ready access to top-level elements. The Cloud returns nested elements in aggregate, as single columns.
One aspect to consider is performance. You forego the time and resources to process and parse nested elements -- the Cloud parses the returned data once, using streaming to read the JSON data. Another consideration is your need to access any data stored in nested parent elements, and the ability of your tool or application to process JSON.
With DataModel set to "Document" (the default), the Cloud scans only the top-level object by default. The top-level object elements are available as columns due to the default object flattening. Nested objects are returned as aggregated JSON.
Below is a sample query and the results, based on the sample document in Raw Data. The query results in a single "insured" table.
The following query pulls the top-level object elements and the vehicles array into the results.
SELECT
[_id],
[name],
[address.street] AS address_street,
[address.city] AS address_city,
[address.state] AS address_state,
[insured_ages],
[vehicles]
FROM
[insured]
With a document view of the data, the address object is flattened into 3 columns (when FlattenObjects is set to true) and the _id, name, insured_ages, and vehicles elements are returned as individual columns, resulting in a table with 7 columns.
| _id | name | address_street | address_city | address_state | insured_ages | vehicles |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | [{"year":2015,"make":"Dodge","model":"RAM 1500","body_style":"TK"},{"year":2015,"make":"Suzuki","model":"V-Strom 650 XT","body_style":"MC"},{"year":1992,"make":"Harley Davidson","model":"FXR","body_style":"MC"}] |
| 2 | Joseph Newman | Oak Street | Raleigh | NC | [ 23, 25 ] | [{"year":2010,"make":"Honda","model":"Accord","body_style":"SD"},{"year":2008,"make":"Honda","model":"Civic","body_style":"CP"}] |
The CData Cloud can be configured to create a relational model of the data, treating nested documents as individual tables containing a primary key and a foreign key that links to the parent document. This is particularly useful if you need to work with your Elasticsearch data in existing BI, reporting, and ETL tools that expect a relational data model.
With DataModel set to "Relational", any JOINs are controlled by the query. Any time you perform a JOIN query, the Elasticsearch index will be queried once for each table (nested document) included in the query.
Below is a sample query against the sample document in Raw Data, using a relational model.
The following query explicitly JOINs the insured and vehicles tables.
SELECT
[insured].[_id],
[insured].[name],
[insured].[address.street] AS address_street,
[insured].[address.city] AS address_city,
[insured].[address.state] AS address_state,
[insured].[insured_ages],
[vehicles].[year],
[vehicles].[make],
[vehicles].[model],
[vehicles].[body_style],
[vehicles].[_insured_id],
[vehicles].[_c_id]
FROM
[insured]
JOIN
[vehicles]
ON
[insured].[_id] = [vehicles].[_insured_id]
In the example query, each vehicle document is JOINed to its parent insured object to produce a table with 5 rows.
| _id | name | address_street | address_city | address_state | insured_ages | year | make | model | body_style | _insured_id | _vehicles_c_id |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | 2015 | Dodge | RAM 1500 | TK | 1 | 1 |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | 2015 | Suzuki | V-Strom 650 XT | MC | 1 | 2 |
| 1 | John Smith | Main Street | Chapel Hill | NC | [ 17, 43, 45 ] | 1992 | Harley Davidson | FXR | MC | 1 | 3 |
| 2 | Joseph Newman | Oak Street | Raleigh | NC | [ 23, 25 ] | 2010 | Honda | Accord | SD | 2 | 4 |
| 2 | Joseph Newman | Oak Street | Raleigh | NC | [ 23, 25 ] | 2008 | Honda | Civic | CP | 2 | 5 |
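Because JOINs in the relational model are controlled by the query, filters can be applied to either table. For example, this variation on the query above returns only the motorcycle rows from the sample data:
SELECT
[insured].[name],
[vehicles].[make],
[vehicles].[model]
FROM
[insured]
JOIN
[vehicles]
ON
[insured].[_id] = [vehicles].[_insured_id]
WHERE
[vehicles].[body_style] = 'MC'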
The Cloud can return JSON structures as column values. The Cloud enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
| Column Name | Example Value |
| Grade | A |
| Score | 2 |
SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
| Column Name | Example Value |
| NumberOfGrades | 5 |
SELECT Name, JSON_SUM(grades,'[x].score') AS TotalScore FROM Students;
| Column Name | Example Value |
| TotalScore | 41 |
SELECT Name, JSON_MIN(grades,'[x].score') AS LowestScore FROM Students;
| Column Name | Example Value |
| LowestScore | 2 |
SELECT Name, JSON_MAX(grades,'[x].score') AS HighestScore FROM Students;
| Column Name | Example Value |
| HighestScore | 14 |
The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example:
SELECT DOCUMENT(*) FROM Employee;
The query above will return the entire document as shown.
{
  "_index": "megacorp",
  "_type": "employee",
  "_id": "2",
  "_score": 1,
  "_source": {
    "first_name": "Jane",
    "last_name": "Smith",
    "age": 32,
    "about": "I like to collect rock albums",
    "interests": [
      "music"
    ]
  }
}
This section describes how SQL statements are interpreted and translated into Elasticsearch queries. Examples are also provided to explain the behavior of various queries.
To demonstrate this point, consider an analyzed field in Elasticsearch created with a value of 'Bike'. After being analyzed, the value is stored in the inverted index (using the default analyzer) as 'bike'. A non-analyzed field, on the other hand, does not analyze the search value and thus stores it as 'Bike'.
When performing searches, some Elasticsearch query types run the search value through an analyzer (which makes the search case insensitive) and some do not (making the search case sensitive). Additionally, the default analyzer breaks up fields containing multiple words into separate terms. When performing searches on these fields, Elasticsearch may return records that contain the same words but in a different order. For example, a search performed using a value of 'blue sky' may return a record containing 'sky blue'.
To work around these case-sensitivity and ordering issues, the CData Cloud will identify the column as analyzed or non-analyzed and will issue the appropriate Elasticsearch query based on the specified operator (such as =) and the search value.
Analyzed Columns
Analyzed columns are stored after being run through an analyzer. As a result, search values specified for these columns are run through an analyzer on the Elasticsearch server prior to the search. This makes searches case-insensitive (provided the analyzer used handles casing).
| WHERE Clause Examples | Elasticsearch Query Type |
| WHERE analyzed_column='value' | Query String Query |
| WHERE analyzed_column='value with spaces' | Match Phrase Query |
Non-Analyzed Columns
Non-analyzed columns are stored without being run through an analyzer. As a result, search values specified for these columns are case sensitive. If the search value is a single word, the Cloud checks the filter with the original casing specified along with three common forms: uppercase, lowercase, and capitalized. If the search value contains multiple words, the search value is sent as-is and thus is case sensitive.
| WHERE Clause Examples | Elasticsearch Query Type |
| WHERE nonanalyzed_column='myValue' | Query String Query: Four cases are checked - myValue OR MYVALUE OR myvalue OR Myvalue |
| WHERE nonanalyzed_column='value with spaces' | Wildcard Query |
| WHERE Clause Examples | Behavior |
| WHERE column IN ('value') | Treated as: column='value' |
| WHERE column NOT IN ('value') | Treated as: column!='value' |
| WHERE column IN ('value1', 'value2') | Treated as: column='value1' OR column='value2' |
| WHERE column NOT IN ('value1', 'value2') | Treated as: column!='value1' AND column!='value2' |
| WHERE Clause Examples | Behavior |
| WHERE column LIKE 'value' | Treated as: column='value' |
| WHERE column NOT LIKE 'value' | Treated as: column!='value' |
| WHERE analyzed_column LIKE 'v_lu%' | Query String Query with wildcards |
| WHERE nonanalyzed_column LIKE 'v_lu%' | Wildcard Query with wildcards |
JSON objects and arrays of objects are treated as raw strings, and all filtering is performed by the Cloud. Therefore an equals operation must match the entire JSON aggregate to return a result, unless a CONTAINS or LIKE operation is used.
If JSON objects are flattened into individual columns (via FlattenObjects and FlattenArrays), each flattened JSON field is treated as an individual column. The data type is then the one contained in the Elasticsearch mapping for the field, and all filters are pushed to the server (where applicable).
JSON primitive array aggregates will also be treated as raw strings by default and filters will be performed by the Cloud. To filter data based on whether a primitive array contains a single value, the INARRAY function can be used (e.g. INARRAY(column) = 'value'). When performing a search on array fields, Elasticsearch looks at each value individually within an array. Thus when the INARRAY function is specified in a WHERE clause, the filter will be pushed to the server which performs a search within an array.
Primitive arrays may consist of different data types, such as strings or ints. Therefore the INARRAY function supports comparison operators applicable to the data type within the Elasticsearch mapping for the field. For example, INARRAY(int_array) > 5, will return all rows of data in which the int_array contains a value greater than 5. Supported comparison operators include the use of the LIKE operator for string arrays.
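For example, against the sample 'insured' data, the following query pushes the array search to the server and returns the document whose insured_ages array contains a value greater than 40:
SELECT * FROM [insured] WHERE INARRAY(insured_ages) > 40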
By default, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
The Elasticsearch Cloud also supports setting client certificates. Set the following to connect using a client certificate.
To authenticate to an HTTP proxy, set the following:
Set the following properties:
The CData Cloud models Elasticsearch entities in relational Tables, Views, and Stored Procedures.
Searching with SQL describes in further detail how the tables are dynamically retrieved.
Views are treated in a similar manner to Tables and thus exhibit similar behavior. There are some differences in the background though which are a direct result of how aliases work within Elasticsearch. (Note: In the following description, 'alias', 'index', 'type', and 'field' are referring to the Elasticsearch objects and not directly to anything within the Cloud).
Views (aliases) are tied to an index and thus span all the types within an index. Additionally, aliases can span multiple indices. Therefore you may see an alias (view) listed multiple times under different schemas (indices). When querying the view, regardless of the schema specified, data is retrieved and returned for all indices and types associated with the corresponding alias. Thus the generated metadata contains a column for each field within each type of each index associated with the alias.
Searching with SQL describes in further detail how the views are dynamically retrieved.
The ModifyIndexAliases stored procedure can be used to create index aliases within Elasticsearch.
In addition to the Elasticsearch aliases, an '_all' view is returned which enables querying the _all endpoint to retrieve data for all indices in a single query. Given how many indices and documents the '_all' view could cover, certain queries against it can be very expensive. Additionally, scanning for table metadata, as governed by RowScanDepth, will be less accurate for '_all' views that cover very large or very heterogeneous indices. See Automatic Schema Discovery for more information.
The Cloud models the data in Elasticsearch as a list of tables in a relational database that can be queried using standard SQL statements.
| Name | Description |
| IndexTemplates | General information about index templates |
General information about index templates
| Name | Type | ReadOnly | References | Description |
| name [KEY] | String | False | | Name of index template |
| composed_of | String | False | | Array of index template names from which this template is composed |
| data_stream_allow_custom_routing | Boolean | False | | Whether data stream allows custom routing |
| data_stream_hidden | Boolean | False | | Whether data stream is hidden |
| data_stream_index_mode | String | False | | Type of data stream to create |
| index_patterns | String | False | | Array of patterns to match index names to this template |
| _meta | String | False | | Optional user metadata about the index template |
| priority | Long | False | | Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority) |
| template_aliases | String | False | | JSON aggregate of aliases info for index or data stream |
| template_mappings | String | False | | Mapping for fields in the index |
| template_settings | String | False | | Index settings for indices matched to templates |
| version | Integer | False | | Optional version number used to manage index templates externally |
| deprecated | Boolean | False | | Optional mark of whether this template is deprecated |
Views are similar to tables in the way that data is represented; however, views are read-only.
Queries can be executed against a view as if it were a normal table.
| Name | Description |
| IndexSettings | General information about index settings |
| XPackInfo | General information about the installed X-Pack features |
General information about index settings
| Name | Type | References | Description |
| provided_name | String | ||
| creation_date | String | ||
| uuid | String | ||
| version | String | ||
| routing | String | ||
| lifecycle | String | ||
| mode | String | ||
| routing_path | String | ||
| sort | String | ||
| number_of_shards | String | ||
| number_of_replicas | String | ||
| number_of_routing_shards | String | ||
| check_on_startup | String | ||
| codec | String | ||
| routing_partition_size | String | ||
| load_fixed_bitset_filters_eagerly | Boolean | ||
| hidden | Boolean | ||
| auto_expand_replicas | String | ||
| merge | String | ||
| search.idle.after | String | ||
| refresh_interval | String | ||
| max_result_window | Integer | ||
| max_inner_result_window | Integer | ||
| max_rescore_window | Integer | ||
| max_docvalue_fields_search | Integer | ||
| max_script_fields | Integer | ||
| max_ngram_diff | Integer | ||
| max_shingle_diff | Integer | ||
| max_refresh_listeners | Integer | ||
| max_terms_count | Integer | ||
| max_regex_length | Integer | ||
| gc_deletes | String | ||
| default_pipeline | String | ||
| format | String | ||
| final_pipeline | String | ||
| analyze.max_token_count | Integer | ||
| highlight.max_analyzed_offset | Integer | ||
| analysis | String | ||
| time_series | String | ||
| unassigned.node_left.delayed_timeout | String | ||
| priority | String | ||
| blocks | String | ||
| mapping | String | ||
| similarity | String | ||
| search | String | ||
| indexing | String | ||
| store | String | ||
| translog | String | ||
| soft_deletes | String | ||
| indexing_pressure.memory.limit | Integer |
General information about the installed X-Pack features
| Name | Type | References | Description |
| build_hash | String | ||
| build_date | Datetime | ||
| license_uid | String | ||
| license_type | String | ||
| license_mode | String | ||
| license_status | String | ||
| aggregate_metric_available | Boolean | ||
| aggregate_metric_enabled | Boolean | ||
| analytics_available | Boolean | ||
| analytics_enabled | Boolean | ||
| ccr_available | Boolean | ||
| ccr_enabled | Boolean | ||
| data_streams_available | Boolean | ||
| data_streams_enabled | Boolean | ||
| data_tiers_available | Boolean | ||
| data_tiers_enabled | Boolean | ||
| enrich_available | Boolean | ||
| enrich_enabled | Boolean | ||
| eql_available | Boolean | ||
| eql_enabled | Boolean | ||
| frozen_indices_available | Boolean | ||
| frozen_indices_enabled | Boolean | ||
| graph_available | Boolean | ||
| graph_enabled | Boolean | ||
| ilm_available | Boolean | ||
| ilm_enabled | Boolean | ||
| logstash_available | Boolean | ||
| logstash_enabled | Boolean | ||
| ml_available | Boolean | ||
| ml_enabled | Boolean | ||
| monitoring_available | Boolean | ||
| monitoring_enabled | Boolean | ||
| rollup_available | Boolean | ||
| rollup_enabled | Boolean | ||
| searchable_snapshots_available | Boolean | ||
| searchable_snapshots_enabled | Boolean | ||
| security_available | Boolean | ||
| security_enabled | Boolean | ||
| slm_available | Boolean | ||
| slm_enabled | Boolean | ||
| spatial_available | Boolean | ||
| spatial_enabled | Boolean | ||
| sql_available | Boolean | ||
| sql_enabled | Boolean | ||
| transform_available | Boolean | ||
| transform_enabled | Boolean | ||
| voting_only_available | Boolean | ||
| voting_only_enabled | Boolean | ||
| watcher_available | Boolean | ||
| watcher_enabled | Boolean | ||
| tagline | String |
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Elasticsearch.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Elasticsearch, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| CreateIndex | Submits a request to create an index with specified settings. |
| ModifyIndexAliases | Submits an alias request to modify index aliases. |
| UpdateIndexSettings | Procedure for updating index settings. Note that some settings may only be updated on a closed index. |
Submits a request to create an index with specified settings.
EXECUTE Example:
EXECUTE CreateIndex Index = 'firstindex', Alias = 'search', NumberOfShards = '3'
| Name | Type | Description |
| Index | String | The name of the index. |
| Alias | String | The name of the alias to optionally associate the index with. |
| AliasFilter | String | Raw Query DSL object used to limit documents the alias can access. |
| AliasIndexRouting | String | Value used for the alias to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations. |
| AliasIsHidden | Boolean | Boolean value controlling whether or not the alias is hidden. All indices for the alias must have the same is_hidden value. |
| AliasIsWriteIndex | Boolean | Boolean value controlling whether the index is the write index for the alias. |
| AliasRouting | String | Value used for the alias to route indexing and search operations to a specific shard. May be overwritten by AliasIndexRouting or AliasSearchRouting for certain operations. |
| AliasSearchRouting | String | Value used for the alias to route search operations to a specific shard. If specified, this overwrites the routing value for search operations. |
| Mappings | String | Raw JSON object specifying explicit mapping for the index. |
| NumberOfShards | String | The number of primary shards that the created index should have. |
| NumberOfRoutingShards | String | Number used by Elasticsearch internally with the value from NumberOfShards to route documents to a primary shard. |
| OtherSettings | String | Raw JSON object of settings. Cannot be used in conjunction with NumberOfRoutingShards or NumberOfShards. |
| Name | Type | Description |
| CompletedBeforeTimeout | String | Returns True if the index was created before timeout. Note that if this value is false, the index could still be created successfully on Elasticsearch. In this case, completion of creating the index, updating the cluster state, and requisite sharding would occur after the timeout window for the request response elapsed. |
| ShardsAcknowledged | String | Boolean indicating whether the requisite number of shard copies were started for each shard in the index before timing out. |
| IndexName | String | Name in Elasticsearch of the created index. |
Submits an alias request to modify index aliases.
EXECUTE Example:
EXECUTE ModifyIndexAliases Action = 'add;add', Index = 'index_1;index_2', Alias = 'my_alias;my_alias'
Note: The Index parameter supports the asterisk (*) character to perform a pattern match to add all matching indices to the alias.
| Name | Type | Description |
| Action | String | The action to perform such as 'add', 'remove', or 'remove_index'. Multiple actions are semi-colon separated. |
| Index | String | The name of the index. Multiple indexes are semi-colon separated. |
| Alias | String | The name of the alias. Multiple aliases are semi-colon separated. |
| Filter | String | A filter to use when creating the alias. This takes the raw JSON filter using Query DSL. Multiple filters are semi-colon separated. |
| Routing | String | The routing value to associate with the alias. Multiple routing values are semi-colon separated. |
| SearchRouting | String | The routing value to associate with the alias for searching operations. Multiple search routing values are semi-colon separated. |
| IndexRouting | String | The routing value to associate with the alias for indexing operations. Multiple index routing values are semi-colon separated. |
| Name | Type | Description |
| Success | String | Returns True if successful. |
Procedure for updating index settings. Note that some settings may only be updated on a closed index.
See this documentation for information on index settings in Elasticsearch, and what they control and impact across indices and clusters. Make sure to reference documentation for the version of Elasticsearch being connected to.
EXECUTE Example:
EXECUTE UpdateIndexSettings IndexName='traffic', NumberOfReplicas='3'
| Name | Type | Description |
| IndexName | String | |
| NumberOfReplicas | Integer | |
| RoutingAllocationIncludeTierPreference | String | |
| VersionCreated | String | |
| AnalyzeMaxTokenCount | Integer | |
| AutoExpandReplicas | String | |
| BlocksMetadata | Boolean | |
| BlocksRead | Boolean | |
| BlocksReadOnly | Boolean | |
| BlocksReadOnlyAllowDelete | Boolean | |
| BlocksWrite | Boolean | |
| DefaultPipeline | String | |
| FinalPipeline | String | |
| Format | String | |
| GcDeletes | String | |
| Hidden | Boolean | |
| HighlightMaxAnalyzedOffset | Integer | |
| IndexingSlowlogReformat | Boolean | |
| IndexingSlowlogSource | Boolean | |
| IndexingSlowlogThresholdIndexDebug | String | |
| IndexingSlowlogThresholdIndexInfo | String | |
| IndexingSlowlogThresholdIndexTrace | String | |
| IndexingSlowlogThresholdIndexWarn | String | |
| LifecycleIndexingComplete | Boolean | |
| LifecycleName | String | |
| LifecycleOriginationDate | Datetime | |
| LifecycleParseOriginationDate | Boolean | |
| LifecycleRolloverAlias | String | |
| LifecycleStepWaitTimeThreshold | String | |
| MappingCoerce | Boolean | |
| MappingDepthLimit | Integer | |
| MappingDimensionFieldsLimit | Integer | |
| MappingFieldNameLengthLimit | Integer | |
| MappingIgnoreMalformed | Boolean | |
| MappingNestedFieldsLimit | Integer | |
| MappingNestedObjectsLimit | Integer | |
| MappingTotalFieldsLimit | Integer | |
| MaxDocvalueFieldsSearch | Integer | |
| MaxInnerResultWindow | Integer | |
| MaxNgramDiff | Integer | |
| MaxRefreshListeners | Integer | |
| MaxRegexLength | Integer | |
| MaxRescoreWindow | Integer | |
| MaxResultWindow | Integer | |
| MaxScriptFields | Integer | |
| MaxShingleDiff | Integer | |
| MaxSlicesPerScroll | Integer | |
| MaxTermsCount | Integer | |
| MergePolicyDeletesPctAllowed | Boolean | |
| MergePolicyExpungeDeletesAllowed | Boolean | |
| MergePolicyFloorSegment | String | |
| MergePolicyMaxMergeAtOnce | Integer | |
| MergePolicyMaxMergeAtOnceExplicit | Integer | |
| MergePolicyMaxMergedSegment | Integer | |
| MergePolicySegmentsPerTier | Integer | |
| MergeSchedulerAutoThrottle | String | |
| MergeSchedulerMaxMergeCount | Integer | |
| MergeSchedulerMaxThreadCount | Integer | |
| Priority | String | |
| QueriesCacheEnabled | Boolean | |
| QueryStringLenient | Boolean | |
| RefreshInterval | String | |
| RoutingAllocationDiskWatermarkIgnore | Boolean | |
| RoutingAllocationEnable | Boolean | |
| RoutingAllocationTotalShardsPerNode | Integer | |
| RoutingRebalanceEnable | Boolean | |
| SearchIdleAfter | String | |
| SearchSlowlogThresholdFetchDebug | String | |
| SearchSlowlogThresholdFetchInfo | String | |
| SearchSlowlogThresholdFetchTrace | String | |
| SearchSlowlogThresholdFetchWarn | String | |
| SearchSlowlogThresholdQueryDebug | String | |
| SearchSlowlogThresholdQueryInfo | String | |
| SearchSlowlogThresholdQueryTrace | String | |
| SearchSlowlogThresholdQueryWarn | String | |
| SearchThrottled | String | |
| StoreType | String | |
| TimeSeriesEnd | Datetime | |
| TopMetricsMaxSize | Integer | |
| TranslogDurability | String | |
| TranslogFlushThresholdSize | Integer | |
| TranslogGenerationThresholdSize | Integer | |
| TranslogRetentionAge | String | |
| TranslogRetentionSize | Integer | |
| TranslogSyncInterval | String | |
| VerifiedBeforeClose | Boolean |
| Name | Type | Description |
| Success | String | Returns True if successful. |
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Elasticsearch:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the [CData].[Elasticsearch].Employee table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Employee' AND CatalogName='CData' AND SchemaName='Elasticsearch'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the CreateTable stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'CreateTable' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'CreateTable' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output type parameters can be both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native Elasticsearch procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the [CData].[Elasticsearch].Employee table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Employee' AND CatalogName='CData' AND SchemaName='Elasticsearch'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows dependent properties associated that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins supported. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns to check (in the given order) for use as a modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value specifying the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If true, Merge Mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate start date. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. | |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
The Cloud maps types from the data source to the corresponding data type available in the schema. The table below documents these mappings.
| Elasticsearch | CData Schema |
| array | A JSON structure* |
| binary | binary |
| boolean | boolean |
| byte | string |
| completion | string |
| date | datetime |
| date_range | datetime (one field per value) |
| double | double |
| double_range | double (one field per value) |
| float | float |
| float_range | float (one field per value) |
| geo_point | string |
| geo_shape | string |
| half_float | float |
| integer | integer |
| integer_range | integer (one field per value) |
| ip | string |
| keyword | string |
| long | long |
| long_range | long (one field per value) |
| nested | A JSON structure* |
| object | Flattened into multiple fields. |
| scaled_float | float |
| short | short |
| text | string |
*Parsed into multiple fields with individual types (see FlattenArrays)
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | The scheme used for authentication. Accepted entries are None, Basic, Negotiate (Kerberos), AwsRootKeys, AwsIAMRoles, AwsEC2Roles, APIKey, and TemporaryCredentials. None is the default. |
| User | The user who is authenticating to Elasticsearch. |
| Password | The password used to authenticate to Elasticsearch. |
| UseSSL | This property sets whether the provider attempts to negotiate TLS/SSL connections to the server. |
| Server | The host name or IP address of the Elasticsearch REST server. Alternatively, multiple nodes in a single cluster can be specified, though all such nodes must be able to support REST API calls. |
| Port | The port for the Elasticsearch REST server. |
| APIKey | The APIKey used to authenticate to Elasticsearch. |
| APIKeyId | The APIKey Id to authenticate to Elasticsearch. |
| Property | Description |
| DataModel | Specifies the data model to use when parsing Elasticsearch documents and generating the database metadata. |
| ExposeDotIndices | If false, indices whose name starts with a '.' (dot indices) will not be exposed as tables or views by the provider. If true, dot indices will be exposed as tables or views. |
| AliasesFilter | A comma-separated list of alias names or filters that define the aliases exposed as views. |
| IndicesAndDataStreamsFilter | A comma-separated list of index and data stream names or filters. |
| UseLakeFormation | When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion. |
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSSessionToken | Your AWS session token. |
| TemporaryTokenDuration | The amount of time (in seconds) an AWS temporary token will last. |
| AWSExternalId | A unique identifier that might be required when you assume a role in another account. |
| AWSWebIdentityToken | The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. |
| Property | Description |
| KerberosKDC | Identifies the Kerberos Key Distribution Center (KDC) service used to authenticate the user. (SPNEGO or Windows authentication only). |
| KerberosRealm | Identifies the Kerberos Realm used to authenticate the user. |
| KerberosSPN | Identifies the service principal name (SPN) for the Kerberos Domain Controller. |
| KerberosUser | Confirms the principal name for the Kerberos Domain Controller, which uses the format host/user@realm. |
| KerberosKeytabFile | Identifies the Keytab file containing your pairs of Kerberos principals and encrypted keys. |
| KerberosServiceRealm | Identifies the service's Kerberos realm. (Cross-realm authentication only). |
| KerberosServiceKDC | Identifies the service's Kerberos Key Distribution Center (KDC). |
| KerberosTicketCache | Specifies the full file path to an MIT Kerberos credential cache file. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| FlattenArrays | Set FlattenArrays to the number of nested array elements you want to return as table columns. By default, nested arrays are returned as strings of JSON. |
| Property | Description |
| ClientSideEvaluation | Set ClientSideEvaluation to true to perform evaluation client side on nested objects. |
| MaxResults | The maximum number of total results to return from Elasticsearch when using the default Search API. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| PageSize | The number of results to return per request from Elasticsearch. |
| PaginationMode | Specifies whether to use PIT with search_after or scrolls to page through query results. |
| PITDuration | Specifies the time unit to use for keep alive when retrieving results via PIT API. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ReplaceInvalidUTF8Chars | Specifies whether to replace invalid UTF8 byte sequences found in reads of indexed document content with the U+FFFD replacement character. |
| RowScanDepth | The maximum number of rows to scan when generating table metadata. Set this property to gain more control over how the provider detects arrays. |
| ScrollDuration | Specifies the time unit to use for keep alive when retrieving results via the Scroll API. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| UseFullyQualifiedNestedTableName | Set this to true to set the generated table name as the complete source path when flattening nested documents using the Relational DataModel. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The scheme used for authentication. Accepted entries are None, Basic, Negotiate (Kerberos), AwsRootKeys, AwsIAMRoles, AwsEC2Roles, APIKey, and TemporaryCredentials. None is the default. |
| User | The user who is authenticating to Elasticsearch. |
| Password | The password used to authenticate to Elasticsearch. |
| UseSSL | This property sets whether the provider attempts to negotiate TLS/SSL connections to the server. |
| Server | The host name or IP address of the Elasticsearch REST server. Alternatively, multiple nodes in a single cluster can be specified, though all such nodes must be able to support REST API calls. |
| Port | The port for the Elasticsearch REST server. |
| APIKey | The APIKey used to authenticate to Elasticsearch. |
| APIKeyId | The APIKey Id to authenticate to Elasticsearch. |
The scheme used for authentication. Accepted entries are None, Basic, Negotiate (Kerberos), AwsRootKeys, AwsIAMRoles, AwsEC2Roles, APIKey, and TemporaryCredentials. None is the default.
string
"Basic"
This field is used to authenticate against the server. Select one of the accepted entries listed above as your authentication scheme.
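For example, a minimal Basic authentication connection string might look like the following (the server, user, and password values shown are placeholders):
AuthScheme=Basic;Server=localhost;Port=9200;User=your_user;Password=your_password;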
The user who is authenticating to Elasticsearch.
string
""
The user who is authenticating to Elasticsearch.
The password used to authenticate to Elasticsearch.
string
""
The password used to authenticate to Elasticsearch.
This property sets whether the provider attempts to negotiate TLS/SSL connections to the server.
bool
true
This property sets whether the Cloud attempts to negotiate TLS/SSL connections to the server.
When UseSSL is set to false, for compatibility with the previous method of specifying the protocol prefix in the Server property, the Cloud respects the protocol behavior set in Server. NOTE: This means that if you set UseSSL=false but also specify Server="https://localhost", the Cloud attempts to connect and communicate over HTTPS, despite UseSSL being set to false.
When UseSSL is set to true, the Cloud attempts to strictly follow the property's specification, and it throws an exception if there is a conflict with the specification in Server. For example, if you set UseSSL=true, but specify Server as "http://localhost", the Cloud generates an exception.
Differences between the new and the old method:
In the new method, Server should specify just a server name, domain name, IP address, or similar. Under the previous method, Server combined a protocol prefix and hostname: what was formerly Server="http://localhost" now maps to Server="localhost";UseSSL=false; and what was formerly Server="https://localhost" now maps to Server="localhost";UseSSL=true;
New users of the driver are encouraged not to specify a protocol in Server.
The host name or IP address of the Elasticsearch REST server. Alternatively, multiple nodes in a single cluster can be specified, though all such nodes must be able to support REST API calls.
string
""
The host name or IP address of the Elasticsearch REST server. Alternatively, multiple nodes in a single cluster can be specified, though all such nodes must be able to support REST API calls.
To use SSL, set UseSSL to true and set SSL connection properties such as SSLServerCert.
To specify multiple nodes, set the property to a comma delimited list of addresses, with ports optionally specified after the address and delimited from the address by a colon. For example, you could specify two dedicated, coordinating nodes for your cluster with '01.01.01.01:1234,02.02.02.02:5678'. If a port is specified with a node, that port will take precedence over the Port property for connections to that node only.
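For example, the two-node cluster above could be specified in a connection string as follows:
Server=01.01.01.01:1234,02.02.02.02:5678;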
The port for the Elasticsearch REST server.
string
"9200"
The port the Elasticsearch REST server is bound to.
The APIKey used to authenticate to Elasticsearch.
string
""
The APIKey used to authenticate to Elasticsearch.
The APIKey Id to authenticate to Elasticsearch.
string
""
The APIKey Id to authenticate to Elasticsearch.
This section provides a complete list of the Connection properties you can configure in the connection string for this provider.
| Property | Description |
| DataModel | Specifies the data model to use when parsing Elasticsearch documents and generating the database metadata. |
| ExposeDotIndices | If false, indices whose name starts with a '.' (dot indices) will not be exposed as tables or views by the provider. If true, dot indices will be exposed as tables or views. |
| AliasesFilter | A comma-separated list of alias names or filters that define the aliases exposed as views. |
| IndicesAndDataStreamsFilter | A comma-separated list of index and data stream names or filters. |
| UseLakeFormation | When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion. |
Specifies the data model to use when parsing Elasticsearch documents and generating the database metadata.
string
"Document"
Select a DataModel configuration to control how the Cloud models nested documents into tables. See Parsing Hierarchical Data for examples of querying the data in the different configurations.
The following DataModel configurations are available:
Document
Returns a single table representing a row for each document. In this data model, any nested documents will not be flattened and will be returned as aggregates.
FlattenedDocuments
Returns a single table representing a JOIN of the parent and nested documents. In this data model, nested documents will act in the same manner as a SQL JOIN. Additionally, nested sibling documents (nested documents at the same height) will be treated as a SQL CROSS JOIN. The Cloud will identify the nested documents available by parsing the returned document.
Relational
Returns multiple tables, one for each nested document (including the parent document) in the document. In this data model, any nested documents will be returned as relational tables that contain a primary key and a foreign key that links to the parent table.
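For example, to expose each nested document (including the parent document) as its own relational table, set the following in the connection string:
DataModel=Relational;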
If false, indices whose name starts with a '.' (dot indices) will not be exposed as tables or views by the provider. If true, dot indices will be exposed as tables or views.
bool
false
In most standard scenarios with newer versions of Elasticsearch, dot indices are system indices or hidden indices. These are indices that usually should not be directly interacted with by users, or whose indexed documents will not usually be returned in the results of queries that search over sets of multiple indices. As such, the Cloud does not expose dot indices by default in its table or view metadata.
A comma-separated list of alias names or filters that define the aliases exposed as views.
string
""
The alias names provided should match existing aliases in Elasticsearch. Filters can use parts of alias names and the wildcard character *.
For example, the following value matches the aliases "qa," "sprint_testing," and "sprint_metrics":
qa,sprint_*
A comma-separated list of index and data stream names or filters.
string
""
Depending on the version of Elasticsearch connected to, this filter limits the indices and data streams exposed as tables or schemas. See Schema Mapping for more details.
The values provided should match existing index or data stream names in Elasticsearch. Filters for indices or data streams can include parts of their names and the wildcard character *.
This filter applies only to open, non-hidden indices and data streams.
For example, the following value matches the data streams "my_logs_0" and "my_logs_1" and the index "sources":
sources,my_logs_*
When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion.
bool
false
When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSSessionToken | Your AWS session token. |
| TemporaryTokenDuration | The amount of time (in seconds) an AWS temporary token will last. |
| AWSExternalId | A unique identifier that might be required when you assume a role in another account. |
| AWSWebIdentityToken | The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. |
Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
string
""
Your AWS account access key can be found on your AWS security credentials page.
Your AWS account secret key. This value is accessible from your AWS security credentials page.
string
""
Your AWS account secret key. This value is accessible from your AWS security credentials page.
The Amazon Resource Name of the role to use when authenticating.
string
""
When authenticating outside of AWS, it is common to use a role for authentication instead of your direct AWS account credentials. Entering the AWSRoleARN causes the CData Cloud to perform role-based authentication instead of using the AWSAccessKey and AWSSecretKey directly. The AWSAccessKey and AWSSecretKey must still be specified to perform this authentication. You cannot use the credentials of an AWS root user when setting AWSRoleARN; the AWSAccessKey and AWSSecretKey must be those of an IAM user.
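For example, a role-based connection might use settings like the following (the key values and ARN shown are placeholders):
AuthScheme=AwsIAMRoles;AWSAccessKey=AKIAXXXXXXXXXXXXXXXX;AWSSecretKey=your_secret_key;AWSRoleARN=arn:aws:iam::123456789012:role/your_role;AWSRegion=NORTHERNVIRGINIA;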
The hosting region for your Amazon Web Services.
string
"NORTHERNVIRGINIA"
The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, TAIPEI, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, THAILAND, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, MEXICOCENTRAL, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSEASTF, ISOLATEDUSSOUTHF, ISOLATEDUSWEST and ISOLATEDEUWEST.
Your AWS session token.
string
""
Your AWS session token. This value can be retrieved in different ways; see the AWS documentation for more information.
The amount of time (in seconds) an AWS temporary token will last.
string
"3600"
Temporary tokens are used with Role based authentication. Temporary tokens will eventually time out, at which time a new temporary token must be obtained. The CData Cloud will internally request a new temporary token once the temporary token has expired.
For role-based authentication, the minimum duration is 900 seconds (15 minutes) while the maximum is 3600 seconds (1 hour).
A unique identifier that might be required when you assume a role in another account.
string
""
A unique identifier that might be required when you assume a role in another account.
The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider.
string
""
The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. An application can get this token by authenticating a user with a web identity provider. If not specified, the value for this connection property is automatically obtained from the value of the 'AWS_WEB_IDENTITY_TOKEN_FILE' environment variable.
This section provides a complete list of the Kerberos properties you can configure in the connection string for this provider.
| Property | Description |
| KerberosKDC | Identifies the Kerberos Key Distribution Center (KDC) service used to authenticate the user. (SPNEGO or Windows authentication only). |
| KerberosRealm | Identifies the Kerberos Realm used to authenticate the user. |
| KerberosSPN | Identifies the service principal name (SPN) for the Kerberos Domain Controller. |
| KerberosUser | Confirms the principal name for the Kerberos Domain Controller, which uses the format host/user@realm. |
| KerberosKeytabFile | Identifies the Keytab file containing your pairs of Kerberos principals and encrypted keys. |
| KerberosServiceRealm | Identifies the service's Kerberos realm. (Cross-realm authentication only). |
| KerberosServiceKDC | Identifies the service's Kerberos Key Distribution Center (KDC). |
| KerberosTicketCache | Specifies the full file path to an MIT Kerberos credential cache file. |
Identifies the Kerberos Key Distribution Center (KDC) service used to authenticate the user. (SPNEGO or Windows authentication only).
string
""
The Kerberos properties are used when using SPNEGO or Windows Authentication. The Cloud requests session tickets and temporary session keys from the Kerberos KDC service, which is usually co-located with the domain controller.
If KerberosKDC is not specified, the Cloud attempts to detect these properties automatically from the system's Kerberos configuration.
Identifies the Kerberos Realm used to authenticate the user.
string
""
A realm is a logical network, similar to a domain, that defines a group of systems under the same master KDC. Some realms are hierarchical, where one realm is a superset of the other realm, but usually realms are nonhierarchical (or “direct”) and the mapping between the two realms must be defined. Kerberos cross-realm authentication enables authentication across realms. Each realm only needs to have a principal entry for the other realm in its KDC.
The Kerberos properties are used when using SPNEGO or Windows Authentication. The Cloud requests session tickets and temporary session keys from the Kerberos KDC service, which is usually co-located with the domain controller. The Kerberos Realm can be configured by an administrator to be any string, but it is usually based on the domain name.
If KerberosRealm is not specified, the Cloud attempts to detect this property automatically from the system's Kerberos configuration.
Identifies the service principal name (SPN) for the Kerberos Domain Controller.
string
""
If the SPN on the Kerberos Domain Controller is not the same as the URL that you are authenticating to, use this property to set the SPN to the KDC's URL.
Confirms the principal name for the Kerberos Domain Controller, which uses the format host/user@realm.
string
""
If there is a Kerberos principal, that Kerberos principal name should always be used to authenticate to the database.
Identifies the Keytab file containing your pairs of Kerberos principals and encrypted keys.
string
""
A keytab (short for “key table”) stores long-term keys for one or more principals. In most cases, end users authenticate to the KDC using their client secret (password). However, in situations where authentication or re-authentication happen using automated scripts and applications, it may be more efficient to use a keytab, which sends passwords to the KDC in encrypted form, automatically.
Keytabs are normally represented by files in a standard format, and named using the format type:value. Usually type is FILE and value is the absolute pathname of the file. The other possible value for type is MEMORY, which indicates a temporary keytab stored in the memory of the current process.
A keytab contains one or more entries, where each entry consists of a timestamp (indicating when the entry was written to the keytab), a principal name, a key version number, an encryption type, and the encryption key itself. They can be generated using ktutil.
For example:
[admin@myhost]# ktutil
ktutil: addent -password -p starlord/[email protected] -k 1 -e aes256-cts-hmac-sha1-96
Password for starlord/[email protected]:
ktutil: addent -password -p starlord/[email protected] -k 1 -e aes128-cts-hmac-sha1-96
Password for starlord/[email protected]:
ktutil: addent -password -p starlord/[email protected] -k 1 -e des3-cbc-sha1
Password for starlord/[email protected]:
ktutil: wkt /path/to/starlord.keytab
Note: You must create principals for all authentication methods (encryption types) you want to support.
To display a keytab, use klist -k.
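For example, to list the entries in the keytab generated above:
klist -k /path/to/starlord.keytab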
Identifies the service's Kerberos realm. (Cross-realm authentication only).
string
""
The KerberosServiceRealm is used to specify a service's KerberosRealm when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication, which means that this property would not be required. However, the property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
Identifies the service's Kerberos Key Distribution Center (KDC).
string
""
The KerberosServiceKDC is used to specify the service Kerberos KDC when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication, which means that this property would not be required. However, the property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
Specifies the full file path to an MIT Kerberos credential cache file.
string
""
Set this property if you want to use a credential cache file that was created using the MIT Kerberos Ticket Manager or kinit command.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
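For example, to pin the connection to a certificate stored in a local file (the path shown is a placeholder):
UseSSL=true;SSLServerCert=C:\cert.cer;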
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| FlattenObjects | Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. |
| FlattenArrays | Set FlattenArrays to the number of nested array elements you want to return as table columns. By default, nested arrays are returned as strings of JSON. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
bool
true
Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. The property name is concatenated onto the object name with a period to generate the column name.
For example, you can flatten the nested objects below at connection time:
"manager": {
"name": "Alice White",
"age": 30
}
When FlattenObjects is set to true, the preceding object is flattened into the following table:
| Column Name | Column Value |
| manager.name | Alice White |
| manager.age | 30 |
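Queries can then reference the flattened columns directly. For example, assuming a hypothetical table named staff that contains the document above:
SELECT [manager.name], [manager.age] FROM [staff]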
Set FlattenArrays to the number of nested array elements you want to return as table columns. By default, nested arrays are returned as strings of JSON.
string
""
By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.
Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.
For example, you can return an arbitrary number of elements from an array of strings:
"employees": [
{
"name": "John Smith",
"age": 34
},
{
"name": "Peter Brown",
"age": 26
},
{
"name": "Paul Jacobs",
"age": 30
}
]
When FlattenArrays is set to 2, the preceding array is flattened into the following table:
| Column Name | Column Value |
| employees.0.name | John Smith |
| employees.0.age | 34 |
| employees.1.name | Peter Brown |
| employees.1.age | 26 |
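Queries can then reference the indexed columns directly. For example, with FlattenArrays set to 2 and a hypothetical table named departments containing the document above:
SELECT [employees.0.name], [employees.1.name] FROM [departments]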
See JSON Functions to use JSON paths to work with unbounded arrays.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| ClientSideEvaluation | Set ClientSideEvaluation to true to perform evaluation client side on nested objects. |
| MaxResults | The maximum number of total results to return from Elasticsearch when using the default Search API. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| PageSize | The number of results to return per request from Elasticsearch. |
| PaginationMode | Specifies whether to use PIT with search_after or scrolls to page through query results. |
| PITDuration | Specifies the time unit to use for keep alive when retrieving results via PIT API. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ReplaceInvalidUTF8Chars | Specifies whether to replace invalid UTF8 byte sequences found in reads of indexed document content with the U+FFFD replacement character. |
| RowScanDepth | The maximum number of rows to scan when generating table metadata. Set this property to gain more control over how the provider detects arrays. |
| ScrollDuration | Specifies the time unit to use for keep alive when retrieving results via the Scroll API. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| UseFullyQualifiedNestedTableName | Set this to true to set the generated table name as the complete source path when flattening nested documents using the Relational DataModel. |
Set ClientSideEvaluation to true to perform evaluation client side on nested objects.
bool
false
Set ClientSideEvaluation to true to perform evaluation (GROUP BY, filtering) client side on nested objects.
For example, with ClientSideEvaluation set to false (the default), a GROUP BY on the nested object 'property.0.name' is grouped as 'property.*.name'; when set to true, results are grouped as 'property.0.name'.
Similarly, with ClientSideEvaluation set to false (the default), filtering on the nested object 'property.0.name' is filtered as 'property.*.name'; when set to true, results are filtered as 'property.0.name'.
Note that client-side evaluation can affect performance, since the query is evaluated by the provider rather than by the server.
The maximum number of total results to return from Elasticsearch when using the default Search API.
string
"10000"
This property corresponds to the Elasticsearch index.max_result_window index setting. Thus the default value is 10000, which is Elasticsearch's default limit.
This value is not applicable when using the Scroll API. Set ScrollDuration to use this API.
When a LIMIT is specified in a query, the LIMIT will be taken into account provided it is less than MaxResults. Otherwise the number of results returned will be limited to the MaxResults value.
If you receive an error stating that the result window is too large, this is caused by the MaxResults value being greater than the Elasticsearch index.max_result_window index setting. You can either change the MaxResults value to match the index.max_result_window index setting or use the Scroll API by setting ScrollDuration.
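For example, either of the following connection string settings avoids this error (the first assumes the index.max_result_window index setting has been raised to 50000):
MaxResults=50000;
OR
ScrollDuration=1m;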
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
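For example, with the following setting, queries that do not specify a LIMIT clause return at most 1000 rows:
MaxRows=1000;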
The number of results to return per request from Elasticsearch.
string
"10000"
PageSize controls the number of results received per request from Elasticsearch for a given query.
The default value is 10000, which is Elasticsearch's default limit (based on the Elasticsearch index.max_result_window index setting).
Specifies whether to use PIT with search_after or scrolls to page through query results.
string
"Scroll"
PIT with search_after can only be used with Elasticsearch 7.10+ or OpenSearch 2.4.0+.
Specifies the time unit to use for keep alive when retrieving results via PIT API.
string
"1m"
When a nonzero value is specified alongside setting PaginationMode to 'PIT', the PIT API will be used (see the example following the table of supported time units below).
The time unit specified will be sent in each request made to Elasticsearch to specify how long the server should keep the PIT search context alive. The value specified only needs to be long enough to process the previous batch of results (not to process all the data). This is because the PITDuration value will be sent in each request, which will extend the context time.
Once all the results have been retrieved, the search context will be cleared.
The format for this value is: [integer][time unit]. For example: 1m = 1 minute.
Setting this property and ScrollDuration to '0' will cause the default Search API to be used. In such a case, the maximum number of results that can be returned is equal to MaxResults.
Supported Time Units:
| Value | Description |
| y | Year |
| M | Month |
| w | Week |
| d | Day |
| h | Hour |
| m | Minute |
| s | Second |
| ms | Millisecond |
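For example, the following connection string settings enable PIT pagination with a keep alive of one minute:
PaginationMode=PIT;PITDuration=1m;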
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
Specifies whether to replace invalid UTF8 byte sequences found in reads of indexed document content with the U+FFFD replacement character.
bool
false
Specifies whether to replace invalid UTF8 byte sequences found in reads of indexed document content with the U+FFFD replacement character.
The maximum number of rows to scan when generating table metadata. Set this property to gain more control over how the provider detects arrays.
string
"100"
This property is used when generating table metadata and specifically is used to identify arrays within the data. Elasticsearch allows any field to be an array and does not identify which fields are arrays in the mapping data. Thus RowScanDepth rows will be queried and scanned to identify if any of the fields contain arrays.
When QueryPassthrough is set to True, the columns in a table must be determined by scanning the data returned in the request. This value determines the maximum number of rows that will be scanned to determine the table metadata. The default value is 100.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data or when the scanned documents are very heterogeneous.
Specifies the time unit to use for keep alive when retrieving results via the Scroll API.
string
"1m"
When a nonzero value is specified, the Scroll API will be used (see the example following the table of supported time units below).
The time unit specified will be sent in each request made to Elasticsearch to specify how long the server should keep the Scroll search context alive. The value specified only needs to be long enough to process the previous batch of results (not to process all the data). This is because the ScrollDuration value will be sent in each request, which will extend the context time.
Once all the results have been retrieved, the search context will be cleared.
The format for this value is: [integer][time unit]. For example: 1m = 1 minute.
Setting this property and PITDuration to '0' will cause the default Search API to be used. In such a case, the maximum number of results that can be returned is equal to MaxResults.
Supported Time Units:
| Value | Description |
| y | Year |
| M | Month |
| w | Week |
| d | Day |
| h | Hour |
| m | Minute |
| s | Second |
| ms | Millisecond |
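For example, the following setting pages through results via the Scroll API with a keep alive of five minutes:
ScrollDuration=5m;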
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
Set this to true to set the generated table name as the complete source path when flattening nested documents using the Relational DataModel.
bool
false
Set this to true to set the generated table name as the complete source path when flattening nested documents using the Relational DataModel.
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors
THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.