CData Cloud offers access to GraphQL across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to GraphQL through CData Cloud.
CData Cloud allows you to standardize and configure connections to GraphQL as though it were any other OData endpoint or standard SQL Server.
This page provides a guide to Establishing a Connection to GraphQL in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to GraphQL and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from GraphQL through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to GraphQL by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
Set the following to connect:
To create GraphQL data sources on headless servers or other machines on which the Cloud cannot open a browser, you need to authenticate from another machine. Authentication is a two-step process.
Option 1: Obtain and Exchange a Verifier Code
Set the following properties on the headless machine:
You can then follow the steps below to authenticate from another machine and obtain the OAuthVerifier connection property.
On the headless machine, set the following connection properties to obtain the OAuth authentication values:
Connect to Data
After the OAuth settings file is generated, set the following properties to connect to data:
Option 2: Transfer OAuth Settings
Follow the steps below to install the Cloud on another machine, authenticate, and then transfer the resulting OAuth values.
On a second machine, install the Cloud and connect with the following properties set:
Test the connection to authenticate. The resulting authentication values are written, encrypted, to the location specified by OAuthSettingsLocation. After you have successfully tested the connection, copy the OAuth settings file to your headless machine. On the headless machine, set the following connection properties to connect to data:
If you want to use the Cloud with a user registered in a User Pool in AWS Cognito, set the following properties to authenticate:
This section shows how to control the various schemas that the Cloud offers to bridge the gap between relational SQL and GraphQL services.
GraphQL services offer an introspection query service which the Cloud can use to obtain view and column names.
All SCALAR mutation fields are exposed directly, and all object fields are expanded.
The Cloud will automatically scan for available mutations (see Using Mutations). Given that there is no method provided by GraphQL for determining which mutations can be used for each table, each mutation is exposed as a stored procedure.
LIST fields are exposed as temporary tables (GraphQL tables of type TEMPORARY_TABLE). The discovered temporary tables can be obtained by querying the sys_tables and sys_tablecolumns system tables.
Customizing Schemas details the process for configuring custom schema files. Setting up custom schema files is optional; it is only needed when you want to customize the automatically generated metadata.
See System Tables to query the current table metadata.
By default, the Cloud will automatically read metadata from GraphQL.
GraphQL services offer an introspection query service which the Cloud can use to obtain view and column names.
A GraphQL introspection query service has a query object at its root. Other objects are nested into the root query object, which can in turn have their own nested objects.
The Cloud reads LIST or Relay Connection type objects as views. If a field is SCALAR, it's read as a column, and if a field is a simple OBJECT, it is expanded.
Set the metadata introspection depth using connection properties such as ExpandColumnsDepth, ExpandTablesDepth, and ExpandArgumentsDepth (see Connection String Options).
The Cloud will automatically scan for available mutations. Given that there is no method provided by GraphQL for determining which mutations can be used for each table, each mutation is exposed as a stored procedure. This replaces the traditional use of INSERT, UPDATE, and DELETE SQL statements when working with GraphQL.
All SCALAR mutation fields are exposed directly, and all object fields are expanded.
LIST fields are exposed as temporary tables (GraphQL tables of type TEMPORARY_TABLE). The discovered temporary tables can be obtained by querying the sys_tables and sys_tablecolumns system tables. These tables contain a RowId and ParentId field to denote the row and housing (parent) table of a given child table.
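The discovered temporary tables can be inspected at run time. The following queries are a minimal sketch, assuming the productCreate child tables from the example below have been exposed and that the TEMPORARY_TABLE type value mentioned above is reported in the TableType column:
SELECT TableName FROM sys_tables WHERE TableType = 'TEMPORARY_TABLE'
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName = 'productCreate_variants#TEMP'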
An example of a mutation is productCreate. Invoke mutations as a stored procedure after first loading the relevant child tables needed for the operation:
INSERT INTO productCreate_metafields#TEMP (namespace,key,value,type) VALUES ('MRproductInfo','ALU','449788022','string')
INSERT INTO productCreate_variants#TEMP (RowId,price,sku,inventoryManagement,weightUnit,weight,options,metafields,inventoryQuantities) VALUES (1,'39.99','38536314-0acb-4d3f-b8ff-a0f2014d2c75','SHOPIFY','POUNDS',1,'L,XL,XXL','productCreate_variants_metafields','productCreate_variants_inventoryQuantities')
INSERT INTO productCreate_variants_metafields#TEMP (ParentId,namespace,key,value,type) VALUES ('1','MRproductInfo','ALU','449788022-M-','string')
INSERT INTO productCreate_variants_metafields#TEMP (ParentId,namespace,key,value,type) VALUES ('1','MRproductInfo','ItemNumber','400000881201','string')
INSERT INTO productCreate_variants_inventoryQuantities#TEMP (ParentId,locationId,availableQuantity) VALUES ('1','gid://shopify/Location/1448280087',5)
INSERT INTO productCreate_media#TEMP (originalSource,alt,mediaContentType) VALUES ('https://static.nike.com/a/images/t_PDP_1280_v1/f_auto,q_auto:eco/qwqfyddzikcgc4ozwigp/revolution-5-road-running-shoes-szF7CS.png','Magic Shoes','IMAGE')
EXECUTE productCreate title='NIKE - 449788022', descriptionHtml='MEN''S SHOES 42-MENS L/S TEES',productType='Staging', vendor='NIKE', published='false', options='size,width',metafields='productCreate_metafields#TEMP', variants='productCreate_variants#TEMP', media='productCreate_media#TEMP'
Custom schemas are defined in configuration files. This chapter outlines the structure of these files.
Note: The GenerateSchemaFiles property enables you to persist table metadata in static schema files that are easy to customize (to persist your changes to column data types, for example). Set this property to "OnStart" to generate schema files for all tables in your database at connection time.
Alternatively, set this property to "OnUse" to generate schemas as you execute SELECT queries to tables. It is also possible to create a specific schema file for a table using the CreateSchema stored procedure.
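For example, a hypothetical call to the CreateSchema stored procedure might look like the following (the parameter name shown is an assumption and may differ in your build):
EXECUTE CreateSchema TableName = 'Labels'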
Tables and views are defined by authoring schema files in APIScript. APIScript is a simple configuration language that allows you to define the columns and the behavior of the table. It also has built-in Operations that enable you to process GraphQL. In addition to these data processing primitives, APIScript is a full-featured language with constructs for conditionals, looping, etc. However, as shown by the example schema, for most table definitions you will not need to use these features.
Below is a fully functional table schema that models the Labels table and contains all the components you will need to execute SQL to GraphQL data sources.
You can find more information on each of the components of a schema in Column Definitions and SELECT Execution.
<rsb:script xmlns:rsb="http://apiscript.com/ns?v1" xmlns:xs="http://www.cdata.com/ns/rsbscript/2" xmlns:other="http://apiscript.com/ns?v1">
<rsb:info title="Labels" desc="Lists information about the different labels you can apply on an issue." other:possiblePaths="{'path':'/repository/labels/edges/node','Name':{'path':'/repository/label'}}" other:paginationObjects="{'labels':{'cursorPath':'after','cursorType':'String','pageSizeArgumentPath':'first','pageSizeArgumentType':'Int','depth':'1','paginationType':'Cursor','isConnection':'True','pageInfo':['endCursor','hasNextPage','hasPreviousPage','startCursor']}}">
<attr name="Id" xs:type="string" key="true" other:relativePath="id" desc="The ID of the label." />
<attr name="RepositoryName" xs:type="string" other:relativePath="name" desc="The name of the repository." other:filter="name:=" other:argumenttype="String!" other:depth="1" references="Repositories.Name" />
<attr name="UserLogin" xs:type="string" desc="The login name of the user." other:filter="owner:=" other:argumenttype="String!" other:depth="1" references="Users.Login" other:mirror="true" other:canBeSliced="true" />
<attr name="Color" xs:type="string" other:relativePath="color" desc="Identifies the label color." />
<attr name="CreatedAt" xs:type="datetime" other:relativePath="createdAt" desc="Identifies the date and time when the label was created." other:orderby="CREATED_AT" />
<attr name="Description" xs:type="string" other:relativePath="description" desc="A brief description of this label." />
<attr name="IsDefault" xs:type="boolean" other:relativePath="isDefault" desc="Indicates whether or not this is a default label." />
<attr name="Name" xs:type="string" other:relativePath="name" desc="Identifies the label name." other:filter="name:=" other:argumenttype="String!" other:orderby="NAME" other:isPathFilter="true" />
<attr name="ResourcePath" xs:type="string" other:relativePath="resourcePath" desc="The HTTP path for this label." />
<attr name="UpdatedAt" xs:type="datetime" other:relativePath="updatedAt" desc="Identifies the date and time when the label was last updated." />
<attr name="Url" xs:type="string" other:relativePath="url" desc="The HTTP URL for this label." />
</rsb:info>
<rsb:script method="GET">
<rsb:push op="graphqladoSelect" />
</rsb:script>
</rsb:script>
The following example shows how to add static headers in the schema file. These headers are added to the request every time the schema file is called.
<rsb:script xmlns:rsb="http://apiscript.com/ns?v1" xmlns:xs="http://www.cdata.com/ns/rsbscript/2" xmlns:other="http://apiscript.com/ns?v1">
...
<input name="Ship1" other:headerName="DynamicValuedHeader" />
<input name="Ship2" other:headerName="DynamicValuedHeader" />
</rsb:info>
<api:set attr="Header:Name#1" value="StaticValuedHeader" />
<api:set attr="Header:Value#1" value="StaticValuedHeader__Value" />
The following example shows how to add dynamic headers in the schema file. These headers are added to the request every time the schema file is called.
<rsb:script xmlns:rsb="http://apiscript.com/ns?v1" xmlns:xs="http://www.cdata.com/ns/rsbscript/2" xmlns:other="http://apiscript.com/ns?v1">
...
<input name="Ship1" other:headerName="DynamicValuedHeader" />
<input name="Ship2" other:headerName="DynamicValuedHeader" />
<input name="Ship3" other:headerName="DynamicValuedHeader2" />
</rsb:info>
<api:set attr="Header:Name#1" value="DynamicValuedHeader" />
<api:set attr="Header:Value#1" value="[_input.Ship1] - [_input.Ship2]" />
SELECT * FROM [Table] WHERE [Ship1] = 'Value1' AND [Ship2] = 'Value2' AND [DynamicValuedHeader2] = 'custom value'
In the above example, the value of DynamicValuedHeader is built by the Cloud from the Ship1 and Ship2 inputs, while the value of DynamicValuedHeader2 is sent exactly as specified in the query.
The basic attributes of a column are the name of the column, the data type, whether the column is a primary key, the relative path and the depth. The Cloud uses the depth attribute to extract nodes from hierarchical data.
Mark up column attributes in the rsb:info block of the schema file. You can also provide a description of each attribute using the desc property.
<rsb:script xmlns:rsb="http://apiscript.com/ns?v1" xmlns:xs="http://www.cdata.com/ns/rsbscript/2" xmlns:other="http://apiscript.com/ns?v1">
<rsb:info title="Labels" desc="Lists information about the different labels you can apply on an issue." other:possiblePaths="{'path':'/repository/labels/edges/node','Name':{'path':'/repository/label'}}" other:paginationObjects="{'labels':{'cursorPath':'after','cursorType':'String','pageSizeArgumentPath':'first','pageSizeArgumentType':'Int','depth':'1','paginationType':'Cursor','isConnection':'True','pageInfo':['endCursor','hasNextPage','hasPreviousPage','startCursor']}}">
<attr name="Id" xs:type="string" key="true" other:relativePath="id" desc="The ID of the label." />
<attr name="RepositoryName" xs:type="string" other:relativePath="name" desc="The name of the repository." other:filter="name:=" other:argumenttype="String!" other:depth="1" references="Repositories.Name" />
<attr name="UserLogin" xs:type="string" desc="The login name of the user." other:filter="owner:=" other:argumenttype="String!" other:depth="1" references="Users.Login" other:mirror="true" other:canBeSliced="true" />
<attr name="Color" xs:type="string" other:relativePath="color" desc="Identifies the label color." />
<attr name="CreatedAt" xs:type="datetime" other:relativePath="createdAt" desc="Identifies the date and time when the label was created." other:orderby="CREATED_AT" />
<attr name="Description" xs:type="string" other:relativePath="description" desc="A brief description of this label." />
<attr name="IsDefault" xs:type="boolean" other:relativePath="isDefault" desc="Indicates whether or not this is a default label." />
<attr name="Name" xs:type="string" other:relativePath="name" desc="Identifies the label name." other:filter="name:=" other:argumenttype="String!" other:orderby="NAME" other:isPathFilter="true" />
<attr name="ResourcePath" xs:type="string" other:relativePath="resourcePath" desc="The HTTP path for this label." />
<attr name="UpdatedAt" xs:type="datetime" other:relativePath="updatedAt" desc="Identifies the date and time when the label was last updated." />
<attr name="Url" xs:type="string" other:relativePath="url" desc="The HTTP URL for this label." />
</rsb:info>
<rsb:script method="GET">
<rsb:push op="graphqladoSelect" />
</rsb:script>
</rsb:script>
The following sections provide more detail on using paths to extract columns and rows. To see the column definitions in a complete schema, refer to Customizing Schemas.
The other:possiblePaths property is used to specify the base paths that select the column's value.
Base paths start with a '/' and contain the full path to the last GraphQL nested object.
<rsb:info title="Labels" desc="Lists information about the different labels you can apply to an issue." other:possiblePaths="{'path':'/repository/labels/edges/node','Name':{'path':'/repository/label'}}" other:paginationObjects="{'labels':{'cursorPath':'after','cursorType':'String','pageSizeArgumentPath':'first','pageSizeArgumentType':'Int','depth':'1''paginationType':'Cursor','isConnection':'True','pageInfo':['endCursor','hasNextPage','hasPreviousPage','startCursor']}}">
The following GraphQL query is based on the above script example:
{ # base path=/repository/labels/edges/node
repository {
labels {
edges {
node {
...
}
}
}
}
}
The other:relativePath property must be specified for each column. This property is used in conjunction with the other:possiblePaths property to build the GraphQL field path.
<attr name="Name" xs:type="string" other:relativePath="name" desc="Identifies the label name." />
Based on the above script example, the Cloud builds the following GraphQL query:
{ # base path=/repository/labels/edges/node
repository { # depth=1
labels { # depth=2
edges {
node {
name # path=base path + relative path.
}
}
}
}
}
Use the other:depth property to specify an element inside a specific GraphQL object. The indexes are 1-based. If this attribute is not specified, the default is the depth of the last nested GraphQL object.
<attr name="RepositoryName" xs:type="string" other:relativePath="name" desc="The name of the repository." other:depth="1" />
The following GraphQL query is built from the above script example:
{ # base path=/repository/labels/edges/node
repository { # depth=1
name # This is mapped to the RepositoryName column
labels { # depth=2
edges {
node {
...
}
}
}
}
}
Use the other:fragment property to specify a group of fields. This property can be used when the GraphQL server returns an array of objects and the Cloud may need to push this info as an aggregate.
<attr name="ColumnValues" xs:type="string" other:relativePath="column_values" desc="Column values." other:fragment="fragment ItemColumnValues on ColumnValue { id \\r\\n value }" />
Based on the above script example, the Cloud will build the following GraphQL query:
query {
items {
column_values {
...ItemColumnValues
}
}
}
fragment ItemColumnValues on ColumnValue {
id
value
}
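With this definition, the fragment fields are returned aggregated in a single column. A hedged example, assuming an Items table that exposes the ColumnValues column defined above:
SELECT ColumnValues FROM Items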
Use the other:canBeSliced property to enable slicing behavior in the Cloud.
For example,
SELECT * FROM Table WHERE Col IN ('1','2','3')
becomes
SELECT * FROM Table WHERE Col=1
SELECT * FROM Table WHERE Col=2
SELECT * FROM Table WHERE Col=3
Use the other:mirror property to reflect the value specified in the criteria. Use it on columns that are not returned in the server response.
For example:
SELECT * FROM Table WHERE Col=X (If other:mirror=true the Cloud will artificially set the value of Col to X for every row.)
Use references to reference the key column of the parent table. Example: If there are two tables Orders and OrderLineItems and the OrderLineItems has a column OrderId, the references field for this column will be "Orders.Id".
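A reference like this lets a SQL join follow the parent-child relationship. A minimal sketch using the Orders and OrderLineItems tables from the example above (the Quantity column is illustrative):
SELECT Orders.Id, OrderLineItems.Quantity FROM Orders INNER JOIN OrderLineItems ON OrderLineItems.OrderId = Orders.Id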
When a SELECT query is issued, the Cloud executes the GET method of the schema, which invokes the Cloud's built-in operations to process GraphQL. In the GET method, you have control over the request for data. The following sections show several ways to use this: searching the remote data server-side with SELECT WHERE, and implementing paging.
The following sections show how to translate a SELECT WHERE statement into a GraphQL query. The procedure uses the following statement:
SELECT * FROM <table> WHERE ModifiedAt < '2019-10-30 05:05:36.001'
If this filter is supported on the server via query parameters, you can use the other:filter property of the column definition to specify the desired mapping. For the above query, the Cloud uses this property to map the modifiedAt < '<date>' filter to the query parameter that returns results modified before a given date, and the modifiedAt > '<date>' filter to the query parameter that returns results modified after that date.
To perform this mapping, the Cloud would use the following markup for the modifiedAt column definition:
<attr name="ModifiedAt" xs:type="datetime" other:relativePath="modifiedAt" other:argumentType="DateTime" description="When the vendor was last modified." other:filter="modifiedAtAfter:>;modifiedAtBefore:<" />
This query results in the following postdata:
{
"variables": {
"ModifiedAt_modifiedAtBefore": "2019-10-30T09:05:36.001Z"
},
"query": "query($ModifiedAt_modifiedAtBefore:DateTime) {\r\nbusinesses {\r\nedges {\r\nnode {\r\ncustomers(modifiedAtBefore:$ModifiedAt_modifiedAtBefore) {\r\nedges {\r\nnode {\r\nid\r\nmodifiedAt\r\n}\r\n}\r\npageInfo {\r\ntotalPages\r\ncurrentPage\r\n}\r\n}\r\nid\r\n}\r\n}\r\npageInfo {\r\ntotalPages\r\ncurrentPage\r\n}\r\n}\r\n}\r\n"
}
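Because the other:filter mapping covers both comparison directions, a single range predicate can populate both query arguments in one request. A hedged example against the same table:
SELECT * FROM <table> WHERE ModifiedAt > '2019-10-01 00:00:00.000' AND ModifiedAt < '2019-10-30 05:05:36.001'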
Use the other:isPathFilter property to switch to an alternate base path when a filter on the corresponding column is specified. For example:
other:possiblePaths="{'path':'/businesses/edges/node','id':{'path':'/business'}}"
<attr name="Id" xs:type="string" key="true" other:relativePath="id" other:isPathFilter="true" other:filter="id:=" />
SELECT Id, Name, CreatedAt FROM Businesses WHERE Id = 'QnVzaW5M6ZTY4ZDA2MmQtYzkzZS00MGZkLTk4YWUtNDg2YzcxMmExNzFl'
is converted to the postdata:
{
"variables": {
"Id_id": "QnVzaW5M6ZTY4ZDA2MmQtYzkzZS00MGZkLTk4YWUtNDg2YzcxMmExNzFl"
},
"query": "query($Id_id:ID) {\r\nbusiness(id:$Id_id) {\r\nid\r\nname\r\ncreatedAt\r\n}\r\n}\r\n"
}
The Cloud supports two pagination modes: cursor-based and offset-based.
other:paginationObjects = "{
'labels': {
'cursorPath': 'after',
'cursorType': 'String',
'pageSizeArgumentPath': 'first',
'pageSizeArgumentType': 'Int',
'depth':'1',
'paginationType': 'Cursor',
'isConnection': 'True',
'pageInfo': ['endCursor', 'hasNextPage', 'hasPreviousPage', 'startCursor']
}
}"
The following postdata is generated after processing the other:paginationObjects table extra info specified above:
{
"variables": {
"UserLogin_owner": "testaccount71",
"RepositoryName_name": "test",
"first": <Pagesize>
},
"query": "query($UserLogin_owner:String!, $RepositoryName_name:String!, $first:Int) {\r\nrepository(owner:$UserLogin_owner, name:$RepositoryName_name) {\r\nlabels(first:$first) {\r\nedges {\r\nnode {\r\nid\r\ncolor\r\ncreatedAt\r\ndescription\r\nisDefault\r\nname\r\nresourcePath\r\nupdatedAt\r\nurl\r\n}\r\n}\r\npageInfo {\r\nendCursor\r\nhasNextPage\r\n}\r\n}\r\nname\r\n}\r\n}\r\n"
}
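For reference, a SQL query that would produce the variables shown above, assuming the Labels schema presented earlier in this chapter:
SELECT * FROM Labels WHERE UserLogin = 'testaccount71' AND RepositoryName = 'test'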
other:paginationObjects="{
'businesses': {
'offsetArgumentPath': 'page',
'offsetArgumentType': 'Int',
'pageSizeArgumentPath': 'pageSize',
'pageSizeArgumentType': 'Int',
'depth':'1',
'paginationType': 'Offset',
'isConnectionObject': 'True',
'pageInfo': ['currentPage', 'totalPages', 'totalCount']
}
}"
The following postdata is generated after processing the other:paginationObjects table extra info specified above:
{
"variables": {
"pageSize_1": <Pagesize>
},
"query": "query($pageSize_1:Int) {\r\nbusinesses(pageSize:$pageSize_1) {\r\nedges {\r\nnode {\r\nid\r\n}\r\n}\r\npageInfo {\r\ntotalPages\r\ncurrentPage\r\n}\r\n}\r\n}\r\n"
}
Pagination arguments can also be nested inside a custom input object by specifying slash-separated argument paths and the custom_query argument type:
other:paginationObjects="{
'businesses': {
'offsetArgumentPath': 'query/pagination/page',
'offsetArgumentType': 'custom_query',
'pageSizeArgumentPath': 'query/pagination/pageSize',
'pageSizeArgumentType': 'custom_query',
'depth':'1',
'paginationType': 'Offset',
'isConnectionObject': 'True',
'pageInfo': ['currentPage', 'totalPages', 'totalCount']
}
}"
The following postdata is generated after processing the other:paginationObjects table extra info specified above:
{
"variables": {
"query": {
"pagination": {
"pageSize":<Pagesize>
}
}
},
"query": "query($query:custom_query) {\r\nbusinesses(query:$query) {\r\nedges {\r\nnode {\r\nid\r\n}\r\n}\r\npageInfo {\r\ntotalPages\r\ncurrentPage\r\n}\r\n}\r\n}\r\n"
}
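Both offset-based examples above correspond to a simple unfiltered query such as the following (assuming a Businesses table that exposes the Id column, as in the path filter example earlier):
SELECT Id FROM Businesses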
<rsb:info title="Labels" desc="Lists information about the different labels you can apply on an issue." other:orderByFormat="{field: {orderByArgumentValue}, direction: {sortOrder}}">
<attr name="CreatedAt" xs:type="datetime" other:relativePath="createdAt" other:orderByFormat="{field: {orderByArgumentValue}, direction: {sortOrder}}" other:orderBy="orderBy:CREATED_AT" /><attr name="CreatedAt" xs:type="datetime" other:relativePath="createdAt" other:orderBy="orderBy:CREATED_AT" />
SELECT Id FROM Labels ORDER BY CreatedAt ASC
is converted to this postdata:
{
"variables": {
"first": <Pagesize>
},
"query": "query($first:Int) {\r\nrepository {\r\nlabels(sort:{field: CREATED_AT, direction: ASC}, first:$first) {\r\nedges {\r\nnode {\r\nid\r\n}\r\n}\r\npageInfo {\r\nendCursor\r\nhasNextPage\r\n}\r\n}\r\n}\r\n}\r\n"
}
The Cloud has high-performance operations for processing GraphQL data sources. These operations are platform neutral: Schema files that invoke these operations can be used in both .NET and Java. You can also extend the Cloud with your own operations written in .NET or Java.
The Cloud has the following operations:
| Operation Name | Description |
| OAuthGetAccessToken | For OAuth 1.0, exchange a request token for an access token. For OAuth 2.0, get an access token or get a new access token with the refresh token. |
| OAuthGetUserAuthorizationURL | Generates the user authorization URL. OAuth 2.0 will not access the network in this operation. |
OAuthGetAccessToken is an APIScript operation used to facilitate the OAuth authentication and refresh flows.
The Cloud includes stored procedures that invoke this operation to complete the OAuth exchange. The following example schema briefly lists some of the typically required inputs before the following sections explain them in more detail.
Invoke the OAuthGetAccessToken with the GetOAuthAccessToken stored procedure. The following inputs are required for most data sources and will provide default values for the connection properties of the same name.
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="GetOAuthAccessToken" description="Obtains the OAuth access token to be used for authentication with various APIs." >
<input name="AuthMode" desc="The OAuth flow. APP or WEB." />
<input name="CallbackURL" desc="The URL to be used as a trusted redirect URL, where the user will return with the token that verifies that they have granted your app access. " />
<input name="OAuthAccessToken" desc="The request token. OAuth 1.0 only." />
<input name="OAuthAccessTokenSecret" desc="The request token secret. OAuth 1.0 only." />
<input name="Verifier" desc="The verifier code obtained when the user grants permissions to your app." />
<output name="OAuthAccessToken" desc="The access token." />
<output name="OAuthTokenSecret" desc="The access token secret." />
<output name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set RequestTokenURL to the URL where the request for the request token is made. OAuth 1.0 only.-->
<api:set attr="OAuthRequestTokenURL" value="http://MyOAuthRequestTokenURL" />
<!-- Set OAuthAuthorizationURL to the URL where the user logs into the service and grants permissions to the application. -->
<api:set attr="OAuthAuthorizationURL" value="http://MyOAuthAuthorizationURL" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL" />
<!-- Set GrantType to the authorization grant type. OAuth 2.0 only. -->
<api:set attr="GrantType" value="CODE" />
<!-- Set SignMethod to the signature method used to calculate the signature of the request. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<api:call op="oauthGetAccessToken">
<api:push/>
</api:call>
</api:script>
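A hedged example of invoking the procedure from SQL, using the inputs listed in the schema above (the CallbackURL and Verifier values are illustrative):
EXECUTE GetOAuthAccessToken AuthMode = 'WEB', CallbackURL = 'http://localhost:33333', Verifier = 'MyVerifierCode'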
You can also use OAuthGetAccessToken to refresh the access token by providing the following inputs:
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="RefreshOAuthAccessToken" description="Refreshes the OAuth access token used for authentication." >
<input name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
<output name="OAuthAccessToken" desc="The authentication token returned." />
<output name="OAuthTokenSecret" desc="The authentication token secret returned. OAuth 1.0 only." />
<output name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
<output name="ExpiresIn" desc="The remaining lifetime on the access token." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set GrantType to REFRESH. OAuth 2.0 only. -->
<api:set attr="GrantType" value="REFRESH" />
<!-- Set SignMethod to the signature method used to calculate the signature of the request. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL" />
<!-- Set AuthMode to 'WEB' when calling RefreshOAuthAccessToken -->
<api:set attr="AuthMode" value="WEB"/>
<api:call op="oauthGetAccessToken">
<api:push/>
</api:call>
</api:script>
The OAuthGetUserAuthorizationURL is an APIScript operation that is used to facilitate the OAuth authentication flow for Web apps, for offline apps, and in situations where the Cloud is not allowed to open a Web browser. To pass the needed inputs to this operation, define the GetOAuthAuthorizationURL stored procedure. The Cloud can call this internally.
Define stored procedures in .rsb files with the same file name as the schema's title. The example schema briefly lists some of the typically required inputs before the following sections explain them in more detail.
Call OAuthGetUserAuthorizationURL in the GetOAuthAuthorizationURL stored procedure.
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="Get OAuth Authorization URL" description="Obtains the OAuth authorization URL used for authentication with various APIs." >
<input name="CallbackURL" desc="The URL to be used as a trusted redirect URL, where the user will return with the token that verifies that they have granted your app access. " />
<output name="URL" desc="The URL where the user logs in and is prompted to grant permissions to the app. " />
<output name="OAuthAccessToken" desc="The request token. OAuth 1.0 only." />
<output name="OAuthTokenSecret" desc="The request token secret. OAuth 1.0 only." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set ResponseType to the desired authorization grant type. OAuth 2.0 only.-->
<api:set attr="ResponseType" value="code" />
<!-- Set SignMethod to the signature method used to calculate the signature. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<!-- Set OAuthAuthorizationURL to the URL where the user logs into the service and grants permissions to the application. -->
<api:set attr="OAuthAuthorizationURL" value="http://MyOAuthAuthorizationURL" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL"/>
<!-- Set RequestTokenURL to the URL where the request for the request token is made. OAuth 1.0 only.-->
<api:set attr="OAuthRequestTokenURL" value="http://MyOAuthRequestTokenURL" />
<api:call op="oauthGetUserAuthorizationUrl">
<api:push/>
</api:call>
</api:script>
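A hedged example of calling the stored procedure to retrieve the authorization URL (the CallbackURL value is illustrative):
EXECUTE GetOAuthAuthorizationURL CallbackURL = 'http://localhost:33333'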
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for GraphQL:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the Users table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Users'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the SelectEntries stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'SelectEntries' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'SelectEntries' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output type parameters can be both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native GraphQL procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the Users table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Users'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows the dependent properties that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing the columns that are checked, in the given order, for use as a modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is TRUE, merge mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate startdate. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
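For example, the following sketch retrieves any rows in a batch whose update failed (the SUCCESS value is taken from the Message column description above):
SELECT Id, Operation, Message FROM sys_identity WHERE Message <> 'SUCCESS'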
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT operations with GraphQL.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from GraphQL, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
To enable TLS, set the following:
With this configuration, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
The GraphQL Cloud also supports setting client certificates. Set the following to connect using a client certificate.
To authenticate to an HTTP proxy, set the following:
Set the following properties:
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | Specifies the authentication method to use when connecting to remote services. |
| URL | Specifies the endpoint URL of the GraphQL service. |
| User | Specifies the authenticating user's user ID. |
| Password | Specifies the authenticating user's password. |
| Property | Description |
| AWSCognitoRegion | The hosting region for AWS Cognito. |
| AWSUserPoolId | The User Pool Id. |
| AWSUserPoolClientAppId | The User Pool Client App Id. |
| AWSUserPoolClientAppSecret | Optional. The User Pool Client App Secret. |
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| OAuthAuthorizationURL | The authorization URL for the OAuth service. |
| OAuthAccessTokenURL | The URL from which the OAuth access token is retrieved. |
| AuthToken | The authentication token used to request and obtain the OAuth Access Token. |
| AuthKey | The authentication secret used to request and obtain the OAuth Access Token. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| ExpandArgumentsDepth | Specifies the depth the provider searches for columns within nested GraphQL arguments of type INPUT_OBJECT. Higher values expand deeper levels of nested fields, while lower values limit the expansion. |
| ExpandTablesDepth | Specifies how deeply the provider explores nested child tables in the GraphQL schema when building the relational model. This setting only takes effect if the ExposeObjectTables property is set to DEEP. |
| ExpandTemporaryTablesDepth | Specifies the depth at which the provider includes nested child temporary tables in the schema. This property only takes effect when the ExposeDynamicProcedures property is set to true. |
| ExpandColumnsDepth | Specifies the depth at which the provider searches for columns within nested GraphQL objects, exposing those fields as columns. |
| IncludeDeprecatedMetadata | Specifies whether the provider includes deprecated tables and columns in the schema. |
| ExposeDynamicProcedures | Specifies whether the provider exposes GraphQL mutations as dynamic procedures in the schema. |
| ExposeObjectTables | Specifies the scope of GraphQL object type fields that the provider exposes as tables in the schema. |
| ExposeAbstractTypes | Specifies the scope of GraphQL abstract types (interfaces and unions) that the provider exposes in the schema. |
| Property | Description |
| CustomHeaders | Specifies additional HTTP headers to append to the request headers created from other properties, such as ContentType and From. Use this property to customize requests for specialized or nonstandard APIs. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Pagesize | Specifies the maximum number of results returned per page from GraphQL. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | Specifies the authentication method to use when connecting to remote services. |
| URL | Specifies the endpoint URL of the GraphQL service. |
| User | Specifies the authenticating user's user ID. |
| Password | Specifies the authenticating user's password. |
Specifies the authentication method to use when connecting to remote services.
string
"Basic"
This property determines the type of authentication used during connection. The available options depend on the remote service’s requirements and the level of security needed for your application.
Specifies the endpoint URL of the GraphQL service.
string
""
This property defines the URL used to connect to the GraphQL service. The endpoint URL typically follows the format: "https://[domain]/graphql"
This property is essential for establishing the connection and must be correctly configured to enable API communication.
Specifies the authenticating user's user ID.
string
""
The authenticating server requires both User and Password to validate the user's identity.
Specifies the authenticating user's password.
string
""
The authenticating server requires both User and Password to validate the user's identity.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AWSCognitoRegion | The hosting region for AWS Cognito. |
| AWSUserPoolId | The User Pool Id. |
| AWSUserPoolClientAppId | The User Pool Client App Id. |
| AWSUserPoolClientAppSecret | Optional. The User Pool Client App Secret. |
The hosting region for AWS Cognito.
string
"NORTHERNVIRGINIA"
The hosting region for AWS Cognito. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, TAIPEI, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, THAILAND, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, MEXICOCENTRAL, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSEASTF, ISOLATEDUSSOUTHF, ISOLATEDUSWEST and ISOLATEDEUWEST.
The User Pool Id.
string
""
You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> Pool Id.
The User Pool Client App Id.
string
""
You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> App clients -> App client Id.
Optional. The User Pool Client App Secret.
string
""
You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> App clients -> App client secret.
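As a rough sketch, a connection for a user registered in a Cognito User Pool might combine these properties with the user's credentials. The pool and app client identifiers below are placeholders, and your configuration may also require an appropriate AuthScheme value:
AWSCognitoRegion=NORTHERNVIRGINIA;AWSUserPoolId=us-east-1_EXAMPLE;AWSUserPoolClientAppId=1234example;User=my_user;Password=my_password;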
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| OAuthAuthorizationURL | The authorization URL for the OAuth service. |
| OAuthAccessTokenURL | The URL from which the OAuth access token is retrieved. |
| AuthToken | The authentication token used to request and obtain the OAuth Access Token. |
| AuthKey | The authentication secret used to request and obtain the OAuth Access Token. |
Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication.
string
""
This property is required in two cases:
(When the Cloud provides embedded OAuth credentials, this value may already be supplied and thus does not require manual entry.)
OAuthClientId is generally used alongside other OAuth-related properties such as OAuthClientSecret and OAuthSettingsLocation when configuring an authenticated connection.
OAuthClientId is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can usually find this value in your identity provider’s application registration settings. Look for a field labeled Client ID, Application ID, or Consumer Key.
While the client ID is not considered a confidential value like a client secret, it is still part of your application's identity and should be handled carefully. Avoid exposing it in public repositories or shared configuration files.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.)
string
""
This property (sometimes called the application secret or consumer secret) is required when using a custom OAuth application in any flow that requires secure client authentication, such as web-based OAuth, service-based connections, or certificate-based authorization flows. It is not required when using an embedded OAuth application.
The client secret is used during the token exchange step of the OAuth flow, when the Cloud requests an access token from the authorization server. If this value is missing or incorrect, authentication fails with either an invalid_client or an unauthorized_client error.
OAuthClientSecret is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can obtain this value from your identity provider when registering the OAuth application.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created.
string
""
Scopes are set to define what kind of access the authenticating user will have; for example, read, read and write, restricted access to sensitive information. System administrators can use scopes to selectively enable access by functionality or security clearance.
When InitiateOAuth is set to GETANDREFRESH, you must use this property if you want to change which scopes are requested.
When InitiateOAuth is set to either REFRESH or OFF, you can change which scopes are requested using either this property or the Scope input.
The authorization URL for the OAuth service.
string
""
The authorization URL for the OAuth service. At this URL, the user logs into the server and grants permissions to the application. In OAuth 1.0, if permissions are granted, the request token is authorized.
The URL from which the OAuth access token is retrieved.
string
""
In OAuth 1.0, the authorized request token is exchanged for the access token at this URL.
The authentication token used to request and obtain the OAuth Access Token.
string
""
This property is required only when performing headless authentication in OAuth 1.0. It can be obtained from the GetOAuthAuthorizationUrl stored procedure.
It can be supplied alongside the AuthKey in the GetOAuthAccessToken stored procedure to obtain the OAuthAccessToken.
The authentication secret used to request and obtain the OAuth Access Token.
string
""
This property is required only when performing headless authentication in OAuth 1.0. It can be obtained from the GetOAuthAuthorizationUrl stored procedure.
It can be supplied alongside the AuthToken in the GetOAuthAccessToken stored procedure to obtain the OAuthAccessToken.
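As a sketch of the headless OAuth 1.0 sequence these two properties support (the call syntax and the placeholder values below are assumptions; consult the stored procedure reference for the exact signatures):
-- Obtain the authorization URL along with the AuthToken and AuthKey values
EXECUTE GetOAuthAuthorizationUrl
-- After the user grants access at the returned URL, exchange the pair for the OAuthAccessToken
EXECUTE GetOAuthAccessToken AuthToken = 'my_auth_token', AuthKey = 'my_auth_key'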
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| ExpandArgumentsDepth | Specifies the depth the provider searches for columns within nested GraphQL arguments of type INPUT_OBJECT. Higher values expand deeper levels of nested fields, while lower values limit the expansion. |
| ExpandTablesDepth | Specifies how deeply the provider explores nested child tables in the GraphQL schema when building the relational model. This setting only takes effect if the ExposeObjectTables property is set to DEEP. |
| ExpandTemporaryTablesDepth | Specifies the depth at which the provider includes nested child temporary tables in the schema. This property only takes effect when the ExposeDynamicProcedures property is set to true. |
| ExpandColumnsDepth | Specifies the depth at which the provider searches for columns within nested GraphQL objects, exposing those fields as columns. |
| IncludeDeprecatedMetadata | Specifies whether the provider includes deprecated tables and columns in the schema. |
| ExposeDynamicProcedures | Specifies whether the provider exposes GraphQL mutations as dynamic procedures in the schema. |
| ExposeObjectTables | Specifies the scope of GraphQL object type fields that the provider exposes as tables in the schema. |
| ExposeAbstractTypes | Specifies the scope of GraphQL abstract types (interfaces and unions) that the provider exposes in the schema. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Specifies the depth the provider searches for columns within nested GraphQL arguments of type INPUT_OBJECT. Higher values expand deeper levels of nested fields, while lower values limit the expansion.
int
2
The ExpandArgumentsDepth property determines how many levels of nested input objects are traversed and expanded into separate SQL columns by the Cloud. This property directly impacts which fields from your GraphQL input are accessible in SQL queries and can affect both query complexity and performance.
For example, consider the following GraphQL schema:
type Query {
filteredCompanies(input: FilteredCompaniesInput!): [Company]
}
input FilteredCompaniesInput {
filters: FiltersInput
}
input FiltersInput {
type: String
details: DetailsInput
}
input DetailsInput {
region: String
category: String
}
In this schema, the nesting levels are as follows:
| Level 0: FilteredCompaniesInput | Contains only the nested filters field. No primitive fields exist at this level to flatten. |
| Level 1: FiltersInput | Exposes the type field. |
| Level 2: DetailsInput | Exposes the region and category fields. |
In the following GraphQL operation, the filters argument is an INPUT_OBJECT:
{
"variables": {
"input": {
"filters": {
"details": {
"category": "RETAILER"
},
"type": "SUPPLIER"
}
}
},
"query": "query($input:FilteredCompaniesInput!) {\r\nfilteredCompanies(input:$input) {\r\nid:id\r\nvalue:value\r\n}\r\n}\r\n"
}
With ExpandArgumentsDepth=2, you can run a SQL query that leverages those expanded fields. For example:
SELECT id, value FROM filteredCompanies WHERE input_filters_type='SUPPLIER' AND input_filters_details_category='RETAILER'
Increasing the depth exposes more nested fields but may increase the complexity and processing time for queries against complex schemas. Reducing the depth may improve performance but can limit access to deeply nested fields. Set this property based on your application’s requirements to balance data accessibility and performance.
Specifies how deeply the provider explores nested child tables in the GraphQL schema when building the relational model. This setting only takes effect if the ExposeObjectTables property is set to DEEP.
int
2
The ExpandTablesDepth property determines how many levels of nested objects are converted into separate child tables in the relational model. This property controls the granularity of the resulting schema by defining whether nested objects beyond a certain level are exposed as individual tables or remain part of a parent table.
For example, consider the following GraphQL schema:
type Query {
companies: [Company]
}
type Company {
id: ID!
name: String
details: [Details]
}
type Details {
state: String
addresses: [Address]
}
type Address {
city: String
state: String
}
In this schema, the nesting levels are as follows:
| Level 0: Company | Exposed by the root query. |
| Level 1: Details | A list within Company. |
| Level 2: Address | A list within Details. |
Set this property to a higher value if your application needs access to deeply nested data. However, be cautious as increasing this value may result in higher processing times and more complex schema representations.
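For illustration only, with ExpandTablesDepth=2 both nested lists in the example above can be modeled as child tables and queried directly. The table names below are hypothetical; the actual names the Cloud generates may differ:
SELECT state FROM Companies_Details
SELECT city, state FROM Companies_Details_Addresses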
Specifies the depth at which the provider includes nested child temporary tables in the schema. This property only takes effect when the ExposeDynamicProcedures property is set to true.
int
5
The ExpandTemporaryTablesDepth property controls how many levels of nested input objects in GraphQL mutations are converted into separate temporary tables in the relational model. This ensures that the hierarchical structure of your mutation input is preserved when dynamic procedures are exposed.
For example, consider the following GraphQL schema:
type Mutation {
createOrder(input: CreateOrderInput!): CreateOrderPayload
}
input CreateOrderInput {
userId: ID!
orderItems: [OrderItemInput!]!
}
input OrderItemInput {
productId: ID!
quantity: Int!
shippingAddress: [ShippingAddressInput!]
}
input ShippingAddressInput {
street: String!
city: String!
}
type CreateOrderPayload {
order: Order
}
type Order {
id: ID!
orderItems: [OrderItem!]!
}
type OrderItem {
id: ID!
shippingAddress: ShippingAddress
}
type ShippingAddress {
street: String!
city: String!
}
In this schema, the nesting levels are as follows:
| Level 0: createOrder | Contains the top-level mutation input. |
| Level 1: CreateOrderInput | Contains userId and orderItems fields. |
| Level 2: OrderItemInput | Represents each order item, with fields such as productId and quantity, plus a nested shippingAddress. |
| Level 3: ShippingAddressInput | Contains address details like street and city. |
This property controls how many levels of nested child temporary tables the Cloud includes in the relational schema when dynamic procedures are exposed. It is most relevant to GraphQL mutations that include nested input objects.
Consider the following GraphQL mutation:
mutation {
createOrder(input: {
userId: 123,
orderItems: [
{
productId: 456,
quantity: 2,
shippingAddress: {
street: "123 Main St",
city: "Seattle"
}
}
]
}) {
order {
id,
orderItems {
id,
shippingAddress {
street,
city
}
}
}
}
}
With ExpandTemporaryTablesDepth set appropriately, the Cloud examines each level of nested input objects in your mutation and creates a temporary table for any field that returns a list at that level. This means that the hierarchical structure of your mutation input is preserved in the relational model and each nested object up to the specified depth is mapped to its own table. For instance, if your mutation input includes a top-level object with a nested array of order items and each order item contains a nested shipping address, setting the property to 2 ensures that there is a temporary table for the order items as well as for the shipping addresses.
Increasing the depth enables access to more deeply nested mutation inputs but can result in a more complex schema and higher processing overhead. A lower depth simplifies the schema and improves performance but may limit access to deeply nested data. Adjust this property based on your application's requirements.
Specifies the depth at which the provider searches for columns within nested GraphQL objects, exposing those fields as columns.
int
2
The ExpandColumnsDepth property controls how many levels of nested objects in your GraphQL schema are traversed and converted into individual SQL columns. This property directly affects the granularity of your relational schema, determining how much of the nested structure is flattened into separate columns.
For example, consider the following GraphQL schema:
type Query {
company: Company
}
type Company {
id: ID!
details: Details
}
type Details {
address: Address
phoneNumber: String
}
type Address {
city: String
state: String
}
In this schema, the nesting levels are as follows:
| Level 0: Company | Exposed by the root query. |
| Level 1: Details | An object within Company. |
| Level 2: Address | Nested within Details. |
For instance, a SQL query at depth 3 might look like:
SELECT id, details_address_city, details_address_state FROM company
Note: If a nested field returns a single object, that object is traversed and its fields surfaced as columns if the depth allows. If a nested field returns a list of objects, the Cloud aggregates the data into a JSON array.
Increasing the depth enables access to deeply nested fields, but may result in a more complex schema and increased processing time. A lower depth simplifies the schema and improves performance, but limits access to deeply nested data.
Specifies whether the provider includes deprecated tables and columns in the schema.
bool
false
This property determines if the provider should expose metadata elements, such as tables and columns, that have been marked as deprecated in the GraphQL schema. Deprecation typically indicates that the element is outdated or scheduled for removal in future API versions.
This property is useful for managing compatibility with evolving APIs and ensuring that deprecated elements are visible when necessary.
Specifies whether the provider exposes GraphQL mutations as dynamic procedures in the schema.
bool
true
The ExposeDynamicProcedures property determines if GraphQL mutations are represented as dynamic procedures in the schema.
For example, consider the following GraphQL schema:
type Mutation {
createUser(input: CreateUserInput!): User
}
input CreateUserInput {
name: String!
email: String!
}
type User {
id: ID!
name: String!
email: String!
}
When this property is set to true, mutations are exposed as dynamic procedures, allowing them to be invoked like standard callable operations. For example, a mutation such as the following would be exposed as a dynamic procedure:
mutation {
createUser(input: { name: "John", email: "[email protected]" }) {
id
name
}
}
This enables you to easily call the mutation by passing parameters in a structured format, simplifying integration with GraphQL APIs.
When set to false, mutations are not exposed as dynamic procedures. This can simplify the schema structure by excluding mutation-based operations, which might be useful in scenarios where you only need read access to data or want a simpler schema for certain tools.
Enabling this property is useful in scenarios requiring robust interaction with GraphQL APIs, where you need to perform complex operations such as resource creation or deletion.
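As a hypothetical sketch, the createUser mutation above could then be invoked as a procedure call along these lines (the exact procedure and parameter names the Cloud exposes, and the argument values shown, are assumptions):
EXECUTE createUser name = 'John', email = 'john@example.com'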
Specifies the scope of GraphQL object type fields that the provider exposes as tables in the schema.
string
"SHALLOW"
This property determines the extent to which GraphQL object type fields are exposed as tables in the schema. It applies only to object fields that meet the following conditions:
This property offers three modes of exposure:
The default setting, SHALLOW, simplifies schema representation by exposing only top-level query objects as tables. Use DEEP to include more deeply nested objects for advanced use cases.
Specifies the scope of GraphQL abstract types (interfaces and unions) that the provider exposes in the schema.
string
"NONE"
This property determines the extent to which GraphQL abstract types (interfaces and unions) are exposed in the schema. This property offers four modes of exposure:
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| CustomHeaders | Specifies additional HTTP headers to append to the request headers created from other properties, such as ContentType and From. Use this property to customize requests for specialized or nonstandard APIs. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Pagesize | Specifies the maximum number of results returned per page from GraphQL. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
Specifies additional HTTP headers to append to the request headers created from other properties, such as ContentType and From. Use this property to customize requests for specialized or nonstandard APIs.
string
""
Use this property to add custom headers to HTTP requests sent by the Cloud.
This property is useful when fine-tuning requests to interact with APIs that require additional or nonstandard headers. Headers must follow the format "header: value" as described in the HTTP specifications and each header line must be separated by the carriage return and line feed (CRLF) characters. Important: Use caution when setting this property. Supplying invalid headers may cause HTTP requests to fail.
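For example, to send two custom headers (the header names here are purely illustrative), set CustomHeaders to the following value, with the two header lines separated by CRLF:
X-Custom-Token: abc123
X-Request-Source: reporting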
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
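For example, with MaxRows=100 set on the connection, the first query below returns at most 100 rows, while the second query's explicit LIMIT takes precedence (the table and column names are illustrative):
SELECT Id, Name FROM Companies
SELECT Id, Name FROM Companies LIMIT 500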
Specifies the maximum number of results returned per page from GraphQL.
string
""
This property controls the maximum number of results the Cloud retrieves per page when querying the GraphQL service. Adjusting the page size can impact performance and resource usage:
You can provide a single page size or a comma-separated list for multiple pagination levels. In the latter case, the Cloud applies different page sizes at each nested level in the GraphQL data.
The effective page size directly influences the query cost in GraphQL. If the query cost exceeds server-imposed limits, the request may fail. Adjust this property cautiously to balance performance and resource utilization. This property is useful for optimizing data retrieval strategies, particularly for applications requiring large datasets or constrained by server-side limitations.
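For example (the values are illustrative), the first setting below applies a single page size throughout, while the second assigns a different page size to each nesting level in the GraphQL data (here, 100 and 50):
Pagesize=100
Pagesize=100,50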
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.