CData Cloud offers access to Amazon DynamoDB across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to Amazon DynamoDB through CData Cloud.
CData Cloud allows you to standardize and configure connections to Amazon DynamoDB as though it were any other OData endpoint or standard SQL Server.
This page provides a guide to Establishing a Connection to Amazon DynamoDB in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Amazon DynamoDB and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Amazon DynamoDB through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Amazon DynamoDB by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
Specify the following to connect to data:
To authenticate using account root credentials, set these parameters:
Note: Amazon discourages the use of this authentication scheme for anything but simple tests. The account root credentials have the full permissions of the AWS account, making this the least secure authentication method.
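For reference, a minimal connection string for this scheme might look like the following sketch (the key values are placeholders):
AuthScheme=AwsRootKeys; AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSRegion=NORTHERNVIRGINIA;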
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
To authenticate using temporary credentials, specify the following:
The Cloud can now request resources using the same permissions provided by long-term credentials (such as IAM user credentials) for the lifespan of the temporary credentials.
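As a sketch, a connection using temporary credentials combines the access key, secret key, and session token issued by AWS STS. The AuthScheme value shown here is an assumption to verify against your build, and all credential values are placeholders:
AuthScheme=TemporaryCredentials; AWSAccessKey=ASIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSSessionToken=AQoDYXdzEJr...; AWSRegion=NORTHERNVIRGINIA;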
To authenticate using both temporary credentials and an IAM role, set all the parameters described above, and specify these additional parameters:
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
Set AuthScheme to AwsEC2Roles.
If you are using the Cloud from an EC2 Instance and have an IAM Role assigned to the instance, you can use the IAM Role to authenticate. Since the Cloud automatically obtains your IAM Role credentials and authenticates with them, it is not necessary to specify AWSAccessKey and AWSSecretKey.
If you are also using an IAM role to authenticate, you must additionally specify the following:
The Amazon DynamoDB Cloud now supports IMDSv2. Unlike IMDSv1, the new version requires an authentication token. Endpoints and responses are the same in both versions.
In IMDSv2, the Amazon DynamoDB Cloud first attempts to retrieve the IMDSv2 metadata token and then uses it to call AWS metadata endpoints. If it is unable to retrieve the token, the Cloud reverts to IMDSv1.
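Because the credentials are obtained automatically from the instance metadata service, a connection string for this scheme can be as simple as the following sketch:
AuthScheme=AwsEC2Roles; AWSRegion=NORTHERNVIRGINIA;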
Set AuthScheme to AwsWebIdentity.
If you are either using Amazon DynamoDB from a container configured to assume role with web identity (such as a Pod in an EKS cluster with an OpenID Provider) OR have authenticated with a web identity provider associated with an IAM role (and have thus obtained an identity token), you can exchange the web identity token and IAM role information for temporary security credentials to authenticate and access AWS services.
If the container has AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE specified in the environment variables, Amazon DynamoDB automatically obtains the credentials.
You can also authenticate by specifying both AWSRoleARN and AWSWebIdentityToken to execute the AssumeRoleWithWebIdentity API operation.
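A minimal sketch of that second approach, with placeholder role and token values:
AuthScheme=AwsWebIdentity; AWSRoleARN=arn:aws:iam::1234:role/WebIdentityRole; AWSWebIdentityToken=eyJhbGciOi...; AWSRegion=NORTHERNVIRGINIA;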
To authenticate as an AWS role, set these properties:
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
Note: In some circumstances it might be preferable to use an IAM role for authentication, rather than the direct security credentials of an AWS root user. If you are specifying the AWSAccessKey and AWSSecretKey of an AWS root user, you cannot use roles.
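Putting these pieces together, a role-based connection might look like the following sketch. The AuthScheme value is an assumption, the keys and ARN are placeholders, and AWSExternalId is only needed when the role requires it:
AuthScheme=AwsIAMRoles; AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSRoleARN=arn:aws:iam::1234:role/MyRole; AWSRegion=NORTHERNVIRGINIA;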
To connect to ADFS, set these properties:
Example connection string:
AuthScheme=ADFS; AWSRegion=Ireland; [email protected]; Password=CH8WerW121235647iCa6; SSOLoginURL='https://adfs.domain.com'; AWSRoleArn=arn:aws:iam::1234:role/ADFS_SSO; AWSPrincipalArn=arn:aws:iam::1234:saml-provider/ADFSProvider; S3StagingDirectory=s3://athena/staging;
The ADFS Integrated flow indicates you are connecting with the user credentials of the currently logged in Windows user. To use the ADFS Integrated flow, do not specify the User and Password, but otherwise follow the same steps noted above under ADFS.
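For example, the ADFS connection string above reduces to a sketch like this in the Integrated flow (no User or Password):
AuthScheme=ADFS; AWSRegion=Ireland; SSOLoginURL='https://adfs.domain.com'; AWSRoleArn=arn:aws:iam::1234:role/ADFS_SSO; AWSPrincipalArn=arn:aws:iam::1234:saml-provider/ADFSProvider;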
To connect to Okta, set these properties:
If you are either using a trusted application or proxy that overrides the Okta client request OR configuring MFA, you must use combinations of SSOProperties to authenticate using Okta. Set any of the following, as applicable:
Example connection string:
AuthScheme=Okta; AWSRegion=Ireland; [email protected]; Password=CH8WerW121235647iCa6; SSOLoginURL='https://cdata-us.okta.com/home/amazon_aws/0oa35m8arsAL5f5NrE6NdA356/272'; SSOProperties='ApiToken=01230GGG2ceAnm_tPAf4MhiMELXZ0L0N1pAYrO1VR-hGQSf;'; AWSRoleArn=arn:aws:iam::1234:role/Okta_SSO; AWSPrincipalARN=arn:aws:iam::1234:saml-provider/OktaProvider; S3StagingDirectory=s3://athena/staging;
To enable mutual SSL authentication for SSOLoginURL, the WS-Trust STS endpoint, configure these SSOProperties:
Example connection string:
AuthScheme=PingFederate; SSOLoginURL=https://mycustomserver.com:9033/idp/sts.wst; SSOExchangeUrl=https://us-east-1.signin.aws.amazon.com/platform/saml/acs/764ef411-xxxxxx; User=admin; Password=PassValue; AWSPrincipalARN=arn:aws:iam::215338515180:saml-provider/pingFederate; AWSRoleArn=arn:aws:iam::215338515180:role/SSOTest2;
You can use any credentials file to authenticate, including any configurations related to AccessKey/SecretKey authentication, temporary credentials, role authentication, or MFA.
To do this, set these properties:
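A sketch of such a connection follows. The AuthScheme value and file-related property names (AWSCredentialsFile, AWSCredentialsFileProfile) are assumptions to verify against your build, and the path is a placeholder:
AuthScheme=AwsCredentialsFile; AWSCredentialsFile=C:\Users\me\.aws\credentials; AWSCredentialsFileProfile=default;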
If you want to use the Cloud with a user registered in a User Pool in AWS Cognito, set these properties:
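Once those properties are set, a Cognito-based connection might look like the following sketch. The AuthScheme value is an assumption, and the pool, app, and user values are placeholders:
AuthScheme=AwsCognitoSrp; AWSCognitoRegion=NORTHERNVIRGINIA; AWSUserPoolId=us-east-1_Example1; AWSUserPoolClientAppId=26pjexamplepkvf4eh8ex; AWSIdentityPoolId=us-east-1:b9f32ec5-xxxx; User=cognitoUser; Password=examplePassword;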
You can use the following properties to configure automatic data type detection, which is enabled by default.
You can use the following properties to gain greater control over Amazon DynamoDB API features and the strategies the Cloud uses to surface them:
UseSimpleNames: Amazon DynamoDB supports attribute names with special characters that many database-oriented tools do not support.
In addition, Amazon DynamoDB table names can include dots and dashes -- the Cloud interprets dots within table names as hierarchy separators that enable you to drill down to nested fields, similar to XPath.
You can use this property to replace any nonalphanumeric character with an underscore.
You can set the following properties to retry queries instead of returning a temporary error such as "maximum throughput exceeded":
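For example, the retry behavior could be tuned with a fragment like this sketch (values are illustrative; see MaximumRequestRetries and RetryWaitTime under Connection Properties):
MaximumRequestRetries=5; RetryWaitTime=2000;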
The CData Cloud also has two separate APIs, PartiQL and Scan; the API that is used depends on the query that is executed.
You can use the Pagesize property to optimize use of your provisioned throughput, based on the size of your items and Amazon DynamoDB's 1MB page size. Set this property to the number of items to return.
Generally, a smaller page size reduces spikes in throughput that cause throttling. A smaller page size also inserts pauses between requests. This interval evens out the distribution of requests and allows more requests to be successful by avoiding throttling.
The ThreadCount connection property can be set to influence how many threads are used when executing a Scan request. Using more threads consumes more memory but returns results faster. The default is 4. This works best on tables where a high or variable throughput is provisioned.
In cases where the maximum throughput for a table would be exceeded even on a single thread, there is no benefit to using a multithreaded Scan over the single-threaded PartiQL API: Amazon DynamoDB simply throttles all threads until the maximum throughput is no longer exceeded.
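As an illustration, a connection tuned for a table with modest provisioned throughput might combine these properties as in the following sketch (values are placeholders to adapt to your item sizes and capacity):
Pagesize=100; ThreadCount=8;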
We recommend using predefined roles for services rather than creating custom IAM policies. If you do define a custom policy, the Cloud requires the following IAM actions for Amazon DynamoDB:
| IAM Action | Description |
| dynamodb:ListTables | Required for getting a list of your DynamoDB tables. Used during metadata retrieval to dynamically determine the list of your tables. Note that this action does not support resource-level permissions and requires you to choose All resources (hence the * for "Resource"). In other words, the action dynamodb:ListTables needs a * Resource, and the other actions can be given permission to all the tables arn:aws:dynamodb:us-east-1:987654321098:table/* or to a list of specific tables:
"Resource": [
"arn:aws:dynamodb:us-east-1:987654321098:table/Customers",
"arn:aws:dynamodb:us-east-1:987654321098:table/Orders"
] |
| dynamodb:DescribeTable | Required for getting metadata about the selected table. Used during table metadata retrieval to dynamically determine the list of the columns. This action supports resource-level permissions, so you can specify the tables you want to get the metadata from. For example, for the table Customers and Orders in the region Northern Virginia us-east-1, for account 987654321098:
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeTable"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:987654321098:table/Customers",
"arn:aws:dynamodb:us-east-1:987654321098:table/Orders"
]
}
To give permissions to all the tables in the region you specified in the connection property AWSRegion, use an * instead of the table name: "Resource": "arn:aws:dynamodb:us-east-1:987654321098:table/*" |
| dynamodb:Scan | Required for getting one or more items by accessing every item in the table. Used for most of the SELECT queries, for example, SELECT * FROM [Customers]. This action supports resource-level permissions, so you can specify the tables you want to get data from, similar to dynamodb:DescribeTable. |
| dynamodb:PartiQLSelect | Required for getting specific items from a table when using SELECT queries and filtering by the primary key column, for example, SELECT * FROM [Customers] WHERE id=1234. This action supports resource-level permissions, so you can specify the tables you want to get data from, similar to dynamodb:DescribeTable. |
| dynamodb:PartiQLInsert | Required for inserting data to a table. This action supports resource-level permissions, so you can specify the tables you want to insert data to, similar to dynamodb:DescribeTable. |
| dynamodb:PartiQLUpdate | Required for modifying data in a table. This action supports resource-level permissions, so you can specify the tables you want to modify data on, similar to dynamodb:DescribeTable. |
| dynamodb:PartiQLDelete | Required for deleting data from a table. This action supports resource-level permissions, so you can specify the tables you want to delete data from, similar to dynamodb:DescribeTable. |
| dynamodb:CreateTable | Required for creating a table. This action supports resource-level permissions, so you can specify the table names you can create. |
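Pulling the actions above together, a custom policy that grants the Cloud full query access to the Customers and Orders tables might look like the following sketch (the account ID and region are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:ListTables"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:Scan",
        "dynamodb:PartiQLSelect",
        "dynamodb:PartiQLInsert",
        "dynamodb:PartiQLUpdate",
        "dynamodb:PartiQLDelete"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:987654321098:table/Customers",
        "arn:aws:dynamodb:us-east-1:987654321098:table/Orders"
      ]
    }
  ]
}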
Amazon DynamoDB is a schemaless database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92. In this section we show the various schemes the Cloud offers to bridge the gap between relational SQL and a document database.
The Cloud models the schemaless Amazon DynamoDB tables into relational tables and translates SQL queries into Amazon DynamoDB queries to get the requested data.
The Automatic Schema Discovery scheme automatically finds the data types in an Amazon DynamoDB table by scanning a configured number of rows of the table. You can use RowScanDepth, FlattenArrays, and FlattenObjects to control the relational representation of the tables in Amazon DynamoDB.
The Cloud automatically infers a relational schema by inspecting a series of Amazon DynamoDB documents in a collection. You can use the RowScanDepth property to define the number of documents the Cloud will scan to do so. The columns identified during the discovery process depend on the FlattenArrays and FlattenObjects properties.
If FlattenObjects is set, all nested objects will be flattened into a series of columns. For example, consider the following document:
{
id: 12,
name: "Lohia Manufacturers Inc.",
address: {street: "Main Street", city: "Chapel Hill", state: "NC"},
offices: ["Chapel Hill", "London", "New York"],
annual_revenue: 35600000
}
This document will be represented by the following columns:
| Column Name | Data Type | Example Value |
| id | Integer | 12 |
| name | String | Lohia Manufacturers Inc. |
| address.street | String | Main Street |
| address.city | String | Chapel Hill |
| address.state | String | NC |
| offices | String | ["Chapel Hill", "London", "New York"] |
| annual_revenue | Double | 35600000 |
If FlattenObjects is not set, then the address.street, address.city, and address.state columns will not be broken apart. The address column of type string will instead represent the entire object. Its value would be {street: "Main Street", city: "Chapel Hill", state: "NC"}. See JSON Functions for more details on working with JSON aggregates.
You can change the separator character in the column name from a dot by setting SeparatorCharacter.
The FlattenArrays property can be used to flatten array values into columns of their own. This is only recommended for arrays that are expected to be short, for example the coordinates below:
"coord": [ -73.856077, 40.848447 ]The FlattenArrays property can be set to 2 to represent the array above as follows:
| Column Name | Data Type | Example Value |
| coord.0 | Float | -73.856077 |
| coord.1 | Float | 40.848447 |
It is best to leave other unbounded arrays as they are and piece out the data for them as needed using JSON Functions.
It is possible to retrieve an array of objects as if it were a separate table. Take the following JSON structure from the restaurants table for example:
{
"restaurantid" : "30075445",
"address" : {
"building" : "1007",
"coord" : [-73.856077, 40.848447],
"street" : "Morris Park Ave",
"zipcode" : "10462"
},
"borough" : "Bronx",
"cuisine" : "Bakery",
"grades" : [{
"date" : 1393804800000,
"grade" : "B",
"score" : 2
}, {
"date" : 1378857600000,
"grade" : "A",
"score" : 6
}, {
"date" : 1358985600000,
"grade" : "A",
"score" : 10
}],
"name" : "Morris Park Bake Shop"
}
Vertical flattening will allow you to retrieve the grades array as a separate table by using the syntax below:
SELECT * FROM [restaurants.grades]
This query returns the following data set:
| date | grade | score | _index |
| 1393804800000 | B | 2 | 1 |
| 1378857600000 | A | 6 | 2 |
| 1358985600000 | A | 10 | 3 |
Arrays nested deeper within the document can be retrieved with the same syntax:
SELECT * FROM [restaurants.cuisine.bakery.grades]
There are also cases where the nested structure includes another array at a higher level. Take the following JSON as an example:
{
"restaurantid" : "30075445",
"reviews": [
{
"grades": [
{
"date": 1393804800000,
"score": 2,
"grade": "B"
},
{
"date": 1378857600000,
"score": 6,
"grade": "A"
},
{
"date": 1358985600000,
"score": 10,
"grade": "A"
}]
}],
"name" : "Morris Park Bake Shop"
}
For this structure, the index of the reviews array needs to be wrapped in square brackets. If square brackets are already being used as escape characters in the SQL query, they must themselves be escaped, as shown in the query below:
SELECT * FROM [restaurants.reviews.\[0\].grades]
This query will return the same data set as the JSON structure at the top. Note that this syntax is case-sensitive, so make sure to write the field names the same way that they're saved in DynamoDB.
The Cloud can return JSON structures as column values. The Cloud enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
| Column Name | Example Value |
| Grade | A |
| Score | 2 |
SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
| Column Name | Example Value |
| NumberOfGrades | 5 |
SELECT Name, JSON_SUM(grades,'[x].score') AS TotalScore FROM Students;
| Column Name | Example Value |
| TotalScore | 41 |
SELECT Name, JSON_MIN(grades,'[x].score') AS LowestScore FROM Students;
| Column Name | Example Value |
| LowestScore | 2 |
SELECT Name, JSON_MAX(grades,'[x].score') AS HighestScore FROM Students;
| Column Name | Example Value |
| HighestScore | 14 |
The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example:
SELECT DOCUMENT(*) FROM Customers;
The query above will return the entire document as shown.
{ "id": 12, "name": "Lohia Manufacturers Inc.", "address": { "street": "Main Street", "city": "Chapel Hill", "state": "NC"}, "offices": [ "Chapel Hill", "London", "New York" ], "annual_revenue": 35,600,000 }
Because Amazon DynamoDB is a NoSQL data source, queries need to be handled a bit differently than standard relational databases.
The lack of a required data type for a given column means that you could store different types of data in a single column. For instance, one row could have a String called EmailAddresses and another could have a StringSet also called EmailAddresses. For these and other kinds of cases, the Cloud largely determines what data type to use based on the values in the query.
For instance, say you have an Items table where the PartNumber could store either a String or a Number. To get back a part with the PartNumber of the number value 12345, you would issue the following query:
SELECT Name, Location, Quantity, PartNumber FROM Items WHERE PartNumber = 12345
Alternatively, the PartNumber could have been stored as the string "12345". To get back a part with the PartNumber of the literal string 12345, issue the following query:
SELECT Name, Location, Quantity, PartNumber FROM Items WHERE PartNumber = '12345'
If the data type of the specified value is not ambiguous, it is always used before the autodetected data type. In both of these cases, if a parameter was used instead of a hardcoded value, then the data type of the parameter would be used to determine what type to submit to Amazon DynamoDB.
If a value is not obvious based purely on the detected data type, the Cloud compares it to the autodetected column. For instance, if you want to insert a column called Coordinates into the Location table, your INSERT would look like:
INSERT INTO Locations (Address, Coordinates) VALUES ('123 Fake Street', '[40.7127, 74.0059]')
Based on the input value alone, the detected data type is a string. However, because a Coordinates column was previously autodetected, the Cloud inserts a NumberSet and not a simple String.
If a Coordinates column was not autodetected when scanning the Locations table, the data type of the inserted value is used.
In this case, we could still resolve that the INSERT is a NumberSet, but it will cost a bit more overhead to do this.
Amazon DynamoDB supports two different methods of using the COUNT aggregate function. To simply return the number of items in your table, issue the following query:
SELECT COUNT(*) FROM MyTable
The CData Cloud will read the ItemCount from the DescribeTable action. This avoids using too many read units to scan the full table. However, DynamoDB updates this value approximately every six hours, and recent changes might not be reflected in it.
Issuing the below example queries will instead scan the full table for count:
SELECT COUNT(*) FROM MyTable WHERE MyInt > 10
SELECT COUNT(MyInt) FROM MyTable
Amazon DynamoDB documents and lists are supported with the CData Cloud. You can access documents and lists directly at the root level or use the '.' character as a hierarchy divider to drill down to documents and lists.
When data types are autodetected, they are reported down to the lowest level that can be reliably detected. For instance, a document called Customer with a child called Address and a child on Address called Street would be represented by the column Customer.Address.Street.
However, this process does not apply to Lists since a list could have any number of entries. Once a List or a Set is detected, additional values are not reported as being available in the table schema.
If there are attributes that frequently do not have a value and thus are not autodetected, these can still be retrieved by specifying the correct path to them. For instance, to get the Special attribute from the Customer document:
SELECT [Customer.Address.Street], [Customer.Special] FROM MyTable
Once a List has been detected, additional values are not reported. But individual values on the list can be referenced by specifying '.' and a number. For instance:
SELECT [MyList.0], [MyList.1.Email], [MyList.1.Age] FROM MyTable
This will retrieve the first value on the list and the second value's Email and Age attributes.
INSERTs in Amazon DynamoDB require that the full object be specified: insert a document or list at the root by passing in the full JSON aggregate. For instance:
INSERT INTO MyTable (PrimaryKey, EmailAddresses, Address, MyList) VALUES ('uniquekey', '["[email protected]", "[email protected]"]', '{"Street":"123 Fake Street", "City":"Chapel Hill", "Zip":"27713"}', '[{"S":"somestr"},{"NS":[1,2]},{"N":4}]')
In this case, EmailAddresses is inserted as a StringSet, Address is inserted as a document, and MyList is inserted as a list.
Updates are supported using the same syntax that is available during selects. Documents and Lists can be specified using the '.' character to specify hierarchy. For instance:
UPDATE MyTable SET [EmailAddress.0]='[email protected]', [EmailAddress.1]='[email protected]', [Address.Street]='123 Fake Street', [Address.City]='Chapel Hill', [Address.Zip]='27713', [MyList.0]='somestr', [MyList.1]='[1,2]', [MyList.2]=4 WHERE PrimaryKey='uniquekey'
Note that EmailAddress and MyList must be autodetected to resolve how to handle EmailAddress differently from MyList. If you are in doubt about whether something will be automatically detected, specifying the full JSON to update will always work.
By default, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
To authenticate to an HTTP proxy, set the following:
Set the following properties:
Amazon DynamoDB is a highly scalable NoSQL cloud database that works differently than a regular database. The CData Cloud enables you to access Amazon DynamoDB data using a standard database-like interface. The following topics describe how we model schemaless Amazon DynamoDB tables as regular Tables and Stored Procedures.
The list of Tables is dynamically retrieved from your Amazon DynamoDB account. You can use the CreateTable stored procedure to create a table, or you can create tables using the Amazon Web Services Admin Console.
The Cloud can dynamically detect table schemas at connection time. See Automatic Schema Discovery for more information. This method is useful if the structure of your data is volatile.
The list of tables is dynamically retrieved from your Amazon DynamoDB account. You can use the CreateTable stored procedure to create a new table, or you can create a table using the Amazon Web Services Admin Console.
Because DynamoDB tables are partitioned based on their key, you should take care in selecting a proper key based on the query requirements of your table. Refer to the documentation for DynamoDB for more information about using best practices to model data in DynamoDB tables. DynamoDB supports two types of primary keys:
Since Amazon DynamoDB tables are schemaless, the Cloud offers the following two mechanisms to uncover the schema.
The columns of a table are dynamically determined by scanning data in the first few rows. You can adjust the number of rows that are used by modifying the RowScanDepth property. In addition to the name of the column, the row scan also determines the data type. The following table shows how the different data types supported by Amazon DynamoDB are modeled in the Cloud.
| Amazon DynamoDB Type | Modeled Type | Encoding | Sample Value |
| Boolean | Boolean | Not Required | True |
| String | String | Not Required | USA |
| Blob | String | Not Required | |
| Number | Double | Not Required | 24.0 |
| String Array | String | JSON Array | ["USA","Canada","UK"] |
| Number Array | String | JSON Array | [20,200.5,500] |
| Blob Array | JSON Array | JSON Array | ["ABCD","EFGH"] |
| Document | JSON Object | JSON Object | {"Address":"123 Fake Street","City":"Chapel Hill","Zip":"27516"} |
| List | JSON Array | JSON Array | [{"S":"mystring"},{"NS":[1,2]},{"N":4}] |
Instead of using dynamically discovered schemas, you can define your own schemas. This gives you more control over the projected columns and also enables you to use other data types, such as boolean and datetime. Refer to the CreateSchema stored procedure to create your own schema. Specify the FileName (full path) and TableName of the new schema file, which should match the name of the Amazon DynamoDB table, and edit the column listing to use it for your own table.
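A sketch of invoking the procedure, assuming EXECUTE syntax; the path, file extension, and table name are placeholders:
EXECUTE CreateSchema TableName = 'Customers', FileName = 'C:\schemas\Customers.rsd'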
While the schema of the table is necessary to report metadata, data may be selected, inserted, updated, or deleted from columns that do not exist in the schema. Columns that do not already exist in the table schema will have their data types dynamically determined based on the data that is specified. See DynamoDB Queries for more information.
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Amazon DynamoDB.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Amazon DynamoDB, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| CreateTable | Creates a new table in DynamoDB with specified partition and sort keys, along with optional billing mode and capacity settings. |
Creates a new table in DynamoDB with specified partition and sort keys, along with optional billing mode and capacity settings.
| Name | Type | Required | Description |
| TableName | String | True | The name of the table to create, which must be between 3 and 255 characters. This is a required parameter for table creation. |
| PartitionKeyName | String | True | Specifies the name of the partition key, which is mandatory for uniquely identifying items in the table. |
| PartitionKeyType | String | True | Defines the data type of the partition key, such as 'String', 'Number', or 'Binary'. This determines how the partition key will be stored and indexed. The allowed values are S, N, B. |
| SortKeyName | String | False | Specifies the name of the sort key, which is optional and used for secondary organization of data within a partition. |
| SortKeyType | String | False | Defines the data type of the sort key, such as 'String', 'Number', or 'Binary', if a sort key is provided. The allowed values are S, N, B. |
| BillingMode | String | False | Specifies how you are billed for throughput capacity. Options include 'PROVISIONED' for manual capacity management or 'PAY_PER_REQUEST' for on-demand scaling. The allowed values are PROVISIONED, PAY_PER_REQUEST. The default value is PROVISIONED. |
| ReadCapacityUnits | String | False | Defines the maximum number of strongly consistent read operations per second, applicable only when 'BillingMode' is set to 'PROVISIONED'. The default value is 5. |
| WriteCapacityUnits | String | False | Defines the maximum number of write operations per second, applicable only when 'BillingMode' is set to 'PROVISIONED'. The default value is 5. |
| Name | Type | Description |
| Success | String | Indicates the outcome of the operation. Returns 'True' if the table was created successfully, otherwise 'False'. |
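Using the parameters above, a minimal on-demand table creation might look like the following sketch (EXECUTE syntax and all values are illustrative):
EXECUTE CreateTable TableName = 'Orders', PartitionKeyName = 'OrderId', PartitionKeyType = 'S', SortKeyName = 'OrderDate', SortKeyType = 'S', BillingMode = 'PAY_PER_REQUEST'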
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Amazon DynamoDB:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the Account table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Account'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the CreateSchema stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'CreateSchema' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'CreateSchema' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output type parameters can be both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native Amazon DynamoDB procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the Account table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Account'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows dependent properties associated that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The varieties of outer joins supported. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns which will be checked, in the given order, for use as the modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is TRUE, Merge Mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate startdate. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the NoSQL Database section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| UseLakeFormation | When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion. |
| Property | Description |
| AuthScheme | Specifies the type of authentication to use when connecting to Amazon DynamoDB. If this property is left blank, the default authentication is used. |
| Domain | Specifies your AWS domain name. Use this property to set a custom domain name if your organization has associated one with AWS. |
| DynamoDBVPCEndpoint | Specifies the Amazon DynamoDB VPC endpoint to use when connecting through AWS PrivateLink. |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSSessionToken | Your AWS session token. |
| AWSExternalId | A unique identifier that might be required when you assume a role in another account. |
| MFASerialNumber | The serial number of the MFA device if one is being used. |
| MFAToken | The temporary token available from your MFA device. |
| TemporaryTokenDuration | The amount of time (in seconds) a temporary token will last. |
| AWSCognitoRegion | The hosting region for AWS Cognito. |
| AWSUserPoolId | The User Pool Id. |
| AWSUserPoolClientAppId | The User Pool Client App Id. |
| AWSUserPoolClientAppSecret | Optional. The User Pool Client App Secret. |
| AWSIdentityPoolId | The Identity Pool Id. |
| AWSWebIdentityToken | The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. |
| Property | Description |
| User | The IDP user used to authenticate the IDP via SSO. |
| Password | The password used to authenticate the IDP user via SSO. |
| SSOLoginURL | The identity provider's login URL. |
| SSOProperties | Additional properties required to connect to the identity provider in a semicolon-separated list. |
| SSOExchangeURL | The URL used for consuming the SAML response and exchanging it for service specific credentials. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Property | Description |
| AutoDetectIndex | Specifies whether the provider should automatically detect and use secondary indexes based on the query criteria. |
| FlattenArrays | This property flattens nested array elements into individual columns. By default, nested arrays are returned as JSON strings. Set this property to the number of elements to extract from nested arrays. |
| FlattenObjects | Specifies whether nested object properties are flattened into individual columns. |
| FlexibleSchema | Specifies whether the provider should dynamically scan query result sets for additional metadata. Set to true to enable scanning or false to use a static metadata structure. |
| IgnoreTypes | Specifies which data types should be ignored and reported as strings. |
| MaximumRequestRetries | Specifies the maximum number of times the provider retries a request when a temporary issue is detected. Temporary issues include network interruptions, transient errors, or exceeding operational thresholds. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Pagesize | Specifies the maximum number of items the provider evaluates per API request. The default value, -1, allows the server to calculate the page size automatically. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| QueryMode | Specifies the mode used by the provider to retrieve results from Amazon DynamoDB. |
| RetryWaitTime | Specifies the minimum number of milliseconds the provider waits before retrying a request. The wait time doubles with each retry. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| SeparatorCharacter | Specifies the character or characters used to denote hierarchy in flattened structures, such as Maps and List attributes in DynamoDB. |
| ThreadCount | Specifies the number of threads to allocate for parallel scans during data selection. A value of 1 disables parallel scanning, while higher values increase parallelism. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TypeDetectionScheme | Specifies the method used to determine the data type of columns. |
| UseBatchWriteItemOperation | Specifies the use of the BatchWriteItem operation for updates and inserts. This is required for handling binary or binary-set data, as the default operations (ExecuteStatement/BatchExecuteStatement) do not support these field types. |
| UseConsistentReads | Specifies whether consistent reads should always be used when querying DynamoDB. Consistent reads provide the most up-to-date data, but consume more read capacity. |
| UseSimpleNames | Specifies whether or not simple names should be used for tables and columns. |
This section provides a complete list of the Connection properties you can configure in the connection string for this provider.
| Property | Description |
| UseLakeFormation | When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion. |
When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion.
bool
false
When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | Specifies the type of authentication to use when connecting to Amazon DynamoDB. If this property is left blank, the default authentication is used. |
| Domain | Specifies your AWS domain name. Use this property to set a custom domain name if your organization has associated one with AWS. |
| DynamoDBVPCEndpoint | Specifies the Amazon DynamoDB VPC endpoint to use when connecting through AWS PrivateLink. |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSSessionToken | Your AWS session token. |
| AWSExternalId | A unique identifier that might be required when you assume a role in another account. |
| MFASerialNumber | The serial number of the MFA device if one is being used. |
| MFAToken | The temporary token available from your MFA device. |
| TemporaryTokenDuration | The amount of time (in seconds) a temporary token will last. |
| AWSCognitoRegion | The hosting region for AWS Cognito. |
| AWSUserPoolId | The User Pool Id. |
| AWSUserPoolClientAppId | The User Pool Client App Id. |
| AWSUserPoolClientAppSecret | Optional. The User Pool Client App Secret. |
| AWSIdentityPoolId | The Identity Pool Id. |
| AWSWebIdentityToken | The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. |
Specifies the type of authentication to use when connecting to Amazon DynamoDB. If this property is left blank, the default authentication is used.
string
"AwsRootKeys"
Specifies your AWS domain name. Use this property to set a custom domain name if your organization has associated one with AWS.
string
"amazonaws.com"
This property specifies the AWS domain name to use when connecting to services. If your organization uses a custom AWS domain, provide it here. If you do not have a unique domain, use the default value, "amazonaws.com". Ensure the domain name matches your AWS setup to avoid connection errors.
Specifies the Amazon DynamoDB VPC endpoint to use when connecting through AWS PrivateLink.
string
""
Use this property to connect to DynamoDB through a private VPC endpoint instead of the public service endpoint (the default DynamoDB URL). When set, this property overrides the default regional endpoint (dynamodb.{region}.amazonaws.com).
An AWS PrivateLink endpoint for DynamoDB follows the format: vpce-abcdef12-3455.dynamodb.{region}.vpce.amazonaws.com.
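For example, a connection string that routes traffic through a PrivateLink endpoint might look like the following (the endpoint ID, keys, and region are placeholder values):
AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSRegion=OREGON; DynamoDBVPCEndpoint=vpce-abcdef12-3455.dynamodb.us-west-2.vpce.amazonaws.com;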
Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
string
""
You can create and view your access keys from the Security credentials page in the AWS Management Console.
Your AWS account secret key. This value is accessible from your AWS security credentials page.
string
""
The Amazon Resource Name of the role to use when authenticating.
string
""
When authenticating outside of AWS, it is common to use a role for authentication instead of direct AWS account credentials. Setting AWSRoleARN causes the CData Cloud to perform role-based authentication instead of using the AWSAccessKey and AWSSecretKey directly. The AWSAccessKey and AWSSecretKey must still be specified to perform this authentication, and they must belong to an IAM user; you cannot use the credentials of an AWS root user when setting AWSRoleARN.
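For example, a connection string that assumes a role using IAM user credentials might look like the following (all identifiers are placeholder values):
AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSRoleARN=arn:aws:iam::123456789012:role/CrossAccountRole; AWSRegion=NORTHERNVIRGINIA;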
The hosting region for your Amazon Web Services.
string
"NORTHERNVIRGINIA"
The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, TAIPEI, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, THAILAND, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, MEXICOCENTRAL, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSEASTF, ISOLATEDUSSOUTHF, ISOLATEDUSWEST and ISOLATEDEUWEST.
Your AWS session token.
string
""
This value can be retrieved in several ways; refer to the AWS documentation on temporary security credentials for details.
A unique identifier that might be required when you assume a role in another account.
string
""
The serial number of the MFA device if one is being used.
string
""
You can find the device for an IAM user by going to the AWS Management Console and viewing the user's security credentials. For virtual devices, this is actually an Amazon Resource Name (such as arn:aws:iam::123456789012:mfa/user).
The temporary token available from your MFA device.
string
""
If MFA is required, this value is used along with the MFASerialNumber to retrieve temporary credentials to log in. The temporary credentials available from AWS only last up to 1 hour by default (see TemporaryTokenDuration). Once the time is up, the connection must be updated with a new MFA token so that new credentials can be obtained.
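For example, a connection string using a virtual MFA device might look like the following (all values are placeholders; the MFAToken is the current code displayed by the device):
AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; MFASerialNumber=arn:aws:iam::123456789012:mfa/user; MFAToken=123456;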
The amount of time (in seconds) a temporary token will last.
string
"3600"
Temporary tokens are used with both MFA and role-based authentication. Temporary tokens eventually time out, at which point a new temporary token must be obtained. For connections that do not use MFA, this is handled transparently: the CData Cloud internally requests a new temporary token once the current one expires.
For connections that require MFA, however, a new MFAToken must be specified in the connection to retrieve a new temporary token. This is more intrusive, since it requires the user to update the connection. The minimum and maximum durations that can be specified depend on the type of authentication being used.
For role-based authentication, the minimum duration is 900 seconds (15 minutes) and the maximum is 3600 seconds (1 hour). Even if MFA is used with role-based authentication, 3600 is still the maximum.
For MFA authentication by itself (using an IAM user or root user), the minimum is 900 seconds (15 minutes) and the maximum is 129600 seconds (36 hours).
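For example, to request 12-hour temporary credentials for an MFA-only connection, you might set the following (all credential values are placeholders):
AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; MFASerialNumber=arn:aws:iam::123456789012:mfa/user; MFAToken=123456; TemporaryTokenDuration=43200;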
The hosting region for AWS Cognito.
string
"NORTHERNVIRGINIA"
The hosting region for AWS Cognito. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, TAIPEI, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, THAILAND, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, MEXICOCENTRAL, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSEASTF, ISOLATEDUSSOUTHF, ISOLATEDUSWEST and ISOLATEDEUWEST.
The User Pool Id.
string
""
You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> Pool Id.
The User Pool Client App Id.
string
""
You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> App clients -> App client Id.
Optional. The User Pool Client App Secret.
string
""
You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> App clients -> App client secret.
The Identity Pool Id.
string
""
You can find this in AWS Cognito -> Manage Identity Pools -> select your identity pool -> Edit identity pool -> Identity Pool Id.
The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider.
string
""
The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. An application can get this token by authenticating a user with a web identity provider. If not specified, the token is read from the file referenced by the 'AWS_WEB_IDENTITY_TOKEN_FILE' environment variable.
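For example, a connection string that exchanges a web identity token for temporary credentials might look like the following (the role ARN is a placeholder and the token is shortened for brevity):
AuthScheme=AwsWebIdentity; AWSRoleARN=arn:aws:iam::123456789012:role/WebIdentityRole; AWSWebIdentityToken=eyJraWQiOi...;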
This section provides a complete list of the SSO properties you can configure in the connection string for this provider.
| Property | Description |
| User | The IDP user used to authenticate to the IDP via SSO. |
| Password | The password used to authenticate the IDP user via SSO. |
| SSOLoginURL | The identity provider's login URL. |
| SSOProperties | Additional properties required to connect to the identity provider in a semicolon-separated list. |
| SSOExchangeURL | The URL used for consuming the SAML response and exchanging it for service specific credentials. |
The IDP user used to authenticate to the IDP via SSO.
string
""
Together with Password, this field is used to authenticate in SSO connections against the Amazon DynamoDB server.
The password used to authenticate the IDP user via SSO.
string
""
The User and Password are together used in SSO connections to authenticate with the server.
The identity provider's login URL.
string
""
Additional properties required to connect to the identity provider in a semicolon-separated list.
string
""
Additional properties required to connect to the identity provider in a semicolon-separated list. SSOProperties is used in conjunction with AWSRoleARN and AWSPrincipalARN. The following sections provide example connection strings for the ADFS and Okta identity providers.
To connect to ADFS, set these properties:
Example connection string:
AuthScheme=ADFS; AWSRegion=Ireland; [email protected]; Password=CH8WerW121235647iCa6; SSOLoginURL='https://adfs.domain.com'; AWSRoleArn=arn:aws:iam::1234:role/ADFS_SSO; AWSPrincipalArn=arn:aws:iam::1234:saml-provider/ADFSProvider;
The ADFS Integrated flow indicates you are connecting with the user credentials of the currently logged in Windows user. To use the ADFS Integrated flow, do not specify the User and Password, but otherwise follow the same steps noted above under ADFS.
To connect to Okta, set these properties:
If you are either using a trusted application or proxy that overrides the Okta client request OR configuring MFA, you must use combinations of SSOProperties to authenticate using Okta. Set any of the following, as applicable:
Example connection string:
AuthScheme=Okta; AWSRegion=Ireland; [email protected]; Password=CH8WerW121235647iCa6; SSOLoginURL='https://cdata-us.okta.com/home/amazon_aws/0oa35m8arsAL5f5NrE6NdA356/272'; SSOProperties='ApiToken=01230GGG2ceAnm_tPAf4MhiMELXZ0L0N1pAYrO1VR-hGQSf;'; AWSRoleArn=arn:aws:iam::1234:role/Okta_SSO; AWSPrincipalARN=arn:aws:iam::1234:saml-provider/OktaProvider;
The URL used for consuming the SAML response and exchanging it for service specific credentials.
string
""
The CData Cloud will use the URL specified here to consume a SAML response and exchange it for service specific credentials. The retrieved credentials are the final piece during the SSO connection that are used to communicate with Amazon DynamoDB.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
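For example, to pin the server certificate using the SHA1 thumbprint shown in the table above, you might set:
SSLServerCert=34a929226ae0819f2ec14b4a3d904f801cbb150d;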
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
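For example, to capture more detailed logging while diagnosing a connection issue, you might set:
Verbosity=3;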
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| AutoDetectIndex | Specifies whether the provider should automatically detect and use secondary indexes based on the query criteria. |
| FlattenArrays | This property flattens nested array elements into individual columns. By default, nested arrays are returned as JSON strings. Set this property to the number of elements to extract from nested arrays. |
| FlattenObjects | Specifies whether nested object properties are flattened into individual columns. |
| FlexibleSchema | Specifies whether the provider should dynamically scan query result sets for additional metadata. Set to true to enable scanning or false to use a static metadata structure. |
| IgnoreTypes | Specifies which data types should be ignored and reported as strings. |
| MaximumRequestRetries | Specifies the maximum number of times the provider retries a request when a temporary issue is detected. Temporary issues include network interruptions, transient errors, or exceeding operational thresholds. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Pagesize | Specifies the maximum number of items the provider evaluates per API request. The default value, -1, allows the server to calculate the page size automatically. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| QueryMode | Specifies the mode used by the provider to retrieve results from Amazon DynamoDB. |
| RetryWaitTime | Specifies the minimum number of milliseconds the provider waits before retrying a request. The wait time doubles with each retry. |
| RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
| SeparatorCharacter | Specifies the character or characters used to denote hierarchy in flattened structures, such as Maps and List attributes in DynamoDB. |
| ThreadCount | Specifies the number of threads to allocate for parallel scans during data selection. A value of 1 disables parallel scanning, while higher values increase parallelism. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TypeDetectionScheme | Specifies the method used to determine the data type of columns. |
| UseBatchWriteItemOperation | Specifies the use of the BatchWriteItem operation for updates and inserts. This is required for handling binary or binary-set data, as the default operations (ExecuteStatement/BatchExecuteStatement) do not support these field types. |
| UseConsistentReads | Specifies whether consistent reads should always be used when querying DynamoDB. Consistent reads provide the most up-to-date data, but consume more read capacity. |
| UseSimpleNames | Specifies whether or not simple names should be used for tables and columns. |
Specifies whether the provider should automatically detect and use secondary indexes based on the query criteria.
bool
true
This property controls the automatic detection of secondary indexes, which can optimize data selection in DynamoDB tables. By default, this property is set to true, enabling the provider to analyze the query criteria and choose an appropriate secondary index automatically.
This property is useful for scenarios where the default behavior does not align with your query optimization strategy, giving you flexibility to fine-tune index usage for your DynamoDB tables.
This property flattens nested array elements into individual columns. By default, nested arrays are returned as JSON strings. Set this property to the number of elements to extract from nested arrays.
string
""
Use this property to extract elements from nested arrays and represent them as individual columns.
This property is useful for simplifying the representation of short arrays in tabular output.
The extracted elements are assigned column names with their zero-based index appended. Any remaining elements in the array are ignored.
For example, the following array is flattened into two columns when FlattenArrays is set to 2:
["FLOW-MATIC", "LISP", "COBOL"]
| Column Name | Column Value |
| languages_0 | FLOW-MATIC |
| languages_1 | LISP |
Flattening longer arrays may result in unused elements being discarded, so it is recommended for arrays expected to contain a small number of items.
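The example above corresponds to a setting such as the following (the languages attribute name is illustrative):
FlattenArrays=2;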
Specifies whether nested object properties are flattened into individual columns.
bool
true
When this property is set to true, object properties are extracted as separate columns. When it is set to false, nested objects within arrays are represented as JSON-formatted strings. Flattening nested objects into individual columns simplifies working with structured data. When enabled, the provider appends the property name to the parent object name to generate column names. This is useful for tabularizing predictable and manageable object structures.
For deeply nested or large JSON objects, consider the performance implications of flattening, as excessive flattening may create an unmanageable number of columns. For objects with unpredictable properties or varying schemas, leaving this property disabled may provide a more flexible representation.
For example, you can flatten the nested objects below at connection time:
[
{ "grade": "A", "score": 2 },
{ "grade": "A", "score": 6 },
{ "grade": "A", "score": 10 },
{ "grade": "A", "score": 9 },
{ "grade": "B", "score": 14 }
]
When FlattenObjects is set to true and FlattenArrays is set to 1, the preceding array is flattened into the following table:
| Column Name | Column Value |
| grades_0_grade | A |
| grades_0_score | 2 |
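The flattening shown above corresponds to settings such as:
FlattenObjects=true; FlattenArrays=1;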
Specifies whether the provider should dynamically scan query result sets for additional metadata. Set to true to enable scanning or false to use a static metadata structure.
bool
true
When enabled, this property allows the provider to dynamically analyze query result sets for additional metadata, ensuring the result schema reflects any changes or variations in the queried data. This property is useful when working with data sources where schema details may vary or are not fully known in advance.
Disabling this property preserves a static metadata structure, which may improve performance when querying data with a consistent schema. Use this property based on the predictability of your data source and performance considerations.
Specifies which data types should be ignored and reported as strings.
string
"Datetime,Date,Time"
This property allows you to exclude specific data types from being processed as their native types. When a type is ignored, it is treated as a string. By default, Datetime, Date, and Time are ignored and reported as string values instead of their native types.
This property is useful when compatibility issues or downstream processing requirements necessitate treating certain types as text. For example, applications that do not handle Time data types may benefit from converting them to strings. Note: Changes to this property take effect on the next connection.
Specifies the maximum number of times the provider retries a request when a temporary issue is detected. Temporary issues include network interruptions, transient errors, or exceeding operational thresholds.
string
"4"
This property controls the number of retries the Cloud attempts when a temporary issue, such as network instability or rate limits, is encountered. For each retry, the Cloud follows an exponential backoff strategy: the wait time between retries starts at the value specified by RetryWaitTime and doubles with each subsequent retry until the maximum number of retries is reached.
For example, if RetryWaitTime is set to 2 seconds and MaximumRequestRetries is set to 5, the Cloud waits as follows: 0 seconds (initial attempt), 2 seconds, 4 seconds, 8 seconds, 16 seconds, and 32 seconds.
This property is useful in scenarios where temporary issues are expected, such as high-latency networks or environments with strict API quotas.
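For example, to tolerate longer transient outages with a shorter initial backoff, you might set (illustrative values):
MaximumRequestRetries=6; RetryWaitTime=1000;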
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
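For example, to cap ad hoc queries at 1000 rows unless a query specifies its own LIMIT, you might set:
MaxRows=1000;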
Specifies the maximum number of items the provider evaluates per API request. The default value, -1, allows the server to calculate the page size automatically.
int
-1
Note that this limit applies to the number of items evaluated, not the number of matching items returned. If the dataset size exceeds 1 MB or the number of evaluated items reaches the specified page size, the operation stops and returns the matching results along with a pagination token to retrieve the remaining data. Set this property to a specific value to control the size of each API request and optimize performance. Adjust this property based on your application’s performance and memory requirements.
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
Specifies the mode used by the provider to retrieve results from Amazon DynamoDB.
string
"Adaptive"
This property determines the query execution strategy the provider uses to retrieve results from DynamoDB.
Use Adaptive for optimal performance, as it dynamically selects the most efficient query mode. Choose PartiQL for precise query translation or SCAN when a complete table scan is required.
Specifies the minimum number of milliseconds the provider waits before retrying a request. The wait time doubles with each retry.
string
"2000"
This property defines the base wait time, in milliseconds, between retries when a temporary issue, such as a network failure or rate limiting, is detected. With each retry, the wait time doubles, following an exponential backoff strategy.
The total number of retries is controlled by the MaximumRequestRetries property. For example, if RetryWaitTime is set to 2000 milliseconds and MaximumRequestRetries is set to 3, the Cloud waits 2000, 4000, and 8000 milliseconds before subsequent retries.
The maximum number of rows to scan to look for the columns available in a table.
int
50
The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
Specifies the character or characters used to denote hierarchy in flattened structures, such as Maps and List attributes in DynamoDB.
string
"."
This property defines the delimiter used to represent hierarchical relationships in flattened structures within DynamoDB. For example, when SeparatorCharacter is set to ".", an attribute named address.city indicates that address is a parent attribute with a child attribute called city.
If your data includes attribute names containing the specified separator, for example, a period (.), you should choose a different SeparatorCharacter to prevent ambiguity in column naming. This property is useful for handling complex, nested data structures where clear delineation of hierarchy is required.
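For example, if your attribute names themselves contain periods, an underscore separator avoids ambiguity:
SeparatorCharacter=_;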
Specifies the number of threads to allocate for parallel scans during data selection. A value of 1 disables parallel scanning, while higher values increase parallelism.
string
"5"
Parallel scans allow the retrieval process to run across multiple threads, improving performance when scanning large datasets in Amazon DynamoDB. The number of threads specified by ThreadCount determines how data is split for processing. While increasing ThreadCount can significantly speed up scans, it also accelerates the consumption of read units for the table.
Higher values for ThreadCount require more system resources, such as CPU cores and bandwidth. Excessive parallelism may exhaust read capacity units quickly, potentially incurring additional costs or impacting other operations on the table. It is important to evaluate your system’s available resources and the read units allocated to your DynamoDB tables before adjusting this property.
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
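For example, to allow up to five minutes for each server response, you might set:
Timeout=300;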
Specifies the method used to determine the data type of columns.
string
"RowScan"
This property defines the strategy used to determine column data types.
By default, RowScan is used: the provider infers each column's data type by scanning rows, with the number of rows scanned controlled by RowScanDepth. Use None when data type inference is not required or when consistent string typing is preferred; all columns are then reported as strings.
Specifies the use of the BatchWriteItem operation for updates and inserts. This is required for handling binary or binary-set data, as the default operations (ExecuteStatement/BatchExecuteStatement) do not support these field types.
bool
false
By default, the Cloud uses the ExecuteStatement or BatchExecuteStatement operation to handle updates and inserts. However, these operations do not support manipulating binary or binary-set fields. To handle these data types, enable this property to switch to the BatchWriteItem operation.
Using BatchWriteItem may alter the behavior and performance characteristics of updates and inserts. This property should only be enabled when your dataset includes binary or binary-set data that needs to be inserted or updated. For other use cases, the default operations are sufficient.
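For example, when inserting or updating rows that contain binary or binary-set attributes, you might set:
UseBatchWriteItemOperation=true;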
Specifies whether consistent reads should always be used when querying DynamoDB. Consistent reads provide the most up-to-date data, but consume more read capacity.
bool
false
When this property is set to true, the Cloud performs consistent reads, ensuring the most up-to-date data is returned for queries and scans. However, consistent reads consume twice as many read capacity units as eventually consistent reads. Use this property only when accurate and immediate data consistency is critical for your use case.
Note: Consistent reads are not supported for global secondary indexes. If you scan or query using a secondary index, the property is ignored even if set to true.
Specifies whether or not simple names should be used for tables and columns.
bool
false
Amazon DynamoDB tables can include special characters in their names that are typically not allowed in standard databases. This property makes the Cloud easier to use with traditional database tools.
Setting UseSimpleNames to true simplifies the names of the columns that are returned. It enforces a naming scheme where only alphanumeric characters and underscores are valid for displayed column names.
Notes:
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors
THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.