CData Cloud offers access to CSV across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a SQL Server database can connect to CSV through CData Cloud.
CData Cloud allows you to standardize and configure connections to CSV as though it were any other OData endpoint or standard SQL Server.
The CData Cloud is designed for streaming CSV file content only.
This streamed content does not include the metadata associated with remotely stored CSV files, such as file and folder names.
If access to both the file metadata and the actual file content is needed, then the CData Cloud must be used in tandem with the associated file system driver(s) for the service the CSV files are remotely stored in.
The following file system drivers are available:
See the relevant CData file system driver's documentation for a configuration guide for connecting to stored CSV file metadata.
This page provides a guide to Establishing a Connection to CSV in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to CSV and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from CSV through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to CSV by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
The CData Cloud allows connecting to local and remote CSV resources. Set the URI property to the CSV resource location, in addition to any other properties necessary to connect to your data source.
Set the ConnectionType to Local. Local files support SELECT, INSERT, UPDATE, and DELETE queries.
Set the URI to a folder containing CSV files: C:\folder1.
You can also connect to multiple CSV files which share the same schema. Below is an example connection string:
URI=C:\folder; AggregateFiles=True;
If you would prefer to expose each of the individual CSV files as its own table instead, leave this property set to False:
URI=C:\folder; AggregateFiles=False;
If you need to INSERT/UPDATE/DELETE files hosted in the cloud, you can use the corresponding CData Cloud for that cloud host: download the file via its stored procedures, make changes to the local file with the CData CSV Cloud, then upload the changed file using the cloud source's stored procedures.
As an example, if you wanted to update a file stored on SharePoint, you could use the CData SharePoint Cloud's DownloadDocument procedure to download the CSV file, update the local CSV file with the CData CSV Cloud, then use the SharePoint Cloud's UploadDocument procedure to upload the changed file to SharePoint.
A unique prefix at the beginning of the URI connection property identifies the cloud data store being targeted by the Cloud; the remainder of the path is a relative path to the desired folder (one table per file) or single file (a single table).
Set the following to identify your CSV resources stored on Amazon S3:
See Connecting to Amazon S3 for more information regarding how to connect and authenticate to CSV files hosted on Amazon S3.
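For example, a connection string of the following form (all values are placeholders) connects to CSV files in an S3 folder:

URI=s3://bucket1/folder1; AWSAccessKey=myAccessKey; AWSSecretKey=secret1; AWSRegion=OHIO;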
Set the following to identify your CSV resources stored on Azure Blob Storage:
See Connecting to Azure Blob Storage for more information regarding how to connect and authenticate to CSV files hosted on Azure Blob Storage.
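For example, a connection string of the following form (account name and key are placeholders) connects with an Azure access key:

URI=azureblob://mycontainer/myblob; AzureStorageAccount=myAccount; AzureAccessKey=myKey;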
Set the following to identify your CSV resources stored on Azure Data Lake Storage:
See Connecting to Azure Data Lake Storage for more information regarding how to connect and authenticate to CSV files hosted on Azure Data Lake Storage.
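For example, a connection string of the following form (file system name, account, and key are placeholders) connects to an ADLS Gen2 file system over SSL:

URI=abfss://myfilesystem/folder1; AzureStorageAccount=myAccount; AzureAccessKey=myKey;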
Set the following properties to connect:
You can authenticate with either an Azure access key or an Azure shared access signature. Set one of the following:
Set the following to identify your CSV resources stored on Box:
See Connecting to Box for more information regarding how to connect and authenticate to CSV files hosted on Box.
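For example, a minimal OAuth connection string (the folder name is a placeholder) looks like:

URI=box://folder1; AuthScheme=OAuth;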
Set the following to identify your CSV resources stored on Dropbox:
See Connecting to Dropbox for more information regarding how to connect and authenticate to CSV files hosted on Dropbox.
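For example, an OAuth connection string with a custom OAuth application (all credential values are placeholders) looks like:

URI=dropbox://folder1; AuthScheme=OAuth; OAuthClientId=oauthclientid1; OAuthClientSecret=oauthclientsecret1; CallbackUrl=http://localhost:12345;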
The Cloud supports both plaintext and SSL/TLS connections to FTP servers.
Set the following connection properties to connect:
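For example, a connection string for an FTPS server (host, port, and credentials are placeholders) looks like:

URI=ftps://localhost:990/folder1; User=user1; Password=password1;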
Set the following to identify your CSV resources stored on Google Cloud Storage:
See Connecting to Google Cloud Storage for more information regarding how to connect and authenticate to CSV files hosted on Google Cloud Storage.
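For example, an OAuth connection string (bucket, folder, and project Id are placeholders) looks like:

URI=gs://bucket/folder1; AuthScheme=OAuth; ProjectId=test;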
Set the following to identify your CSV resources stored on Google Drive:
See Connecting to Google Drive for more information regarding how to connect and authenticate to CSV files hosted on Google Drive.
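For example, a minimal OAuth connection string (the folder name is a placeholder) looks like:

URI=gdrive://folder1; AuthScheme=OAuth;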
Set the following to identify your CSV resources stored on HDFS:
There are two authentication methods available for connecting to an HDFS data source: Anonymous Authentication and Negotiate (Kerberos) Authentication.
Anonymous Authentication
In some situations, you can connect to HDFS without any authentication connection properties. To do so, set the AuthScheme property to None (default).
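For example, an anonymous connection to a folder of CSV files on HDFS might look like the following (host and port are placeholders):

URI=webhdfs://host:port/folder1; AuthScheme=None;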
Authenticate using Kerberos
When authentication credentials are required, you can use Kerberos for authentication. See Using Kerberos for details on how to authenticate with Kerberos.
Set the following to identify your CSV resources stored on HTTP streams:
See Connecting to HTTP Streams for more information regarding how to connect and authenticate to CSV files hosted on HTTP Streams.
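For example, a connection string to an HTTP stream (the host and stream name are placeholders) looks like:

URI=http://www.host1.com/streamname1;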
Set the following to identify your CSV resources stored on IBM Cloud Object Storage:
See Connecting to IBM Object Storage for more information regarding how to connect and authenticate to CSV files hosted on IBM Cloud Object Storage.
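For example, a connection string using IAMSecretKey authentication (bucket, keys, and region are placeholders) looks like:

URI=ibmobjectstorage://bucket/folder1; AuthScheme='IAMSecretKey'; AccessKey=token1; SecretKey=secret1; Region='eu-gb';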
Set the following to identify your CSV resources stored on OneDrive:
See Connecting to OneDrive for more information regarding how to connect and authenticate to CSV files hosted on OneDrive.
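For example, a minimal OAuth connection string (the folder name is a placeholder) looks like:

URI=onedrive://folder1; AuthScheme=OAuth;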
Set the following to identify your CSV resources stored on OneLake:
See Connecting to OneLake for more information regarding how to connect and authenticate to CSV files hosted on OneLake.
Set the following properties to authenticate with IAMSecretKey:
Set the following to identify your CSV resources stored on SFTP:
See Connecting to SFTP for more information regarding how to connect and authenticate to CSV files hosted on SFTP.
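For example, a connection string for an SFTP server (host, port, and credentials are placeholders) looks like:

URI=sftp://127.0.0.1:22/folder1; User=user1; Password=password1;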
Set the following to identify your CSV resources stored on SharePoint Online:
Use the SharePoint URL as the remote path, not the display name.
If your files are stored in a non-root-level SharePoint Online site (for example, under /sites/<your site>/), be sure to set the StorageBaseURL property to the full path of the SharePoint site.
Using the full SharePoint site URL ensures the Cloud can properly locate files stored in non-root-level locations within your organization's SharePoint Online environment.
See Connecting to SharePoint Online for more information regarding how to connect and authenticate to CSV files hosted on SharePoint Online.
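For example, a connection string using the SharePoint REST scheme (the document library path and site URL are placeholders) looks like:

URI=sprest://Documents/folder1; AuthScheme=OAuth; StorageBaseURL=https://subdomain.sharepoint.com;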
Set the following to identify your CSV resources stored on SharePoint On Premise:
Use the SharePoint URL as the remote path, not the display name.
See Connecting to SharePoint On Premise for more information regarding how to connect and authenticate to CSV files hosted on SharePoint On Premise.
You can also read from and write to system streams. Reference the stream from code with the ExtendedProperties connection property.
By default, the Cloud attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store. To specify another certificate, see the SSLServerCert property for the available formats to do so.
Below are example connection strings to CSV files or streams, using the Cloud's default data modeling configuration (see below):

| Service provider | URI formats | Connection example |
| --- | --- | --- |
| Local | Single File Path (one table): file://localPath | URI=C:/folder1; |
| | Directory Path (one table per file): file://localPath | |
| HTTP or HTTPS | http://remoteStream https://remoteStream | URI=http://www.host1.com/streamname1; |
| Amazon S3 | Single File Path (one table): s3://remotePath | URI=s3://bucket1/folder1; AWSSecretKey=secret1; AWSRegion=OHIO; |
| | Directory Path (one table per file): s3://remotePath | |
| Azure Blob Storage | azureblob://mycontainer/myblob | URI=azureblob://mycontainer/myblob; AzureStorageAccount=myAccount; AzureAccessKey=myKey; URI=azureblob://mycontainer/myblob; AzureStorageAccount=myAccount; AuthScheme=OAuth; |
| Google Drive | Single File Path (one table): gdrive://remotePath gdrive://SharedWithMe/remotePath | URI=gdrive://folder1; AuthScheme=OAuth; URI=gdrive://SharedWithMe/folder1; AuthScheme=OAuth; |
| | Directory Path (one table per file): gdrive://remotePath gdrive://SharedWithMe/remotePath | |
| OneDrive | Single File Path (one table): onedrive://remotePath onedrive://SharedWithMe/remotePath | URI=onedrive://folder1; AuthScheme=OAuth; URI=onedrive://SharedWithMe/folder1; AuthScheme=OAuth; |
| | Directory Path (one table per file): onedrive://remotePath onedrive://SharedWithMe/remotePath | |
| Box | Single File Path (one table): box://remotePath | URI=box://folder1; AuthScheme=OAuth; |
| | Directory Path (one table per file): box://remotePath | |
| Dropbox | Single File Path (one table): dropbox://remotePath | URI=dropbox://folder1; AuthScheme=OAuth; OAuthClientId=oauthclientid1; OAuthClientSecret=oauthclientsecret1; CallbackUrl=http://localhost:12345; |
| | Directory Path (one table per file): dropbox://remotePath | |
| SharePoint SOAP | Single File Path (one table): sp://remotePath | URI=sp://Documents/folder1; User=user1; Password=password1; StorageBaseURL=https://subdomain.sharepoint.com; |
| | Directory Path (one table per file): sp://remotePath | |
| SharePoint REST | Single File Path (one table): sprest://remotePath | URI=sprest://Documents/folder1; AuthScheme=OAuth; StorageBaseURL=https://subdomain.sharepoint.com; |
| | Directory Path (one table per file): sprest://remotePath | |
| FTP or FTPS | Single File Path (one table): ftp://server:port/remotePath ftps://server:port/remotePath | URI=ftps://localhost:990/folder1; User=user1; Password=password1; |
| | Directory Path (one table per file): ftp://server:port/remotePath ftps://server:port/remotePath | |
| SFTP | Single File Path (one table): sftp://server:port/remotePath | URI=sftp://127.0.0.1:22/folder1; User=user1; Password=password1; URI=sftp://127.0.0.1:22/folder1; SSHAuthmode=PublicKey; SSHClientCert=myPrivateKey; |
| | Directory Path (one table per file): sftp://server:port/remotePath | |
| Azure Data Lake Store Gen1 | adl://remotePath adl://Account.azuredatalakestore.net@remotePath | URI=adl://folder1; AuthScheme=OAuth; AzureStorageAccount=myAccount; AzureTenant=tenant; URI=adl://myAccount.azuredatalakestore.net@folder1; AuthScheme=OAuth; AzureTenant=tenant; |
| Azure Data Lake Store Gen2 | abfs://myfilesystem/remotePath abfs://myfilesystem@myaccount.dfs.core.windows.net/remotePath | URI=abfs://myfilesystem/folder1; AzureStorageAccount=myAccount; AzureAccessKey=myKey; URI=abfs://myfilesystem@myaccount.dfs.core.windows.net/folder1; AzureAccessKey=myKey; |
| Azure Data Lake Store Gen2 with SSL | abfss://myfilesystem/remotePath abfss://myfilesystem@myaccount.dfs.core.windows.net/remotePath | URI=abfss://myfilesystem/folder1; AzureStorageAccount=myAccount; AzureAccessKey=myKey; URI=abfss://myfilesystem@myaccount.dfs.core.windows.net/folder1; AzureAccessKey=myKey; |
| Wasabi | Single File Path (one table): wasabi://bucket1/remotePath | URI=wasabi://bucket/folder1; AccessKey=token1; SecretKey=secret1; Region='us-west-1'; |
| | Directory Path (one table per file): wasabi://bucket1/remotePath | |
| Google Cloud Storage | Single File Path (one table): gs://bucket/remotePath | URI=gs://bucket/folder1; AuthScheme=OAuth; ProjectId=test; |
| | Directory Path (one table per file): gs://bucket/remotePath | |
| Oracle Cloud Storage | Single File Path (one table): os://bucket/remotePath | URI=os://bucket/folder1; AccessKey='myKey'; SecretKey='mySecretKey'; OracleNameSpace='myNameSpace'; Region='us-west-1'; |
| | Directory Path (one table per file): os://bucket/remotePath | |
| Azure File | Single File Path (one table): azurefile://fileShare/remotePath | URI=azurefile://fileShare/folder1; AzureStorageAccount='myAccount'; AzureAccessKey='mySecretKey'; URI=azurefile://fileShare/folder1; AzureStorageAccount='myAccount'; AzureSharedAccessSignature='mySharedAccessSignature'; |
| | Directory Path (one table per file): azurefile://fileShare/remotePath | |
| IBM Object Storage Source | Single File Path (one table): ibmobjectstorage://bucket1/remotePath | URI=ibmobjectstorage://bucket/folder1; AuthScheme='IAMSecretKey'; AccessKey=token1; SecretKey=secret1; Region='eu-gb'; URI=ibmobjectstorage://bucket/folder1; ApiKey=key1; Region='eu-gb'; AuthScheme=OAuth; InitiateOAuth=GETANDREFRESH; |
| | Directory Path (one table per file): ibmobjectstorage://bucket1/remotePath | |
| Hadoop Distributed File System | Single File Path (one table): webhdfs://host:port/remotePath | URI=webhdfs://host:port/folder1 |
| | Directory Path (one table per file): webhdfs://host:port/remotePath | |
| Secure Hadoop Distributed File System | Single File Path (one table): webhdfss://host:port/remotePath | URI=webhdfss://host:port/folder1 |
| | Directory Path (one table per file): webhdfss://host:port/remotePath | |
The following properties control how the Cloud automatically models CSV as tables when you connect:
When working with local CSV, you can also use Schema.ini files, compatible with the Microsoft Jet driver, to define columns and data types. See Using Schema.ini for a guide.
To customize column data types and other aspects of the schemas, you can save the schemas to static configuration files. The configuration files have a simple format that makes them easy to extend. For more information on extending the Cloud schemas, see Generating Schema Files.
Set the following properties to model subfolders as views:
When IncludeSubdirectories is set, the automatically detected table names follow the convention below:
| File Path | Table Name |
| --- | --- |
| Root\subfolder1\tableA | subfolder1_tableA |
| Root\subfolder1\subfolder2\tableA | subfolder1_subfolder2_tableA |
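For example, assuming the IncludeSubdirectories property described above and a local root folder (the path is a placeholder), a connection string that exposes files in subfolders might look like:

URI=C:\Root; IncludeSubdirectories=True;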
To obtain the credentials for an IAM user:
To obtain the credentials for your AWS root account:
Specify the following to connect to data:
There are several authentication methods available for connecting to CSV including:
To authenticate using account root credentials, set these parameters:
Note: Amazon discourages using root credentials for anything beyond simple testing. Root credentials carry the full permissions of the account, posing a security risk and making this the least secure authentication method.
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
Set AuthScheme to AwsEC2Roles.
If you are using the Cloud from an EC2 Instance and have an IAM Role assigned to the instance, you can use the IAM Role to authenticate. Since the Cloud automatically obtains your IAM Role credentials and authenticates with them, it is not necessary to specify AWSAccessKey and AWSSecretKey.
If you are also using an IAM role to authenticate, you must additionally specify the following:
The CSV Cloud now supports IMDSv2. Unlike IMDSv1, the new version requires an authentication token. Endpoints and responses are the same in both versions.
In IMDSv2, the CSV Cloud first attempts to retrieve the IMDSv2 metadata token and then uses it to call AWS metadata endpoints. If it is unable to retrieve the token, the Cloud reverts to IMDSv1.
Set AuthScheme to AwsWebIdentity.
If you are either using CSV from a container configured to assume role with web identity (such as a Pod in an EKS cluster with an OpenID Provider) or have authenticated with a web identity provider associated with an IAM role (and have thus obtained an identity token), you can exchange the web identity token and IAM role information for temporary security credentials to authenticate and access AWS services.
If the container has AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE specified in the environment variables, CSV automatically obtains the credentials.
You can also authenticate by specifying both AWSRoleARN and AWSWebIdentityToken to execute the AssumeRoleWithWebIdentity API operation.
To authenticate as an AWS role, set these properties:
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
Note: In some circumstances it might be preferable to use an IAM role for authentication, rather than the direct security credentials of an AWS root user. If you are specifying the AWSAccessKey and AWSSecretKey of an AWS root user, you cannot use roles.
To connect to ADFS, set these properties:
To authenticate to ADFS, set these SSOProperties:
Example connection string:
AuthScheme=ADFS;User=username;Password=password;SSOLoginURL='https://sts.company.com';SSOProperties='RelyingParty=https://saml.salesforce.com';
The ADFS Integrated flow indicates you are connecting with the user credentials of the currently logged in Windows user. To use the ADFS Integrated flow, do not specify the User and Password, but otherwise follow the same steps noted above under ADFS.
To connect to Okta, set these properties:
If you are using a trusted application or proxy that overrides the Okta client request, or if you are configuring MFA, you must use combinations of SSOProperties to authenticate using Okta. Set any of the following, as applicable:
Example connection string:
AuthScheme=Okta;SSOLoginURL='https://example.okta.com/home/appType/0bg4ivz6cJRZgCz5d6/46';User=oktaUserName;Password=oktaPassword;
To enable mutual SSL authentication for SSOLoginURL, the WS-Trust STS endpoint, configure these SSOProperties:
Example connection string:
authScheme=pingfederate;SSOLoginURL=https://mycustomserver.com:9033/idp/sts.wst;SSOExchangeUrl=https://us-east-1.signin.aws.amazon.com/platform/saml/acs/764ef411-xxxxxx;user=admin;password=PassValue;AWSPrincipalARN=arn:aws:iam::215338515180:saml-provider/pingFederate;AWSRoleArn=arn:aws:iam::215338515180:role/SSOTest2;
To authenticate using temporary credentials, specify the following:
The Cloud can now request resources using the same permissions provided by long-term credentials (such as IAM user credentials) for the lifespan of the temporary credentials.
To authenticate using both temporary credentials and an IAM role, set all the parameters described above, and specify these additional parameters:
If multi-factor authentication is required, specify the following:
Note: If you want to control the duration of the temporary credentials, set the TemporaryTokenDuration property (default: 3600 seconds).
You can use any credentials file to authenticate, including any configurations related to AccessKey/SecretKey authentication, temporary credentials, role authentication, or MFA.
To do this, set these properties:
This configuration requires two separate Azure AD applications:
To connect to Azure AD, set the AuthScheme to AzureAD, and set these properties:
To authenticate to Azure AD, set these SSOProperties:
Example connection string:
AuthScheme=AzureAD;OAuthClientId=3ea1c786-d527-4399-8c3b-2e3696ae4b48;OauthClientSecret=xxx;CallbackUrl=https://localhost:33333;SSOProperties='Resource=https://signin.aws.amazon.com/saml;AzureTenant=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';
To obtain the credentials for an AzureBlob user, follow the steps below:
Set AzureStorageAccount to your Azure Blob Storage account name.
You can authenticate to Azure Blob Storage via Access Key, Shared Access Signatures (SAS), AzureAD user, Azure MSI, or Azure Service Principal.
Set the following to authenticate with an Azure Access Key:
AuthScheme must be set to AzureAD in all user account flows.
The authentication as an Azure Service Principal is handled via the OAuth Client Credentials flow. It does not involve direct user authentication. Instead, credentials are created for just the application itself. All tasks taken by the application are done without a default user context, but based on the assigned roles. The application access to the resources is controlled through the assigned roles' permissions.
Create an AzureAD App and an Azure Service Principal
When authenticating using an Azure Service Principal, you must create and register an Azure AD application with an Azure AD tenant. See Creating an Entra ID (Azure AD) Application for more details.
In your App Registration in portal.azure.com, navigate to API Permissions and select the Microsoft Graph permissions. There are two distinct sets of permissions: Delegated permissions and Application permissions. The permissions used during client credential authentication are under Application Permissions.
Assign a role to the application
To access resources in your subscription, you must assign a role to the application.
Client Secret
Set these connection properties:
Certificate
Set these connection properties:
You are now ready to connect. Authentication with client credentials takes place automatically like any other connection, except there is no window opened prompting the user. Because there is no user context, there is no need for a browser popup. Connections take place and are handled internally.
If you are connecting from an Azure VM with permissions for Azure Data Lake Storage, set AuthScheme to AzureMSI.
There are two types of custom AzureAD applications: AzureAD and AzureAD with an Azure Service Principal. Both are OAuth-based.
You may choose to use your own AzureAD Application Credentials when you want to
Follow the steps below to obtain the AzureAD values for your application, the OAuthClientId and OAuthClientSecret.
When authenticating using an Azure Service Principal, you must create both a custom AzureAD application and a service principal that can access the necessary resources. Follow the steps below to create a custom AzureAD application and obtain the connection properties for Azure Service Principal authentication.
Follow the steps below to obtain the AzureAD values for your application.
Set AzureStorageAccount to your Azure Data Lake Storage account name.
You can authenticate to Azure Data Lake Storage via Access Key, Shared Access Signature (SAS), AzureAD user, Azure MSI, or Azure Service Principal.
Set the following to authenticate with an Azure Access Key:
AuthScheme must be set to AzureAD in all user account flows.
The authentication as an Azure Service Principal is handled via the OAuth Client Credentials flow. It does not involve direct user authentication. Instead, credentials are created for just the application itself. All tasks taken by the application are done without a default user context, but based on the assigned roles. The application access to the resources is controlled through the assigned roles' permissions.
Create an AzureAD App and an Azure Service Principal
When authenticating using an Azure Service Principal, you must create and register an Azure AD application with an Azure AD tenant. See Creating an Entra ID (Azure AD) Application for more details.
In your App Registration in portal.azure.com, navigate to API Permissions and select the Microsoft Graph permissions. There are two distinct sets of permissions: Delegated permissions and Application permissions. The permissions used during client credential authentication are under Application Permissions.
Assign a role to the application
To access resources in your subscription, you must assign a role to the application.
Client Secret
Set these connection properties:
Certificate
Set these connection properties:
You are now ready to connect. Authentication with client credentials takes place automatically like any other connection, except there is no window opened prompting the user. Because there is no user context, there is no need for a browser popup. Connections take place and are handled internally.
If you are connecting from an Azure VM with permissions for Azure Data Lake Storage, set AuthScheme to AzureMSI.
There are two types of custom AzureAD applications: AzureAD and AzureAD with an Azure Service Principal. Both are OAuth-based.
You may choose to use your own AzureAD Application Credentials when you want to
Follow the steps below to obtain the AzureAD values for your application, the OAuthClientId and OAuthClientSecret.
When authenticating using an Azure Service Principal, you must create both a custom AzureAD application and a service principal that can access the necessary resources. Follow the steps below to create a custom AzureAD application and obtain the connection properties for Azure Service Principal authentication.
Follow the steps below to obtain the AzureAD values for your application.
Use the OAuth authentication standard to connect to Box. You can authenticate with a user account or with a service account. A service account is required to grant organization-wide access scopes to the Cloud. The Cloud facilitates these authentication flows as described below.
AuthScheme must be set to OAuth in all user account flows.
Set the AuthScheme to OAuthJWT to authenticate with this method.
Service accounts have silent authentication, without user authentication in the browser. You can also use a service account to delegate enterprise-wide access scopes to the Cloud.
You need to create an OAuth application in this flow. See Create a Custom OAuth App to create and authorize an app. You can then connect to Box data that the service account has permission to access.
After setting the following connection properties, you are ready to connect:
You may choose to use your own OAuth Application Credentials when you want to:
At the Box Enterprise Developer Console:
Note: Box does not back up private keys for security reasons. Be careful to back up the Public/Private JSON file. If you lose your private key, you must reset the entire keypair.
openssl genrsa -des3 -out private.pem 2048
openssl rsa -in private.pem -outform PEM -pubout -out public.pem
Note: To run OpenSSL in a Windows environment, install the Cygwin package.
After your application is created and registered, click Configuration from the main menu to access your settings. Note the displayed Redirect URI, Client ID, and Client Secret. You will need these values later.
If you change the JWT access scopes, you must reauthorize the application in the enterprise admin console:
Dropbox uses the OAuth authentication standard.
You need to choose between using CData's embedded OAuth app or Create a Custom OAuth App.
The embedded app includes the following scopes:
You may choose to use your own OAuth Application Credentials when you want to
No further values need to be specified in the CSV app settings.
Set the ProjectId property to the Id of the project you want to connect to.
The Cloud supports using user accounts and GCP instance accounts for authentication.
The following sections discuss the available authentication schemes for Google Cloud Storage:
AuthScheme must be set to OAuth in all user account flows.
Get an OAuth Access Token
Set the following connection properties to obtain the OAuthAccessToken:
Then call stored procedures to complete the OAuth exchange:
Once you have obtained the access and refresh tokens, you can connect to data and refresh the OAuth access token either automatically or manually.
Automatic Refresh of the OAuth Access Token
To have the driver automatically refresh the OAuth access token, set the following on the first data connection:
Manual Refresh of the OAuth Access Token
The only value needed to manually refresh the OAuth access token when connecting to data is the OAuth refresh token.
Use the RefreshOAuthAccessToken stored procedure to manually refresh the OAuthAccessToken after the ExpiresIn parameter value returned by GetOAuthAccessToken has elapsed, then set the following connection properties:
Then call RefreshOAuthAccessToken with OAuthRefreshToken set to the OAuth refresh token returned by GetOAuthAccessToken. After the new tokens have been retrieved, open a new connection by setting the OAuthAccessToken property to the value returned by RefreshOAuthAccessToken.
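As a sketch, the manual-refresh call might look like the following (the EXECUTE invocation syntax and the token value are assumptions; substitute the refresh token returned by GetOAuthAccessToken):

EXECUTE RefreshOAuthAccessToken OAuthRefreshToken = 'myRefreshToken'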
Finally, store the OAuth refresh token so that you can use it to manually refresh the OAuth access token after it has expired.
Option 1: Obtain and Exchange a Verifier Code
To obtain a verifier code, you must authenticate at the OAuth authorization URL.
Follow the steps below to authenticate from the machine with an internet browser and obtain the OAuthVerifier connection property.
On the headless machine, set the following connection properties to obtain the OAuth authentication values:
After the OAuth settings file is generated, you need to re-set the following properties to connect:
Option 2: Transfer OAuth Settings
Prior to connecting on a headless machine, you need to create and install a connection with the driver on a device that supports an internet browser. Set the connection properties as described in "Desktop Applications" above.
After completing the instructions in "Desktop Applications", the resulting authentication values are encrypted and written to the location specified by OAuthSettingsLocation. The default filename is OAuthSettings.txt.
Once you have successfully tested the connection, copy the OAuth settings file to your headless machine.
On the headless machine, set the following connection properties to connect to data:
When running on a GCP virtual machine, the Cloud can authenticate using a service account tied to the virtual machine. To use this mode, set AuthScheme to GCPInstanceAccount.
(For information on getting and setting the OAuthAccessToken and other configuration parameters, see the Desktop Authentication section of "Connecting to CSV".)
However, you must create a custom OAuth application to connect to CSV via the Web. Since custom OAuth applications seamlessly support all three commonly used auth flows, you might want to create custom OAuth applications (using your own OAuth application credentials) for those auth flows anyway.
Custom OAuth applications are useful if you want to:
The following sections describe how to enable the Directory API and create custom OAuth applications for user accounts (OAuth) and Service Accounts (OAuth/JWT).
For users whose AuthScheme is OAuth and who need to authenticate over a web application, you must always create a custom OAuth application. (For desktop and headless flows, creating a custom OAuth application is optional.)
Do the following:
Note: The client secret remains accessible from the Google Cloud Console.
To create a new service account:
In the service account flow, the Cloud exchanges a JSON Web Token (JWT) for the OAuthAccessToken. The private key downloaded in the steps above is used to sign the JWT. The Cloud inherits the permissions granted to the service account, including any scopes configured through domain-wide delegation.
The Cloud supports using user accounts and GCP instance accounts for authentication.
The following sections discuss the available authentication schemes for Google Drive:
AuthScheme must be set to OAuth in all user account flows.
When running on a GCP virtual machine, the Cloud can authenticate using a service account tied to the virtual machine. To use this mode, set AuthScheme to GCPInstanceAccount.
(For information on getting and setting the OAuthAccessToken and other configuration parameters, see the Desktop Authentication section of "Connecting to CSV".)
However, you must create a custom OAuth application to connect to CSV via the Web. Since custom OAuth applications seamlessly support all three commonly used auth flows, you might want to create custom OAuth applications (using your own OAuth application credentials) for those auth flows anyway.
Custom OAuth applications are useful if you want to:
The following sections describe how to enable the Directory API and create custom OAuth applications for user accounts (OAuth) and Service Accounts (OAuth/JWT).
For users whose AuthScheme is OAuth and who need to authenticate over a web application, you must always create a custom OAuth application. (For desktop and headless flows, creating a custom OAuth application is optional.)
Do the following:
Note: The client secret remains accessible from the Google Cloud Console.
To create a new service account:
In the service account flow, the Cloud exchanges a JSON Web Token (JWT) for the OAuthAccessToken. The private key downloaded in the steps above is used to sign the JWT. The Cloud inherits the permissions granted to the service account, including any scopes configured through domain-wide delegation.
The Cloud generically supports connecting to CSV data stored on HTTP(S) streams.
Several authentication methods are supported, including user/password, digest access, OAuth, OAuthJWT, and the OAuth PASSWORD flow.
You can also connect to streams that have no authentication set up.
Connect to an HTTP(S) stream with no authentication by setting the AuthScheme connection property to None.
Set the following to connect:
Set the following to connect:
OAuth requires the authenticating user to interact with CSV using the browser. The Cloud facilitates this in various ways as described in the following sections.
Before following the procedures below, you need to register an OAuth app with the service containing the CSV data you want to work with.
Creating a custom application in most services requires registering as a developer and creating an app in the UI of the service.
This is not necessarily true for all services. In some you must contact the service provider to create the app for you. However it is done, you must obtain the values for OAuthClientId, OAuthClientSecret, and CallbackURL.
Set AuthScheme to OAuthJWT.
The Cloud supports using JWT as an authorization grant in situations where a user cannot perform an interactive sign-on. After setting the following connection properties, you are ready to connect:
Note that the JWT signature algorithm cannot be set directly. The Cloud only supports the RS256 algorithm.
The Cloud will then construct a JWT including the following fields, and submit it to OAuthAccessTokenURL for an access token.
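The assembled token follows the standard JWT layout: a base64url-encoded header and claims set joined by dots, then signed with RS256. The sketch below builds the unsigned portion with the Python standard library; the claim names and values are illustrative assumptions based on the common OAuth 2.0 JWT bearer grant (RFC 7523), not the Cloud's internal field list.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

header = {"alg": "RS256", "typ": "JWT"}  # the Cloud supports RS256 only

now = int(time.time())
claims = {
    "iss": "service-account@example.com",      # hypothetical issuer
    "aud": "https://example.com/oauth/token",  # hypothetical token URL
    "iat": now,
    "exp": now + 3600,
}

signing_input = (b64url(json.dumps(header).encode())
                 + "." + b64url(json.dumps(claims).encode()))
# The real token appends an RS256 signature over signing_input,
# produced with the service account's private key (omitted here).
```

The signed result is what gets submitted to OAuthAccessTokenURL in exchange for the access token.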
AuthScheme: Set this to OAuthPassword.
OAuth requires the authenticating user to interact with CSV using the browser. The Cloud facilitates this in various ways as described in the following sections.
Before following the procedures below, you need to register an OAuth app with the service containing the CSV data you want to work with.
Creating a custom application in most services requires registering as a developer and creating an app in the UI of the service.
This is not necessarily true for all services. In some you must contact the service provider to create the app for you. However it is done, you must obtain the values for OAuthClientId, OAuthClientSecret, and CallbackURL.
After setting the following connection properties, you are ready to connect:
If you do not already have Cloud Object Storage in your IBM Cloud account, you can follow the procedure below to provision an instance of Cloud Object Storage in your account:
To connect with IBM Cloud Object Storage, you will need an ApiKey. You can obtain this as follows:
Set Region to your IBM instance region.
You can authenticate to IBM Cloud Object Storage using either IAMSecretKey or OAuth authentication.
Set the following properties to authenticate:
For example:
ConnectionType=IBM Object Storage Source;URI=ibmobjectstorage://bucket1/folder1; AccessKey=token1; SecretKey=secret1; Region=eu-gb;
Set the following to authenticate using OAuth:
ConnectionType=IBM Object Storage Source;URI=ibmobjectstorage://bucket1/folder1; ApiKey=key1; Region=eu-gb; AuthScheme=OAuth; InitiateOAuth=GETANDREFRESH;
When you connect, the Cloud completes the OAuth process.
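Connection strings like the one above are plain semicolon-delimited key=value pairs, so they are easy to assemble programmatically. A minimal Python sketch, assuming values contain no embedded semicolons (which would require additional quoting):

```python
def build_connection_string(props: dict) -> str:
    # Assemble key=value pairs separated by semicolons, matching the
    # format of the examples above. Assumes no embedded ';' in values.
    return ";".join(f"{k}={v}" for k, v in props.items()) + ";"

conn = build_connection_string({
    "ConnectionType": "IBM Object Storage Source",
    "URI": "ibmobjectstorage://bucket1/folder1",
    "ApiKey": "key1",                 # placeholder credential
    "Region": "eu-gb",
    "AuthScheme": "OAuth",
    "InitiateOAuth": "GETANDREFRESH",
})
```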
You can connect to OneDrive using an AzureAD user, with MSI authentication, or using an Azure Service Principal.
AuthScheme must be set to AzureAD in all user account flows.
The authentication as an Azure Service Principal is handled via the OAuth Client Credentials flow. It does not involve direct user authentication. Instead, credentials are created for just the application itself. All tasks taken by the application are done without a default user context, but based on the assigned roles. The application access to the resources is controlled through the assigned roles' permissions.
Create an AzureAD App and an Azure Service Principal
When authenticating using an Azure Service Principal, you must create and register an Azure AD application with an Azure AD tenant. See Creating an Entra ID (Azure AD) Application for more details.
In your App Registration in portal.azure.com, navigate to API Permissions and select the Microsoft Graph permissions. There are two distinct sets of permissions: Delegated permissions and Application permissions. The permissions used during client credential authentication are under Application Permissions.
Assign a role to the application
To access resources in your subscription, you must assign a role to the application.
Client Secret
Set these connection properties:
Certificate
Set these connection properties:
You are now ready to connect. Authentication with client credentials takes place automatically like any other connection, except that no window opens to prompt the user; because there is no user context, no browser popup is needed. Connections are handled internally.
If you are connecting from an Azure VM with permissions for Azure Data Lake Storage, set AuthScheme to AzureMSI.
There are two types of custom AzureAD applications: AzureAD and AzureAD with an Azure Service Principal. Both are OAuth-based.
You may choose to use your own AzureAD Application Credentials when you want to
Follow the steps below to obtain the AzureAD values for your application, the OAuthClientId and OAuthClientSecret.
When authenticating using an Azure Service Principal, you must create both a custom AzureAD application and a service principal that can access the necessary resources. Follow the steps below to create a custom AzureAD application and obtain the connection properties for Azure Service Principal authentication.
Follow the steps below to obtain the AzureAD values for your application.
You can authenticate to OneLake via AzureAD user, Azure MSI, or Azure Service Principal.
AuthScheme must be set to AzureAD in all user account flows.
The authentication as an Azure Service Principal is handled via the OAuth Client Credentials flow. It does not involve direct user authentication. Instead, credentials are created for just the application itself. All tasks taken by the application are done without a default user context, but based on the assigned roles. The application access to the resources is controlled through the assigned roles' permissions.
Create an AzureAD App and an Azure Service Principal
When authenticating using an Azure Service Principal, you must create and register an Azure AD application with an Azure AD tenant. See Creating an Entra ID (Azure AD) Application for more details.
In your App Registration in portal.azure.com, navigate to API Permissions and select the Microsoft Graph permissions. There are two distinct sets of permissions: Delegated permissions and Application permissions. The permissions used during client credential authentication are under Application Permissions.
Assign a role to the application
To access resources in your subscription, you must assign a role to the application.
Client Secret
Set these connection properties:
Certificate
Set these connection properties:
You are now ready to connect. Authentication with client credentials takes place automatically like any other connection, except that no window opens to prompt the user; because there is no user context, no browser popup is needed. Connections are handled internally.
If you are connecting from an Azure VM with permissions for Azure Data Lake Storage, set AuthScheme to AzureMSI.
There are two types of custom AzureAD applications: AzureAD and AzureAD with an Azure Service Principal. Both are OAuth-based.
You may choose to use your own AzureAD Application Credentials when you want to
Follow the steps below to obtain the AzureAD values for your application, the OAuthClientId and OAuthClientSecret.
When authenticating using an Azure Service Principal, you must create both a custom AzureAD application and a service principal that can access the necessary resources. Follow the steps below to create a custom AzureAD application and obtain the connection properties for Azure Service Principal authentication.
Follow the steps below to obtain the AzureAD values for your application.
Follow the steps below to add a service principal to a workspace.
You can authenticate to SFTP using a user and password or an SSH certificate. Additionally, you can connect to an SFTP server that has no authentication enabled.
Set SSHAuthMode to None to connect without authentication, assuming your server supports doing so.
Provide user credentials associated with your SFTP server:
Set the following to connect.
| Service provider | Okta | OneLogin | ADFS | AzureAD |
| Amazon S3 | Y | Y | Y | |
| Azure Blob Storage | | | | |
| Azure Data Lake Store Gen1 | | | | |
| Azure Data Lake Store Gen2 | | | | |
| Azure Data Lake Store Gen2 with SSL | | | | |
| Google Drive | | | | |
| OneDrive | | | | |
| Box | | | | |
| Dropbox | | | | |
| SharePoint Online SOAP | Y | Y | Y | |
| SharePoint Online REST | | | | |
| Wasabi | | | | |
| Google Cloud Storage | | | | |
| Oracle Cloud Storage | | | | |
| Azure File | | | | |
Azure AD Configuration
The main theme behind this configuration is the OAuth 2.0 On-Behalf-Of flow. It requires two Azure AD applications:
Save the step "Assign the Azure AD test user" until after provisioning so that you can select the AWS roles when assigning the user.
CData Driver Common Properties
The following SSOProperties are needed to authenticate to Azure Active Directory and must be specified for every service provider.
We will retrieve the SSO SAML response from an OAuth 2.0 On-Behalf-Of flow so the following OAuth connection properties must be specified:
Amazon S3
In addition to the common properties, the following properties must be specified when connecting to Amazon S3 service provider:
AuthScheme=AzureAD;InitiateOAuth=GETANDREFRESH;OAuthClientId=d593a1d-ad89-4457-872d-8d7443aaa655;OAuthClientSecret=g9-oy5D_rl9YEKfN-45~3Wm8FgVa2F;SSOProperties='Tenant=94be7-edb4-4fda-ab12-95bfc22b232f;Resource=https://signin.aws.amazon.com/saml;';AWSRoleARN=arn:aws:iam::2153385180:role/AWS_AzureAD;AWSPrincipalARN=arn:aws:iam::215515180:saml-provider/AzureAD;
OneLogin Configuration
You must create an application used for the single sign-on process to a specific provider.
SharePoint SOAP
The following properties must be specified when connecting to a SharePoint SOAP service provider:
AuthScheme='OneLogin';User=test;Password=test;SSOProperties='Domain=test.cdata;';
Okta Configuration
You must create an application used for the single sign-on process to a specific provider.
SharePoint SOAP
The following properties must be specified when connecting to a SharePoint SOAP service provider:
AuthScheme='Okta';User=test;Password=test;SSOProperties='Domain=test.cdata;';
Amazon S3
The following properties must be specified when connecting to an Amazon S3 service provider:
AuthScheme=Okta;User=OktaUser;Password=OktaPassword;SSOLoginURL='https://{subdomain}.okta.com/home/amazon_aws/0oan2hZLgQiy5d6/272';
ADFS Configuration
You must create an application used for the single sign-on process to a specific provider.
SharePoint SOAP
The following properties must be specified when connecting to a SharePoint SOAP service provider:
AuthScheme='ADFS';User=test;Password=test;SSOProperties='Domain=test.cdata;';
Amazon S3
The following properties must be specified when connecting to an Amazon S3 service provider:
AuthScheme=ADFS;User=username;Password=password;SSOLoginURL='https://sts.company.com';
ADFS Integrated
The ADFS Integrated flow indicates you are connecting with the currently logged in Windows user credentials. To use the ADFS Integrated flow, simply do not specify the User and Password, but otherwise follow the same steps in the ADFS guide above.
To authenticate to CSV with Kerberos, set AuthScheme to NEGOTIATE.
Authenticating to CSV via Kerberos requires you to define authentication properties and to choose how Kerberos should retrieve authentication tickets.
The Cloud provides three ways to retrieve the required Kerberos ticket, depending on whether or not the KRB5CCNAME and/or KerberosKeytabFile variables exist in your environment.
MIT Kerberos Credential Cache File
This option enables you to use the MIT Kerberos Ticket Manager or kinit command to get tickets. With this option there is no need to set the User or Password connection properties.
This option requires that KRB5CCNAME has been created in your system.
To enable ticket retrieval via MIT Kerberos Credential Cache Files:
If the ticket is successfully obtained, the ticket information appears in Kerberos Ticket Manager and is stored in the credential cache file.
The Cloud uses the cache file to obtain the Kerberos ticket to connect to CSV.
Note: If you would prefer not to edit KRB5CCNAME, you can use the KerberosTicketCache property to set the file path manually. After this is set, the Cloud uses the specified cache file to obtain the Kerberos ticket to connect to CSV.
Keytab File
If your environment lacks the KRB5CCNAME environment variable, you can retrieve a Kerberos ticket using a Keytab File.
To use this method, set the User property to the desired username, and set the KerberosKeytabFile property to a file path pointing to the keytab file associated with the user.
User and Password
If your environment lacks the KRB5CCNAME environment variable and the KerberosKeytabFile property has not been set, you can retrieve a ticket using a user and password combination.
To use this method, set the User and Password properties to the user/password combination that you use to authenticate with CSV.
To enable this kind of cross-realm authentication, set the KerberosRealm and KerberosKDC properties to the values required for user authentication. Also, set the KerberosServiceRealm and KerberosServiceKDC properties to the values required to obtain the service ticket.
Set the following properties to control how the Cloud models CSV as tables:
The CData Cloud hides the complexity of processing local and remote CSV data, from connecting over wire protocols to modeling the data as tables. However, you also have control over these layers.
The Cloud dynamically derives schemas from CSV based on the connection properties specified. The available connection properties give you control over many aspects of how CSV data is modeled as tables. See Connecting to CSV Data Sources for more information on configuring the connection. When working with local CSV, you can also configure the column definitions and file format with Schema.ini, the configuration used by the Microsoft Jet driver.
For more granular control over the columns reported and other aspects of modeling the data as tables, you can define your own schemas or extend the generated ones. Schemas are defined in extendable configuration files. See Generating Schema Files to save the detected schemas to configuration files, which you can then easily edit.
The following sections show how to customize schemas or write your own from scratch.
Tables and views are defined by authoring schema files in API Script. API Script is a simple configuration language that allows you to define the columns and the behavior of the table. It also has built-in operations that enable you to process CSV.
In addition to these data processing primitives, API Script is a full-featured language with constructs for conditionals, looping, etc. However, as shown by the example schema, for most table definitions you will not need to use these features.
Below is a fully functional table schema that models the Person entity in the popular Northwind sample database. It contains all the components you will need to access your data source through SQL. You can find more information on using these components in Column Definitions and SELECT Execution.
<api:script>
<!-- See Column Definitions to define column behavior. -->
<api:info title="CSVPersons" desc="Parse the CSV Persons feed.">
<attr name="ID" xs:type="int" key="true" />
<attr name="EmployeeID" xs:type="int" />
<attr name="Name" xs:type="string" />
<attr name="TotalExpense" xs:type="double" />
<attr name="HireDate" xs:type="datetime" />
<attr name="Salary" xs:type="int" />
</api:info>
<api:set attr="uri" value="http://pathtocsvstream" />
<!-- The GET method corresponds to SELECT. The results of processing are pushed to the schema's output. See SELECT Execution for more information. -->
<api:script method="GET" >
<api:push op="csvproviderGet"/>
</api:script>
<!-- Not implemented -->
<api:script method="POST">
<api:call op="csvproviderInsert">
<api:push/>
</api:call>
</api:script>
<!-- Not implemented -->
<api:script method="MERGE">
<api:call op="csvproviderUpdate">
<api:push/>
</api:call>
</api:script>
<!-- Not implemented -->
<api:script method="DELETE">
<api:call op="csvproviderDelete">
<api:push/>
</api:call>
</api:script>
</api:script>
In the Schema.ini file you can specify the format of a text file you want to model as a table and you can also define the columns of the table. Schema.ini must be located in the folder specified in the URI -- or, if IncludeSubdirectories is set, Schema.ini can be defined in each subfolder.
To allow you to define a Schema.ini only when necessary, you can also use IncludeFiles and ExtendedProperties.
ExtendedProperties is compatible with Microsoft Jet OLE DB 4.0. The format for all text files can be set in ExtendedProperties. Schema.ini overrides ExtendedProperties for a specific file.
Files specified in Schema.ini are reported as tables in addition to files included by IncludeFiles. The Cloud uses a definition in Schema.ini if one exists and the filename otherwise to report the table.
A section in Schema.ini must begin with the file name enclosed in square brackets. For example:
[Jerrie's travel expense.txt]
After adding a file name entry, you can set the Format property to the format of the file. The possible values are the following:
Format=Delimited(,)
Note: By default, .txt files are processed as CSV files with headers.
There are two ways to define columns based on the fields in your text files:
To define a column in Schema.ini, use the following format:
ColN=ColumnName DataType [Width n]
For example:
Col2=A Text Width 100
Note: If the format is set to fixed length, defining the width of each column is mandatory.
[Jerrie's travel expense.csv]
ColNameHeader=True
Format=Delimited(,)
Col1=Date Text
Col2=A Text
Col3=B Text
Col4=C Text
Col5=Total Text
Col6=Date Text
Col7=D Text
Col8=E Text
Col9=F Text
Col10=G Text
Col11=rate numeric

[invoices.csv]
ColNameHeader=True
Format=Delimited(,)
Col1=id numeric
Col2=invoicedate date
Col3=total numeric
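Because Schema.ini uses standard INI syntax, you can sanity-check a definition before deploying it, for example with Python's configparser. This is an illustrative check, not part of the Cloud:

```python
import configparser

# A Schema.ini fragment matching the invoices.csv definition above
SCHEMA_INI = """\
[invoices.csv]
ColNameHeader=True
Format=Delimited(,)
Col1=id numeric
Col2=invoicedate date
Col3=total numeric
"""

parser = configparser.ConfigParser()
parser.read_string(SCHEMA_INI)

section = parser["invoices.csv"]
# configparser lowercases keys by default (col1, col2, ...)
columns = {k: v for k, v in section.items()
           if k.startswith("col") and k[3:].isdigit()}
```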
Data types can be any of the following:
The CData Cloud enables you to persist schema definitions to configuration files. Schema files make it easy to customize and save the dynamically detected schemas, or to define your own view of the data.
The following sections show how to use the GenerateSchemaFiles property to save the table definitions detected based on the connection string. Alternatively, you can invoke the CreateSchema stored procedure to manually generate a schema file based on the provided input parameters.
After creating a schema, see Modeling CSV Data for more information on extending table schemas to gain further control over data types and other aspects of modeling CSV as tables.
Set the following additional connection properties to generate table schemas for local or remote CSV:
Note: Columns defined in generated schema (.rsd) files take precedence over the definitions in Schema.ini.
The basic attributes of a column are the name of the column, the data type, whether the column is a primary key, and the internal name. The Cloud uses the internal name to extract nodes from CSV with no readable names.
Mark up column attributes in the api:info block of the schema file. You can set the internal name in the other:internalname property. You can also specify the format of the resulting column value with other:valueFormat.
To see the column definitions in a complete example, refer to Modeling CSV Data.
<api:info title="CSVPersons" desc="Parse the CSV Persons feed.">
<attr name="ID" xs:type="int" key="true" />
<attr name="EmployeeID" xs:type="int" other:internalname="employee_id" />
<attr name="Name" xs:type="string" />
<attr name="TotalExpense" xs:type="double" />
<attr name="HireDate" xs:type="datetime" />
<attr name="Salary" xs:type="int" />
</api:info>
The other:internalname property specifies the CSV column name that selects the column's value from CSV. So, if the CSV file contains a column named employee_id, you use other:internalname="employee_id".
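The effect of other:internalname can be pictured as a rename applied while reading the file. The snippet below is a hypothetical illustration of that mapping, with the file columns and mapping table invented to match the attribute declarations above:

```python
import csv
from io import StringIO

# Hypothetical mapping derived from other:internalname declarations:
# the file's employee_id column surfaces as the EmployeeID schema column.
INTERNAL_TO_SCHEMA = {"employee_id": "EmployeeID", "name": "Name"}

sample = StringIO("employee_id,name\n17,Jerrie\n")
rows = [
    {INTERNAL_TO_SCHEMA.get(col, col): value for col, value in record.items()}
    for record in csv.DictReader(sample)
]
```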
With a URI and Column Definitions specified, the Cloud processes SELECT statements client-side, in memory. The following sections show how to use the Cloud's built-in operations to customize how the Cloud requests and returns data from the server.
When a SELECT query is issued, the Cloud executes the GET method of the schema. In this method you can process CSV. To see this schema in a complete example, refer to Modeling CSV Data.
The following line maps the schema to a URI:
<api:set attr="uri" value="ftp://somewebsite/NorthwindOData.csv" />
Invoke the operation to retrieve the data in the GET method. Specify the operation with the api:push keyword. The following lines push the results of processing to the schema's output.
<api:script method="GET" >
<api:push op="csvproviderGet"/>
</api:script>
You can then execute WHERE clause searches, JOIN queries, and SQL aggregate functions.
The Cloud's operations give you high level control over the request sent to the server. You can set a variety of inputs to control authentication and other aspects of the request. See Operations for the available inputs.
You can also build the request by injecting inputs from the SQL statement. As an example, the following sections show how to use the WHERE clause to change the request dynamically. Note that other filters specified in the WHERE clause are processed client-side by the Cloud; you can search on any column returned in the response.
Consider a weather forecast API that returns a location's forecast in CSV. You specify the location you want in the URI. Using the Cloud, you could get the forecast with a query like the following:
SELECT * FROM Forecasts WHERE (Location = '90210')
Follow the steps below to implement this query. The following procedure defines a pseudo column, an input that can only be used in the WHERE clause, and maps the pseudo column to an API request.
<api:info>
...
<input name="Location" required="true"/>
</api:info>
<api:set attr='uri' value="http://api.wunderground.com/api/MyAPIKey/hourly/q/[_input.Location].csv"/>
<api:script method="GET" >
<api:push op="csvproviderGet"/>
</api:script>
To override the Cloud's internal paging mechanism, add the Rows@Next input to the list of columns in the api:info block.
<input name="rows@next" desc="Identifier for the next page of results." />
Note that making this an input parameter instead of an attr parameter prevents it from showing up in column listings.
You will also need to set the EnablePaging attribute to TRUE to turn off the driver's internal paging mechanism.
<api:set attr="EnablePaging" value="TRUE" />
When the Rows@Next value is set in the output, the Cloud automatically calls the method again with the Rows@Next value in the input after it finishes returning results for the current page. You can use this input to modify the request on the next pass and retrieve the next page of data. Set the Rows@Next input to any information needed to request the next page.
For example, your API may return the next page's URL in the response. You can obtain this value by providing the XPath to the URL:
<api:set attr="elementmappath#" value="/next_page" />
<api:set attr="elementmapname#" value="rows@next" />
You can then modify the URL where the request is made, provided the value is set. The api:check element is useful for checking the existence of a required input before attempting to access its value. The Rows@Next input can be accessed as an attribute of the _input item:
<api:check attr="_input.rows@next">
<api:set attr="uri" value="[_input.rows@next]" />
<api:else>
<api:set attr="uri" value="<first page's URL>" />
</api:else>
</api:check>
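The paging contract above amounts to: return a page, emit rows@next if more data exists, and expect to be re-invoked with that value until you stop emitting it. A small Python simulation of that loop (the page contents and tokens are invented for illustration):

```python
# Each entry maps a page token to (rows, next-page token); a None token
# means no further pages, mirroring an unset rows@next output.
PAGES = {
    "page1": (["row1", "row2"], "page2"),
    "page2": (["row3"], None),
}

def fetch(page_token):
    # Stands in for one invocation of the schema's GET method.
    rows, next_token = PAGES[page_token]
    return rows, next_token

def read_all(first_page):
    results, token = [], first_page
    while token is not None:
        rows, token = fetch(token)  # re-invoke with rows@next until exhausted
        results.extend(rows)
    return results
```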
You can use the _query item to access any component of the SELECT statement in the schema.
| query | The SQL query. For example: SELECT Id, Name FROM Accounts WHERE City LIKE '%New%' AND COUNTRY = 'US' GROUP BY CreatedDate ORDER BY Name LIMIT 10,50; |
| selectcolumns | A comma-separated list containing the columns specified in the SELECT statement. For example, the Id and Name columns in the example. |
| table | The table name specified in the SELECT statement. For example, Accounts in the example. |
| criteria | The WHERE clause of the statement. For example: City LIKE '%New%' AND COUNTRY = 'US' |
| orderby | The columns specified in the ORDER BY clause. For example, Name in the example. |
| groupby | The GROUP BY clause in the SELECT statement. For example, CreatedDate in the example. |
| limit | The limit specified in the LIMIT or TOP clauses of the SELECT statement. For example, 50 in the example. |
| offset | The offset specified in the LIMIT or TOP clauses of the SELECT statement. For example, 10 in the example. |
| isjoin | Whether the query is a join. |
| jointable | The table to be joined. |
| isschemaonly | Whether the query retrieves only schema information. |
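To make the attribute table concrete, here is how the example statement decomposes. The dict below is a hand-built illustration of the documented _query components, not actual Cloud output:

```python
query = ("SELECT Id, Name FROM Accounts WHERE City LIKE '%New%' "
         "AND COUNTRY = 'US' GROUP BY CreatedDate ORDER BY Name LIMIT 10,50;")

# Each key mirrors a _query attribute from the table above.
components = {
    "selectcolumns": "Id, Name",
    "table": "Accounts",
    "criteria": "City LIKE '%New%' AND COUNTRY = 'US'",
    "groupby": "CreatedDate",
    "orderby": "Name",
    "limit": 50,    # from LIMIT 10,50
    "offset": 10,   # from LIMIT 10,50
    "isjoin": False,
}
```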
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with CSV.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from CSV, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| CopyFile | Copies a specified file from a local directory or supported cloud-storage provider to another location. This procedure is useful for automating data migration and replication tasks in workflows that rely on multiple storage environments. |
| DeleteFile | Removes a file from a local folder or supported cloud-storage provider. This procedure helps maintain storage hygiene by programmatically deleting obsolete or temporary files from integrated systems. |
| ListFiles | Returns a list of available CSV or related data files within a designated local or cloud-based directory. This procedure enables browsing, validation, or synchronization of available files before data processing or import operations. |
| MoveFile | Transfers a file from one location to another within local or supported cloud storage. This procedure is useful for reorganizing file structures or moving processed files to archival or staging areas. |
Copies a specified file from a local directory or supported cloud-storage provider to another location. This procedure is useful for automating data migration and replication tasks in workflows that rely on multiple storage environments.
The procedure accepts the following parameters:
EXEC COPYFILE @SourcePath = 'sftp://localhost:22/folder1/file1.csv', @DestinationPath = 'sftp://localhost:22/folder2/' -- absolute path
EXEC COPYFILE @SourcePath = 'file1.csv', @DestinationPath = 'folder2' -- relative path
| Name | Type | Description |
| SourcePath | String | Specifies the full file path of the source file that is copied from a local or cloud-based storage system. |
| DestinationPath | String | Specifies the full file path of the destination location where the copied file is written in a local or cloud-based directory. |
| Name | Type | Description |
| Success | Boolean | Indicates whether the file copy operation completed successfully. Returns a value of 'true' when the file is copied without error and a value of 'false' when a failure occurs during the process. |
Removes a file from a local folder or supported cloud-storage provider. This procedure helps maintain storage hygiene by programmatically deleting obsolete or temporary files from integrated systems.
The procedure's Path parameter accepts relative and absolute paths to the file to delete:
EXEC DELETEFILE @Path = 'sftp://localhost:22/folder1/file1.csv' -- absolute path
EXEC DELETEFILE @Path = 'file1.csv' -- relative path
| Name | Type | Description |
| Path | String | Specifies the full file path of the file that is to be deleted. The path is relative to the directory that is defined in the URI connection property. |
| Name | Type | Description |
| Success | Bool | Indicates whether the delete operation completed successfully. The Success output returns a value of 'true' when the file is deleted without error and a value of 'false' when a failure occurs, in which case the Details output provides additional information. |
| Details | String | Provides detailed information about any execution failure that occurs during the delete operation. The Details output returns a NULL value when the Success output is true. |
Returns a list of available CSV or related data files within a designated local or cloud-based directory. This procedure enables browsing, validation, or synchronization of available files before data processing or import operations.
| Name | Type | Description |
| Mask | String | Specifies the file-name filter mask that determines which files are included in the result set (for example, '*.csv'). |
| Path | String | Specifies the directory path from which files are listed. The path is relative to the directory that is defined in the URI connection property. |
| Name | Type | Description |
| FileName | String | Returns the name of each file that matches the specified filter mask. The FileName output identifies individual files in the listed directory. |
| LastModified | Long | Returns the UNIX timestamp that indicates when each file was last modified. The LastModified output enables users to track file updates or synchronization status. |
| CreatedAt | Long | Returns the UNIX timestamp that indicates when each file was created. The CreatedAt output returns a value of -1 when the connected storage system does not support file creation time metadata. |
| URI | String | Returns the full Uniform Resource Identifier (URI) of each listed file. The URI output provides the absolute reference to the file's location in local or cloud-based storage. |
Transfers a file from one location to another within local or supported cloud storage. This procedure is useful for reorganizing file structures or moving processed files to archival or staging areas.
The procedure accepts the following parameters:
EXEC MOVEFILE @SourcePath = 'sftp://localhost:22/folder1/file1.csv', @DestinationPath = 'sftp://localhost:22/folder2/' -- absolute path
EXEC MOVEFILE @SourcePath = 'file1.csv', @DestinationPath = 'folder2' -- relative path
| Name | Type | Description |
| SourcePath | String | Specifies the full file path of the source file that is moved from a local or cloud-based storage system. |
| DestinationPath | String | Specifies the full file path of the destination location where the file is placed after the move operation. |
| Name | Type | Description |
| Success | Boolean | Indicates whether the file move operation completed successfully. The Success output returns a value of 'true' when the file is moved without error and a value of 'false' when a failure occurs during the process. |
The Cloud has high-performance, built-in operations for accessing data from CSV data sources. These operations are platform neutral: Schema files that invoke these operations can be used in both .NET and Java. You can also extend the Cloud with your own operations written in .NET or Java.
The Cloud consists of the following operations:
| Operation Name | Description | |
| csvproviderGet | The csvproviderGet operation is an API Script operation that is used to process CSV content. It allows you to split CSV content into rows. | |
| oauthGetAccessToken | For OAuth 1.0, exchange a request token for an access token. For OAuth 2.0, get an access token or get a new access token with the refresh token. | |
| oauthGetUserAuthorizationURL | Generates the user authorization URL. OAuth 2.0 will not access the network in this operation. |
The csvproviderGet operation is an API Script operation that is used to process CSV content. It allows you to split CSV content into rows.
The csvproviderGet operation can be used to execute remote data retrieval operations. It abstracts the request and enables configuration of most aspects, including authentication and firewall traversal, through the following inputs. See ProxyAuthScheme and FirewallType for the properties needed to negotiate a firewall.
The csvproviderGet operation reads the api:info section of the table schema file to map various elements in the CSV document into column values within a row. It does so using the other:internalname property of the column definition.
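As an illustration, a column definition in a table schema might map CSV fields to columns through other:internalname as follows; the table title, column names, and internal names are hypothetical:

```
<api:info title="MyCSVTable" description="An illustrative CSV table schema.">
  <attr name="Id"   xs:type="int"    other:internalname="id" />
  <attr name="Name" xs:type="string" other:internalname="full_name" />
</api:info>
```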
The oauthGetAccessToken operation is an API Script operation that is used to facilitate the OAuth authentication flow. To pass the needed inputs to the operation, define the GetOAuthAccessToken stored procedure and, if your data source has a refresh flow, the RefreshOAuthAccessToken stored procedure. The Cloud can call these internally.
The Cloud includes stored procedures that invoke this operation to complete the OAuth exchange. The example schemas below briefly list the typically required inputs, which the following sections explain in more detail.
For a guide to using the Cloud to authenticate, see the "Getting Started" chapter.
Invoke the oauthGetAccessToken with the GetOAuthAccessToken stored procedure. The following inputs are required for most data sources and will provide default values for the connection properties of the same name.
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="GetOAuthAccessToken" description="Obtains the OAuth access token to be used for authentication with various APIs." >
<input name="AuthMode" desc="The OAuth flow. APP or WEB." />
<input name="CallbackURL" desc="The URL to be used as a trusted redirect URL, where the user will return with the token that verifies that they have granted your app access. " />
<input name="OAuthAccessToken" desc="The request token. OAuth 1.0 only." />
<input name="OAuthAccessTokenSecret" desc="The request token secret. OAuth 1.0 only." />
<input name="Verifier" desc="The verifier code obtained when the user grants permissions to your app." />
<output name="OAuthAccessToken" desc="The access token." />
<output name="OAuthTokenSecret" desc="The access token secret." />
<output name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set RequestTokenURL to the URL where the request for the request token is made. OAuth 1.0 only.-->
<api:set attr="OAuthRequestTokenURL" value="http://MyOAuthRequestTokenURL" />
<!-- Set OAuthAuthorizationURL to the URL where the user logs into the service and grants permissions to the application. -->
<api:set attr="OAuthAuthorizationURL" value="http://MyOAuthAuthorizationURL" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL" />
<!-- Set GrantType to the authorization grant type. OAuth 2.0 only. -->
<api:set attr="GrantType" value="CODE" />
<!-- Set SignMethod to the signature method used to calculate the signature of the request. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<api:call op="oauthGetAccessToken">
<api:push/>
</api:call>
</api:script>
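Once defined, the stored procedure can be executed directly. The following is a sketch of a call in the WEB flow; the verifier code and callback URL values are illustrative:

```sql
-- Exchange the verifier code obtained from the authorization step for an access token.
EXEC GetOAuthAccessToken @AuthMode = 'WEB', @Verifier = 'MyVerifierCode', @CallbackURL = 'http://localhost:33333'
```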
You can also use oauthGetAccessToken to refresh the access token by providing the following inputs:
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="RefreshOAuthAccessToken" description="Refreshes the OAuth access token used for authentication." >
<input name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
<output name="OAuthAccessToken" desc="The authentication token returned." />
<output name="OAuthTokenSecret" desc="The authentication token secret returned. OAuth 1.0 only." />
<output name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
<output name="ExpiresIn" desc="The remaining lifetime on the access token." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set GrantType to REFRESH. OAuth 2.0 only. -->
<api:set attr="GrantType" value="REFRESH" />
<!-- Set SignMethod to the signature method used to calculate the signature of the request. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL" />
<!-- Set AuthMode to 'WEB' when calling RefreshOAuthAccessToken -->
<api:set attr="AuthMode" value="WEB"/>
<api:call op="oauthGetAccessToken">
<api:push/>
</api:call>
</api:script>
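The refresh schema above can likewise be invoked directly; the token value is illustrative:

```sql
-- Exchange the refresh token for a new access token.
EXEC RefreshOAuthAccessToken @OAuthRefreshToken = 'MyRefreshToken'
```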
The oauthGetUserAuthorizationURL is an API Script operation that is used to facilitate the OAuth authentication flow for Web apps, for offline apps, and in situations where the Cloud is not allowed to open a Web browser. To pass the needed inputs to this operation, define the GetOAuthAuthorizationURL stored procedure. The Cloud can call this internally.
Define stored procedures in .rsb files with the same file name as the schema's title. The example schema below briefly lists the typically required inputs, which the following sections explain in more detail.
For a guide to authenticating in the OAuth flow, see the "Getting Started" chapter.
Call oauthGetUserAuthorizationURL in the GetOAuthAuthorizationURL stored procedure.
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="Get OAuth Authorization URL" description="Obtains the OAuth authorization URL used for authentication with various APIs." >
<input name="CallbackURL" desc="The URL to be used as a trusted redirect URL, where the user will return with the token that verifies that they have granted your app access. " />
<output name="URL" desc="The URL where the user logs in and is prompted to grant permissions to the app. " />
<output name="OAuthAccessToken" desc="The request token. OAuth 1.0 only." />
<output name="OAuthTokenSecret" desc="The request token secret. OAuth 1.0 only." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set ResponseType to the desired authorization grant type. OAuth 2.0 only.-->
<api:set attr="ResponseType" value="code" />
<!-- Set SignMethod to the signature method used to calculate the signature. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<!-- Set OAuthAuthorizationURL to the URL where the user logs into the service and grants permissions to the application. -->
<api:set attr="OAuthAuthorizationURL" value="http://MyOAuthAuthorizationURL" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL"/>
<!-- Set RequestTokenURL to the URL where the request for the request token is made. OAuth 1.0 only.-->
<api:set attr="OAuthRequestTokenURL" value="http://MyOAuthRequestTokenURL" />
<api:call op="oauthGetUserAuthorizationUrl">
<api:push/>
</api:call>
</api:script>
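The resulting stored procedure can be invoked as follows; the callback URL value is illustrative:

```sql
-- Retrieve the URL where the user logs in and grants access to the app.
EXEC GetOAuthAuthorizationURL @CallbackURL = 'http://localhost:33333'
```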
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for CSV:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries, including batch operations:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
| Name | Type | Description |
| CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
| Name | Type | Description |
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
| Name | Type | Description |
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the NorthwindOData table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='NorthwindOData'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
| Name | Type | Description |
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the GetOAuthAccessToken stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'GetOAuthAccessToken' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'GetOAuthAccessToken' AND IncludeResultColumns='True'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsCDataProvided | Boolean | Whether the procedure is added/implemented by CData, as opposed to being a native CSV procedure. |
| Name | Type | Description |
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the NorthwindOData table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='NorthwindOData'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
| Name | Type | Description |
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
| Name | Type | Description |
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
| Name | Type | Description |
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows the dependent properties that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
| OUTER_JOINS | Whether outer joins are supported. | YES, NO |
| SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array of columns that are checked, in the given order, for use as the modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If the value is TRUE, Merge Mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate start date. | |
| REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Modeling CSV Data section for more information.
| Name | Type | Description |
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
| Name | Type | Description |
| Id | String | The database-generated Id returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
| Name | Type | Description |
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of CData Sync required to use this connector. |
| DataSyncCategory | String | The category of CData Sync functionality (e.g., Source, Destination). |
By default, the Cloud attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. To specify another certificate or override how the certificate is validated, see the SSLServerCert connection property.
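For example, the expected server certificate can be pinned in the connection string; the path below is illustrative, and other accepted forms (such as a certificate thumbprint) may also be supported:

```
SSLServerCert="C:\certs\server.cer"
```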
The CSV Cloud also supports setting client certificates. Set the following to connect using a client certificate.
To authenticate to an HTTP proxy, set the following:
Set the following properties:
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | The type of authentication to use when connecting to remote services. |
| AccessKey | The access key used to authenticate to CSV. This value is accessible from your security credentials page. |
| SecretKey | Your account secret key. This value is accessible from your security credentials page. |
| ApiKey | The API Key used to identify the user to IBM Cloud. |
| User | Specifies the user account that the provider uses to authenticate. |
| Password | Specifies the password used to authenticate the user. |
| SharePointEdition | The edition of SharePoint being used. Set either SharePointOnline or SharePointOnPremise. |
| ImpersonateUserMode | Specifies the type of user impersonation: either User mode or Admin mode. |
| Property | Description |
| ConnectionType | Specifies the file storage service, server, or file access protocol through which your CSV files are stored and retrieved. |
| URI | The Uniform Resource Identifier (URI) for the CSV resource location. |
| Region | The hosting region for your S3-like Web Services. |
| OracleNamespace | The Oracle Cloud Object Storage namespace to use. |
| StorageBaseURL | Specifies the URL of a cloud storage service provider. |
| SimpleUploadLimit | This setting specifies the threshold, in bytes, above which the provider will choose to perform a multipart upload rather than uploading everything in one request. |
| UseVirtualHosting | If true (default), buckets are referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the provider uses the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for S3-based custom services when the CustomURL is specified. |
| TestConnectionBehavior | Specifies the behavior of the test connection operation. |
| UseLakeFormation | When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion. |
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
| AWSPrincipalARN | The ARN of the SAML Identity provider in your AWS account. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSSessionToken | Your AWS session token. |
| AWSExternalId | A unique identifier that might be required when you assume a role in another account. |
| MFASerialNumber | The serial number of the MFA device if one is being used. |
| MFAToken | The temporary token available from your MFA device. |
| TemporaryTokenDuration | The amount of time (in seconds) a temporary token will last. |
| AWSWebIdentityToken | The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. |
| ServerSideEncryption | When activated, file uploads into Amazon S3 buckets will be server-side encrypted. |
| SSEContext | A BASE64-encoded UTF-8 string holding JSON which represents a string-string (key-value) map. |
| SSEEnableS3BucketKeys | Configuration to use an S3 Bucket Key at the object level when encrypting data with AWS KMS. Enabling this will reduce the cost of server-side encryption by lowering calls to AWS KMS. |
| SSEKey | A symmetric encryption KeyManagementService key, that is used to protect the data when using ServerSideEncryption. |
| Property | Description |
| AzureStorageAccount | The name of your Azure storage account. |
| AzureAccessKey | The storage key associated with your Azure account. |
| AzureSharedAccessSignature | A shared access key signature that may be used for authentication. |
| AzureTenant | Identifies the CSV tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID. |
| AzureEnvironment | Specifies the Azure network environment to which you will connect. Must be the same network to which your Azure account was added. |
| Property | Description |
| KeycloakRealmURL | Specifies the full URL to the Keycloak server including the specific realm used for authentication and authorization. |
| Property | Description |
| SSOLoginURL | The identity provider's login URL. |
| SSOProperties | Additional properties required to connect to the identity provider, formatted as a semicolon-separated list. |
| SSOExchangeURL | The URL used for consuming the SAML response and exchanging it for service specific credentials. |
| Property | Description |
| OAuthJWTCert | Supplies the name of the client certificate's JWT Certificate store. |
| OAuthJWTCertType | Identifies the type of key store containing the JWT Certificate. |
| OAuthJWTCertPassword | Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank. |
| OAuthJWTCertSubject | Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate. |
| OAuthJWTSubject | The user subject for which the application is requesting delegated access. |
| OAuthJWTSubjectType | The SubType for the JWT authentication. |
| OAuthJWTPublicKeyId | The Id of the public key for JWT. |
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| SubjectId | The user subject for which the application is requesting delegated access. |
| SubjectType | The Subject Type for the Client Credentials authentication. |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| OAuthPasswordGrantMode | Specifies how the OAuth Client ID and Client Secret are sent to the authorization server. |
| OAuthAuthorizationURL | The authorization URL for the OAuth service. |
| OAuthAccessTokenURL | The URL from which the OAuth access token is retrieved. |
| AuthToken | The authentication token used to request and obtain the OAuth Access Token. |
| AuthKey | The authentication secret used to request and obtain the OAuth Access Token. |
| Property | Description |
| SSLMode | The authentication mechanism to be used when connecting to the FTP or FTPS server. |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| SSHAuthMode | The authentication method used when establishing an SSH Tunnel to the service. |
| SSHClientCert | A certificate to be used for authenticating the SSHUser. |
| SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
| SSHClientCertSubject | The subject of the SSH client certificate. |
| SSHClientCertType | The type of SSHClientCert private key. |
| SSHUser | The SSH user. |
| SSHPassword | The SSH password. |
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| AggregateFiles | Specifies whether the provider aggregates all files with the same schema in the specified folder into a single table called AggregatedFiles . |
| MetadataDiscoveryURI | Specifies the file that the provider uses to determine the schema when aggregating multiple files into a single result set. |
| TypeDetectionScheme | Specifies how the provider determines column data types when reading text files. |
| ColumnCount | Specifies the number of columns that the provider detects when dynamically determining table columns. |
| RowScanDepth | Specifies the number of rows that the provider scans when dynamically determining table columns. |
| Property | Description |
| IncludeColumnHeaders | Specifies whether the provider derives column names from the first row of each file. |
| FMT | Specifies the file format that the provider uses to parse all text files. |
| ExtendedProperties | Specifies Microsoft Jet OLE DB 4.0-compatible extended properties that define the format of local text files. |
| RowDelimiter | Specifies the character or sequence of characters that the provider uses to detect the end of a row in a text file. |
| SkipTop | Specifies the number of rows that the provider skips from the top of the file before reading data. |
| IgnoreBlankRows | Specifies whether the provider skips blank rows when reading data from text files. |
| IncludeEmptyHeaders | Specifies whether the provider includes columns with empty header values when reading files that contain column headers. |
| SkipHeaderComments | Specifies whether the provider skips comment rows at the top of a file. |
| Charset | Specifies the character set that the provider uses to encode and decode text data when reading from or writing to files. |
| QuoteEscapeCharacter | Determines the character used to escape quotes. |
| QuoteCharacter | Determines the character used to quote values in a CSV file. |
| TrimQuotedValues | Specifies whether the provider trims spaces inside quoted values when applying the TrimSpaces property. |
| TrimSpaces | Specifies how the provider handles leading and trailing spaces in cell values. |
| PushEmptyValuesAsNull | Specifies whether the provider converts empty values to null when reading data. |
| NullValues | A comma-separated list of values that are replaced with nulls if they are found in the CSV file. |
| PathSeparator | Specifies the character that the provider uses to replace file path separators when generating table names. |
| IgnoreIncompleteRows | Specifies how the provider handles rows that do not match the expected structure based on the column headers. |
| MaxCellLength | Specifies the maximum number of characters that a cell can contain before its value is truncated. |
| DateTimeFormat | This setting specifies in which format the datetime values will be written to for CSV files. |
| Property | Description |
| AWSCertificate | The absolute path to the certificate file or the certificate content in PEM format encoded in base64. |
| AWSCertificatePassword | The password for the certificate if applicable, otherwise leave blank. |
| AWSCertificateType | The type of AWSCertificate . |
| AWSPrivateKey | The absolute path to the private key file or the private key content in PEM format encoded in base64. |
| AWSPrivateKeyPassword | The password for the private key if it is encrypted, otherwise leave blank. |
| AWSPrivateKeyType | The type of AWSPrivateKey . |
| AWSProfileARN | Profile to pull policies from. |
| AWSSessionDuration | Duration, in seconds, for the resulting session. |
| AWSTrustAnchorARN | Trust anchor to use for authentication. |
| BatchNamingConvention | Specifies the naming convention that the provider uses for batch files. |
| ClientCulture | This property can be used to specify the format of data (e.g., currency values) that is accepted by the client application. This property can be used when the client application does not support the machine's culture settings. For example, Microsoft Access requires 'en-US'. |
| CreateBatchFolder | Specifies whether the provider creates a folder for storing batch files when InsertMode is set to FilePerBatch. |
| Culture | This setting can be used to specify culture settings that determine how the provider interprets certain data types that are passed into the provider. For example, setting Culture='de-DE' will output German formats even on an American machine. |
| CustomHeaders | Specifies additional HTTP headers to append to the request headers created from other properties, such as ContentType and From. Use this property to customize requests for specialized or nonstandard APIs. |
| CustomURLParams | A string of custom URL parameters to be included with the HTTP request, in the form field1=value1&field2=value2&field3=value3. |
| DirectoryRetrievalDepth | Limits how many levels of subfolders are recursively scanned when IncludeSubdirectories is enabled. |
| ExcludeFileExtensions | Specifies whether the provider excludes file extensions from table names. |
| ExcludeFiles | Comma-separated list of file extensions to exclude from the set of the files modeled as tables. |
| ExcludeStorageClasses | A comma-separated list of storage classes to ignore. |
| FolderId | The ID of a folder in Google Drive. If set, the resource location specified by the URI is relative to the Folder ID for all operations. |
| IncludeDropboxTeamResources | Indicates if you want to include Dropbox team files and folders. |
| IncludeFiles | Comma-separated list of file extensions to include into the set of the files modeled as tables. |
| IncludeItemsFromAllDrives | Whether Google Drive shared drive items should be included in results. If not present or set to false, then shared drive items are not returned. |
| IncludeSubdirectories | Whether to read files from nested folders. In the case of a name collision, table names are prefixed by the underscore-separated folder names. |
| InsertMode | Specifies the mode for inserting data into CSV files. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Pagesize | Specifies the maximum number of records per page the provider returns when requesting data from CSV. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ThrowsKeyNotFound | Specifies whether the provider throws an exception when no rows are updated. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TruncateOnInserts | Specifies whether the provider truncates the target table before performing each batch insert operation. |
| UseRowNumbers | Specifies whether the provider generates a RowNumber column to identify records when no custom schema is defined. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | The type of authentication to use when connecting to remote services. |
| AccessKey | The access key used to authenticate to CSV. This value is accessible from your security credentials page. |
| SecretKey | Your account secret key. This value is accessible from your security credentials page. |
| ApiKey | The API Key used to identify the user to IBM Cloud. |
| User | Specifies the user account that the provider uses to authenticate. |
| Password | Specifies the password used to authenticate the user. |
| SharePointEdition | The edition of SharePoint being used. Set either SharePointOnline or SharePointOnPremise. |
| ImpersonateUserMode | Specifies the type of user impersonation: either User mode or Admin mode. |
The type of authentication to use when connecting to remote services.
string
"NONE"
The following options are available when ConnectionType is set to Amazon S3:
The following options are available when ConnectionType is set to Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Data Lake Storage Gen2 SSL, or OneDrive:
The following options are available when ConnectionType is set to OneLake:
The following options are available when ConnectionType is set to Box:
Only the following option is available when ConnectionType is set to Dropbox:
OAuth: Uses OAuth2 with the authorization code grant type. OAuthVersion must be set to 2.0.
Only the following option is available when ConnectionType is set to FTP or FTPS:
Basic: Basic user credentials (user/password).
The following options are available when ConnectionType is set to Google Cloud Storage or Google Drive:
The following options are available when ConnectionType is set to HDFS or HDFS Secure:
The following options are available when ConnectionType is set to HTTP or HTTPS:
The following options are also available when ConnectionType is set to IBM Object Storage Source:
Only the following option is available when ConnectionType is set to Oracle Cloud Storage:
IAMSecretKey: Uses AccessKey and SecretKey to authenticate to Oracle Cloud Storage.
When ConnectionType is set to SFTP, the Cloud sets AuthScheme to SFTP. When AuthScheme is set to SFTP, the precise authentication method is controlled using the SSHAuthMode property. See this property's documentation for further information.
The following options are also available when ConnectionType is set to SharePoint REST:
The following options are also available when ConnectionType is set to SharePoint SOAP:
The access key used to authenticate to CSV. This value is accessible from your security credentials page.
string
""
User is used with AccessKey to authenticate the user against the CSV server.
Your account secret key. This value is accessible from your security credentials page.
string
""
Your account secret key. This value is accessible from your security credentials page depending on the service you are using.
The API Key used to identify the user to IBM Cloud.
string
""
Access to resources in the CSV REST API is governed by an API key, which is used to retrieve a token. To create an API Key, navigate to Manage --> Access (IAM) --> Users and click 'Create'.
Specifies the user account that the provider uses to authenticate.
string
""
The User and Password properties are used together to authenticate with the target service or server.
The meaning of this property depends on the connection context, which is determined by ConnectionType and AuthScheme.
Specifies the password used to authenticate the user.
string
""
The User and Password properties are used together to authenticate with the target service or server.
This property is useful for authenticating user accounts across various connection types and authentication schemes.
Specifies the type of user impersonation: either User mode or Admin mode.
string
"User"
Specifies the type of user impersonation: either User mode or Admin mode. Admin mode is available only for Enterprise with Governance accounts, and only upon request; it does not work for any other account type.
This section provides a complete list of the Connection properties you can configure in the connection string for this provider.
| Property | Description |
| ConnectionType | Specifies the file storage service, server, or file access protocol through which your CSV files are stored and retrieved. |
| URI | The Uniform Resource Identifier (URI) for the CSV resource location. |
| Region | The hosting region for your S3-like Web Services. |
| OracleNamespace | The Oracle Cloud Object Storage namespace to use. |
| StorageBaseURL | Specifies the URL of a cloud storage service provider. |
| SimpleUploadLimit | This setting specifies the threshold, in bytes, above which the provider will choose to perform a multipart upload rather than uploading everything in one request. |
| UseVirtualHosting | If true (default), buckets are referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the Cloud uses the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for an S3-based custom service when CustomURL is specified. |
| TestConnectionBehavior | Specifies the behavior of the test connection operation. |
| UseLakeFormation | When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion. |
Specifies the file storage service, server, or file access protocol through which your CSV files are stored and retrieved.
string
"Local"
Set the ConnectionType to one of the following:
The Uniform Resource Identifier (URI) for the CSV resource location.
string
""
Set the URI property to specify a path to a file or stream.
NOTE:
See for more advanced features available for parsing and merging multiple files.
Below are examples of the URI formats for the available data sources:
| Service provider | URI formats |
| Local | Single file path (one table) or directory path (one table per file): localPath, file://localPath |
| HTTP or HTTPS | http://remoteStream, https://remoteStream |
| Amazon S3 | Single file path (one table) or directory path (one table per file): s3://remotePath |
| Azure Blob Storage | Single file path (one table) or directory path (one table per file): azureblob://mycontainer/myblob/ |
| OneDrive | Single file path (one table) or directory path (one table per file): onedrive://remotePath |
| Google Cloud Storage | Single file path (one table) or directory path (one table per file): gs://bucket/remotePath |
| Google Drive | Single file path (one table) or directory path (one table per file): gdrive://remotePath |
| Box | Single file path (one table) or directory path (one table per file): box://remotePath |
| FTP or FTPS | Single file path (one table) or directory path (one table per file): ftp://server:port/remotePath |
| SFTP | Single file path (one table) or directory path (one table per file): sftp://server:port/remotePath |
| SharePoint | Single file path (one table) or directory path (one table per file): sp://https://server/remotePath. Use the SharePoint URL as the remote path, not the display name. |
Below are example connection strings to CSV files or streams.
| Service provider | URI formats | Connection example |
| Local | localPath, file://localPath | URI=C:\folder1 |
| Amazon S3 | s3://bucket1/folder1 | URI=s3://bucket1/folder1; AWSAccessKey=token1; AWSSecretKey=secret1; AWSRegion=OHIO; |
| Azure Blob Storage | azureblob://mycontainer/myblob/ | URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AzureAccessKey=myKey; or URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AuthScheme=OAuth; |
| OneDrive | onedrive://remotePath | URI=onedrive://folder1; AuthScheme=OAuth; or URI=onedrive://SharedWithMe/folder1; AuthScheme=OAuth; |
| Google Cloud Storage | gs://bucket/remotePath | URI=gs://bucket/folder1; AuthScheme=OAuth; ProjectId=test; |
| Google Drive | gdrive://remotePath | URI=gdrive://folder1; |
| Box | box://remotePath | URI=box://folder1; OAuthClientId=oauthclientid1; OAuthClientSecret=oauthclientsecret1; CallbackUrl=http://localhost:12345; |
| FTP or FTPS | ftp://server:port/remotePath | URI=ftps://localhost:990/folder1; User=user1; Password=password1; |
| SFTP | sftp://server:port/remotePath | URI=sftp://127.0.0.1:22/remotePath; User=user1; Password=password1; |
| SharePoint | sp://https://server/remotePath (use the SharePoint URL as the remote path, not the display name) | URI=sp://https://domain.sharepoint.com/Documents; User=user1; Password=password1; |
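As an illustration only (this is not a CData API), the connection strings in the table above can be assembled programmatically as plain `Key=Value;` pairs and passed to any SQL Server-compatible client library. The helper below is a minimal sketch; the property names and sample values are taken from the table:

```python
def build_connection_string(**props):
    """Join properties into a 'Key=Value;' connection string."""
    return "".join(f"{key}={value};" for key, value in props.items())

# Rebuild the Amazon S3 example from the table above.
conn_str = build_connection_string(
    URI="s3://bucket1/folder1",
    AWSAccessKey="token1",
    AWSSecretKey="secret1",
    AWSRegion="OHIO",
)
print(conn_str)
# -> URI=s3://bucket1/folder1;AWSAccessKey=token1;AWSSecretKey=secret1;AWSRegion=OHIO;
```

The resulting string matches the connection-example column above and can be used wherever the Cloud accepts a connection string.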
The hosting region for your S3-like Web Services.
string
""
The hosting region for your S3-like Web Services.
| Value | Region |
| Commercial Cloud Regions | |
| ap-hyderabad-1 | India South (Hyderabad) |
| ap-melbourne-1 | Australia Southeast (Melbourne) |
| ap-mumbai-1 | India West (Mumbai) |
| ap-osaka-1 | Japan Central (Osaka) |
| ap-seoul-1 | South Korea Central (Seoul) |
| ap-sydney-1 | Australia East (Sydney) |
| ap-tokyo-1 | Japan East (Tokyo) |
| ca-montreal-1 | Canada Southeast (Montreal) |
| ca-toronto-1 | Canada Southeast (Toronto) |
| eu-amsterdam-1 | Netherlands Northwest (Amsterdam) |
| eu-frankfurt-1 | Germany Central (Frankfurt) |
| eu-zurich-1 | Switzerland North (Zurich) |
| me-jeddah-1 | Saudi Arabia West (Jeddah) |
| sa-saopaulo-1 | Brazil East (Sao Paulo) |
| uk-london-1 | UK South (London) |
| us-ashburn-1 (default) | US East (Ashburn, VA) |
| us-phoenix-1 | US West (Phoenix, AZ) |
| US Gov FedRAMP High Regions | |
| us-langley-1 | US Gov East (Ashburn, VA) |
| us-luke-1 | US Gov West (Phoenix, AZ) |
| US Gov DISA IL5 Regions | |
| us-gov-ashburn-1 | US DoD East (Ashburn, VA) |
| us-gov-chicago-1 | US DoD North (Chicago, IL) |
| us-gov-phoenix-1 | US DoD West (Phoenix, AZ) |
| Value | Region |
| eu-central-1 | Europe (Amsterdam) |
| us-east-1 (Default) | US East (Ashburn, VA) |
| us-east-2 | US East (Manassas, VA) |
| us-west-1 | US West (Hillsboro, OR) |
The Oracle Cloud Object Storage namespace to use.
string
""
The Oracle Cloud Object Storage namespace to use. This setting must be set to the Oracle Cloud Object Storage namespace associated with the Oracle Cloud account before any requests can be made. Refer to the Understanding Object Storage Namespaces page of the Oracle Cloud documentation for instructions on how to find your account's Object Storage namespace.
Specifies the URL of a cloud storage service provider.
string
""
This connection property is used to specify:
If the domain for this option ends in -my (for example, https://bigcorp-my.sharepoint.com) then you may need to use the onedrive:// scheme instead of the sp:// or sprest:// scheme.
When connecting to files in a non–root-level SharePoint Online site (for example, under /sites/<your site>/), set this property to the full site path. For example: StorageBaseURL=https://<your domain>.sharepoint.com/sites/<your site>/
Using the full SharePoint site URL ensures the connector can locate files stored in subsites or other non-root-level site structures.
This setting specifies the threshold, in bytes, above which the provider will choose to perform a multipart upload rather than uploading everything in one request.
string
""
This setting specifies the threshold, in bytes, above which the Cloud will choose to perform a multipart upload rather than uploading everything in one request.
If true (default), buckets are referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the Cloud uses the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for an S3-based custom service when CustomURL is specified.
bool
true
If true (default), buckets are referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the Cloud uses the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for an S3-based custom service when CustomURL is specified.
Specifies the behavior of the test connection operation.
string
"LIST_OR_READ_FILES"
Change how the Cloud responds to a test connection operation based on the integration scenario.
When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion.
bool
false
When this property is set to true, AWSLakeFormation service will be used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through OKTA, ADFS, AzureAD, PingFederate, while providing a SAML assertion.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AWSAccessKey | Specifies your AWS account access key. This value is accessible from your AWS security credentials page. |
| AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
| AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
| AWSPrincipalARN | The ARN of the SAML Identity provider in your AWS account. |
| AWSRegion | The hosting region for your Amazon Web Services. |
| AWSSessionToken | Your AWS session token. |
| AWSExternalId | A unique identifier that might be required when you assume a role in another account. |
| MFASerialNumber | The serial number of the MFA device if one is being used. |
| MFAToken | The temporary token available from your MFA device. |
| TemporaryTokenDuration | The amount of time (in seconds) a temporary token will last. |
| AWSWebIdentityToken | The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. |
| ServerSideEncryption | When activated, file uploads into Amazon S3 buckets will be server-side encrypted. |
| SSEContext | A BASE64-encoded UTF-8 string holding JSON which represents a string-string (key-value) map. |
| SSEEnableS3BucketKeys | Configuration to use an S3 Bucket Key at the object level when encrypting data with AWS KMS. Enabling this will reduce the cost of server-side encryption by lowering calls to AWS KMS. |
| SSEKey | A symmetric encryption KeyManagementService key, that is used to protect the data when using ServerSideEncryption. |
Specifies your AWS account access key. This value is accessible from your AWS security credentials page.
string
""
To find your AWS account access key:
Your AWS account secret key. This value is accessible from your AWS security credentials page.
string
""
Your AWS account secret key. This value is accessible from your AWS security credentials page:
The Amazon Resource Name of the role to use when authenticating.
string
""
When authenticating outside of AWS, it is common to use a role for authentication instead of your direct AWS account credentials. Setting AWSRoleARN causes the CData Cloud to perform role-based authentication instead of using the AWSAccessKey and AWSSecretKey directly. The AWSAccessKey and AWSSecretKey must still be specified to perform this authentication. You cannot use the credentials of an AWS root user when setting AWSRoleARN; the AWSAccessKey and AWSSecretKey must be those of an IAM user.
The ARN of the SAML Identity provider in your AWS account.
string
""
The ARN of the SAML Identity provider in your AWS account.
The hosting region for your Amazon Web Services.
string
"NORTHERNVIRGINIA"
The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, TAIPEI, HYDERABAD, JAKARTA, MALAYSIA, MELBOURNE, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, THAILAND, TOKYO, CENTRAL, CALGARY, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, SPAIN, STOCKHOLM, ZURICH, TELAVIV, MEXICOCENTRAL, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST, ISOLATEDUSEAST, ISOLATEDUSEASTB, ISOLATEDUSEASTF, ISOLATEDUSSOUTHF, ISOLATEDUSWEST and ISOLATEDEUWEST.
Your AWS session token.
string
""
Your AWS session token. This value can be retrieved in different ways. See this link for more info.
A unique identifier that might be required when you assume a role in another account.
string
""
A unique identifier that might be required when you assume a role in another account.
The serial number of the MFA device if one is being used.
string
""
You can find the device for an IAM user by going to the AWS Management Console and viewing the user's security credentials. For virtual devices, this is actually an Amazon Resource Name (such as arn:aws:iam::123456789012:mfa/user).
The temporary token available from your MFA device.
string
""
If MFA is required, this value will be used along with the MFASerialNumber to retrieve temporary credentials to log in. The temporary credentials available from AWS will only last up to 1 hour by default (see TemporaryTokenDuration). Once the time is up, the connection must be updated to specify a new MFA token so that new credentials may be obtained.
The amount of time (in seconds) a temporary token will last.
string
"3600"
Temporary tokens are used with both MFA and Role-based authentication. Temporary tokens eventually time out, at which point a new temporary token must be obtained. When MFA is not used, this is handled transparently: the CData Cloud internally requests a new temporary token once the current one has expired.
However, for an MFA-required connection, a new MFAToken must be specified in the connection to retrieve a new temporary token. This is more intrusive, since it requires the user to update the connection. The minimum and maximum durations that can be specified depend largely on the connection being used.
For Role-based authentication, the minimum duration is 900 seconds (15 minutes) and the maximum is 3600 seconds (1 hour). Even if MFA is used with Role-based authentication, 3600 is still the maximum.
For MFA authentication by itself (using an IAM user or root user), the minimum is 900 seconds (15 minutes) and the maximum is 129600 seconds (36 hours).
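These bounds can be summarized in a short sketch. The `clamp_token_duration` helper and its scenario names are hypothetical (not part of the Cloud), but the numeric limits are the ones documented here:

```python
# Documented TemporaryTokenDuration bounds, in seconds.
DURATION_LIMITS = {
    "role": (900, 3600),       # Role-based auth: 15 minutes to 1 hour (even with MFA)
    "mfa_only": (900, 129600), # MFA with an IAM user or root user: 15 minutes to 36 hours
}

def clamp_token_duration(seconds, scenario="role"):
    """Clamp a requested duration to the valid range for the given scenario."""
    low, high = DURATION_LIMITS[scenario]
    return max(low, min(high, seconds))

print(clamp_token_duration(60))                # below the minimum, raised to 900
print(clamp_token_duration(7200, "role"))      # above the role maximum, capped at 3600
print(clamp_token_duration(7200, "mfa_only"))  # within bounds, unchanged: 7200
```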
The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider.
string
""
The OAuth 2.0 access token or OpenID Connect ID token that is provided by an identity provider. An application can get this token by authenticating a user with a web identity provider. If not specified, the value for this connection property is automatically obtained from the value of the 'AWS_WEB_IDENTITY_TOKEN_FILE' environment variable.
When activated, file uploads into Amazon S3 buckets will be server-side encrypted.
string
"OFF"
Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
A BASE64-encoded UTF-8 string holding JSON which represents a string-string (key-value) map.
string
""
Example of what the JSON may look like when decoded: {"aws:s3:arn": "arn:aws:s3:::_bucket_/_object_"}.
Configuration to use an S3 Bucket Key at the object level when encrypting data with AWS KMS. Enabling this will reduce the cost of server-side encryption by lowering calls to AWS KMS.
bool
false
Configuration to use an S3 Bucket Key at the object level when encrypting data with AWS KMS. Enabling this will reduce the cost of server-side encryption by lowering calls to AWS KMS.
A symmetric encryption KeyManagementService key that is used to protect the data when using ServerSideEncryption.
string
""
A symmetric encryption KeyManagementService key that is used to protect the data when using ServerSideEncryption.
This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AzureStorageAccount | The name of your Azure storage account. |
| AzureAccessKey | The storage key associated with your Azure account. |
| AzureSharedAccessSignature | A shared access key signature that may be used for authentication. |
| AzureTenant | Identifies the Azure tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID. |
| AzureEnvironment | Specifies the Azure network environment to which you will connect. Must be the same network to which your Azure account was added. |
The name of your Azure storage account.
string
""
The name of your Azure storage account.
The storage key associated with your Azure account.
string
""
The storage key associated with your Azure storage account. You can retrieve it from the Access keys section of your storage account in the Azure portal.
Identifies the Azure tenant being used to access data. Accepts either the tenant's domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID.
string
""
A tenant is a digital container for your organization's users and resources, managed through Microsoft Entra ID (formerly Azure AD). Each tenant is associated with a unique directory ID, and often with a custom domain (for example, microsoft.com or contoso.onmicrosoft.com).
To find the directory (tenant) ID in the Microsoft Entra Admin Center, navigate to Microsoft Entra ID > Properties and copy the value labeled "Directory (tenant) ID".
This property is required whenever the correct tenant cannot be inferred automatically, such as when a user belongs to multiple tenants or when using service principal-based authentication.
You can provide the tenant value in one of two formats: the tenant's custom domain name (for example, contoso.onmicrosoft.com) or its directory (tenant) ID.
Specifying the tenant explicitly ensures that the authentication request is routed to the correct directory, which is especially important when a user belongs to multiple tenants or when using service principal–based authentication.
If this value is omitted when required, authentication may fail or connect to the wrong tenant. This can result in errors such as unauthorized or resource not found.
Specifies the Azure network environment to which you will connect. Must be the same network to which your Azure account was added.
string
"GLOBAL"
Required if your Azure account is part of a different network than the Global network, such as China, USGOVT, or USGOVTDOD.
This section provides a complete list of the Keycloak Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| KeycloakRealmURL | Specifies the full URL to the Keycloak server including the specific realm used for authentication and authorization. |
Specifies the full URL to the Keycloak server including the specific realm used for authentication and authorization.
string
""
The URL must be in the format: http(s)://{server-url}:{port}/realms/{realm-name}.
A realm in Keycloak is a logical namespace that manages a set of users, roles, clients, and configurations. It isolates authentication and authorization for different applications or services, allowing each realm to have its own user base and security settings. Multiple realms can exist within a single Keycloak instance, providing separation between different environments or groups.
Specifying KeycloakRealmURL is required when AuthScheme = Keycloak.
This section provides a complete list of the SSO properties you can configure in the connection string for this provider.
| Property | Description |
| SSOLoginURL | The identity provider's login URL. |
| SSOProperties | Additional properties required to connect to the identity provider, formatted as a semicolon-separated list. |
| SSOExchangeURL | The URL used for consuming the SAML response and exchanging it for service specific credentials. |
The identity provider's login URL.
string
""
The identity provider's login URL.
Additional properties required to connect to the identity provider, formatted as a semicolon-separated list.
string
""
Additional properties required to connect to the identity provider, formatted as a semicolon-separated list.
This is used with the SSOLoginURL.
SSO configuration is discussed in further detail elsewhere in this documentation.
The URL used for consuming the SAML response and exchanging it for service specific credentials.
string
""
The CData Cloud will use the URL specified here to consume a SAML response and exchange it for service specific credentials. The retrieved credentials are the final piece during the SSO connection that are used to communicate with CSV.
This section provides a complete list of the JWT OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthJWTCert | Supplies the name of the client certificate's JWT Certificate store. |
| OAuthJWTCertType | Identifies the type of key store containing the JWT Certificate. |
| OAuthJWTCertPassword | Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank. |
| OAuthJWTCertSubject | Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate. |
| OAuthJWTSubject | The user subject for which the application is requesting delegated access. |
| OAuthJWTSubjectType | The SubType for the JWT authentication. |
| OAuthJWTPublicKeyId | The Id of the public key for JWT. |
Supplies the name of the client certificate's JWT Certificate store.
string
""
The OAuthJWTCertType field specifies the type of the certificate store specified in OAuthJWTCert. If the store is password-protected, use OAuthJWTCertPassword to supply the password.
OAuthJWTCert is used in conjunction with the OAuthJWTCertSubject field in order to specify client certificates. If OAuthJWTCert has a value, and OAuthJWTCertSubject is set, the CData Cloud initiates a search for a certificate. For further information, see OAuthJWTCertSubject.
Designations of certificate stores are platform-dependent.
Identifies the type of key store containing the JWT Certificate.
string
"PEMKEY_BLOB"
| Value | Description | Notes |
| USER | A certificate store owned by the current user. | Only available in Windows. |
| MACHINE | A machine store. | Not available in Java or other non-Windows environments. |
| PFXFILE | A PFX (PKCS12) file containing certificates. | |
| PFXBLOB | A string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. | |
| JKSFILE | A Java key store (JKS) file containing certificates. | Only available in Java. |
| JKSBLOB | A string (base-64-encoded) representing a certificate store in Java key store (JKS) format. | Only available in Java. |
| PEMKEY_FILE | A PEM-encoded file that contains a private key and an optional certificate. | |
| PEMKEY_BLOB | A string (base64-encoded) that contains a private key and an optional certificate. | |
| PUBLIC_KEY_FILE | A file that contains a PEM- or DER-encoded public key certificate. | |
| PUBLIC_KEY_BLOB | A string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. | |
| SSHPUBLIC_KEY_FILE | A file that contains an SSH-style public key. | |
| SSHPUBLIC_KEY_BLOB | A string (base-64-encoded) that contains an SSH-style public key. | |
| P7BFILE | A PKCS7 file containing certificates. | |
| PPKFILE | A file that contains a PPK (PuTTY Private Key). | |
| XMLFILE | A file that contains a certificate in XML format. | |
| XMLBLOB | A string that contains a certificate in XML format. | |
| BCFKSFILE | A file that contains a Bouncy Castle keystore. | |
| BCFKSBLOB | A string (base-64-encoded) that contains a Bouncy Castle keystore. | |
| GOOGLEJSON | A JSON file containing the service account information. | Only valid when connecting to a Google service. |
| GOOGLEJSONBLOB | A string that contains the service account JSON. | Only valid when connecting to a Google service. |
| BOXJSON | A JSON file containing the service account credentials. | Only valid when connecting to Box. |
| BOXJSONBLOB | A string that contains the service account JSON. | Only valid when connecting to Box. |
Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank.
string
""
This property specifies the password needed to open a password-protected certificate store. To determine if a password is necessary, refer to the documentation or configuration for your specific certificate store.
This is not required when using the GOOGLEJSON OAuthJWTCertType. Google JSON keys are not encrypted.
Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate.
string
"*"
The value of this property is used to locate a matching certificate in the store. The search process works as follows:
You can set the value to '*' to automatically select the first certificate in the store. The certificate subject is a comma-separated list of distinguished name fields and values. For example: CN=www.server.com, OU=test, C=US, [email protected].
Common fields include:
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, enclose it in quotes. For example: "O=ACME, Inc.".
The user subject for which the application is requesting delegated access.
string
""
The user subject for which the application is requesting delegated access. Typically, the user account name or email address.
The SubType for the JWT authentication.
string
"enterprise"
The SubType for the JWT authentication. Set this to "enterprise" or "user" depending on the type of token being requested.
The Id of the public key for JWT.
string
""
The Id of the public key for JWT. Set this to the value of your Public Key Id in your app settings.
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| SubjectId | The user subject for which the application is requesting delegated access. |
| SubjectType | The Subject Type for the Client Credentials authentication. |
| Scope | Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created. |
| OAuthPasswordGrantMode | Specifies how the OAuth Client ID and Client Secret are sent to the authorization server. |
| OAuthAuthorizationURL | The authorization URL for the OAuth service. |
| OAuthAccessTokenURL | The URL from which the OAuth access token is retrieved. |
| AuthToken | The authentication token used to request and obtain the OAuth Access Token. |
| AuthKey | The authentication secret used to request and obtain the OAuth Access Token. |
Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication.
string
""
This property is required when connecting with a custom OAuth application. (When the driver provides embedded OAuth credentials, this value may already be provided by the Cloud and thus not require manual entry.)
OAuthClientId is generally used alongside other OAuth-related properties such as OAuthClientSecret and OAuthSettingsLocation when configuring an authenticated connection.
OAuthClientId is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can usually find this value in your identity provider’s application registration settings. Look for a field labeled Client ID, Application ID, or Consumer Key.
While the client ID is not considered a confidential value like a client secret, it is still part of your application's identity and should be handled carefully. Avoid exposing it in public repositories or shared configuration files.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.)
string
""
This property (sometimes called the application secret or consumer secret) is required when using a custom OAuth application in any flow that requires secure client authentication, such as web-based OAuth, service-based connections, or certificate-based authorization flows. It is not required when using an embedded OAuth application.
The client secret is used during the token exchange step of the OAuth flow, when the driver requests an access token from the authorization server. If this value is missing or incorrect, authentication fails with either an invalid_client or an unauthorized_client error.
OAuthClientSecret is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can obtain this value from your identity provider when registering the OAuth application.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
The user subject for which the application is requesting delegated access.
string
""
Id of the user or enterprise, based on the configuration set in SubjectType.
The Subject Type for the Client Credentials authentication.
string
"enterprise"
The Subject Type for the Client Credentials authentication. Set this to "enterprise" or "user" depending on the type of token being requested.
Specifies the scope of the authenticating user's access to the application, to ensure they get appropriate access to data. If a custom OAuth application is needed, this is generally specified at the time the application is created.
string
""
Scopes are set to define what kind of access the authenticating user will have; for example, read, read and write, restricted access to sensitive information. System administrators can use scopes to selectively enable access by functionality or security clearance.
When InitiateOAuth is set to GETANDREFRESH, you must use this property if you want to change which scopes are requested.
When InitiateOAuth is set to either REFRESH or OFF, you can change which scopes are requested using either this property or the Scope input.
Specifies how the OAuth Client ID and Client Secret are sent to the authorization server.
string
"Post"
The OAuth RFC provides two methods of passing the OAuthClientId and OAuthClientSecret: Post sends them in the body of the POST request, while Basic sends them in an HTTP Basic Authorization header.
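The difference between the two modes can be sketched as follows (Python; the helper and parameter names are illustrative, not the driver's internals):

```python
import base64

def client_credentials(client_id: str, secret: str, mode: str = "Post"):
    """Show where the client ID and secret land for each
    OAuthPasswordGrantMode: 'Post' puts them in the form body,
    'Basic' puts them in an Authorization header (per RFC 6749)."""
    if mode == "Post":
        headers = {}
        body = {"client_id": client_id, "client_secret": secret}
    else:  # Basic
        token = base64.b64encode(f"{client_id}:{secret}".encode()).decode()
        headers = {"Authorization": f"Basic {token}"}
        body = {}
    return headers, body
```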
The authorization URL for the OAuth service.
string
""
The authorization URL for the OAuth service. At this URL, the user logs into the server and grants permissions to the application. In OAuth 1.0, if permissions are granted, the request token is authorized.
The URL from which the OAuth access token is retrieved.
string
""
In OAuth 1.0, the authorized request token is exchanged for the access token at this URL.
The authentication token used to request and obtain the OAuth Access Token.
string
""
This property is required only when performing headless authentication in OAuth 1.0. It can be obtained from the GetOAuthAuthorizationUrl stored procedure.
It can be supplied alongside the AuthKey in the GetOAuthAccessToken stored procedure to obtain the OAuthAccessToken.
The authentication secret used to request and obtain the OAuth Access Token.
string
""
This property is required only when performing headless authentication in OAuth 1.0. It can be obtained from the GetOAuthAuthorizationUrl stored procedure.
It can be supplied alongside the AuthToken in the GetOAuthAccessToken stored procedure to obtain the OAuthAccessToken.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLMode | The authentication mechanism to be used when connecting to the FTP or FTPS server. |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
The authentication mechanism to be used when connecting to the FTP or FTPS server.
string
"AUTOMATIC"
The available modes are as follows:
NONE: Default plaintext authentication is used to log in to the server.
IMPLICIT: SSL negotiation starts immediately after the connection is established.
EXPLICIT: The Cloud first connects in plaintext, then explicitly starts SSL negotiation through a protocol command such as STARTTLS.
AUTOMATIC: If the remote port is set to the standard plaintext port of the protocol (where applicable), the Cloud behaves as if SSLMode were set to EXPLICIT. In all other cases, SSL negotiation is IMPLICIT.
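The AUTOMATIC rule can be sketched as a small helper (Python; illustrative only, not part of the driver):

```python
def resolve_ssl_mode(mode: str, port: int, plaintext_port: int) -> str:
    """Resolve SSLMode 'AUTOMATIC': on the protocol's standard plaintext
    port behave as EXPLICIT (STARTTLS-style upgrade); otherwise behave
    as IMPLICIT (TLS from the first byte)."""
    mode = mode.upper()
    if mode != "AUTOMATIC":
        return mode
    return "EXPLICIT" if port == plaintext_port else "IMPLICIT"
```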
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
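Since thumbprint hex values may be space- or colon-separated, a comparison should normalize them first. A minimal Python sketch (illustrative helper only):

```python
def normalize_thumbprint(value: str) -> str:
    """Strip colon/space separators from an MD5 or SHA1 thumbprint and
    lowercase it, so differently formatted thumbprints compare equal."""
    cleaned = value.replace(":", "").replace(" ", "").lower()
    if not all(c in "0123456789abcdef" for c in cleaned):
        raise ValueError("not a hex thumbprint")
    return cleaned
```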
This section provides a complete list of the SSH properties you can configure in the connection string for this provider.
| Property | Description |
| SSHAuthMode | The authentication method used when establishing an SSH Tunnel to the service. |
| SSHClientCert | A certificate to be used for authenticating the SSHUser. |
| SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
| SSHClientCertSubject | The subject of the SSH client certificate. |
| SSHClientCertType | The type of SSHClientCert private key. |
| SSHUser | The SSH user. |
| SSHPassword | The SSH password. |
The authentication method used when establishing an SSH Tunnel to the service.
string
"Password"
A certificate to be used for authenticating the SSHUser.
string
""
SSHClientCert must contain a valid private key in order to use public key authentication. A public key is optional; if one is not included, the Cloud generates it from the private key. The Cloud sends the public key to the server, and the connection is allowed if the user has authorized the public key.
The SSHClientCertType field specifies the type of the key store specified by SSHClientCert. If the store is password protected, specify the password in SSHClientCertPassword.
Some types of key stores are containers which may include multiple keys. By default the Cloud will select the first key in the store, but you can specify a specific key using SSHClientCertSubject.
The password of the SSHClientCert key if it has one.
string
""
This property is required for SSH tunneling when using certificate-based authentication. If the SSH certificate is in a password-protected key store, provide the password using this property to access the certificate.
The subject of the SSH client certificate.
string
"*"
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma-separated list of distinguished name fields and values. For example: "CN=www.server.com, OU=test, C=US, [email protected]". Common fields and their meanings are listed below.
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, it must be quoted.
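The selection order described above (wildcard, exact match, then substring match) can be sketched as follows (Python; illustrative helper, not driver code):

```python
def select_certificate(subjects, requested):
    """Pick a certificate subject from a store: '*' takes the first
    certificate; otherwise try an exact subject match, then a substring
    match; return None when nothing matches."""
    if requested == "*":
        return subjects[0] if subjects else None
    if requested in subjects:
        return requested
    for subject in subjects:
        if requested in subject:
            return subject
    return None
```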
The type of SSHClientCert private key.
string
"PEMKEY_BLOB"
This property can take one of the following values:
| Types | Description | Allowed Blob Values |
| MACHINE/USER | A Windows certificate store (current user or machine store). | Blob values are not supported. |
| JKSFILE/JKSBLOB | A Java key store (JKS) file. Only available in Java. | base64-only |
| PFXFILE/PFXBLOB | A PKCS12-format (.pfx) file. Must contain both a certificate and a private key. | base64-only |
| PEMKEY_FILE/PEMKEY_BLOB | A PEM-format file. Must contain an RSA, DSA, or OPENSSH private key. Can optionally contain a certificate matching the private key. | base64 or plain text. |
| PPKFILE/PPKBLOB | A PuTTY-format private key created using the puttygen tool. | base64-only |
| XMLFILE/XMLBLOB | An XML key in the format generated by the .NET RSA class: RSA.ToXmlString(true). | base64 or plain text. |
The SSH user.
string
""
The SSH user.
The SSH password.
string
""
The SSH password.
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| Verbosity | Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5. |
Specifies the verbosity level of the log file, which controls the amount of detail logged. Supported values range from 1 to 5.
string
"1"
This property defines the level of detail the Cloud includes in the log file. Higher verbosity levels increase the detail of the logged information, but may also result in larger log files and slower performance due to the additional data being captured.
The default verbosity level is 1, which is recommended for regular operation. Higher verbosity levels are primarily intended for debugging purposes. For more information on each level, refer to Logging.
When combined with the LogModules property, Verbosity can refine logging to specific categories of information.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC . |
| AggregateFiles | Specifies whether the provider aggregates all files with the same schema in the specified folder into a single table called AggregatedFiles . |
| MetadataDiscoveryURI | Specifies the file that the provider uses to determine the schema when aggregating multiple files into a single result set. |
| TypeDetectionScheme | Specifies how the provider determines column data types when reading text files. |
| ColumnCount | Specifies the number of columns that the provider detects when dynamically determining table columns. |
| RowScanDepth | Specifies the number of rows that the provider scans when dynamically determining table columns. |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC .
string
""
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Specifies whether the provider aggregates all files with the same schema in the specified folder into a single table called AggregatedFiles .
bool
false
When this property is set to true, the Cloud combines all files in the specified folder that share a common schema. The first file defines the schema unless MetadataDiscoveryURI is specified to use another file.
For example, the following two CSV files share the same column structure:
File 1
ItemID,Name,NumInStock
1,Peanuts - Salted,76
2,Peanuts - Unsalted,43
3,Raisins,26
File 2
ItemID,Name,NumInStock
4,Pretzels - Original,55
5,Pretzels - Chocolate,35
6,Toffee,44
The Cloud aggregates the files into a single result set. Only the columns present in the defined schema are included in the aggregate.
AggregatedFiles
ItemID,Name,NumInStock
1,Peanuts - Salted,76
2,Peanuts - Unsalted,43
3,Raisins,26
4,Pretzels - Original,55
5,Pretzels - Chocolate,35
6,Toffee,44
This property is useful for unifying data from multiple files with identical formats into a single result set.
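The aggregation behavior can be sketched with the standard csv module (Python; an illustrative approximation, not the driver's implementation):

```python
import csv
import io

def aggregate_files(files):
    """Combine same-schema CSV texts: the first file defines the columns;
    rows from every file are appended, keeping only the schema's columns."""
    header, rows = None, []
    for text in files:
        reader = csv.DictReader(io.StringIO(text))
        if header is None:
            header = reader.fieldnames  # first file defines the schema
        for rec in reader:
            rows.append([rec.get(col, "") for col in header])
    return header, rows
```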
Specifies the file that the provider uses to determine the schema when aggregating multiple files into a single result set.
string
""
This property applies when AggregateFiles is set to true. It defines which file the Cloud reads to discover column names and data types for the aggregated table.
This property is useful when one file serves as the reference for schema discovery across multiple input files.
Specifies how the provider determines column data types when reading text files.
string
"RowScan"
This property controls the method that the Cloud uses to detect column structures and data types.
Available options:
None: All columns are returned as string values.
RowScan: The Cloud scans a sample of rows to infer data types. The RowScanDepth property determines how many rows are scanned.
ColumnCount: The Cloud determines the number of columns to include based on the ColumnCount property, returning all values as strings.
This property is useful for adjusting schema detection based on file structure or performance needs.
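A simplified sketch of RowScan-style detection, limited to integer-vs-string for brevity (Python; illustrative only, the driver detects a wider range of types):

```python
def infer_types(rows, depth=100):
    """Scan up to `depth` rows and call a column 'integer' only if every
    sampled non-empty value parses as one; otherwise 'string'.
    A depth of 0 scans every row, mirroring RowScanDepth=0."""
    sample = rows if depth == 0 else rows[:depth]
    ncols = len(sample[0]) if sample else 0
    types = []
    for i in range(ncols):
        values = [r[i] for r in sample]
        if all(v.lstrip("-").isdigit() for v in values if v != ""):
            types.append("integer")
        else:
            types.append("string")
    return types
```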
Specifies the number of columns that the provider detects when dynamically determining table columns.
string
"10"
This property applies when TypeDetectionScheme is set to ColumnCount.
The Cloud uses this value to determine how many columns to generate when a schema definition file is not available, such as when using GenerateSchemaFiles.
This property is useful for controlling schema inference when the number of columns is known in advance but no schema definition exists.
Specifies the number of rows that the provider scans when dynamically determining table columns.
string
"100"
This property applies when TypeDetectionScheme is set to RowScan. The Cloud scans the specified number of rows to infer column names and data types when a schema definition file is not available, such as when using GenerateSchemaFiles.
Higher values increase detection accuracy, but may lengthen processing time. Setting this property to 0 instructs the Cloud to scan the entire file.
This property is useful for improving schema accuracy when files contain variable data patterns across rows.
This section provides a complete list of the Data Formatting properties you can configure in the connection string for this provider.
| Property | Description |
| IncludeColumnHeaders | Specifies whether the provider derives column names from the first row of each file. |
| FMT | Specifies the file format that the provider uses to parse all text files. |
| ExtendedProperties | Specifies Microsoft Jet OLE DB 4.0-compatible extended properties that define the format of local text files. |
| RowDelimiter | Specifies the character or sequence of characters that the provider uses to detect the end of a row in a text file. |
| SkipTop | Specifies the number of rows that the provider skips from the top of the file before reading data. |
| IgnoreBlankRows | Specifies whether the provider skips blank rows when reading data from text files. |
| IncludeEmptyHeaders | Specifies whether the provider includes columns with empty header values when reading files that contain column headers. |
| SkipHeaderComments | Specifies whether the provider skips comment rows at the top of a file. |
| Charset | Specifies the character set that the provider uses to encode and decode text data when reading from or writing to files. |
| QuoteEscapeCharacter | Determines the character which will be used to escape quotes. |
| QuoteCharacter | Determines the character which will be used to quote values in CSV file. |
| TrimQuotedValues | Specifies whether the provider trims spaces inside quoted values when applying the TrimSpaces property. |
| TrimSpaces | Specifies how the provider handles leading and trailing spaces in cell values. |
| PushEmptyValuesAsNull | Specifies whether the provider converts empty values to null when reading data. |
| NullValues | A comma-separated list of values that are replaced with nulls when found in the CSV file. |
| PathSeparator | Specifies the character that the provider uses to replace file path separators when generating table names. |
| IgnoreIncompleteRows | Specifies how the provider handles rows that do not match the expected structure based on the column headers. |
| MaxCellLength | Specifies the maximum number of characters that a cell can contain before its value is truncated. |
| DateTimeFormat | This setting specifies in which format the datetime values will be written to for CSV files. |
Specifies whether the provider derives column names from the first row of each file.
bool
true
When this property is set to true, the Cloud reads column names from the first row of each file.
When set to false, the Cloud assigns generic column names based on column numbers, unless a Schema.ini file defines explicit column names.
As with Microsoft Jet OLE DB 4.0, this property can also be specified in ExtendedProperties. The IncludeColumnHeaders value specified in ExtendedProperties overrides this property.
The following connection string parses .csv and .log files as CSV without headers:
DataSource=C:\mycsvlogs;IncludeColumnHeaders=False;Include Files='CSV,LOG'
This property is useful for defining whether column names should be inferred from file headers or automatically generated.
Specifies the file format that the provider uses to parse all text files.
string
"CsvDelimited"
When this property is set, the Cloud parses all text files in the target folder according to the specified format. The format can also be defined in ExtendedProperties using Microsoft Jet OLE DB 4.0-style syntax. The format defined in ExtendedProperties overrides the value set in this property, and any Format entry in a Schema.ini file overrides both.
The FMT property supports the following values:
The following example parses all text files in a folder as tab-delimited values with headers:
URI=C:\mytsv;FMT=TabDelimited
If the property is set to any other value, the Cloud treats the literal input as the delimiter. For example:
URI=C:\mypipdelimitedfile;FMT=||
Hexadecimal delimiters are also supported. Any value starting with '0x' (for example, FMT=0x01) is treated as a hexadecimal delimiter rather than a string literal.
Hexadecimal delimiters do not support escape sequences.
This property is useful for defining how the Cloud interprets text data when reading delimited or fixed-width files.
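The FMT resolution rules above (named formats, literal delimiters, and `0x`-prefixed hexadecimal values) can be sketched as follows; the function name is invented and the named-format map is limited to the values shown in the examples:

```python
def resolve_delimiter(fmt):
    """Sketch of FMT interpretation: named formats map to standard
    delimiters; values starting with '0x' are hexadecimal character
    codes; anything else is treated as a literal delimiter."""
    named = {"CsvDelimited": ",", "TabDelimited": "\t"}
    if fmt in named:
        return named[fmt]
    if fmt.startswith("0x"):
        return chr(int(fmt, 16))  # e.g. '0x01' -> control character \x01
    return fmt  # literal delimiter, e.g. '||'

print(resolve_delimiter("TabDelimited") == "\t")  # True
print(resolve_delimiter("0x01") == "\x01")        # True
print(resolve_delimiter("||"))                    # ||
```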
Specifies Microsoft Jet OLE DB 4.0-compatible extended properties that define the format of local text files.
string
""
This property allows you to specify the text file format using Microsoft Jet OLE DB 4.0-style extended properties. When processing local files, any format defined in a Schema.ini file overrides this setting. Likewise, IncludeColumnHeaders and FMT are overridden by ExtendedProperties.
The following example parses all text files in the target folder as tab-delimited values with headers:
ExtendedProperties='text;FMT=TabDelimited'
The next example parses .csv and .log files as CSV without headers:
ExtendedProperties='text;IncludeColumnHeaders=False';Include Files='CSV,LOG'
This property is useful for maintaining compatibility with Microsoft Jet OLE DB 4.0 configurations or when importing files that follow a specific legacy text format.
Specifies the character or sequence of characters that the provider uses to detect the end of a row in a text file.
string
""
You do not need to set this property if the file already uses standard newline delimiters such as \r, \n, \r\n, or \n\r.
This property supports hexadecimal delimiters. The Cloud treats any value starting with 0x (for example, "0x01") as a hexadecimal rather than a string literal delimiter.
Hexadecimal delimiters do not support escape sequences.
This property is useful for defining custom row boundaries when working with non-standard text file formats.
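The row-boundary rules above can be sketched as follows (an illustrative function, not the provider's code): standard newlines are handled when no delimiter is set, and `0x`-prefixed values are decoded as hexadecimal character codes.

```python
def split_rows(text, row_delimiter=None):
    """Sketch: with no row delimiter set, standard newlines end a row;
    otherwise rows are split on the configured character sequence."""
    if not row_delimiter:
        return text.splitlines()
    if row_delimiter.startswith("0x"):
        row_delimiter = chr(int(row_delimiter, 16))
    return text.split(row_delimiter)

print(split_rows("a,b\r\nc,d"))          # ['a,b', 'c,d']
print(split_rows("a,b|c,d", "|"))        # ['a,b', 'c,d']
print(split_rows("a,b\x01c,d", "0x01"))  # ['a,b', 'c,d']
```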
Specifies the number of rows that the provider skips from the top of the file before reading data.
int
0
When this property is set to a positive integer, the Cloud skips that number of rows at the beginning of the file and starts reading data afterward. When set to 0, no rows are skipped.
This property is useful for ignoring header or metadata rows that appear before the actual data in a file.
Specifies whether the provider skips blank rows when reading data from text files.
bool
false
When this property is set to true, the Cloud ignores blank or empty rows while reading data. When set to false, blank rows are included as empty records in the result set.
This property is useful for preventing empty lines in a file from being interpreted as records during import.
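Taken together, the two row-skipping settings above (skip N leading rows, then optionally drop blank rows) might behave as in this sketch; the function name is an assumption:

```python
def preprocess(lines, skip_header_rows=0, skip_blank_rows=False):
    """Sketch: drop the first N physical rows, then optionally
    drop rows that are blank or contain only whitespace."""
    rows = lines[skip_header_rows:]
    if skip_blank_rows:
        rows = [r for r in rows if r.strip() != ""]
    return rows

lines = ["exported 2024-01-01", "id,name", "1,Ann", "", "2,Bob"]
print(preprocess(lines, skip_header_rows=1, skip_blank_rows=True))
# ['id,name', '1,Ann', '2,Bob']
```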
Specifies whether the provider includes columns with empty header values when reading files that contain column headers.
bool
false
This property applies when IncludeColumnHeaders is set to true.
When this property is set to true, the Cloud assigns generic names based on column numbers to any columns that have empty header values.
When set to false, the Cloud excludes columns that do not have a header value.
This property is useful for maintaining consistent column positions when some files contain missing or blank header names.
Specifies whether the provider skips comment rows at the top of a file.
bool
false
When this property is set to true, the Cloud skips all rows that begin with the # character until it encounters a row that does not.
When set to false, comment rows are included in the dataset as regular rows.
This property is useful for ignoring commented header sections in files that include descriptive text or metadata at the beginning.
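The leading-comment-skipping rule above (skip `#` rows only until the first non-comment row) can be sketched with `itertools.dropwhile`; this is an illustration of the described semantics, not the provider's code:

```python
from itertools import dropwhile

def skip_comment_rows(lines):
    """Sketch: drop leading rows beginning with '#' until the first
    row that does not; later '#' rows are kept as data."""
    return list(dropwhile(lambda row: row.startswith("#"), lines))

lines = ["# export notes", "# generated nightly", "id,name", "1,Ann", "# kept"]
print(skip_comment_rows(lines))
# ['id,name', '1,Ann', '# kept']  -- only the leading block is dropped
```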
Specifies the character set that the provider uses to encode and decode text data when reading from or writing to files.
string
"UTF-8"
This property defines the character encoding used for all text operations. The default value is UTF-8, which supports most international characters.
Use this property to ensure consistent encoding when working with files created on systems that use a different default charset.
Specifies the character used to escape quotes.
string
""
Specifies the character used to escape quote characters that appear inside quoted values.
Specifies the character used to quote values in a CSV file.
string
""
Specifies the character used to quote values in a CSV file.
Note: This property applies only to CSV files. Set this property to "NONE" to insert fields into a CSV file without quoting them.
Specifies whether the provider trims spaces inside quoted values when applying the TrimSpaces property.
bool
false
When this property is set to true, the Cloud trims leading and trailing spaces in both quoted and unquoted cell values. When set to false, only unquoted cell values are affected by the TrimSpaces property.
This property is useful for ensuring consistent whitespace handling in files that include quoted values.
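The interaction between the two trimming settings above can be sketched per cell as follows (an illustrative function; the quote-detection logic is a simplification that assumes a `"` quote character):

```python
def trim_cell(raw, trim_spaces=True, trim_quoted_values=False):
    """Sketch: unquoted cells are trimmed whenever TrimSpaces applies;
    quoted cells are trimmed only when TrimQuotedValues is also true."""
    quoted = raw.startswith('"') and raw.endswith('"') and len(raw) >= 2
    value = raw[1:-1] if quoted else raw
    if trim_spaces and (not quoted or trim_quoted_values):
        value = value.strip()
    return value

print(trim_cell("  plain  "))       # 'plain'
print(trim_cell('"  quoted  "'))    # '  quoted  '  (spaces kept)
print(trim_cell('"  quoted  "', trim_quoted_values=True))  # 'quoted'
```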
Specifies how the provider handles leading and trailing spaces in cell values.
string
"FALSE"
This property controls whether spaces at the beginning and end of cell values are removed or retained when reading data. It applies to all cell values unless limited by the TrimQuotedValues property.
Possible values include:
This property is useful for normalizing inconsistent spacing in text files during import.
Specifies whether the provider converts empty values to null when reading data.
bool
false
When this property is set to true, the Cloud treats empty values as null. When set to false, empty values are returned as empty strings.
This property is useful for normalizing blank fields in text files to null values during import.
A comma-separated list of values that are replaced with nulls when found in the CSV file.
string
""
When this property is set, any cell containing one of the specified values is interpreted as a null value.
For example, setting NullValues to "NaN,\N,N/A" causes all occurrences of these strings to be returned as null.
This property is useful for normalizing placeholder text values into nulls during import.
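Combining NullValues with the PushEmptyValuesAsNull setting described earlier, per-cell null normalization might look like this sketch (function name and parameter names are assumptions):

```python
def normalize_nulls(cell, null_values="NaN,\\N,N/A", push_empty_as_null=False):
    """Sketch: cells matching a listed placeholder string (and,
    optionally, empty cells) are normalized to None."""
    placeholders = set(null_values.split(",")) if null_values else set()
    if cell in placeholders or (push_empty_as_null and cell == ""):
        return None
    return cell

print(normalize_nulls("NaN"))                         # None
print(normalize_nulls("N/A"))                         # None
print(normalize_nulls("", push_empty_as_null=True))   # None
print(normalize_nulls("42"))                          # 42
```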
Specifies the character that the provider uses to replace file path separators when generating table names.
string
"_"
When this property is set, the Cloud replaces any directory separators in file paths with the specified character when naming tables.
For example, if a file is located at Test/CSVFiles/Test.csv and this property is set to _, the resulting table name is Test_CSVFiles_Test.csv.
This property is useful for creating valid table names when working with files organized in nested folders.
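The table-naming rule above can be sketched in one line; this mirrors the Test/CSVFiles/Test.csv example from the description (the function name is invented):

```python
def table_name(path, path_separator="_"):
    """Sketch: directory separators in the file path are replaced
    with the configured character to form the table name."""
    return path.replace("/", path_separator).replace("\\", path_separator)

print(table_name("Test/CSVFiles/Test.csv"))  # Test_CSVFiles_Test.csv
```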
Specifies how the provider handles rows that do not match the expected structure based on the column headers.
string
"FALSE"
This property applies when IncludeColumnHeaders is set to true. It determines whether and how the Cloud ignores rows that have missing or extra cells compared to the header row.
When this property is set to true, the Cloud ignores any row that does not match the expected number of columns. When set to false, all rows are included, even if incomplete.
You can also use the following modes for finer control:
This property is useful for controlling how the Cloud processes irregular or malformed rows in text files during import.
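The basic true/false behavior described above (the finer-grained modes are not shown here) can be sketched as follows; the function name is illustrative:

```python
def filter_rows(header, rows, ignore_incomplete=True):
    """Sketch: when enabled, rows whose cell count differs from the
    header row are dropped; otherwise all rows pass through."""
    if not ignore_incomplete:
        return rows
    return [row for row in rows if len(row) == len(header)]

header = ["id", "name"]
rows = [["1", "Ann"], ["2"], ["3", "Bob", "extra"]]
print(filter_rows(header, rows))         # [['1', 'Ann']]
print(filter_rows(header, rows, False))  # all rows kept
```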
Specifies the maximum number of characters that a cell can contain before its value is truncated.
int
-1
When this property is set to a positive integer, the Cloud truncates any cell value that exceeds the specified number of characters.
When set to -1, there is no limit on cell length.
This property is useful for preventing excessively long text values from impacting performance or memory usage when reading large files.
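The truncation rule above reduces to a simple slice, sketched here with an invented function name:

```python
def truncate_cell(value, max_cell_length=-1):
    """Sketch: values longer than the limit are truncated;
    -1 disables the limit entirely."""
    if max_cell_length == -1:
        return value
    return value[:max_cell_length]

print(truncate_cell("abcdefgh", 4))  # abcd
print(truncate_cell("abcdefgh"))     # abcdefgh (no limit)
```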
Specifies the format in which datetime values are written to CSV files.
string
""
The format should follow a specified pattern:
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| AWSCertificate | The absolute path to the certificate file or the certificate content in PEM format encoded in base64. |
| AWSCertificatePassword | The password for the certificate if applicable, otherwise leave blank. |
| AWSCertificateType | The type of AWSCertificate. |
| AWSPrivateKey | The absolute path to the private key file or the private key content in PEM format encoded in base64. |
| AWSPrivateKeyPassword | The password for the private key if it is encrypted, otherwise leave blank. |
| AWSPrivateKeyType | The type of AWSPrivateKey. |
| AWSProfileARN | Profile to pull policies from. |
| AWSSessionDuration | Duration, in seconds, for the resulting session. |
| AWSTrustAnchorARN | Trust anchor to use for authentication. |
| BatchNamingConvention | Specifies the naming convention that the provider uses for batch files. |
| ClientCulture | This property can be used to specify the format of data (e.g., currency values) that is accepted by the client application. This property can be used when the client application does not support the machine's culture settings. For example, Microsoft Access requires 'en-US'. |
| CreateBatchFolder | Specifies whether the provider creates a folder for storing batch files when InsertMode is set to FilePerBatch. |
| Culture | This setting can be used to specify culture settings that determine how the provider interprets certain data types that are passed into the provider. For example, setting Culture='de-DE' will output German formats even on an American machine. |
| CustomHeaders | Specifies additional HTTP headers to append to the request headers created from other properties, such as ContentType and From. Use this property to customize requests for specialized or nonstandard APIs. |
| CustomURLParams | A string of custom URL parameters to be included with the HTTP request, in the form field1=value1&field2=value2&field3=value3. |
| DirectoryRetrievalDepth | Limit the subfolders recursively scanned when IncludeSubdirectories is enabled. |
| ExcludeFileExtensions | Specifies whether the provider excludes file extensions from table names. |
| ExcludeFiles | Comma-separated list of file extensions to exclude from the set of the files modeled as tables. |
| ExcludeStorageClasses | A comma-separated list of storage classes to ignore. |
| FolderId | The ID of a folder in Google Drive. If set, the resource location specified by the URI is relative to the Folder ID for all operations. |
| IncludeDropboxTeamResources | Indicates if you want to include Dropbox team files and folders. |
| IncludeFiles | Comma-separated list of file extensions to include into the set of the files modeled as tables. |
| IncludeItemsFromAllDrives | Whether Google Drive shared drive items should be included in results. If not present or set to false, then shared drive items are not returned. |
| IncludeSubdirectories | Whether to read files from nested folders. In the case of a name collision, table names are prefixed by the underscore-separated folder names. |
| InsertMode | Specifies the mode for inserting data into CSV files. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Pagesize | Specifies the maximum number of records per page the provider returns when requesting data from CSV. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| ThrowsKeyNotFound | Specifies whether the provider throws an exception if no rows are updated. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. |
| TruncateOnInserts | Specifies whether the provider truncates the target table before performing each batch insert operation. |
| UseRowNumbers | Specifies whether the provider generates a RowNumber column to identify records when no custom schema is defined. |
The absolute path to the certificate file or the certificate content in PEM format encoded in base64.
string
""
The absolute path to the certificate file or the certificate file content in PEM format encoded in base64, depending on the value of AWSCertificateType.
The password for the certificate if applicable, otherwise leave blank.
string
""
The password for the certificate if applicable, otherwise leave blank.
The type of AWSCertificate.
string
"PEM_BLOB"
This property can take one of the following values:
| PEM_FILE | Absolute path to a certificate file in PEM format. |
| PEM_BLOB | A string (base64-encoded) representing a PEM-encoded certificate. |
The absolute path to the private key file or the private key content in PEM format encoded in base64.
string
""
The absolute path to the private key file or the private key file content in PEM format encoded in base64, depending on the value of AWSPrivateKeyType.
The password for the private key if it is encrypted, otherwise leave blank.
string
""
The password for the private key if it is encrypted, otherwise leave blank.
The type of AWSPrivateKey.
string
"PEM_BLOB"
This property can take one of the following values:
| PEM_FILE | Absolute path to a private key file in PEM format. |
| PEM_BLOB | A string (base64-encoded) representing a PEM-encoded private key. |
Profile to pull policies from.
string
""
Profile to pull policies from.
Duration, in seconds, for the resulting session.
int
3600
Duration, in seconds, for the resulting session. Default: 3600 seconds.
Trust anchor to use for authentication.
string
""
Trust anchor to use for authentication.
Specifies the naming convention that the provider uses for batch files.
string
"Timestamp_BatchNumber"
This property determines how the Cloud names each batch file when InsertMode is set to FilePerBatch.
This property is useful for controlling file naming consistency when generating multiple batch output files.
This property can be used to specify the format of data (e.g., currency values) that is accepted by the client application. This property can be used when the client application does not support the machine's culture settings. For example, Microsoft Access requires 'en-US'.
string
""
This option affects the format of Cloud output. To specify the format that defines how input should be interpreted, use the Culture option. By default the Cloud uses the current locale settings of the machine to interpret input and format output.
Specifies whether the provider creates a folder for storing batch files when InsertMode is set to FilePerBatch.
bool
true
When this property is set to true, the Cloud automatically creates a new folder to store the generated batch files.
When set to false, the files are written directly to the target directory.
This property is useful for organizing batch output files and preventing naming conflicts when inserting data in FilePerBatch mode.
This setting can be used to specify culture settings that determine how the provider interprets certain data types that are passed into the provider. For example, setting Culture='de-DE' will output German formats even on an American machine.
string
""
This property affects how the Cloud interprets input. To interpret values in a different cultural format, use the ClientCulture property. By default, the Cloud uses the current locale settings of the machine to interpret input and format output.
Specifies additional HTTP headers to append to the request headers created from other properties, such as ContentType and From. Use this property to customize requests for specialized or nonstandard APIs.
string
""
Use this property to add custom headers to HTTP requests sent by the Cloud.
This property is useful when fine-tuning requests to interact with APIs that require additional or nonstandard headers. Headers must follow the format "header: value" as described in the HTTP specifications and each header line must be separated by the carriage return and line feed (CRLF) characters. Important: Use caution when setting this property. Supplying invalid headers may cause HTTP requests to fail.
A string of custom URL parameters to be included with the HTTP request, in the form field1=value1&field2=value2&field3=value3.
string
""
This property enables you to specify custom query string parameters that are included with the HTTP request. The parameters must be encoded as a query string in the form field1=value1&field2=value2&field3=value3, where each value is URL encoded. URL encoding converts characters in the string into a format that can be transmitted over the internet, as follows:
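Building a query string of this form, with each value URL-encoded, can be sketched with the standard library's `urllib.parse.urlencode` (the wrapper function name is an assumption):

```python
from urllib.parse import urlencode

def build_custom_url_params(params):
    """Sketch: encode a dict into the field1=value1&field2=value2
    form, URL-encoding each value (spaces become '+', '&' becomes
    '%26', and so on)."""
    return urlencode(params)

query = build_custom_url_params({"field1": "value 1", "field2": "a&b"})
print(query)  # field1=value+1&field2=a%26b
```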
Limit the subfolders recursively scanned when IncludeSubdirectories is enabled.
string
"-1"
When IncludeSubdirectories is enabled, DirectoryRetrievalDepth specifies how many subfolders will be recursively scanned before stopping. -1 specifies that all subfolders are scanned.
Specifies whether the provider excludes file extensions from table names.
bool
false
When this property is set to true, the Cloud removes file extensions from table names. For example, a file named users.csv appears as users.
When set to false, the Cloud includes the full file name, including its extension, in the table name.
Comma-separated list of file extensions to exclude from the set of the files modeled as tables.
string
""
It is also possible to specify datetime filters. We currently support CreatedDate and ModifiedDate. All extension filters are evaluated in disjunction (using the OR operator), and then the resulting filter is evaluated in conjunction (using the AND operator) with the datetime filters.
Examples:
ExcludeFiles="TXT,CreatedDate<='2020-11-26T07:39:34-05:00'"
ExcludeFiles="TXT,ModifiedDate<=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 50, 000)"
ExcludeFiles="ModifiedDate>=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 49, 000),ModifiedDate<=CURRENT_TIMESTAMP()"
A comma-separated list of storage classes to ignore.
string
""
This can be used to refine the type of files to be retrieved from Amazon S3. For example, setting this property to GLACIER ignores all files of storage class GLACIER. Possible values are:
The ID of a folder in Google Drive. If set, the resource location specified by the URI is relative to the Folder ID for all operations.
string
""
The ID of a folder in Google Drive. If set, the resource location specified by the URI is relative to the Folder ID for all operations.
Indicates if you want to include Dropbox team files and folders.
bool
false
To access Dropbox team folders and files, set this connection property to true.
Comma-separated list of file extensions to include into the set of the files modeled as tables.
string
"CSV,TXT,TAB"
Comma-separated list of file extensions to include into the set of the files modeled as tables. For example, IncludeFiles=TXT,TAB. The default is CSV, TAB, and TXT.
A 'NOEXT' value can be specified to include files without an extension.
The following archive types are also supported (only when AggregateFiles is true): ZIP, TAR, and GZ. Files of these types are modeled as an aggregated table. You can use DirectoryRetrievalDepth and IncludeSubdirectories to refine the subset of files in the archive that are included in the aggregate table.
When archive files are found, they will be downloaded to the local machine so the Cloud can extract and parse the contained files. Note: Files contained within an archive must match an extension listed in IncludeFiles to be included in the set of files modeled as tables.
File masks can be specified using an asterisk (*) to provide enhanced filtering capabilities; e.g. IncludeFiles=2020*.csv,TXT.
Files specified in Schema.ini are honored in addition to the files included by this property.
It is also possible to specify datetime filters. We currently support CreatedDate and ModifiedDate. All extension filters are evaluated in disjunction (using the OR operator), and then the resulting filter is evaluated in conjunction (using the AND operator) with the datetime filters.
Examples:
IncludeFiles="TXT,CreatedDate<='2020-11-26T07:39:34-05:00'"
IncludeFiles="TXT,ModifiedDate<=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 50, 000)"
IncludeFiles="ModifiedDate>=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 49, 000),ModifiedDate<=CURRENT_TIMESTAMP()"
Whether Google Drive shared drive items should be included in results. If not present or set to false, then shared drive items are not returned.
bool
false
If this property is set to 'True', files will be retrieved from all drives, including shared drives. The file retrieval can be limited to a specific shared drive or a specific folder in that shared drive by setting the start of the URI to the path of the shared drive and optionally any folder within, for example: 'gdrive://SharedDriveA/FolderA/...'. Additionally, the FolderId property can be used to limit the search to an exact subdirectory.
Whether to read files from nested folders. In the case of a name collision, table names are prefixed by the underscore-separated folder names.
bool
false
Whether to read files from nested folders. When accessing local CSV, the Cloud honors Schema.ini defined in subfolders. Table names are prefixed by each nested folder name separated by underscores only in the case of a table name conflict. For example,
| File Path | Table Name |
| Root\subfolder1\tableA | subfolder1_tableA |
| Root\subfolder1\subfolder2\tableA | subfolder1_subfolder2_tableA |
Archive files (ZIP, GZ, TAR) are also supported and treated like folders.
Specifies the mode for inserting data into CSV files.
string
"SingleFile"
Two modes are available for inserting data into CSV files:
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
int
-1
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
Specifies the maximum number of records per page the provider returns when requesting data from CSV.
int
5000
When processing a query, instead of requesting all of the queried data at once from CSV, the Cloud can request the queried data in pieces called pages.
This connection property determines the maximum number of results that the Cloud requests per page.
Note: Setting large page sizes may improve overall query execution time, but doing so causes the Cloud to use more memory when executing queries and risks triggering a timeout.
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
string
""
This property allows you to define which pseudocolumns the Cloud exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
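Parsing the 'TableName=ColumnName;TableName=ColumnName' format into a per-table mapping can be sketched as follows (an illustration of the documented syntax; the function name is invented):

```python
def parse_pseudo_columns(spec):
    """Sketch: parse the 'Table=Column;Table=Column' format into a
    mapping from table name to the list of exposed pseudocolumns."""
    mapping = {}
    for pair in filter(None, spec.split(";")):
        table, column = pair.split("=")
        mapping.setdefault(table, []).append(column)
    return mapping

print(parse_pseudo_columns("Table1=Column1;Table1=Column2;Table2=Column3"))
# {'Table1': ['Column1', 'Column2'], 'Table2': ['Column3']}
print(parse_pseudo_columns("*=*"))  # {'*': ['*']}  -- all pseudocolumns
```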
Specifies whether the provider throws an exception if no rows are updated.
bool
false
Specifies whether the provider throws an exception if no rows are updated.
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error.
int
60
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond 60 seconds if each paging call completes within the timeout limit.
Timeout is set to 60 seconds by default. To disable timeouts, set this property to 0.
Disabling the timeout allows operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server.
Note: Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
Specifies whether the provider truncates the target table before performing each batch insert operation.
bool
false
When this property is set to true, the Cloud removes all existing data from the target table before executing each batch insert. When set to false, new rows are appended without truncating existing data.
This property is useful for replacing table data entirely during batch insert operations.
Specifies whether the provider generates a RowNumber column to identify records when no custom schema is defined.
bool
false
When this property is set to true, the Cloud creates a new column named RowNumber and uses it as the key for update and delete operations. When set to false, no row number column is created, and a custom schema must define a key column for modification operations.
This property is useful for performing update or delete operations on CSV files that do not include a natural primary key.
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.