The CData Sync App provides a straightforward way to continuously pipeline your CSV data to any database, data lake, or data warehouse, making it easily available for Analytics, Reporting, AI, and Machine Learning.
The CSV connector can be used from the CData Sync application to pull data from CSV and move it to any of the supported destinations.
The CData Sync App is designed for streaming CSV only.
This streamed file content does not include all of the metadata associated with remotely stored CSV files, such as file and folder name.
If access to both the file metadata and the actual file content is needed, then the CData Sync App must be used in tandem with the associated file system driver(s) for the service the CSV files are remotely stored in.
The following file system drivers are available:
See the relevant CData file system driver's documentation for a configuration guide for connecting to stored CSV file metadata.
Create a connection to CSV by navigating to the Connections page in the Sync App application and selecting the corresponding icon in the Add Connections panel. If the CSV icon is not available, click the Add More icon to download and install the CSV connector from the CData site.
Required properties are listed under the Settings tab. The Advanced tab lists connection properties that are not typically required.
The CData Sync App allows connecting to local and remote CSV resources. Set the URI property to the CSV resource location, in addition to any other properties necessary to connect to your data source.
Set the ConnectionType to Local. Local files support SELECT/INSERT/UPDATE/DELETE.
Set the URI to a folder containing CSV files: C:\folder1.
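For example, combining the two settings above (the folder path is a placeholder):
ConnectionType=Local; URI=C:\folder1;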
You can also connect to multiple CSV files which share the same schema. Below is an example connection string:
URI=C:\folder; AggregateFiles=True;
If you would prefer to expose all of the individual CSV files as tables instead, leave this property False.
URI=C:\folder; AggregateFiles=False;
If you need to INSERT, UPDATE, or DELETE CSV files stored in the cloud, you can download the file using the corresponding CData Sync App for that cloud host (supported via stored procedures), make the changes locally with the CSV Sync App, then upload the changed file using the cloud source's stored procedures.
As an example, if you wanted to update a file stored on SharePoint, you could use the CData SharePoint Sync App's DownloadDocument procedure to download the CSV file, update the local CSV file with the CData CSV Sync App, then use the SharePoint Sync App's UploadDocument procedure to upload the changed file to SharePoint.
A unique prefix at the beginning of the URI connection property is used to identify the cloud data store being targeted by the Sync App, and the remainder of the path is a relative path to the desired folder (one table per file) or single file (a single table).
Set the following to identify your CSV resources stored on Amazon S3:
See Connecting to Amazon S3 for more information regarding how to connect and authenticate to CSV files hosted on Amazon S3.
Set the following to identify your CSV resources stored on Azure Blob Storage:
See Connecting to Azure Blob Storage for more information regarding how to connect and authenticate to CSV files hosted on Azure Blob Storage.
Set the following to identify your CSV resources stored on Azure Data Lake Storage:
See Connecting to Azure Data Lake Storage for more information regarding how to connect and authenticate to CSV files hosted on Azure Data Lake Storage.
Set the following properties to connect:
You can authenticate with either an Azure access key or an Azure shared access signature. Set one of the following:
Set the following to identify your CSV resources stored on Box:
See Connecting to Box for more information regarding how to connect and authenticate to CSV files hosted on Box.
Set the following to identify your CSV resources stored on Dropbox:
See Connecting to Dropbox for more information regarding how to connect and authenticate to CSV files hosted on Dropbox.
The Sync App supports both plaintext and SSL/TLS connections to FTP servers.
Set the following connection properties to connect:
Set the following to identify your CSV resources stored on Google Cloud Storage:
See Connecting to Google Cloud Storage for more information regarding how to connect and authenticate to CSV files hosted on Google Cloud Storage.
Set the following to identify your CSV resources stored on Google Drive:
See Connecting to Google Drive for more information regarding how to connect and authenticate to CSV files hosted on Google Drive.
Set the following to identify your CSV resources stored on HDFS:
There are two authentication methods available for connecting to an HDFS data source: Anonymous Authentication and Negotiate (Kerberos) Authentication.
Anonymous Authentication
In some situations, you can connect to HDFS without any authentication connection properties. To do so, set the AuthScheme property to None (default).
Authenticate using Kerberos
When authentication credentials are required, you can use Kerberos for authentication. See Using Kerberos for details on how to authenticate with Kerberos.
Set the following to identify your CSV resources stored on HTTP streams:
See Connecting to HTTP Streams for more information regarding how to connect and authenticate to CSV files hosted on HTTP Streams.
Set the following to identify your CSV resources stored on IBM Cloud Object Storage:
See Connecting to IBM Object Storage for more information regarding how to connect and authenticate to CSV files hosted on IBM Cloud Object Storage.
Set the following to identify your CSV resources stored on OneDrive:
See Connecting to OneDrive for more information regarding how to connect and authenticate to CSV files hosted on OneDrive.
Set the following properties to authenticate with HMAC:
Set the following to identify your CSV resources stored on SFTP:
See Connecting to SFTP for more information regarding how to connect and authenticate to CSV files hosted on SFTP.
Set the following to identify your CSV resources stored on SharePoint Online:
See Connecting to SharePoint Online for more information regarding how to connect and authenticate to CSV files hosted on SharePoint Online.
Set the URI to the HTTP or HTTPS URL of the CSV resource you want to access as a table. For example:
URI=http://www.host1.com/streamname1;
To authenticate, set AuthScheme and the corresponding properties. Specify additional headers in CustomHeaders. To modify the query string, set CustomUrlParams.
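For example, a hedged connection string for a password-protected stream (the URL and credentials are placeholders; Basic is one of the AuthScheme values discussed later in this chapter):
URI=https://www.host1.com/streamname1; AuthScheme=Basic; User=user1; Password=password1;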
To query the CSV stream, reference streamedtable as the table name.
SELECT * FROM streamedtable
You can also read from and write to system streams. Reference the stream from code with the ExtendedProperties connection property.
By default, the Sync App attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store. To specify another certificate, see the SSLServerCert property for the available formats to do so.
Below are example connection strings to CSV files or streams, using the Sync App's default data modeling configuration (see below).
Service provider | URI formats | Connection example |
Local | Single file path (one table) or directory path (one table per file): file://localPath | URI=C:/folder1; |
HTTP or HTTPS | http://remoteStream or https://remoteStream | URI=http://www.host1.com/streamname1; |
Amazon S3 | Single file path (one table) or directory path (one table per file): s3://remotePath | URI=s3://bucket1/folder1; AWSSecretKey=secret1; AWSRegion=OHIO; |
Azure Blob Storage | azureblob://mycontainer/myblob/ | URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AzureAccessKey=myKey; or URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AuthScheme=OAuth; |
Google Drive | Single file path (one table) or directory path (one table per file): gdrive://remotePath or gdrive://SharedWithMe/remotePath | URI=gdrive://folder1; AuthScheme=OAuth; or URI=gdrive://SharedWithMe/folder1; AuthScheme=OAuth; |
OneDrive | Single file path (one table) or directory path (one table per file): onedrive://remotePath or onedrive://SharedWithMe/remotePath | URI=onedrive://folder1; AuthScheme=OAuth; or URI=onedrive://SharedWithMe/folder1; AuthScheme=OAuth; |
Box | Single file path (one table) or directory path (one table per file): box://remotePath | URI=box://folder1; AuthScheme=OAuth; |
Dropbox | Single file path (one table) or directory path (one table per file): dropbox://remotePath | URI=dropbox://folder1; AuthScheme=OAuth; OAuthClientId=oauthclientid1; OAuthClientSecret=oauthclientsecret1; CallbackUrl=http://localhost:12345; |
SharePoint SOAP | Single file path (one table) or directory path (one table per file): sp://remotePath | URI=sp://Documents/folder1; User=user1; Password=password1; StorageBaseURL=https://subdomain.sharepoint.com; |
SharePoint REST | Single file path (one table) or directory path (one table per file): sprest://remotePath | URI=sprest://Documents/folder1; AuthScheme=OAuth; StorageBaseURL=https://subdomain.sharepoint.com; |
FTP or FTPS | Single file path (one table) or directory path (one table per file): ftp://server:port/remotePath or ftps://server:port/remotePath | URI=ftps://localhost:990/folder1; User=user1; Password=password1; |
SFTP | Single file path (one table) or directory path (one table per file): sftp://server:port/remotePath | URI=sftp://127.0.0.1:22/folder1; User=user1; Password=password1; or URI=sftp://127.0.0.1:22/folder1; SSHAuthmode=PublicKey; SSHClientCert=myPrivateKey; |
Azure Data Lake Store Gen1 | adl://remotePath or adl://Account.azuredatalakestore.net@remotePath | URI=adl://folder1; AuthScheme=OAuth; AzureStorageAccount=myAccount; AzureTenant=tenant; or URI=adl://myAccount.azuredatalakestore.net@folder1; AuthScheme=OAuth; AzureTenant=tenant; |
Azure Data Lake Store Gen2 | abfs://myfilesystem/remotePath or abfs://myfilesystem@myaccount.dfs.core.windows.net/remotePath | URI=abfs://myfilesystem/folder1; AzureStorageAccount=myAccount; AzureAccessKey=myKey; or URI=abfs://myfilesystem@myaccount.dfs.core.windows.net/folder1; AzureAccessKey=myKey; |
Azure Data Lake Store Gen2 with SSL | abfss://myfilesystem/remotePath or abfss://myfilesystem@myaccount.dfs.core.windows.net/remotePath | URI=abfss://myfilesystem/folder1; AzureStorageAccount=myAccount; AzureAccessKey=myKey; or URI=abfss://myfilesystem@myaccount.dfs.core.windows.net/folder1; AzureAccessKey=myKey; |
Wasabi | Single file path (one table) or directory path (one table per file): wasabi://bucket1/remotePath | URI=wasabi://bucket/folder1; AccessKey=token1; SecretKey=secret1; Region='us-west-1'; |
Google Cloud Storage | Single file path (one table) or directory path (one table per file): gs://bucket/remotePath | URI=gs://bucket/folder1; AuthScheme=OAuth; ProjectId=test; |
Oracle Cloud Storage | Single file path (one table) or directory path (one table per file): os://bucket/remotePath | URI=os://bucket/folder1; AccessKey='myKey'; SecretKey='mySecretKey'; OracleNameSpace='myNameSpace'; Region='us-west-1'; |
Azure File | Single file path (one table) or directory path (one table per file): azurefile://fileShare/remotePath | URI=azurefile://bucket/folder1; AzureStorageAccount='myAccount'; AzureAccessKey='mySecretKey'; or URI=azurefile://bucket/folder1; AzureStorageAccount='myAccount'; AzureSharedAccessSignature='mySharedAccessSignature'; |
IBM Object Storage Source | Single file path (one table) or directory path (one table per file): ibmobjectstorage://bucket1/remotePath | URI=ibmobjectstorage://bucket/folder1; AuthScheme='HMAC'; AccessKey=token1; SecretKey=secret1; Region='eu-gb'; or URI=ibmobjectstorage://bucket/folder1; ApiKey=key1; Region='eu-gb'; AuthScheme=OAuth; InitiateOAuth=GETANDREFRESH; |
Hadoop Distributed File System | Single file path (one table) or directory path (one table per file): webhdfs://host:port/remotePath | URI=webhdfs://host:port/folder1 |
Secure Hadoop Distributed File System | Single file path (one table) or directory path (one table per file): webhdfss://host:port/remotePath | URI=webhdfss://host:port/folder1 |
The following properties control how the Sync App automatically models CSV as tables when you connect:
When working with local CSV, you can also use Schema.ini files, compatible with the Microsoft Jet driver, to define columns and data types. See Using Schema.ini for a guide.
To customize column data types and other aspects of the schemas, you can save the schemas to static configuration files. The configuration files have a simple format that makes them easy to extend. For more information on extending the Sync App schemas, see Generating Schema Files.
Set the following properties to model subfolders as views:
When IncludeSubdirectories is set, the automatically detected table names follow the convention below:
File Path | Root\subfolder1\tableA | Root\subfolder1\subfolder2\tableA |
Table Name | subfolder1_tableA | subfolder1_subfolder2_tableA |
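For example, with IncludeSubdirectories set, a file at Root\subfolder1\tableA could be queried using the table name derived above:
SELECT * FROM subfolder1_tableA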
To obtain the credentials for an IAM user, follow the steps below:
To obtain the credentials for your AWS root account, follow the steps below:
Specify the following to connect to data:
There are several authentication methods available for connecting to CSV including:
To authenticate using account root credentials, set the following:
Note: Use of this authentication scheme is discouraged by Amazon for anything but simple tests. The account root credentials have the full permissions of the account, making this the least secure authentication method.
If you are using the Sync App from an EC2 Instance and have an IAM Role assigned to the instance, you can use the IAM Role to authenticate. To do so, set the following properties to authenticate:
If you are also using an IAM role to authenticate, you must additionally specify the following:
IMDSv2 Support
The CSV Sync App now supports IMDSv2. Unlike IMDSv1, the new version requires an authentication token. Endpoints and responses are the same in both versions. In IMDSv2, the CSV Sync App first attempts to retrieve the IMDSv2 metadata token and then uses it to call AWS metadata endpoints. If it is unable to retrieve the token, the Sync App reverts to IMDSv1.
In many situations it may be preferable to use an IAM role for authentication instead of the direct security credentials of an AWS root user.
To authenticate as an AWS role, set the following:
Note: Roles may not be used when specifying the AWSAccessKey and AWSSecretKey of an AWS root user.
Set the AuthScheme to ADFS. The following connection properties need to be set:
AuthScheme=ADFS;User=username;Password=password;SSOLoginURL='https://sts.company.com';
ADFS Integrated
To use the ADFS Integrated flow, specify the SSOLoginURL and leave the username and password empty.
Set the AuthScheme to Okta. The following connection properties are used to authenticate through Okta:
Depending on your Okta configuration, you may need to use combinations of SSOProperties input parameters to authenticate through Okta. Otherwise, you do not need to set any of these values.
In SSOProperties when required, set these input parameters:
Example connection string:
AuthScheme=Okta;SSOLoginURL='https://example.okta.com/home/appType/0bg4ivz6cJRZgCz5d6/46';User=oktaUserName;Password=oktaPassword;
Set the AuthScheme to PingFederate. The following connection properties need to be set:
authScheme=pingfederate;SSOLoginURL=https://mycustomserver.com:9033/idp/sts.wst;SSOExchangeUrl=https://us-east-1.signin.aws.amazon.com/platform/saml/acs/764ef411-xxxxxx;user=admin;password=PassValue;AWSPrincipalARN=arn:aws:iam::215338515180:saml-provider/pingFederate;AWSRoleArn=arn:aws:iam::215338515180:role/SSOTest2;
For users and roles that require Multi-factor Authentication, specify the following to authenticate:
Note that you can control the duration of the temporary credentials by setting the TemporaryTokenDuration property (default 3600 seconds).
To authenticate using temporary credentials, specify the following:
The Sync App can now request resources using the same permissions provided by long-term credentials (such as IAM user credentials) for the lifespan of the temporary credentials.
If you are also using an IAM role to authenticate, you must additionally specify the following:
You can use a credentials file to authenticate. Any configurations related to AccessKey/SecretKey authentication, temporary credentials, role authentication, or MFA can be used. To do so, set the following properties to authenticate:
To obtain the credentials for an AzureBlob user, follow the steps below:
Set the AzureAccessKey connection property to the access key associated with the Azure blob to identify the user.
You can authenticate to Azure Blob Storage as an Azure AD user, with MSI authentication, or using an Azure Service Principal.
You can authenticate an Azure AD account using either an Azure Access Key or OAuth authentication.
Method 1: Storage Account and Access Key
Set the following to authenticate with an Azure Access Key:
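For example, reusing the placeholder values from the connection example table above:
URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AzureAccessKey=myKey;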
Method 2: OAuth
Set the following to authenticate with OAuth:
If you are connecting from an Azure VM with permissions for Azure Blob storage, set the following:
It is also possible to authenticate the service principal with a client certificate instead of a client secret. Set the following to authenticate:
You can authenticate to Azure Data Lake Storage as an Azure AD user, with MSI authentication, or using an Azure Service Principal.
You can authenticate an Azure AD account using either an Azure Access Key or OAuth authentication.
Method 1: Storage Account and Access Key
Set the following to authenticate with an Azure Access Key:
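For example, with the placeholder values shown in the connection example table above:
URI=abfs://myfilesystem/folder1; AzureStorageAccount=myAccount; AzureAccessKey=myKey;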
Method 2: OAuth
Set the following to authenticate with OAuth:
If you are connecting from an Azure VM with permissions to connect to Azure Data Lake Storage, set the following:
It is also possible to authenticate the service principal with a client certificate instead of a client secret.
Use the OAuth authentication standard to connect to Box. You can authenticate with a user account or with a service account. A service account is required to grant organization-wide access scopes to the Sync App. The Sync App facilitates these authentication flows as described below.
AuthScheme must be set to OAuth in all user account flows.
When connecting via a Web application, you need to register a custom OAuth app with Box. You can then use the Sync App to get and manage the OAuth token values. See Create a Custom OAuth App for more information.
Get an OAuth Access Token
Set the following connection properties to obtain the OAuthAccessToken:
Then call stored procedures to complete the OAuth exchange:
Call the GetOAuthAuthorizationURL stored procedure. Set the CallbackURL input to the Redirect URI you specified in your app settings. The stored procedure returns the URL to the OAuth endpoint.
After you have obtained the access and refresh tokens, you can connect to data and refresh the OAuth access token either automatically or manually.
Automatic Refresh of the OAuth Access Token
To have the Sync App automatically refresh the OAuth access token, set the following on the first data connection.
Manual Refresh of the OAuth Access Token
The only value needed to manually refresh the OAuth access token when connecting to data is the OAuth refresh token. Use the RefreshOAuthAccessToken stored procedure to manually refresh the OAuthAccessToken after the ExpiresIn parameter value returned by GetOAuthAccessToken has elapsed, then set the following connection properties:
Then call RefreshOAuthAccessToken with OAuthRefreshToken set to the OAuth refresh token returned by GetOAuthAccessToken. After the new tokens have been retrieved, open a new connection by setting the OAuthAccessToken property to the value returned by RefreshOAuthAccessToken.
Finally, store the OAuth refresh token so that you can use it to manually refresh the OAuth access token after it has expired.
Set the AuthScheme to OAuthJWT to authenticate with this method.
Service accounts support silent authentication, which does not require user interaction in the browser. You can also use a service account to delegate enterprise-wide access scopes to the Sync App.
You need to create an OAuth application in this flow. See Create a Custom OAuth App to create and authorize an app. You can then connect to Box data that the service account has permission to access.
After setting the following connection properties, you are ready to connect:
You may choose to use your own OAuth Application Credentials when you want to
Follow the steps below to create an OAuth application and generate a private key. You will then authorize the service account.
Authorize the application in the enterprise admin console: Navigate to Apps > Custom Apps Manager > Add App. In the Add App modal window, provide the Client Id and click Next to identify and verify the app.
Note: If you change the JWT access scopes, you will need to reauthorize the application in the enterprise admin console: Click Apps in the main menu and then select the ellipsis button next to your JWT application name. Select Reauthorize App in the menu.
Dropbox uses the OAuth authentication standard.
You need to choose between using CData's embedded OAuth app and creating a custom OAuth app (see Create a Custom OAuth App).
When connecting via a Web application, you need to register a custom OAuth app with Dropbox. You can then use the Sync App to get and manage the OAuth token values. See Create a Custom OAuth App for more information.
Get an OAuth Access Token
Set the following connection properties to obtain the OAuthAccessToken:
Then call stored procedures to complete the OAuth exchange:
Call the GetOAuthAuthorizationURL stored procedure. Set the CallbackURL input to the Redirect URI you specified in your app settings. The stored procedure returns the URL to the OAuth endpoint.
After you have obtained the access and refresh tokens, you can connect to data and refresh the OAuth access token either automatically or manually.
Automatic Refresh of the OAuth Access Token
To have the Sync App automatically refresh the OAuth access token, set the following on the first data connection.
Manual Refresh of the OAuth Access Token
The only value needed to manually refresh the OAuth access token when connecting to data is the OAuth refresh token. Use the RefreshOAuthAccessToken stored procedure to manually refresh the OAuthAccessToken after the ExpiresIn parameter value returned by GetOAuthAccessToken has elapsed, then set the following connection properties:
Then call RefreshOAuthAccessToken with OAuthRefreshToken set to the OAuth refresh token returned by GetOAuthAccessToken. After the new tokens have been retrieved, open a new connection by setting the OAuthAccessToken property to the value returned by RefreshOAuthAccessToken.
Finally, store the OAuth refresh token so that you can use it to manually refresh the OAuth access token after it has expired.
You may choose to use your own OAuth Application Credentials when you want to
No further values need to be specified in the CSV app settings.
Set the ProjectId property to the Id of the project you want to connect to.
The Sync App supports using user accounts and GCP instance accounts for authentication.
The following sections discuss the available authentication schemes for Google Cloud Storage:
AuthScheme must be set to OAuth in all user account flows.
CData provides an embedded OAuth application that simplifies OAuth desktop Authentication. Alternatively, you can create a custom OAuth application. See Create a Custom OAuth App for information about creating custom applications and reasons for doing so.
For authentication, the only difference between the two methods is that you must set two additional connection properties when using custom OAuth applications.
After setting the following connection properties, you are ready to connect:
Get an OAuth Access Token
Set the following connection properties to obtain the OAuthAccessToken:
Then call stored procedures to complete the OAuth exchange:
Once you have obtained the access and refresh tokens, you can connect to data and refresh the OAuth access token either automatically or manually.
Automatic Refresh of the OAuth Access Token
To have the driver automatically refresh the OAuth access token, set the following on the first data connection:
Manual Refresh of the OAuth Access Token
The only value needed to manually refresh the OAuth access token when connecting to data is the OAuth refresh token.
Use the RefreshOAuthAccessToken stored procedure to manually refresh the OAuthAccessToken after the ExpiresIn parameter value returned by GetOAuthAccessToken has elapsed, then set the following connection properties:
Then call RefreshOAuthAccessToken with OAuthRefreshToken set to the OAuth refresh token returned by GetOAuthAccessToken. After the new tokens have been retrieved, open a new connection by setting the OAuthAccessToken property to the value returned by RefreshOAuthAccessToken.
Finally, store the OAuth refresh token so that you can use it to manually refresh the OAuth access token after it has expired.
Option 1: Obtain and Exchange a Verifier Code
To obtain a verifier code, you must authenticate at the OAuth authorization URL.
Follow the steps below to authenticate from the machine with an internet browser and obtain the OAuthVerifier connection property.
On the headless machine, set the following connection properties to obtain the OAuth authentication values:
After the OAuth settings file is generated, you need to re-set the following properties to connect:
Option 2: Transfer OAuth Settings
Prior to connecting on a headless machine, you need to create and install a connection with the driver on a device that supports an internet browser. Set the connection properties as described in "Desktop Applications" above.
After completing the instructions in "Desktop Applications", the resulting authentication values are encrypted and written to the path specified by OAuthSettingsLocation. The default filename is OAuthSettings.txt.
Once you have successfully tested the connection, copy the OAuth settings file to your headless machine.
On the headless machine, set the following connection properties to connect to data:
When running on a GCP virtual machine, the Sync App can authenticate using a service account tied to the virtual machine. To use this mode, set AuthScheme to GCPInstanceAccount.
You may choose to use your own OAuth Application Credentials when you want to
Follow these steps to create a custom OAuth application:
The Sync App supports using user accounts and GCP instance accounts for authentication.
The following sections discuss the available authentication schemes for Google Drive:
AuthScheme must be set to OAuth in all user account flows.
Get an OAuth Access Token
Set the following connection properties to obtain the OAuthAccessToken:
Then call stored procedures to complete the OAuth exchange:
Once you have obtained the access and refresh tokens, you can connect to data and refresh the OAuth access token either automatically or manually.
Automatic Refresh of the OAuth Access Token
To have the driver automatically refresh the OAuth access token, set the following on the first data connection:
Manual Refresh of the OAuth Access Token
The only value needed to manually refresh the OAuth access token when connecting to data is the OAuth refresh token.
Use the RefreshOAuthAccessToken stored procedure to manually refresh the OAuthAccessToken after the ExpiresIn parameter value returned by GetOAuthAccessToken has elapsed, then set the following connection properties:
Then call RefreshOAuthAccessToken with OAuthRefreshToken set to the OAuth refresh token returned by GetOAuthAccessToken. After the new tokens have been retrieved, open a new connection by setting the OAuthAccessToken property to the value returned by RefreshOAuthAccessToken.
Finally, store the OAuth refresh token so that you can use it to manually refresh the OAuth access token after it has expired.
When running on a GCP virtual machine, the Sync App can authenticate using a service account tied to the virtual machine. To use this mode, set AuthScheme to GCPInstanceAccount.
The Sync App facilitates the following OAuth authentication flows:
This OAuth flow requires the authenticating user to interact with Google using the browser. The Sync App facilitates this in various ways as described below.
When you connect, the Sync App opens the OAuth endpoint in your default browser. Log in and grant permissions to the application. The Sync App then completes the OAuth process.
When connecting via a Web application or if the Sync App is not authorized to open a browser window, you exchange a verifier code for the access token.
To begin, you need to register an OAuth app with Google and set the following connection properties.
Once you have registered an app and set OAuthClientId and OAuthClientSecret you can exchange a verifier code for the access token.
Log in at the OAuth endpoint and authorize the application. You are redirected back via the callback URL.
The verifier code is appended to the callback URL in a query string parameter named "code". Extract the verifier code.
You can use a service account in this OAuth flow to access Google APIs on behalf of users in a domain. A domain administrator can delegate domain-wide access to the service account.
To complete the service account flow, generate a private key in the Google APIs Console. In the service account flow, the Sync App exchanges a JSON Web token (JWT) for the OAuthAccessToken. The private key is required to sign the JWT. The OAuthAccessToken authenticates that the Sync App has the same permissions granted to the service account.
Follow the steps below to generate a private key and obtain the credentials for your application:
After setting the following connection properties, you are ready to connect:
AuthScheme: Set this to Basic.
AuthScheme: Set this to Digest.
AuthScheme: Set this to OAuth.
AuthScheme: Set this to OAuthJWT.
AuthScheme: Set this to OAuthPassword.
AuthScheme: Set this to OAuthClient.
AuthScheme: Set this to OAuthPKCE.
If you do not already have Cloud Object Storage in your IBM Cloud account, you can follow the procedure below to install an instance of SQL Query in your account:
To connect with IBM Cloud Object Storage, you will need an ApiKey. You can obtain this as follows:
You can authenticate to IBM Cloud Object Storage using either HMAC or OAuth authentication.
Set the following properties to authenticate:
Optionally, specify Region in addition. For example:
ConnectionType=IBM Object Storage Source; URI=ibmobjectstorage://bucket1/folder1; AccessKey=token1; SecretKey=secret1; Region=eu-gb;
Set the following to authenticate using OAuth authentication.
ConnectionType=IBM Object Storage Source;URI=ibmobjectstorage://bucket1/folder1; ApiKey=key1; Region=eu-gb; AuthScheme=OAuth; InitiateOAuth=GETANDREFRESH;
When you connect, the Sync App completes the OAuth process.
CSV uses the OAuth authentication standard. To authenticate using OAuth, you will need to create an app to obtain the OAuthClientId, OAuthClientSecret, and CallbackURL connection properties.
SSHAuthMode: Set this to None.
SSHAuthMode: Set this to Password.
SSHAuthMode: Set this to Public_Key.
Service provider | Okta | OneLogin | ADFS | AzureAD |
Amazon S3 | Y | Y | Y | |
Azure Blob Storage | ||||
Azure Data Lake Store Gen1 | ||||
Azure Data Lake Store Gen2 | ||||
Azure Data Lake Store Gen2 with SSL | ||||
Google Drive | ||||
OneDrive | ||||
Box | ||||
Dropbox | ||||
SharePoint Online SOAP | Y | Y | Y | |
SharePoint Online REST | ||||
Wasabi | ||||
Google Cloud Storage | ||||
Oracle Cloud Storage | ||||
Azure File | | | |
Azure AD Configuration
This configuration is based on the OAuth 2.0 On-Behalf-Of flow. It requires two Azure AD applications:
Save the step "Assign the Azure AD test user" until after provisioning so that you can select the AWS roles when assigning the user.
CData Driver Common Properties
The following SSOProperties are needed to authenticate to Azure Active Directory and must be specified for every service provider.
The SSO SAML response is retrieved from an OAuth 2.0 On-Behalf-Of flow, so the following OAuth connection properties must be specified:
Amazon S3
In addition to the common properties, the following properties must be specified when connecting to the Amazon S3 service provider:
AuthScheme=AzureAD;InitiateOAuth=GETANDREFRESH;OAuthClientId=d593a1d-ad89-4457-872d-8d7443aaa655;OauthClientSecret=g9-oy5D_rl9YEKfN-45~3Wm8FgVa2F;SSOProperties='Tenant=94be7-edb4-4fda-ab12-95bfc22b232f;Resource=https://signin.aws.amazon.com/saml;';AWSRoleARN=arn:aws:iam::2153385180:role/AWS_AzureAD;AWSPrincipalARN=arn:aws:iam::215515180:saml-provider/AzureAD;
OneLogin Configuration
You must create an application used for the single sign-on process to a specific provider.
SharePoint SOAP
The following properties must be specified when connecting to the SharePoint SOAP service provider:
AuthScheme='OneLogin';User=test;Password=test;SSOProperties='Domain=test.cdata;';
Okta Configuration
You must create an application used for the single sign-on process to a specific provider.
SharePoint SOAP
The following properties must be specified when connecting to the SharePoint SOAP service provider:
AuthScheme='Okta';User=test;Password=test;SSOProperties='Domain=test.cdata;';
Amazon S3
The following properties must be specified when connecting to an Amazon S3 service provider:
AuthScheme=Okta;User=OktaUser;Password=OktaPassword;SSOLoginURL='https://{subdomain}.okta.com/home/amazon_aws/0oan2hZLgQiy5d6/272';
ADFS Configuration
You must create an application used for the single sign-on process to a specific provider.
SharePoint SOAP
The following properties must be specified when connecting to the SharePoint SOAP service provider:
AuthScheme='ADFS';User=test;Password=test;SSOProperties='Domain=test.cdata;';
Amazon S3
The following properties must be specified when connecting to an Amazon S3 service provider:
AuthScheme=ADFS;User=username;Password=password;SSOLoginURL='https://sts.company.com';
ADFS Integrated
The ADFS Integrated flow indicates you are connecting with the currently logged in Windows user credentials. To use the ADFS Integrated flow, simply do not specify the User and Password, but otherwise follow the same steps in the ADFS guide above.
This section shows how to use the Sync App to authenticate using Kerberos.
To authenticate to CSV using Kerberos, set the following properties:
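As a hedged example for an HDFS resource, combining properties discussed in this section (the host, port, realm, and KDC values are placeholders, and Negotiate is the Kerberos scheme referred to in the HDFS section above):
URI=webhdfs://host:port/folder1; AuthScheme=Negotiate; KerberosKDC=kdc.example.com; KerberosRealm=EXAMPLE.COM; User=user1; Password=password1;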
You can use one of the following options to retrieve the required Kerberos ticket.
This option enables you to use the MIT Kerberos Ticket Manager or kinit command to get tickets. Note that you do not need to set the User or Password connection properties with this option.
As an alternative to setting the KRB5CCNAME environment variable, you can directly set the file path using the KerberosTicketCache property. When set, the Sync App uses the specified cache file to obtain the Kerberos ticket to connect to CSV.
If the KRB5CCNAME environment variable has not been set, you can retrieve a Kerberos ticket using a Keytab File. To do so, set the User property to the desired username and set the KerberosKeytabFile property to a file path pointing to the keytab file associated with the user.
If both the KRB5CCNAME environment variable and the KerberosKeytabFile property have not been set, you can retrieve a ticket using a user and password combination. To do this, set the User and Password properties to the user/password combination that you use to authenticate with CSV.
More complex Kerberos environments may require cross-realm authentication where multiple realms and KDC servers are used (e.g., where one realm/KDC is used for user authentication and another realm/KDC is used for obtaining the service ticket).
In such an environment, set the KerberosRealm and KerberosKDC properties to the values required for user authentication. Also set the KerberosServiceRealm and KerberosServiceKDC properties to the values required to obtain the service ticket.
Set the following properties to control how the Sync App models CSV as tables:
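For example, a connection string combining several of the modeling properties covered in this chapter (the path and values are placeholders):
URI=C:\folder1; IncludeSubdirectories=True; AggregateFiles=False;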
The CData Sync App hides the complexity of processing local and remote CSV data, from connecting over wire protocols to modeling the data as tables. However, you also have control over these layers.
The Sync App dynamically derives schemas from CSV based on the connection properties specified. The available connection properties give you control over many aspects of how CSV data is modeled as tables. See Connecting to CSV Data Sources for more information on configuring the connection. When working with local CSV, you can also configure the column definitions and file format with Schema.ini, the configuration used by the Microsoft Jet driver.
For more granular control over the columns reported and other aspects of modeling the data as tables, you can define your own schemas or extend the generated ones. Schemas are defined in extendable configuration files. See Generating Schema Files to save the detected schemas to configuration files, which you can then easily edit.
The following sections show how to customize schemas or write your own from scratch.
Tables and views are defined by authoring schema files in API Script. API Script is a simple configuration language that allows you to define the columns and the behavior of the table. It also has built-in operations that enable you to process CSV.
In addition to these data processing primitives, API Script is a full-featured language with constructs for conditionals, looping, etc. However, as shown by the example schema, for most table definitions you will not need to use these features.
Below is a fully functional table schema that models the Person entity in the popular Northwind sample database. It contains all the components you will need to access your data source through SQL. You can find more information on using these components in Column Definitions and SELECT Execution.
<api:script>
<!-- See Column Definitions to define column behavior. -->
<api:info title="CSVPersons" desc="Parse the CSV Persons feed.">
<attr name="ID" xs:type="int" key="true" />
<attr name="EmployeeID" xs:type="int" />
<attr name="Name" xs:type="string" />
<attr name="TotalExpense" xs:type="double" />
<attr name="HireDate" xs:type="datetime" />
<attr name="Salary" xs:type="int" />
</api:info>
<api:set attr="uri" value="http://pathtocsvstream" />
<!-- The GET method corresponds to SELECT. The results of processing are pushed to the schema's output. See SELECT Execution for more information. -->
<api:script method="GET" >
<api:call op="csvproviderGet"/>
</api:script>
<!-- Not implemented -->
<api:script method="POST">
<api:call op="csvproviderInsert">
<api:push/>
</api:call>
</api:script>
<!-- Not implemented -->
<api:script method="MERGE">
<api:call op="csvproviderUpdate">
<api:push/>
</api:call>
</api:script>
<!-- Not implemented -->
<api:script method="DELETE">
<api:call op="csvproviderDelete">
<api:push/>
</api:call>
</api:script>
</api:script>
In the Schema.ini file you can specify the format of a text file you want to model as a table and you can also define the columns of the table. Schema.ini must be located in the folder specified in the URI -- or, if IncludeSubdirectories is set, Schema.ini can be defined in each subfolder.
To allow you to define a Schema.ini only when necessary, you can also use IncludeFiles and ExtendedProperties.
ExtendedProperties is compatible with Microsoft Jet OLE DB 4.0. The format for all text files can be set in ExtendedProperties. Schema.ini overrides ExtendedProperties for a specific file.
Files specified in Schema.ini are reported as tables in addition to files included by IncludeFiles. The Sync App uses the definition in Schema.ini if one exists; otherwise, it reports the table based on the filename.
A section in Schema.ini must begin with the file name enclosed in square brackets. For example:
[Jerrie's travel expense.txt]
After adding a file name entry, you can set the Format property to the format of the file. The possible values are the following:
Format=Delimited(,)
Note: By default, .txt files are processed as CSV files with headers.
There are two ways to define columns based on the fields in your text files:
To define a column in Schema.ini, use the following format:
Coln=ColumnName DataType [Width n]
For example:
Col2=A Text Width 100
Note: If Format is set to fixed length, then defining the width of each column is mandatory.
[Jerrie's travel expense.csv]
ColNameHeader=True
Format=Delimited(,)
Col1=Date Text
Col2=A Text
Col3=B Text
Col4=C Text
Col5=Total Text
Col6=Date Text
Col7=D Text
Col8=E Text
Col9=F Text
Col10=G Text
Col11=rate numeric

[invoices.csv]
ColNameHeader=True
Format=Delimited(,)
Col1=id numeric
Col2=invoicedate date
Col3=total numeric
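As a complement to the delimited example above, the sketch below shows a hypothetical fixed-length section, following the Microsoft Jet conventions the Sync App is compatible with (the file name, column names, and widths are illustrative):
[fixedwidth.txt]
ColNameHeader=False
Format=FixedLength
Col1=Name Text Width 20
Col2=Amount numeric Width 10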
Data types can be any of the following:
The CData Sync App enables you to persist schema definitions to configuration files. Schema files make it easy to customize and save the dynamically detected schemas, or to define your own view of the data.
The following sections show how to use the GenerateSchemaFiles property to save the table definitions detected based on the connection string. Alternatively, you can invoke the CreateSchema stored procedure to manually generate a schema file based on the provided input parameters.
After creating a schema, see Modeling CSV Data for more information on extending table schemas to gain further control over data types and other aspects of modeling CSV as tables.
Set the following additional connection properties to generate table schemas for local or remote CSV:
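As a hedged illustration only (the Location path and the OnUse value are assumptions; consult the property reference for the exact properties and accepted values):
URI=C:\folder1; GenerateSchemaFiles=OnUse; Location=C:\csv-schemas;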
Note: Columns defined in generated schema files (.rsd files) take precedence over the definitions in Schema.ini.
The basic attributes of a column are the name of the column, the data type, whether the column is a primary key, and the internal name. The Sync App uses the internal name to extract nodes from CSV with no readable names.
Mark up column attributes in the api:info block of the schema file. You can set the internal name in the other:internalname property. You can also specify the format of the resulting column value with other:valueFormat.
To see the column definitions in a complete example, refer to Modeling CSV Data.
<api:info title="CSVPersons" desc="Parse the CSV Persons feed.">
<attr name="ID" xs:type="int" key="true" />
<attr name="EmployeeID" xs:type="int" other:internalname="employee_id" />
<attr name="Name" xs:type="string" />
<attr name="TotalExpense" xs:type="double" />
<attr name="HireDate" xs:type="datetime" />
<attr name="Salary" xs:type="int" />
</api:info>
The other:internalname property specifies the CSV column name used to select the column's value from the CSV file. So, if the CSV file contains a column named employee_id, you use other:internalname="employee_id".
With a URI and Column Definitions specified, the Sync App processes SELECT statements client-side, in memory, through SupportEnhancedSQL. The following sections show how to use the Sync App's built-in operations to customize how the Sync App requests and returns data from the server.
When a SELECT query is issued, the Sync App executes the GET method of the schema. In this method you can process CSV. To see this schema in a complete example, refer to Modeling CSV Data.
The following line maps the schema to a URI:
<api:set attr="uri" value="ftp://somewebsite/NorthwindOData.csv" />
Invoke the operation to retrieve the data in the GET method. Specify the operation with the api:push keyword. The following lines push the results of processing to the schema's output.
<api:script method="GET" >
<api:push op="csvproviderGet"/>
</api:script>
You can then execute WHERE clause searches, JOIN queries, and SQL aggregate functions.
The Sync App's operations give you high level control over the request sent to the server. You can set a variety of inputs to control authentication and other aspects of the request. See Operations for the available inputs.
You can also build the request by injecting inputs from the SQL statement. As an example, the following sections show how to use the WHERE clause to change the request dynamically. Note that other filters specified in the WHERE clause are processed client-side by the Sync App; you can search on any column returned in the response.
Consider a weather forecast API that returns a location's forecast in CSV. You specify the location you want in the URI. Using the Sync App, you could get the forecast with a query like the following:
SELECT * FROM Forecasts WHERE (Location = '90210')
Follow the steps below to implement this query. The following procedure defines a pseudo column, an input that can only be used in the WHERE clause, and maps the pseudo column to an API request.
<api:info>
...
<input name="Location" required="true"/>
</api:info>
<api:set attr='uri' value="http://api.wunderground.com/api/MyAPIKey/hourly/q/[_input.Location].csv"/>
<api:script method="GET" >
<api:push op="csvproviderGet"/>
</api:script>
To override the Sync App's internal paging mechanism, add the Rows@Next input to the list of columns in the api:info block.
<input name="rows@next" desc="Identifier for the next page of results." />
Note that making this an input parameter instead of an attr parameter will prevent it from showing up in column listings.
You will also need to set the EnablePaging attribute to TRUE to turn off the driver's internal paging mechanism.
<api:set attr="EnablePaging" value="TRUE" />
When the Rows@Next value is set in the output, the Sync App will automatically call the method again with the Rows@Next value in the input after it is finished returning results for this page. You can use the value of this input to modify the request on the next pass to get the next page of data. Set the Rows@Next input to any information needed to make the request for the next page of data.
For example, your API may return the next page's URL in the response. You can obtain this value by providing the XPath to the URL:
<api:set attr="elementmappath#" value="/next_page" />
<api:set attr="elementmapname#" value="rows@next" />
You can then modify the URL where the request is made, provided the value is set. The api:check element is useful for checking the existence of a required input before attempting to access its value. The Rows@Next input can be accessed as an attribute of the _input item:
<api:check attr="_input.rows@next">
<api:set attr="uri" value="[_input.rows@next]" />
<api:else>
<api:set attr="uri" value="<first page's URL>" />
</api:else>
</api:check>
You can use the _query item to access any component of the SELECT statement in the schema.
query | The SQL query. For example: SELECT Id, Name FROM Accounts WHERE City LIKE '%New%' AND COUNTRY = 'US' GROUP BY CreatedDate ORDER BY Name LIMIT 10,50; |
selectcolumns | A comma-separated list containing the columns specified in the SELECT statement. For example, the Id and Name columns in the example. |
table | The table name specified in the SELECT statement. For example, Accounts in the example. |
criteria | The WHERE clause of the statement. For example, City LIKE '%New%' AND COUNTRY = 'US' in the example. |
orderby | The columns specified in the ORDER BY clause. For example, Name in the example. |
groupby | The GROUP BY clause in the SELECT statement. For example, CreatedDate in the example. |
limit | The limit specified in the LIMIT or TOP clause of the SELECT statement. For example, 50 in the example. |
offset | The offset specified in the LIMIT or TOP clause of the SELECT statement. For example, 10 in the example. |
isjoin | Whether the query is a join. |
jointable | The table to be joined. |
isschemaonly | Whether the query retrieves only schema information. |
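As a hedged illustration, a schema can reference these items with the same bracket syntax used for [_input.*] elsewhere in this chapter, for example to build the URI from the table name (the URL pattern is hypothetical):
<api:set attr="uri" value="http://www.host1.com/[_query.table].csv" />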
The Sync App has high-performance, built-in operations for accessing data from CSV data sources. These operations are platform neutral: Schema files that invoke these operations can be used in both .NET and Java. You can also extend the Sync App with your own operations written in .NET or Java.
The Sync App consists of the following operations:
Operation Name | Description |
csvproviderGet | The csvproviderGet operation is an API Script operation that is used to process CSV content. It allows you to split CSV content into rows. |
oauthGetAccessToken | For OAuth 1.0, exchange a request token for an access token. For OAuth 2.0, get an access token or get a new access token with the refresh token. |
oauthGetUserAuthorizationURL | Generates the user authorization URL. OAuth 2.0 will not access the network in this operation. |
The csvproviderGet operation is an API Script operation that is used to process CSV content. It allows you to split CSV content into rows.
The csvproviderGet operation can be used to execute remote data retrieval operations. It abstracts the request and also enables configuration of most aspects of it through its inputs, including authentication and firewall traversal. See ProxyAuthScheme and FirewallType for the properties needed to negotiate a firewall.
The csvproviderGet operation reads the api:info section of the table schema file to map various elements in the CSV document into column values within a row. It does so using the other:internalname property of the column definition.
The oauthGetAccessToken operation is an API Script operation that is used to facilitate the OAuth authentication flow. To pass the needed inputs to the operation, define the GetOAuthAccessToken stored procedure and, if your data source has a refresh flow, the RefreshOAuthAccessToken stored procedure. The Sync App can call these internally.
The Sync App includes stored procedures that invoke this operation to complete the OAuth exchange. The following example schema briefly lists some of the typically required inputs before the following sections explain them in more detail.
For a guide to using the Sync App to authenticate, see the "Getting Started" chapter.
Invoke the oauthGetAccessToken with the GetOAuthAccessToken stored procedure. The following inputs are required for most data sources and will provide default values for the connection properties of the same name.
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="GetOAuthAccessToken" description="Obtains the OAuth access token to be used for authentication with various APIs." >
<input name="AuthMode" desc="The OAuth flow. APP or WEB." />
<input name="CallbackURL" desc="The URL to be used as a trusted redirect URL, where the user will return with the token that verifies that they have granted your app access. " />
<input name="OAuthAccessToken" desc="The request token. OAuth 1.0 only." />
<input name="OAuthAccessTokenSecret" desc="The request token secret. OAuth 1.0 only." />
<input name="Verifier" desc="The verifier code obtained when the user grants permissions to your app." />
<output name="OAuthAccessToken" desc="The access token." />
<output name="OAuthTokenSecret" desc="The access token secret." />
<output name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set RequestTokenURL to the URL where the request for the request token is made. OAuth 1.0 only.-->
<api:set attr="OAuthRequestTokenURL" value="http://MyOAuthRequestTokenURL" />
<!-- Set OAuthAuthorizationURL to the URL where the user logs into the service and grants permissions to the application. -->
<api:set attr="OAuthAuthorizationURL" value="http://MyOAuthAuthorizationURL" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL" />
<!-- Set GrantType to the authorization grant type. OAuth 2.0 only. -->
<api:set attr="GrantType" value="CODE" />
<!-- Set SignMethod to the signature method used to calculate the signature of the request. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<api:call op="oauthGetAccessToken">
<api:push/>
</api:call>
</api:script>
You can also use oauthGetAccessToken to refresh the access token by providing the following inputs:
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="RefreshOAuthAccessToken" description="Refreshes the OAuth access token used for authentication." >
<input name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
<output name="OAuthAccessToken" desc="The authentication token returned." />
<output name="OAuthTokenSecret" desc="The authentication token secret returned. OAuth 1.0 only." />
<output name="OAuthRefreshToken" desc="A token that may be used to obtain a new access token." />
<output name="ExpiresIn" desc="The remaining lifetime on the access token." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set GrantType to REFRESH. OAuth 2.0 only. -->
<api:set attr="GrantType" value="REFRESH" />
<!-- Set SignMethod to the signature method used to calculate the signature of the request. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL" />
<!-- Set AuthMode to 'WEB' when calling RefreshOAuthAccessToken -->
<api:set attr="AuthMode" value="WEB"/>
<api:call op="oauthGetAccessToken">
<api:push/>
</api:call>
</api:script>
The oauthGetUserAuthorizationURL is an API Script operation that is used to facilitate the OAuth authentication flow for Web apps, for offline apps, and in situations where the Sync App is not allowed to open a Web browser. To pass the needed inputs to this operation, define the GetOAuthAuthorizationURL stored procedure. The Sync App can call this internally.
Define stored procedures in .rsb files with the same file name as the schema's title. The example schema briefly lists some of the typically required inputs before the following sections explain them in more detail.
For a guide to authenticating in the OAuth flow, see the "Getting Started" chapter.
Call oauthGetUserAuthorizationURL in the GetOAuthAuthorizationURL stored procedure.
<api:script xmlns:api="http://www.rssbus.com/ns/rsbscript/2">
<api:info title="Get OAuth Authorization URL" description="Obtains the OAuth authorization URL used for authentication with various APIs." >
<input name="CallbackURL" desc="The URL to be used as a trusted redirect URL, where the user will return with the token that verifies that they have granted your app access. " />
<output name="URL" desc="The URL where the user logs in and is prompted to grant permissions to the app. " />
<output name="OAuthAccessToken" desc="The request token. OAuth 1.0 only." />
<output name="OAuthTokenSecret" desc="The request token secret. OAuth 1.0 only." />
</api:info>
<!-- Set OAuthVersion to 1.0 or 2.0. -->
<api:set attr="OAuthVersion" value="MyOAuthVersion" />
<!-- Set ResponseType to the desired authorization grant type. OAuth 2.0 only.-->
<api:set attr="ResponseType" value="code" />
<!-- Set SignMethod to the signature method used to calculate the signature. OAuth 1.0 only.-->
<api:set attr="SignMethod" value="HMAC-SHA1" />
<!-- Set OAuthAuthorizationURL to the URL where the user logs into the service and grants permissions to the application. -->
<api:set attr="OAuthAuthorizationURL" value="http://MyOAuthAuthorizationURL" />
<!-- Set OAuthAccessTokenURL to the URL where the request for the access token is made. -->
<api:set attr="OAuthAccessTokenURL" value="http://MyOAuthAccessTokenURL"/>
<!-- Set RequestTokenURL to the URL where the request for the request token is made. OAuth 1.0 only.-->
<api:set attr="OAuthRequestTokenURL" value="http://MyOAuthRequestTokenURL" />
<api:call op="oauthGetUserAuthorizationUrl">
<api:push/>
</api:call>
</api:script>
This section details a selection of advanced features of the CSV Sync App.
The Sync App allows you to define virtual tables, called user defined views, whose contents are decided by a pre-configured query. These views are useful when you cannot directly control queries being issued to the drivers. See User Defined Views for an overview of creating and configuring custom views.
Use SSL Configuration to adjust how the Sync App handles TLS/SSL certificate negotiations. You can choose from various certificate formats; see the SSLServerCert property under "Connection String Options" for more information.
Configure the Sync App for compliance with Firewall and Proxy, including Windows proxies and HTTP proxies. You can also set up tunnel connections.
The Sync App offloads as much of the SELECT statement processing as possible to CSV and then processes the rest of the query in memory (client-side).
See Query Processing for more information.
See Logging for an overview of configuration settings that can be used to refine CData logging. For basic logging, you only need to set two connection properties, but there are numerous features that support more refined logging, where you can select subsets of information to be logged using the LogModules connection property.
By default, the Sync App attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.
To specify another certificate, see the SSLServerCert property for the available formats to do so.
The CSV Sync App also supports setting client certificates. Set the following to connect using a client certificate.
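For example, a connection that presents a client certificate from a PFX file might look like the following sketch. The file path, password, and store type are illustrative placeholders, not required values:
URI=https://remoteStream; SSLClientCert=C:\mycert.pfx; SSLClientCertType=PFXFILE; SSLClientCertPassword=myPassword;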
To connect through the Windows system proxy, you do not need to set any additional connection properties. To connect to other proxies, set ProxyAutoDetect to false.
To authenticate to an HTTP proxy, set ProxyAuthScheme, ProxyUser, and ProxyPassword in addition to ProxyServer and ProxyPort.
Set the following properties:
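As an illustrative sketch, an authenticated HTTP proxy connection could be configured as follows. The server, port, and credentials are placeholders, and BASIC is assumed here as a representative ProxyAuthScheme value:
ProxyAutoDetect=false; ProxyServer=proxy.example.com; ProxyPort=8080; ProxyAuthScheme=BASIC; ProxyUser=proxyUser; ProxyPassword=proxyPassword;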
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
Property | Description |
AuthScheme | The type of authentication to use when connecting to remote services. |
AccessKey | Your account access key. This value is accessible from your security credentials page. |
SecretKey | Your account secret key. This value is accessible from your security credentials page. |
ApiKey | The API Key used to identify the user to IBM Cloud. |
User | The user account used to authenticate. |
Password | The password used to authenticate the user. |
SharePointEdition | The edition of SharePoint being used. Set either SharePointOnline or SharePointOnPremise. |
Property | Description |
ConnectionType | The type of connection to use. |
URI | The Uniform Resource Identifier (URI) for the CSV resource location. |
Region | The hosting region for your S3-like Web Services. |
ProjectId | The Id of the project where your Google Cloud Storage instance resides. |
OracleNamespace | The Oracle Cloud Object Storage namespace to use. |
StorageBaseURL | The URL of a cloud storage service provider. |
SimpleUploadLimit | This setting specifies the threshold, in bytes, above which the provider will choose to perform a multipart upload rather than uploading everything in one request. |
UseVirtualHosting | If true (default), buckets will be referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the provider will use the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for S3-based custom services when the CustomURL is specified. |
Property | Description |
AWSAccessKey | Your AWS account access key. This value is accessible from your AWS security credentials page. |
AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
AWSPrincipalARN | The ARN of the SAML Identity provider in your AWS account. |
AWSRegion | The hosting region for your Amazon Web Services. |
AWSCredentialsFile | The path to the AWS Credentials File to be used for authentication. |
AWSCredentialsFileProfile | The name of the profile to be used from the supplied AWSCredentialsFile. |
AWSSessionToken | Your AWS session token. |
MFASerialNumber | The serial number of the MFA device if one is being used. |
MFAToken | The temporary token available from your MFA device. |
ServerSideEncryption | When activated, file uploads into Amazon S3 buckets will be server-side encrypted. |
Property | Description |
AzureStorageAccount | The name of your Azure storage account. |
AzureAccessKey | The storage key associated with your CSV account. |
AzureSharedAccessSignature | A shared access key signature that may be used for authentication. |
AzureTenant | The Microsoft Online tenant being used to access data. If not specified, your default tenant will be used. |
AzureEnvironment | The Azure Environment to use when establishing a connection. |
Property | Description |
SSOLoginURL | The identity provider's login URL. |
SSOProperties | Additional properties required to connect to the identity provider in a semicolon-separated list. |
Property | Description |
OAuthVersion | The version of OAuth being used. |
OAuthClientId | The client Id assigned when you register your application with an OAuth authorization server. |
OAuthClientSecret | The client secret assigned when you register your application with an OAuth authorization server. |
Scope | Specify scope to obtain the initial access and refresh token. |
OAuthGrantType | The grant type for the OAuth flow. |
OAuthPasswordGrantMode | How to pass the Client Id and Secret when OAuthGrantType is set to Password. |
OAuthIncludeCallbackURL | Whether to include the callback URL in an access token request. |
OAuthAuthorizationURL | The authorization URL for the OAuth service. |
OAuthAccessTokenURL | The URL to retrieve the OAuth access token from. |
OAuthRefreshTokenURL | The URL to refresh the OAuth token from. |
OAuthRequestTokenURL | The URL the service provides to retrieve request tokens from. This is required in OAuth 1.0. |
AuthToken | The authentication token used to request and obtain the OAuth Access Token. |
AuthKey | The authentication secret used to request and obtain the OAuth Access Token. |
OAuthParams | A comma-separated list of other parameters to submit in the request for the OAuth access token in the format paramname=value. |
Property | Description |
OAuthJWTCert | The JWT Certificate store. |
OAuthJWTCertType | The type of key store containing the JWT Certificate. |
OAuthJWTCertPassword | The password for the OAuth JWT certificate. |
OAuthJWTCertSubject | The subject of the OAuth JWT certificate. |
OAuthJWTIssuer | The issuer of the JSON Web Token. |
OAuthJWTSubject | The user subject for which the application is requesting delegated access. |
Property | Description |
KerberosKDC | The Kerberos Key Distribution Center (KDC) service used to authenticate the user. |
KerberosRealm | The Kerberos Realm used to authenticate the user. |
KerberosSPN | The service principal name (SPN) for the Kerberos Domain Controller. |
KerberosKeytabFile | The Keytab file containing your pairs of Kerberos principals and encrypted keys. |
KerberosServiceRealm | The Kerberos realm of the service. |
KerberosServiceKDC | The Kerberos KDC of the service. |
KerberosTicketCache | The full file path to an MIT Kerberos credential cache file. |
Property | Description |
SSLClientCert | The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). |
SSLClientCertType | The type of key store containing the TLS/SSL client certificate. |
SSLClientCertPassword | The password for the TLS/SSL client certificate. |
SSLClientCertSubject | The subject of the TLS/SSL client certificate. |
SSLMode | The authentication mechanism to be used when connecting to the FTP or FTPS server. |
SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
Property | Description |
SSHAuthMode | The authentication method to be used to log on to an SFTP server. |
SSHClientCert | A private key to be used for authenticating the user. |
SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
SSHClientCertSubject | The subject of the SSH client certificate. |
SSHClientCertType | The type of SSHClientCert private key. |
SSHUser | The SSH user. |
SSHPassword | The SSH password. |
Property | Description |
FirewallType | The protocol used by a proxy-based firewall. |
FirewallServer | The name or IP address of a proxy-based firewall. |
FirewallPort | The TCP port for a proxy-based firewall. |
FirewallUser | The user name to use to authenticate with a proxy-based firewall. |
FirewallPassword | A password used to authenticate to a proxy-based firewall. |
Property | Description |
ProxyAutoDetect | This indicates whether to use the system proxy settings or not. This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings. |
ProxyServer | The hostname or IP address of a proxy to route HTTP traffic through. |
ProxyPort | The TCP port the ProxyServer proxy is running on. |
ProxyAuthScheme | The authentication type to use to authenticate to the ProxyServer proxy. |
ProxyUser | A user name to be used to authenticate to the ProxyServer proxy. |
ProxyPassword | A password to be used to authenticate to the ProxyServer proxy. |
ProxySSLType | The SSL type to use when connecting to the ProxyServer proxy. |
ProxyExceptions | A semicolon separated list of destination hostnames or IPs that are exempt from connecting through the ProxyServer. |
Property | Description |
LogModules | Core modules to be included in the log file. |
Property | Description |
Location | A path to the directory that contains the schema files defining tables, views, and stored procedures. |
BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
Tables | This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC. |
Views | Restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC. |
SchemaIniLocation | A path to the directory that contains the schema.ini file. |
AggregateFiles | When set to true, the provider will aggregate into a single table called AggregatedFiles all the files located in the URI directory which have the same schema. |
MetadataDiscoveryURI | Used together with AggregateFiles , this property specifies a specific file to read the schema of the AggregatedFiles result set. |
TypeDetectionScheme | Determines how the data types of columns are detected. |
ColumnCount | The number of columns to detect when dynamically determining columns for the table. |
RowScanDepth | The number of rows to scan when dynamically determining columns for the table. |
Property | Description |
IncludeColumnHeaders | Whether to get column names from the first line of the specified files. |
FMT | The format to be used to parse all text files. |
ExtendedProperties | The Microsoft Jet OLE DB 4.0-compatible extended properties for text files. |
RowDelimiter | The character which will be used to detect the end of a CSV row. |
SkipTop | Skips the amount of rows specified starting from the top. |
IgnoreBlankRows | Indicates whether to skip the empty rows. |
IncludeEmptyHeaders | Whether to include an empty value for a column name within the column headers. |
SkipHeaderComments | If set to true, skips rows at the top of the file beginning with #. |
Charset | Specifies the session character set for encoding and decoding character data transferred to and from the CSV file. The default value is UTF-8. |
QuoteEscapeCharacter | Determines the character which will be used to escape quotes. |
QuoteCharacter | Determines the character which will be used to quote values in CSV file. |
TrimSpaces | Set to True if you want the provider to trim preceding and trailing spaces in a cell containing a quoted value. |
PushEmptyValuesAsNull | Indicates whether to read the empty values as empty or as null. |
NullValues | A comma-separated list of values that will be replaced with nulls if they are found in the CSV file. |
PathSeparator | Determines the character which will be used to replace the file separator. |
IgnoreIncompleteRows | Indicates whether to ignore incomplete rows. |
MaxCellLength | The maximum length a cell can hold; content beyond this length is truncated. If the value is -1, the cell content length is not limited. |
Property | Description |
BatchNamingConvention | Determines the naming convention of batch files. |
ClientCulture | This property can be used to specify the format of data (e.g., currency values) that is accepted by the client application. This property can be used when the client application does not support the machine's culture settings. For example, Microsoft Access requires 'en-US'. |
CreateBatchFolder | Whether to create a folder or not when InsertMode is set to FilePerBatch. |
Culture | This setting can be used to specify culture settings that determine how the provider interprets certain data types that are passed into the provider. For example, setting Culture='de-DE' will output German formats even on an American machine. |
CustomHeaders | Other headers as determined by the user (optional). |
CustomUrlParams | The custom query string to be included in the request. |
DirectoryRetrievalDepth | Limit the subfolders recursively scanned when IncludeSubdirectories is enabled. |
ExcludeFileExtensions | Set to true if file extensions should be excluded from table names. |
ExcludeFiles | Comma-separated list of file extensions to exclude from the set of the files modeled as tables. |
GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
IncludeDropboxTeamResources | Indicates if you want to include Dropbox team files and folders. |
IncludeFiles | Comma-separated list of file extensions to include into the set of the files modeled as tables. |
IncludeSubdirectories | Whether to read files from nested folders. In the case of a name collision, table names are prefixed by the underscore-separated folder names. |
InsertMode | Specifies the mode for inserting data into CSV files. |
MaxRows | Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This helps avoid performance issues at design time. |
Other | These hidden properties are used only in specific use cases. |
Pagesize | The maximum number of results to return per page from CSV. |
PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
TruncateOnInserts | Set to True if you want the provider to truncate on every (batch) insert. |
UserDefinedViews | A filepath pointing to the JSON configuration file containing your custom views. |
UseRowNumbers | Set this to true if you are deleting or updating in CSV and you do not want to specify a custom schema. This will create a new column with the name RowNumber which will be used as key for that table. |
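For instance, to update or delete rows in a local CSV folder without defining a custom schema, a connection might enable row numbers as keys. The folder path is a placeholder:
URI=C:\folder1; UseRowNumbers=True;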
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
Property | Description |
AuthScheme | The type of authentication to use when connecting to remote services. |
AccessKey | Your account access key. This value is accessible from your security credentials page. |
SecretKey | Your account secret key. This value is accessible from your security credentials page. |
ApiKey | The API Key used to identify the user to IBM Cloud. |
User | The user account used to authenticate. |
Password | The password used to authenticate the user. |
SharePointEdition | The edition of SharePoint being used. Set either SharePointOnline or SharePointOnPremise. |
The type of authentication to use when connecting to remote services.
The following options are generally available to all connections:
The following options are available when URI refers to a web service:
The following options are also available when URI points to an Amazon service:
The following options are also available when URI points to an Azure service:
The following options are also available when URI points to a SharePoint SOAP service:
The following options are also available when URI points to an IBM Cloud Object Storage service:
Your account access key. This value is accessible from your security credentials page.
Your account access key. This value is accessible from your security credentials page depending on the service you are using.
Your account secret key. This value is accessible from your security credentials page.
Your account secret key. This value is accessible from your security credentials page depending on the service you are using.
The API Key used to identify the user to IBM Cloud.
Access to resources in the CSV REST API is governed by an API key, which is used to retrieve a token. An API Key can be created by navigating to Manage --> Access (IAM) --> Users and clicking 'Create'.
The user account used to authenticate.
Together with Password, this field is used to authenticate against the server.
This property will refer to different things based on the context, namely the value of ConnectionType and AuthScheme:
The password used to authenticate the user.
The User and Password are together used to authenticate with the server.
This property will refer to different things based on the context, namely the value of ConnectionType and AuthScheme:
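For example, when the URI points to an FTPS server, User and Password are supplied alongside the URI, as in this connection string drawn from the examples later in this section:
URI=ftps://localhost:990/folder1; User=user1; Password=password1;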
This section provides a complete list of the Connection properties you can configure in the connection string for this provider.
Property | Description |
ConnectionType | The type of connection to use. |
URI | The Uniform Resource Identifier (URI) for the CSV resource location. |
Region | The hosting region for your S3-like Web Services. |
ProjectId | The Id of the project where your Google Cloud Storage instance resides. |
OracleNamespace | The Oracle Cloud Object Storage namespace to use. |
StorageBaseURL | The URL of a cloud storage service provider. |
SimpleUploadLimit | This setting specifies the threshold, in bytes, above which the provider will choose to perform a multipart upload rather than uploading everything in one request. |
UseVirtualHosting | If true (default), buckets will be referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the provider will use the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for S3-based custom services when the CustomURL is specified. |
The type of connection to use.
Set the ConnectionType to one of the following:
The Uniform Resource Identifier (URI) for the CSV resource location.
Set the URI property to specify a path to a file or stream.
NOTE: This connection property requires that you also set ConnectionType, which specifies the type of connection to use.
See AggregateFiles and MetadataDiscoveryURI for more advanced features available for parsing and merging multiple files.
Below are examples of the URI formats for the available data sources:
Service provider | Single file path (one table) | Directory path (one table per file) |
Local | localPath or file://localPath | localPath or file://localPath |
HTTP or HTTPS | http://remoteStream or https://remoteStream | |
Amazon S3 | s3://remotePath | s3://remotePath |
Azure Blob Storage | azureblob://mycontainer/myblob/ | azureblob://mycontainer/myblob/ |
OneDrive | onedrive://remotePath | onedrive://remotePath |
Google Cloud Storage | gs://bucket/remotePath | gs://bucket/remotePath |
Google Drive | gdrive://remotePath | gdrive://remotePath |
Box | box://remotePath | box://remotePath |
FTP or FTPS | ftp://server:port/remotePath | ftp://server:port/remotePath |
SFTP | sftp://server:port/remotePath | sftp://server:port/remotePath |
Sharepoint | sp://https://server/remotePath | sp://https://server/remotePath |
Below are example connection strings to CSV files or streams.
Service provider | URI formats | Connection example |
Local | localPath or file://localPath | URI=C:\folder1 |
Amazon S3 | s3://bucket1/folder1 | URI=s3://bucket1/folder1; AWSAccessKey=token1; AWSSecretKey=secret1; AWSRegion=OHIO; |
Azure Blob Storage | azureblob://mycontainer/myblob/ | URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AzureAccessKey=myKey; or URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AuthScheme=OAuth; |
OneDrive | onedrive://remotePath | URI=onedrive://folder1; AuthScheme=OAuth; or URI=onedrive://SharedWithMe/folder1; AuthScheme=OAuth; |
Google Cloud Storage | gs://bucket/remotePath | URI=gs://bucket/folder1; AuthScheme=OAuth; ProjectId=test; |
Google Drive | gdrive://remotePath | URI=gdrive://folder1; |
Box | box://remotePath | URI=box://folder1; OAuthClientId=oauthclientid1; OAuthClientSecret=oauthcliensecret1; CallbackUrl=http://localhost:12345; |
FTP or FTPS | ftp://server:port/remotePath | URI=ftps://localhost:990/folder1; User=user1; Password=password1; |
SFTP | sftp://server:port/remotePath | URI=sftp://127.0.0.1:22/remotePath; User=user1; Password=password1; |
Sharepoint | sp://https://server/remotePath | URI=sp://https://domain.sharepoint.com/Documents; User=user1; Password=password1; |
The hosting region for your S3-like Web Services.
The hosting region for your S3-like Web Services.
Value | Region |
Commercial Cloud Regions | |
ap-hyderabad-1 | India South (Hyderabad) |
ap-melbourne-1 | Australia Southeast (Melbourne) |
ap-mumbai-1 | India West (Mumbai) |
ap-osaka-1 | Japan Central (Osaka) |
ap-seoul-1 | South Korea Central (Seoul) |
ap-sydney-1 | Australia East (Sydney) |
ap-tokyo-1 | Japan East (Tokyo) |
ca-montreal-1 | Canada Southeast (Montreal) |
ca-toronto-1 | Canada Southeast (Toronto) |
eu-amsterdam-1 | Netherlands Northwest (Amsterdam) |
eu-frankfurt-1 | Germany Central (Frankfurt) |
eu-zurich-1 | Switzerland North (Zurich) |
me-jeddah-1 | Saudi Arabia West (Jeddah) |
sa-saopaulo-1 | Brazil East (Sao Paulo) |
uk-london-1 | UK South (London) |
us-ashburn-1 (default) | US East (Ashburn, VA) |
us-phoenix-1 | US West (Phoenix, AZ) |
US Gov FedRAMP High Regions | |
us-langley-1 | US Gov East (Ashburn, VA) |
us-luke-1 | US Gov West (Phoenix, AZ) |
US Gov DISA IL5 Regions | |
us-gov-ashburn-1 | US DoD East (Ashburn, VA) |
us-gov-chicago-1 | US DoD North (Chicago, IL) |
us-gov-phoenix-1 | US DoD West (Phoenix, AZ) |
Value | Region |
eu-central-1 | Europe (Amsterdam) |
us-east-1 (Default) | US East (Ashburn, VA) |
us-east-2 | US East (Manassas, VA) |
us-west-1 | US West (Hillsboro, OR) |
The Id of the project where your Google Cloud Storage instance resides.
The Id of the project where your Google Cloud Storage instance resides. You can find this value by going to Google Cloud Console and clicking the project name at the top left screen. The ProjectId is displayed on the Id column of the matching project.
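A minimal Google Cloud Storage connection string using this property might look like the following; the bucket, folder, and project Id are placeholders:
URI=gs://bucket/folder1; AuthScheme=OAuth; ProjectId=test;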
The Oracle Cloud Object Storage namespace to use.
The Oracle Cloud Object Storage namespace to use. This setting must be set to the Oracle Cloud Object Storage namespace associated with the Oracle Cloud account before any requests can be made. Refer to the Understanding Object Storage Namespaces page of the Oracle Cloud documentation for instructions on how to find your account's Object Storage namespace.
The URL of a cloud storage service provider.
This connection property is used to specify:
If the domain for this option ends in -my (for example, https://bigcorp-my.sharepoint.com) then you may need to use the onedrive:// scheme instead of the sp:// or sprest:// scheme.
This setting specifies the threshold, in bytes, above which the provider will choose to perform a multipart upload rather than uploading everything in one request.
This setting specifies the threshold, in bytes, above which the Sync App will choose to perform a multipart upload rather than uploading everything in one request.
If true (default), buckets will be referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the Sync App will use the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for S3-based custom services when the CustomURL is specified.
If true (default), buckets will be referenced in the request using the hosted-style request: http://yourbucket.s3.amazonaws.com/yourobject. If set to false, the Sync App will use the path-style request: http://s3.amazonaws.com/yourbucket/yourobject. Note that this property is set to false for S3-based custom services when the CustomURL is specified.
This section provides a complete list of the AWS Authentication properties you can configure in the connection string for this provider.
Property | Description |
AWSAccessKey | Your AWS account access key. This value is accessible from your AWS security credentials page. |
AWSSecretKey | Your AWS account secret key. This value is accessible from your AWS security credentials page. |
AWSRoleARN | The Amazon Resource Name of the role to use when authenticating. |
AWSPrincipalARN | The ARN of the SAML Identity provider in your AWS account. |
AWSRegion | The hosting region for your Amazon Web Services. |
AWSCredentialsFile | The path to the AWS Credentials File to be used for authentication. |
AWSCredentialsFileProfile | The name of the profile to be used from the supplied AWSCredentialsFile. |
AWSSessionToken | Your AWS session token. |
MFASerialNumber | The serial number of the MFA device if one is being used. |
MFAToken | The temporary token available from your MFA device. |
ServerSideEncryption | When activated, file uploads into Amazon S3 buckets will be server-side encrypted. |
Your AWS account access key. This value is accessible from your AWS security credentials page.
Your AWS account access key. This value is accessible from your AWS security credentials page:
Your AWS account secret key. This value is accessible from your AWS security credentials page.
Your AWS account secret key. This value is accessible from your AWS security credentials page:
The Amazon Resource Name of the role to use when authenticating.
When authenticating outside of AWS, it is common to use a Role for authentication instead of your direct AWS account credentials. Entering the AWSRoleARN will cause the CData Sync App to perform a role based authentication instead of using the AWSAccessKey and AWSSecretKey directly. The AWSAccessKey and AWSSecretKey must still be specified to perform this authentication. You cannot use the credentials of an AWS root user when setting RoleARN. The AWSAccessKey and AWSSecretKey must be those of an IAM user.
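A sketch of a role-based connection follows. The IAM user keys, role ARN, bucket, and region are placeholder values:
URI=s3://bucket1/folder1; AWSAccessKey=myIamUserKey; AWSSecretKey=myIamUserSecret; AWSRoleARN=arn:aws:iam::123456789012:role/myRole; AWSRegion=OHIO;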
The ARN of the SAML Identity provider in your AWS account.
The ARN of the SAML Identity provider in your AWS account.
The hosting region for your Amazon Web Services.
The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, JAKARTA, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, STOCKHOLM, ZURICH, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, and GOVCLOUDWEST.
The path to the AWS Credentials File to be used for authentication.
The path to the AWS Credentials File to be used for authentication. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information.
The name of the profile to be used from the supplied AWSCredentialsFile.
The name of the profile to be used from the supplied AWSCredentialsFile. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information.
Your AWS session token.
Your AWS session token. This value can be retrieved in different ways. See this link for more info.
The serial number of the MFA device if one is being used.
You can find the device for an IAM user by going to the AWS Management Console and viewing the user's security credentials. For virtual devices, this is actually an Amazon Resource Name (such as arn:aws:iam::123456789012:mfa/user).
The temporary token available from your MFA device.
If MFA is required, this value will be used along with the MFASerialNumber to retrieve temporary credentials to login. The temporary credentials available from AWS will only last up to 1 hour by default (see TemporaryTokenDuration). Once the time is up, the connection must be updated to specify a new MFA token so that new credentials may be obtained.
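As an illustration, an MFA-based connection could combine the account keys with the device serial number and the current token. All values below are placeholders:
URI=s3://bucket1/folder1; AWSAccessKey=token1; AWSSecretKey=secret1; MFASerialNumber=arn:aws:iam::123456789012:mfa/user; MFAToken=123456; AWSRegion=OHIO;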
When activated, file uploads into Amazon S3 buckets will be server-side encrypted.
Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
This section provides a complete list of the Azure Authentication properties you can configure in the connection string for this provider.
Property | Description |
AzureStorageAccount | The name of your Azure storage account. |
AzureAccessKey | The storage key associated with your CSV account. |
AzureSharedAccessSignature | A shared access key signature that may be used for authentication. |
AzureTenant | The Microsoft Online tenant being used to access data. If not specified, your default tenant will be used. |
AzureEnvironment | The Azure Environment to use when establishing a connection. |
The name of your Azure storage account.
The name of your Azure storage account.
The storage key associated with your CSV account.
The storage key associated with your CSV account. You can retrieve it as follows:
The Microsoft Online tenant being used to access data. If not specified, your default tenant will be used.
The Microsoft Online tenant being used to access data. For instance, contoso.onmicrosoft.com. Alternatively, specify the tenant Id. This value is the directory Id in the Azure Portal > Azure Active Directory > Properties.
Typically it is not necessary to specify the Tenant. This can be automatically determined by Microsoft when using the OAuthGrantType set to CODE (default). However, it may fail in the case that the user belongs to multiple tenants. For instance, if an Admin of domain A invites a user of domain B to be a guest user. The user will now belong to both tenants. It is a good practice to specify the Tenant, although in general things should normally work without having to specify it.
The AzureTenant is required when setting OAuthGrantType to CLIENT. When using client credentials, there is no user context. The credentials are taken from the context of the app itself. While Microsoft still allows client credentials to be obtained without specifying which Tenant, it has a much lower probability of picking the specific tenant you want to work with. For this reason, we require AzureTenant to be explicitly stated for all client credentials connections to ensure you get credentials that are applicable for the domain you intend to connect to.
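For example, a client-credentials connection that states the tenant explicitly might look like the following sketch; the container, storage account, and tenant name are placeholders:
URI=azureblob://mycontainer/myblob/; AzureStorageAccount=myAccount; AuthScheme=OAuth; OAuthGrantType=CLIENT; AzureTenant=contoso.onmicrosoft.com;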
The Azure Environment to use when establishing a connection.
In most cases, leaving the environment set to global will work. However, if your Azure Account has been added to a different environment, the AzureEnvironment may be used to specify which environment. The available values are GLOBAL, CHINA, USGOVT, USGOVTDOD.
This section provides a complete list of the SSO properties you can configure in the connection string for this provider.
Property | Description |
SSOLoginURL | The identity provider's login URL. |
SSOProperties | Additional properties required to connect to the identity provider in a semicolon-separated list. |
The identity provider's login URL.
The identity provider's login URL.
Additional properties required to connect to the identity provider in a semicolon-separated list.
Additional properties required to connect to the identity provider in a semicolon-separated list. SSOProperties is used in conjunction with the SSOLoginURL.
SSO configuration is discussed further in .
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
Property | Description |
OAuthVersion | The version of OAuth being used. |
OAuthClientId | The client Id assigned when you register your application with an OAuth authorization server. |
OAuthClientSecret | The client secret assigned when you register your application with an OAuth authorization server. |
Scope | Specify scope to obtain the initial access and refresh token. |
OAuthGrantType | The grant type for the OAuth flow. |
OAuthPasswordGrantMode | How to pass the Client Id and Secret when OAuthGrantType is set to Password. |
OAuthIncludeCallbackURL | Whether to include the callback URL in an access token request. |
OAuthAuthorizationURL | The authorization URL for the OAuth service. |
OAuthAccessTokenURL | The URL to retrieve the OAuth access token from. |
OAuthRefreshTokenURL | The URL to refresh the OAuth token from. |
OAuthRequestTokenURL | The URL the service provides to retrieve request tokens from. This is required in OAuth 1.0. |
AuthToken | The authentication token used to request and obtain the OAuth Access Token. |
AuthKey | The authentication secret used to request and obtain the OAuth Access Token. |
OAuthParams | A comma-separated list of other parameters to submit in the request for the OAuth access token in the format paramname=value. |
The version of OAuth being used.
The version of OAuth being used. The following options are available: 1.0, 2.0.
The client Id assigned when you register your application with an OAuth authorization server.
As part of registering an OAuth application, you will receive the OAuthClientId value, sometimes also called a consumer key, and a client secret, the OAuthClientSecret.
The client secret assigned when you register your application with an OAuth authorization server.
As part of registering an OAuth application, you will receive the OAuthClientId, also called a consumer key. You will also receive a client secret, also called a consumer secret. Set the client secret in the OAuthClientSecret property.
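For instance, custom OAuth application credentials can be supplied in the connection string, as in the Box example from the URI section (all values are placeholders):
URI=box://folder1; OAuthClientId=oauthclientid1; OAuthClientSecret=oauthcliensecret1; CallbackUrl=http://localhost:12345;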
Specify scope to obtain the initial access and refresh token.
Specify scope to obtain the initial access and refresh token.
The grant type for the OAuth flow.
The following options are available: CODE, CLIENT, PASSWORD.
How to pass the Client Id and Secret when OAuthGrantType is set to Password.
The OAuth RFC specifies two methods of passing the OAuthClientId and OAuthClientSecret when using the Password OAuthGrantType. The most commonly used is to pass them via post data to the service. However, some services may require that you pass them via the Authorization header for use in BASIC authorization. Change this property to Basic to submit the parameters as part of the Authorization header instead of the post data.
Whether to include the callback URL in an access token request.
This defaults to true since standards-compliant OAuth services will ignore the redirect_uri parameter for grant types like CLIENT or PASSWORD that do not require it.
This option should only be disabled for OAuth services that report errors when redirect_uri is included.
The authorization URL for the OAuth service.
The authorization URL for the OAuth service. At this URL, the user logs into the server and grants permissions to the application. In OAuth 1.0, if permissions are granted, the request token is authorized.
The URL to retrieve the OAuth access token from.
The URL to retrieve the OAuth access token from. In OAuth 1.0, the authorized request token is exchanged for the access token at this URL.
The URL to refresh the OAuth token from.
The URL to refresh the OAuth token from. In OAuth 2.0, this URL is where the refresh token is exchanged for a new access token when the old access token expires.
The URL the service provides to retrieve request tokens from. This is required in OAuth 1.0.
The URL the service provides to retrieve request tokens from. This is required in OAuth 1.0. In OAuth 1.0, this is the URL where the app makes a request for the request token.
The authentication token used to request and obtain the OAuth Access Token.
This property is required only when performing headless authentication in OAuth 1.0. It can be obtained from the GetOAuthAuthorizationUrl stored procedure.
It can be supplied alongside the AuthKey in the GetOAuthAccessToken stored procedure to obtain the OAuthAccessToken.
The authentication secret used to request and obtain the OAuth Access Token.
This property is required only when performing headless authentication in OAuth 1.0. It can be obtained from the GetOAuthAuthorizationUrl stored procedure.
It can be supplied alongside the AuthToken in the GetOAuthAccessToken stored procedure to obtain the OAuthAccessToken.
A comma-separated list of other parameters to submit in the request for the OAuth access token in the format paramname=value.
A comma-separated list of other parameters to submit in the request for the OAuth access token in the format paramname=value.
This section provides a complete list of the JWT OAuth properties you can configure in the connection string for this provider.
Property | Description |
OAuthJWTCert | The JWT Certificate store. |
OAuthJWTCertType | The type of key store containing the JWT Certificate. |
OAuthJWTCertPassword | The password for the OAuth JWT certificate. |
OAuthJWTCertSubject | The subject of the OAuth JWT certificate. |
OAuthJWTIssuer | The issuer of the JSON Web Token. |
OAuthJWTSubject | The user subject for which the application is requesting delegated access. |
The JWT Certificate store.
The name of the certificate store for the client certificate.
The OAuthJWTCertType field specifies the type of the certificate store specified by OAuthJWTCert. If the store is password protected, specify the password in OAuthJWTCertPassword.
OAuthJWTCert is used in conjunction with the OAuthJWTCertSubject field in order to specify client certificates. If OAuthJWTCert has a value, and OAuthJWTCertSubject is set, a search for a certificate is initiated. Please refer to the OAuthJWTCertSubject field for details.
Designations of certificate stores are platform-dependent.
The following are designations of the most common User and Machine certificate stores in Windows:
MY | A certificate store holding personal certificates with their associated private keys. |
CA | Certifying authority certificates. |
ROOT | Root certificates. |
SPC | Software publisher certificates. |
In Java, the certificate store normally is a file containing certificates and optional private keys.
When the certificate store type is PFXFile, this property must be set to the name of the file. When the type is PFXBlob, the property must be set to the binary contents of a PFX file (i.e. PKCS12 certificate store).
The type of key store containing the JWT Certificate.
This property can take one of the following values:
USER | For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note: This store type is not available in Java. |
MACHINE | For Windows, this specifies that the certificate store is a machine store. Note: this store type is not available in Java. |
PFXFILE | The certificate store is the name of a PFX (PKCS12) file containing certificates. |
PFXBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. |
JKSFILE | The certificate store is the name of a Java key store (JKS) file containing certificates. Note: this store type is only available in Java. |
JKSBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in Java key store (JKS) format. Note: this store type is only available in Java. |
PEMKEY_FILE | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
PEMKEY_BLOB | The certificate store is a string (base64-encoded) that contains a private key and an optional certificate. |
PUBLIC_KEY_FILE | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
PUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. |
SSHPUBLIC_KEY_FILE | The certificate store is the name of a file that contains an SSH-style public key. |
SSHPUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains an SSH-style public key. |
P7BFILE | The certificate store is the name of a PKCS7 file containing certificates. |
PPKFILE | The certificate store is the name of a file that contains a PPK (PuTTY Private Key). |
XMLFILE | The certificate store is the name of a file that contains a certificate in XML format. |
XMLBLOB | The certificate store is a string that contains a certificate in XML format. |
GOOGLEJSON | The certificate store is the name of a JSON file containing the service account information. Only valid when connecting to a Google service. |
GOOGLEJSONBLOB | The certificate store is a string that contains the service account JSON. Only valid when connecting to a Google service. |
The password for the OAuth JWT certificate.
If the certificate store is of a type that requires a password, this property is used to specify that password in order to open the certificate store.
This is not required when using the GOOGLEJSON OAuthJWTCertType. Google JSON keys are not encrypted.
The subject of the OAuth JWT certificate.
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma separated list of distinguished name fields and values. For instance "CN=www.server.com, OU=test, C=US, [email protected]". Common fields and their meanings are displayed below.
Field | Meaning |
CN | Common Name. This is commonly a host name like www.server.com. |
O | Organization |
OU | Organizational Unit |
L | Locality |
S | State |
C | Country |
E | Email Address |
If a field value contains a comma it must be quoted.
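For example, a subject whose Organization value contains a comma would quote that value, as in this hypothetical subject string:
CN=www.server.com, O="Acme, Inc.", C=US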
The issuer of the JSON Web Token.
The issuer of the JSON Web Token. This is typically either the Client Id or Email Address of the OAuth Application.
This is not required when using the GOOGLEJSON OAuthJWTCertType. Google JSON keys contain a copy of the issuer account.
The user subject for which the application is requesting delegated access.
The user subject for which the application is requesting delegated access. Typically, the user account name or email address.
This section provides a complete list of the Kerberos properties you can configure in the connection string for this provider.
Property | Description |
KerberosKDC | The Kerberos Key Distribution Center (KDC) service used to authenticate the user. |
KerberosRealm | The Kerberos Realm used to authenticate the user. |
KerberosSPN | The service principal name (SPN) for the Kerberos Domain Controller. |
KerberosKeytabFile | The Keytab file containing your pairs of Kerberos principals and encrypted keys. |
KerberosServiceRealm | The Kerberos realm of the service. |
KerberosServiceKDC | The Kerberos KDC of the service. |
KerberosTicketCache | The full file path to an MIT Kerberos credential cache file. |
The Kerberos Key Distribution Center (KDC) service used to authenticate the user.
The Kerberos properties are used when using SPNEGO or Windows Authentication. The Sync App will request session tickets and temporary session keys from the Kerberos KDC service. The Kerberos KDC service is conventionally colocated with the domain controller.
If Kerberos KDC is not specified, the Sync App will attempt to detect these properties automatically from the following locations:
The Kerberos Realm used to authenticate the user.
The Kerberos properties are used when using SPNEGO or Windows Authentication. The Kerberos Realm is used to authenticate the user with the Kerberos Key Distribution Service (KDC). The Kerberos Realm can be configured by an administrator to be any string, but conventionally it is based on the domain name.
If Kerberos Realm is not specified, the Sync App will attempt to detect these properties automatically from the following locations:
The service principal name (SPN) for the Kerberos Domain Controller.
If the SPN on the Kerberos Domain Controller is not the same as the URL that you are authenticating to, use this property to set the SPN.
The Keytab file containing your pairs of Kerberos principals and encrypted keys.
The Keytab file containing your pairs of Kerberos principals and encrypted keys.
The Kerberos realm of the service.
The KerberosServiceRealm is used to specify the service Kerberos realm when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication and this property is not required.
This property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
The Kerberos KDC of the service.
The KerberosServiceKDC is used to specify the service Kerberos KDC when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication and this property is not required.
This property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
The full file path to an MIT Kerberos credential cache file.
This property can be set if you wish to use a credential cache file that was created using the MIT Kerberos Ticket Manager or kinit command.
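As a sketch, and assuming your AuthScheme is configured for Kerberos-based authentication, the KDC, realm, and SPN could be supplied together in the connection string. All values below are placeholders:
KerberosKDC=kdc.example.com; KerberosRealm=EXAMPLE.COM; KerberosSPN=HTTP/server.example.com;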
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
Property | Description |
SSLClientCert | The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL). |
SSLClientCertType | The type of key store containing the TLS/SSL client certificate. |
SSLClientCertPassword | The password for the TLS/SSL client certificate. |
SSLClientCertSubject | The subject of the TLS/SSL client certificate. |
SSLMode | The authentication mechanism to be used when connecting to the FTP or FTPS server. |
SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL).
The name of the certificate store for the client certificate.
The SSLClientCertType field specifies the type of the certificate store specified by SSLClientCert. If the store is password protected, specify the password in SSLClientCertPassword.
SSLClientCert is used in conjunction with the SSLClientCertSubject field in order to specify client certificates. If SSLClientCert has a value, and SSLClientCertSubject is set, a search for a certificate is initiated. See SSLClientCertSubject for more information.
Designations of certificate stores are platform-dependent.
The following are designations of the most common User and Machine certificate stores in Windows:
MY | A certificate store holding personal certificates with their associated private keys. |
CA | Certifying authority certificates. |
ROOT | Root certificates. |
SPC | Software publisher certificates. |
In Java, the certificate store normally is a file containing certificates and optional private keys.
When the certificate store type is PFXFile, this property must be set to the name of the file. When the type is PFXBlob, the property must be set to the binary contents of a PFX file (for example, PKCS12 certificate store).
The type of key store containing the TLS/SSL client certificate.
This property can take one of the following values:
USER - default | For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java. |
MACHINE | For Windows, this specifies that the certificate store is a machine store. Note that this store type is not available in Java. |
PFXFILE | The certificate store is the name of a PFX (PKCS12) file containing certificates. |
PFXBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. |
JKSFILE | The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java. |
JKSBLOB | The certificate store is a string (base-64-encoded) representing a certificate store in JKS format. Note that this store type is only available in Java. |
PEMKEY_FILE | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
PEMKEY_BLOB | The certificate store is a string (base64-encoded) that contains a private key and an optional certificate. |
PUBLIC_KEY_FILE | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
PUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. |
SSHPUBLIC_KEY_FILE | The certificate store is the name of a file that contains an SSH-style public key. |
SSHPUBLIC_KEY_BLOB | The certificate store is a string (base-64-encoded) that contains an SSH-style public key. |
P7BFILE | The certificate store is the name of a PKCS7 file containing certificates. |
PPKFILE | The certificate store is the name of a file that contains a PuTTY Private Key (PPK). |
XMLFILE | The certificate store is the name of a file that contains a certificate in XML format. |
XMLBLOB | The certificate store is a string that contains a certificate in XML format. |
The password for the TLS/SSL client certificate.
If the certificate store is of a type that requires a password, this property is used to specify that password to open the certificate store.
The subject of the TLS/SSL client certificate.
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property. If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma separated list of distinguished name fields and values. For example, "CN=www.server.com, OU=test, C=US, [email protected]". The common fields and their meanings are shown below.
Field | Meaning |
CN | Common Name. This is commonly a host name like www.server.com. |
O | Organization |
OU | Organizational Unit |
L | Locality |
S | State |
C | Country |
E | Email Address |
If a field value contains a comma, it must be quoted.
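For instance, to simply select the first certificate in the store, the special value noted above can be used:
SSLClientCertSubject=*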
The authentication mechanism to be used when connecting to the FTP or FTPS server.
If SSLMode is set to NONE, default plaintext authentication is used to log in to the server. If SSLMode is set to IMPLICIT, the SSL negotiation will start immediately after the connection is established. If SSLMode is set to EXPLICIT, the Sync App will first connect in plaintext, and then explicitly start SSL negotiation through a protocol command such as STARTTLS. If SSLMode is set to AUTOMATIC, and the remote port is set to the standard plaintext port of the protocol (where applicable), the Sync App will behave the same as if SSLMode were set to EXPLICIT; in all other cases, SSL negotiation will be IMPLICIT.
The certificate to be accepted from the server when connecting using TLS/SSL.
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
Description | Example |
A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
A path to a local file containing the certificate | C:\cert.cer |
The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
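For example, pinning the server certificate by its SHA1 thumbprint (using the sample thumbprint from the table above) would look like:
SSLServerCert=34a929226ae0819f2ec14b4a3d904f801cbb150d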
This section provides a complete list of the SSH properties you can configure in the connection string for this provider.
Property | Description |
SSHAuthMode | The authentication method to be used to log on to an SFTP server. |
SSHClientCert | A private key to be used for authenticating the user. |
SSHClientCertPassword | The password of the SSHClientCert key if it has one. |
SSHClientCertSubject | The subject of the SSH client certificate. |
SSHClientCertType | The type of SSHClientCert private key. |
SSHUser | The SSH user. |
SSHPassword | The SSH password. |
The authentication method to be used to log on to an SFTP server.
A private key to be used for authenticating the user.
SSHClientCert must contain a valid private key in order to use public key authentication. A public key is optional; if one is not included, the Sync App generates it from the private key. The Sync App sends the public key to the server, and the connection is allowed if the user has authorized the public key.
The SSHClientCertType field specifies the type of the key store specified by SSHClientCert. If the store is password protected, specify the password in SSHClientCertPassword.
Some types of key stores are containers which may include multiple keys. By default the Sync App will select the first key in the store, but you can specify a specific key using SSHClientCertSubject.
The password of the SSHClientCert key if it has one.
This property is only used when authenticating to SFTP servers with SSHAuthMode set to PublicKey and SSHClientCert set to a private key.
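A sketch of an SFTP connection using public key authentication follows; the key path, password, and server details are placeholders:
URI=sftp://127.0.0.1:22/remotePath; SSHUser=user1; SSHAuthMode=PublicKey; SSHClientCert=C:\sshkey.pem; SSHClientCertType=PEMKEY_FILE; SSHClientCertPassword=myKeyPassword;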
The subject of the SSH client certificate.
When loading a certificate the subject is used to locate the certificate in the store.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks the first certificate in the certificate store.
The certificate subject is a comma separated list of distinguished name fields and values. For instance "CN=www.server.com, OU=test, C=US, [email protected]". Common fields and their meanings are displayed below.
Field | Meaning |
CN | Common Name. This is commonly a host name like www.server.com. |
O | Organization |
OU | Organizational Unit |
L | Locality |
S | State |
C | Country |
E | Email Address |
If a field value contains a comma it must be quoted.
The type of SSHClientCert private key.
This property can take one of the following values:
Types | Description | Allowed Blob Values |
MACHINE/USER | | Blob values are not supported. |
JKSFILE/JKSBLOB | A Java key store (JKS) file. | base64-only |
PFXFILE/PFXBLOB | A PKCS12-format (.pfx) file. Must contain both a certificate and a private key. | base64-only |
PEMKEY_FILE/PEMKEY_BLOB | A PEM-format file. Must contain an RSA, DSA, or OPENSSH private key. Can optionally contain a certificate matching the private key. | base64 or plain text. Newlines may be replaced with spaces when providing the blob as text. |
PPKFILE/PPKBLOB | A PuTTY-format private key created using the puttygen tool. | base64-only |
XMLFILE/XMLBLOB | An XML key in the format generated by the .NET RSA class: RSA.ToXmlString(true). | base64 or plain text. |
The SSH user.
The SSH user.
The SSH password.
The SSH password.
This section provides a complete list of the Firewall properties you can configure in the connection string for this provider.
Property | Description |
FirewallType | The protocol used by a proxy-based firewall. |
FirewallServer | The name or IP address of a proxy-based firewall. |
FirewallPort | The TCP port for a proxy-based firewall. |
FirewallUser | The user name to use to authenticate with a proxy-based firewall. |
FirewallPassword | A password used to authenticate to a proxy-based firewall. |
The protocol used by a proxy-based firewall.
This property specifies the protocol that the Sync App will use to tunnel traffic through the FirewallServer proxy. Note that by default, the Sync App connects to the system proxy; to disable this behavior and connect to one of the following proxy types, set ProxyAutoDetect to false.
Type | Default Port | Description |
TUNNEL | 80 | When this is set, the Sync App opens a connection to CSV and traffic flows back and forth through the proxy. |
SOCKS4 | 1080 | When this is set, the Sync App sends data through the SOCKS 4 proxy specified by FirewallServer and FirewallPort and passes the FirewallUser value to the proxy, which determines if the connection request should be granted. |
SOCKS5 | 1080 | When this is set, the Sync App sends data through the SOCKS 5 proxy specified by FirewallServer and FirewallPort. If your proxy requires authentication, set FirewallUser and FirewallPassword to credentials the proxy recognizes. |
To connect to HTTP proxies, use ProxyServer and ProxyPort. To authenticate to HTTP proxies, use ProxyAuthScheme, ProxyUser, and ProxyPassword.
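For example, the following sketch routes traffic through an authenticated SOCKS 5 proxy (the server address, port, and credentials are placeholders):
ProxyAutoDetect=false; FirewallType=SOCKS5; FirewallServer=192.168.0.10; FirewallPort=1080; FirewallUser=socksuser; FirewallPassword=sockspassword;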
The name or IP address of a proxy-based firewall.
This property specifies the IP address, DNS name, or host name of a proxy allowing traversal of a firewall. The protocol is specified by FirewallType: Use FirewallServer with this property to connect through SOCKS or do tunneling. Use ProxyServer to connect to an HTTP proxy.
Note that the Sync App uses the system proxy by default. To use a different proxy, set ProxyAutoDetect to false.
The TCP port for a proxy-based firewall.
This specifies the TCP port for a proxy allowing traversal of a firewall. Use FirewallServer to specify the name or IP address. Specify the protocol with FirewallType.
The user name to use to authenticate with a proxy-based firewall.
The FirewallUser and FirewallPassword properties are used to authenticate against the proxy specified in FirewallServer and FirewallPort, following the authentication method specified in FirewallType.
A password used to authenticate to a proxy-based firewall.
This property is passed to the proxy specified by FirewallServer and FirewallPort, following the authentication method specified by FirewallType.
This section provides a complete list of the Proxy properties you can configure in the connection string for this provider.
Property | Description |
ProxyAutoDetect | This indicates whether to use the system proxy settings or not. This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings. |
ProxyServer | The hostname or IP address of a proxy to route HTTP traffic through. |
ProxyPort | The TCP port the ProxyServer proxy is running on. |
ProxyAuthScheme | The authentication type to use to authenticate to the ProxyServer proxy. |
ProxyUser | A user name to be used to authenticate to the ProxyServer proxy. |
ProxyPassword | A password to be used to authenticate to the ProxyServer proxy. |
ProxySSLType | The SSL type to use when connecting to the ProxyServer proxy. |
ProxyExceptions | A semicolon separated list of destination hostnames or IPs that are exempt from connecting through the ProxyServer. |
This indicates whether to use the system proxy settings or not. This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings.
This takes precedence over other proxy settings, so you'll need to set ProxyAutoDetect to FALSE in order to use custom proxy settings.
To connect to an HTTP proxy, see ProxyServer. For other proxies, such as SOCKS or tunneling, see FirewallType.
The hostname or IP address of a proxy to route HTTP traffic through.
The hostname or IP address of a proxy to route HTTP traffic through. The Sync App can use the HTTP, Windows (NTLM), or Kerberos authentication types to authenticate to an HTTP proxy.
If you need to connect through a SOCKS proxy or tunnel the connection, see FirewallType.
By default, the Sync App uses the system proxy. If you need to use another proxy, set ProxyAutoDetect to false.
The TCP port the ProxyServer proxy is running on.
The port the HTTP proxy is running on that you want to redirect HTTP traffic through. Specify the HTTP proxy in ProxyServer. For other proxy types, see FirewallType.
The authentication type to use to authenticate to the ProxyServer proxy.
This value specifies the authentication type to use to authenticate to the HTTP proxy specified by ProxyServer and ProxyPort.
Note that the Sync App will use the system proxy settings by default, without further configuration needed; if you want to connect to another proxy, you will need to set ProxyAutoDetect to false, in addition to ProxyServer and ProxyPort. To authenticate, set ProxyAuthScheme and set ProxyUser and ProxyPassword, if needed.
The authentication type can be one of the following:
If you need to use another authentication type, such as SOCKS 5 authentication, see FirewallType.
A user name to be used to authenticate to the ProxyServer proxy.
The ProxyUser and ProxyPassword options are used to connect and authenticate against the HTTP proxy specified in ProxyServer.
You can select one of the available authentication types in ProxyAuthScheme. If you are using HTTP authentication, set this to the user name of a user recognized by the HTTP proxy. If you are using Windows or Kerberos authentication, set this property to a user name in one of the following formats:
user@domain
domain\user
A password to be used to authenticate to the ProxyServer proxy.
This property is used to authenticate to an HTTP proxy server that supports NTLM (Windows), Kerberos, or HTTP authentication. To specify the HTTP proxy, you can set ProxyServer and ProxyPort. To specify the authentication type, set ProxyAuthScheme.
If you are using HTTP authentication, additionally set ProxyUser and ProxyPassword to a user name and password recognized by the HTTP proxy.
If you are using NTLM authentication, set ProxyUser and ProxyPassword to your Windows user name and password. You may also need these to complete Kerberos authentication.
For SOCKS 5 authentication or tunneling, see FirewallType.
By default, the Sync App uses the system proxy. If you want to connect to another proxy, set ProxyAutoDetect to false.
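For example, the following sketch authenticates to an HTTP proxy with a user name and password (the server, port, and credentials are placeholders, and BASIC is shown as one possible ProxyAuthScheme value):
ProxyAutoDetect=false; ProxyServer=proxy.example.com; ProxyPort=8080; ProxyAuthScheme=BASIC; ProxyUser=proxyuser; ProxyPassword=proxypassword;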
The SSL type to use when connecting to the ProxyServer proxy.
This property determines when to use SSL for the connection to an HTTP proxy specified by ProxyServer. This value can be AUTO, ALWAYS, NEVER, or TUNNEL. The applicable values are the following:
AUTO | Default setting. If the URL is an HTTPS URL, the Sync App will use the TUNNEL option. If the URL is an HTTP URL, the Sync App will use the NEVER option. |
ALWAYS | The connection is always SSL enabled. |
NEVER | The connection is not SSL enabled. |
TUNNEL | The connection is through a tunneling proxy. The proxy server opens a connection to the remote host and traffic flows back and forth through the proxy. |
A semicolon separated list of destination hostnames or IPs that are exempt from connecting through the ProxyServer.
The ProxyServer is used for all addresses, except for addresses defined in this property. Use semicolons to separate entries.
Note that the Sync App uses the system proxy settings by default, without further configuration needed; if you want to explicitly configure proxy exceptions for this connection, you need to set ProxyAutoDetect = false, and configure ProxyServer and ProxyPort. To authenticate, set ProxyAuthScheme and set ProxyUser and ProxyPassword, if needed.
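For example, the following sketch bypasses the proxy for local addresses (the proxy details and exception list are placeholders; the exception list is quoted because it is itself semicolon separated):
ProxyAutoDetect=false; ProxyServer=proxy.example.com; ProxyPort=8080; ProxyExceptions='localhost;127.0.0.1';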
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
Property | Description |
LogModules | Core modules to be included in the log file. |
Core modules to be included in the log file.
Only the modules specified (separated by ';') will be included in the log file. By default all modules are included.
See the Logging page for an overview.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
Property | Description |
Location | A path to the directory that contains the schema files defining tables, views, and stored procedures. |
BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
Tables | This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC. |
Views | Restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC. |
SchemaIniLocation | A path to the directory that contains the schema.ini file. |
AggregateFiles | When set to true, the provider will aggregate into a single table called AggregatedFiles all the files located in the URI directory which have the same schema. |
MetadataDiscoveryURI | Used together with AggregateFiles, this property specifies a specific file to read the schema of the AggregatedFiles result set. |
TypeDetectionScheme | Determines how the data types of columns are detected. |
ColumnCount | The number of columns to detect when dynamically determining columns for the table. |
RowScanDepth | The number of rows to scan when dynamically determining columns for the table. |
A path to the directory that contains the schema files defining tables, views, and stored procedures.
The path to a directory which contains the schema files for the Sync App (.rsd files for tables and views, .rsb files for stored procedures). The folder location can be a relative path from the location of the executable. The Location property is only needed if you want to customize definitions (for example, change a column name, ignore a column, and so on) or extend the data model with new tables, views, or stored procedures.
If left unspecified, the default location is "%APPDATA%\\CData\\CSV Data Provider\\Schema" with %APPDATA% being set to the user's configuration directory:
This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
Listing the schemas from databases can be expensive. Providing a list of schemas in the connection string improves the performance.
This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC.
Listing the tables from some databases can be expensive. Providing a list of tables in the connection string improves the performance of the Sync App.
This property can also be used as an alternative to automatically listing tables if you already know which ones you want to work with and there would otherwise be too many to work with.
Specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.
Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the table in this property, as in the last example here, to avoid ambiguity between tables that exist in multiple catalogs or schemas.
Restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC.
Listing the views from some databases can be expensive. Providing a list of views in the connection string improves the performance of the Sync App.
This property can also be used as an alternative to automatically listing views if you already know which ones you want to work with and there would otherwise be too many to work with.
Specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.
Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the view in this property, as in the last example here, to avoid ambiguity between views that exist in multiple catalogs or schemas.
A path to the directory that contains the schema.ini file.
A path to the directory that contains the schema.ini file. Might additionally be used to specify a different name for the schema.ini file.
When set to true, the provider will aggregate into a single table called AggregatedFiles all the files located in the URI directory which have the same schema.
By default, the first file in the folder is used to define the schema; however, MetadataDiscoveryURI can be specified to use a different file instead.
Take for example the following CSV contents.
File 1
ItemID,Name,NumInStock
1,Peanuts - Salted,76
2,Peanuts - Unsalted,43
3,Raisins,26
File 2
ItemID,Name,NumInStock
4,Pretzels - Original,55
5,Pretzels - Chocolate,35
6,Toffee,44
The resulting aggregate table is shown below. Note that only the columns present in the defined schema are used in the aggregate.
AggregatedFiles
ItemID,Name,NumInStock
1,Peanuts - Salted,76
2,Peanuts - Unsalted,43
3,Raisins,26
4,Pretzels - Original,55
5,Pretzels - Chocolate,35
6,Toffee,44
Used together with AggregateFiles, this property specifies a specific file to read the schema of the AggregatedFiles result set.
Used together with AggregateFiles, this property specifies a specific file to read the schema of the AggregatedFiles result set.
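For example, the following connection string aggregates the files in a folder while reading the schema from one specific file (the folder and file paths are placeholders):
URI=C:\folder1; AggregateFiles=True; MetadataDiscoveryURI=C:\folder1\File2.csv;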
Determines how the data types of columns are detected.
None | Setting TypeDetectionScheme to None will return all columns as the string type. |
RowScan | Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. |
ColumnCount | Setting TypeDetectionScheme to ColumnCount will control the number of columns to detect and will return all columns as the string type. The ColumnCount property determines the number of columns. |
The number of columns to detect when dynamically determining columns for the table.
The number of columns to detect when dynamically determining columns for the table. Columns are dynamically determined when a schema (RSD) file is not available for the table, such as when using GenerateSchemaFiles. To specify the number of columns, set the ColumnCount connection property.
The number of rows to scan when dynamically determining columns for the table.
The number of rows to scan when dynamically determining columns for the table. Columns are dynamically determined when a schema (RSD) file is not available for the table, such as when using GenerateSchemaFiles.
Higher values will result in a longer request, but will be more accurate.
Setting this value to 0 (zero) will parse the entire CSV document.
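For example, the following connection string detects column types by scanning the first 100 rows of each file (the folder path is a placeholder):
URI=C:\folder1; TypeDetectionScheme=RowScan; RowScanDepth=100;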
This section provides a complete list of the Data Formatting properties you can configure in the connection string for this provider.
Property | Description |
IncludeColumnHeaders | Whether to get column names from the first line of the specified files. |
FMT | The format to be used to parse all text files. |
ExtendedProperties | The Microsoft Jet OLE DB 4.0-compatible extended properties for text files. |
RowDelimiter | The character which will be used to detect the end of a CSV row. |
SkipTop | Skips the number of rows specified, starting from the top. |
IgnoreBlankRows | Indicates whether to skip the empty rows. |
IncludeEmptyHeaders | Whether to include columns that have an empty value in the column header. |
SkipHeaderComments | If set to true, skips rows at the top of the file beginning with #. |
Charset | Specifies the session character set for encoding and decoding character data transferred to and from the CSV file. The default value is UTF-8. |
QuoteEscapeCharacter | Determines the character which will be used to escape quotes. |
QuoteCharacter | Determines the character which will be used to quote values in the CSV file. |
TrimSpaces | Set to True if you want the provider to trim preceding and trailing spaces in a cell containing a quoted value. |
PushEmptyValuesAsNull | Indicates whether to read the empty values as empty or as null. |
NullValues | A comma separated list of values which will be replaced with nulls if they are found in the CSV file. |
PathSeparator | Determines the character which will be used to replace the file separator. |
IgnoreIncompleteRows | Indicates whether to ignore incomplete rows. |
MaxCellLength | The maximum length a cell can hold. Content longer than this is truncated. If the value is -1, the cell content length is not limited. |
Whether to get column names from the first line of the specified files.
When this property is set to True, the Sync App will derive column names for each table from the first row of each file. When this property is set to False, column names are simply the column numbers, unless column names have been defined in Schema.ini.
As with Microsoft Jet OLE DB 4.0, this property can also be specified in ExtendedProperties. The IncludeColumnHeaders value specified in ExtendedProperties overrides this property.
The following connection string parses .csv and .log files as CSV without headers:
URI=C:\mycsvlogs;IncludeColumnHeaders=False;IncludeFiles='CSV,LOG'
The format to be used to parse all text files.
When this connection property is set, the Sync App will parse all text files in the URI according to the specified file format. The text file format can also be specified in ExtendedProperties, as with the Microsoft Jet OLE DB 4.0 ExtendedProperties for text files. The format specified in ExtendedProperties overrides the format set as a stand-alone connection property. The Format property in Schema.ini overrides the file format set in the connection string.
The FMT can be set to one of the following values:
The following connection string parses all text files in the folder specified in the URI as tab-delimited values with headers:
URI=C:\mytsv;FMT=TabDelimited
If the property is set to anything other than the values specified above, the literal character of the specified input will be read as the delimiter, for example:
URI=C:\mypipedelimitedfile;FMT=||
The Microsoft Jet OLE DB 4.0-compatible extended properties for text files.
The Microsoft Jet OLE DB 4.0-compatible extended properties for text files. You can specify the format of text files in this property. When processing local files, the format specified for an individual file in Schema.ini overrides the format specified in ExtendedProperties.
The following example can be used to parse all text files in the URI folder as tab-delimited values with headers: ExtendedProperties='text;FMT=TabDelimited'.
Parse .csv and .log files as CSV without headers:
ExtendedProperties='text;IncludeColumnHeaders=False';IncludeFiles='CSV,LOG'
To make the connection string -- which is itself delimited -- easier to read, you can instead specify the stand-alone IncludeColumnHeaders and FMT properties. These stand-alone properties are overridden by ExtendedProperties.
The character which will be used to detect the end of a CSV row.
It is not necessary to specify this property if your CSV file has \r, \n, \r\n, \n\r as row delimiters. It is possible to specify a HEX string as row delimiter (Ex: RowDelimiter=0x01).
Skips the number of rows specified, starting from the top.
Skips the number of rows specified, starting from the top.
Indicates whether to skip the empty rows.
Indicates whether to skip the empty rows.
Whether to include columns that have an empty value in the column header.
When this property is set to False, the Sync App will not include columns that do not have a value in the column header. When set to True, the Sync App will derive column names from the column numbers, but only for columns without a header value. If a column has a header value, it remains unchanged.
If set to true, skips rows at the top of the file beginning with #.
This will skip rows starting with # until a row is found that does not start with #. Subsequent rows will be read regardless of whether they begin with #.
Specifies the session character set for encoding and decoding character data transferred to and from the CSV file. The default value is UTF-8.
Specifies the session character set for encoding and decoding character data transferred to and from the CSV file. The default value is UTF-8.
Determines the character which will be used to escape quotes.
Determines the character which will be used to escape quotes.
Determines the character which will be used to quote values in the CSV file.
Determines the character which will be used to quote values in the CSV file.
Note: This property works only for CSV files. Set this property to "NONE" if you want to insert fields in a CSV file without quoting them.
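For example, the following connection string writes values without quoting them, as described in the note above (the folder path is a placeholder):
URI=C:\folder1; QuoteCharacter=NONE;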
Set to True if you want the provider to trim preceding and trailing spaces in a cell containing a quoted value.
Set to True if you want the provider to trim preceding and trailing spaces in a cell containing a quoted value. If set to False and the first character in a cell is not the quote character, the cell will be treated as a literal unquoted value.
Indicates whether to read the empty values as empty or as null.
Indicates whether to read the empty values as empty or as null.
A comma separated list of values which will be replaced with nulls if they are found in the CSV file.
A comma separated list of values which will be replaced with nulls if they are found in the CSV file. Example: "NaN,\N,N/A". If any of the specified values is found in a CSV row, it will be pushed as null.
Determines the character which will be used to replace the file separator.
Determines the character which will be used to replace the file separator. If there is a CSV file located in "Test/CSVFiles/Test.csv" and if this property is set to "_", then the table name for this file would be "Test_CSVFiles_Test.csv".
Indicates whether to ignore incomplete rows.
Indicates whether to ignore the rows that do not match the headers.
The maximum length a cell can hold. Content longer than this is truncated. If the value is -1, the cell content length is not limited.
The maximum length a cell can hold. Content longer than this is truncated. If the value is -1, the cell content length is not limited.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
Property | Description |
BatchNamingConvention | Determines the naming convention of batch files. |
ClientCulture | This property can be used to specify the format of data (e.g., currency values) that is accepted by the client application. This property can be used when the client application does not support the machine's culture settings. For example, Microsoft Access requires 'en-US'. |
CreateBatchFolder | Whether to create a folder or not when InsertMode is set to FilePerBatch. |
Culture | This setting can be used to specify culture settings that determine how the provider interprets certain data types that are passed into the provider. For example, setting Culture='de-DE' will output German formats even on an American machine. |
CustomHeaders | Other headers as determined by the user (optional). |
CustomUrlParams | The custom query string to be included in the request. |
DirectoryRetrievalDepth | Limit the subfolders recursively scanned when IncludeSubdirectories is enabled. |
ExcludeFileExtensions | Set to true if file extensions should be excluded from table names. |
ExcludeFiles | Comma-separated list of file extensions to exclude from the set of the files modeled as tables. |
GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
IncludeDropboxTeamResources | Indicates if you want to include Dropbox team files and folders. |
IncludeFiles | Comma-separated list of file extensions to include into the set of the files modeled as tables. |
IncludeSubdirectories | Whether to read files from nested folders. In the case of a name collision, table names are prefixed by the underscore-separated folder names. |
InsertMode | Specifies the mode for inserting data into CSV files. |
MaxRows | Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time. |
Other | These hidden properties are used only in specific use cases. |
Pagesize | The maximum number of results to return per page from CSV. |
PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
TruncateOnInserts | Set to True if you want the provider to truncate on every (batch) insert. |
UserDefinedViews | A filepath pointing to the JSON configuration file containing your custom views. |
UseRowNumbers | Set this to true if you are deleting or updating in CSV and you do not want to specify a custom schema. This will create a new column with the name RowNumber which will be used as the key for that table. |
Determines the naming convention of batch files.
Determines the naming convention of batch files.
This property can be used to specify the format of data (e.g., currency values) that is accepted by the client application. This property can be used when the client application does not support the machine's culture settings. For example, Microsoft Access requires 'en-US'.
This option affects the format of Sync App output. To specify the format that defines how input should be interpreted, use the Culture option. By default the Sync App uses the current locale settings of the machine to interpret input and format output.
Whether to create a folder or not when InsertMode is set to FilePerBatch.
Whether to create a folder or not when InsertMode is set to FilePerBatch.
This setting can be used to specify culture settings that determine how the provider interprets certain data types that are passed into the provider. For example, setting Culture='de-DE' will output German formats even on an American machine.
This property affects the Sync App input. To interpret values in a different cultural format, use the Client Culture property. By default the Sync App uses the current locale settings of the machine to interpret input and format output.
Other headers as determined by the user (optional).
This property can be set to a string of headers to be appended to the HTTP request headers created from other properties, like ContentType, From, and so on.
The headers must be of the format "header: value" as described in the HTTP specifications. Header lines should be separated by the carriage return and line feed (CRLF) characters.
Use this property with caution. If this property contains invalid headers, HTTP requests may fail.
This property is useful for fine-tuning the functionality of the Sync App to integrate with specialized or nonstandard APIs.
The custom query string to be included in the request.
The CustomUrlParams property allows you to specify custom query string parameters that are included with the HTTP request. The parameters must be encoded as a query string in the form field1=value1&field2=value2&field3=value3. The values in the query string must be URL encoded.
Limit the subfolders recursively scanned when IncludeSubdirectories is enabled.
When IncludeSubdirectories is enabled, DirectoryRetrievalDepth specifies how many subfolders will be recursively scanned before stopping. -1 specifies that all subfolders are scanned.
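For example, the following connection string scans only two levels of subfolders beneath the URI folder (the folder path is a placeholder):
URI=C:\folder1; IncludeSubdirectories=True; DirectoryRetrievalDepth=2;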
Set to true if file extensions should be excluded from table names.
Set to true if file extensions should be excluded from table names. For example, if set to True, this will make table `users.csv` appear as `users`.
Comma-separated list of file extensions to exclude from the set of the files modeled as tables.
It is also possible to specify datetime filters. We currently support CreatedDate and ModifiedDate. All extension filters are evaluated in disjunction (using the OR operator), and then the resulting filter is evaluated in conjunction (using the AND operator) with the datetime filters.
Examples:
ExcludeFiles="TXT,CreatedDate<='2020-11-26T07:39:34-05:00'"
ExcludeFiles="TXT,ModifiedDate<=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 50, 000)"
ExcludeFiles="ModifiedDate>=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 49, 000),ModifiedDate<=CURRENT_TIMESTAMP()"
Indicates the user preference as to when schemas should be generated and saved.
This property outputs schemas to .rsd files in the path specified by Location.
Available settings are the following:
Columns defined in .rsd files take precedence over the definitions in Schema.ini. Note that if you want to regenerate a file, you will first need to delete it.
When you set GenerateSchemaFiles to OnUse, the Sync App generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.
When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.
Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.
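For example, the following connection string generates schema files for every table when the connection is opened and saves them to a custom schema folder (both paths are placeholders):
URI=C:\folder1; GenerateSchemaFiles=OnStart; Location=C:\csvschemas;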
Indicates if you want to include Dropbox team files and folders.
In order to access Dropbox team folders and files, please set this connection property to True.
Comma-separated list of file extensions to include into the set of the files modeled as tables.
Comma-separated list of file extensions to include into the set of the files modeled as tables. For example, IncludeFiles=TXT,TAB. The default is CSV, TAB, and TXT.
A 'NOEXT' value can be specified to include files without an extension.
The following archive types are also supported (currently only when AggregateFiles is true): ZIP, TAR, and GZ.
When archive files are found, they will be downloaded to the local machine so the Sync App can extract and parse the contained files. Note: Files contained within an archive must match an extension listed in IncludeFiles to be included in the set of files modeled as tables.
File masks can be specified using an asterisk (*) to provide enhanced filtering capabilities; e.g. IncludeFiles=2020*.csv,TXT.
Files specified in Schema.ini are honored in addition to the files included by this property.
It is also possible to specify datetime filters. We currently support CreatedDate and ModifiedDate. All extension filters are evaluated in disjunction (using the OR operator), and then the resulting filter is evaluated in conjunction (using the AND operator) with the datetime filters.
Examples:
IncludeFiles="TXT,CreatedDate<='2020-11-26T07:39:34-05:00'"
IncludeFiles="TXT,ModifiedDate<=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 50, 000)"
IncludeFiles="ModifiedDate>=DATETIMEFROMPARTS(2020, 11, 26, 7, 40, 49, 000),ModifiedDate<=CURRENT_TIMESTAMP()"
Whether to read files from nested folders. In the case of a name collision, table names are prefixed by the underscore-separated folder names.
Whether to read files from nested folders. When accessing local CSV, the Sync App honors Schema.ini defined in subfolders. Table names are prefixed by each nested folder name separated by underscores only in the case of a table name conflict. For example,
File Path | Table Name |
Root\subfolder1\tableA | subfolder1_tableA |
Root\subfolder1\subfolder2\tableA | subfolder1_subfolder2_tableA |
Specifies the mode for inserting data into CSV files.
There are two modes available for inserting data into a CSV file:
Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time.
Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time.
These hidden properties are used only in specific use cases.
The properties listed below are available for specific use cases. Normal driver use cases and functionality should not require these properties.
Specify multiple properties in a semicolon-separated list.
DefaultColumnSize | Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000. |
ConvertDateTimeToGMT | Determines whether to convert date-time values to GMT, instead of the local time of the machine. |
RecordToFile=filename | Records the underlying socket data transfer to the specified file. |
The maximum number of results to return per page from CSV.
The Pagesize property affects the maximum number of results to return per page from CSV. Setting a higher value may result in better performance at the cost of additional memory allocated per page consumed.
This property indicates whether or not to include pseudo columns as columns to the table.
This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; for example, "*=*".
The value in seconds until the timeout error is thrown, canceling the operation.
If Timeout = 0, operations do not time out. The operations run until they complete successfully or until they encounter an error condition.
If Timeout expires and the operation is not yet complete, the Sync App throws an exception.
Set to True if you want the provider to truncate on every (batch) insert.
Set to True if you want the provider to truncate on every (batch) insert.
A filepath pointing to the JSON configuration file containing your custom views.
User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The Sync App automatically detects the views specified in this file.
You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the Sync App.
This User Defined View configuration file is formatted as follows:
For example:
{ "MyView": { "query": "SELECT * FROM NorthwindOData WHERE MyColumn = 'value'" }, "MyView2": { "query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)" } }Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:
"UserDefinedViews", "C:\\Users\\yourusername\\Desktop\\tmp\\UserDefinedViews.json"
Set this to true if you are deleting or updating in CSV and you do not want to specify a custom schema. This will create a new column with the name RowNumber which will be used as the key for that table.
Set this to true if you are deleting or updating in CSV and you do not want to specify a custom schema. This will create a new column with the name RowNumber which will be used as the key for that table.
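For example, with UseRowNumbers enabled, a row can be deleted by its generated key without defining a custom schema (the folder path and file name below are placeholders):
URI=C:\folder1; UseRowNumbers=True;
DELETE FROM [File1.csv] WHERE RowNumber = 2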