The CData Sync App provides a straightforward way to continuously pipeline your Google BigQuery data to any database, data lake, or data warehouse, making it easily available for Analytics, Reporting, AI, and Machine Learning.
The Google BigQuery connector can be used from the CData Sync application to pull data from Google BigQuery and move it to any of the supported destinations.
The Sync App enables read/write SQL-92 access to the BigQuery tables in your Google account or Google Apps domain. The complete aggregate and join syntax in BigQuery is supported, and statements in the BigQuery syntax can be passed through. The Sync App uses version 2.0 of the BigQuery Web services API. You must enable this API by creating a project in the Google Developers Console; see Connecting to Google for a guide to creating a project and authenticating to this API.
For required properties, see the Settings tab.
For connection properties that are not typically required, see the Advanced tab.
The Sync App supports using user accounts and GCP instance accounts for authentication.
The following sections discuss the available authentication schemes for Google BigQuery:
AuthScheme must be set to OAuth in all user account flows.
Get an OAuth Access Token
Set the following connection properties to obtain the OAuthAccessToken:
Then call stored procedures to complete the OAuth exchange:
Once you have obtained the access and refresh tokens, you can connect to data and refresh the OAuth access token either automatically or manually.
Automatic Refresh of the OAuth Access Token
To have the driver automatically refresh the OAuth access token, set the following on the first data connection:
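As a sketch, a connection string for this flow might look like the following. The InitiateOAuth, OAuthClientId, and OAuthClientSecret property names are assumptions based on standard CData connector conventions; all values are placeholders:

```
AuthScheme=OAuth;InitiateOAuth=REFRESH;OAuthClientId=MyClientId;OAuthClientSecret=MyClientSecret;OAuthRefreshToken=MyRefreshToken;ProjectId=test-project;DatasetId=BusinessData;
```

With InitiateOAuth set to REFRESH, the driver is expected to exchange the stored refresh token for a new access token automatically whenever the old one expires.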
Manual Refresh of the OAuth Access Token
The only value needed to manually refresh the OAuth access token when connecting to data is the OAuth refresh token.
To manually refresh the OAuthAccessToken after the ExpiresIn period returned by GetOAuthAccessToken has elapsed, first set the following connection properties:
Then call RefreshOAuthAccessToken with OAuthRefreshToken set to the OAuth refresh token returned by GetOAuthAccessToken. After the new tokens have been retrieved, open a new connection with the OAuthAccessToken property set to the value returned by RefreshOAuthAccessToken.
Finally, store the OAuth refresh token so that you can use it to manually refresh the OAuth access token after it has expired.
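For example, a manual refresh call might look like the following. This is a sketch using the EXEC syntax shown elsewhere in this documentation; the stored procedure and parameter names come from the text above, and the token value is a placeholder:

```sql
EXEC RefreshOAuthAccessToken OAuthRefreshToken = 'MyStoredRefreshToken'
```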
Option 1: Obtain and Exchange a Verifier Code
To obtain a verifier code, you must authenticate at the OAuth authorization URL.
Follow the steps below to authenticate from the machine with an Internet browser and obtain the OAuthVerifier connection property.
On the headless machine, set the following connection properties to obtain the OAuth authentication values:
After the OAuth settings file is generated, you need to re-set the following properties to connect:
Option 2: Transfer OAuth Settings
Prior to connecting on a headless machine, you need to install the driver and create a connection on a device that supports an Internet browser. Set the connection properties as described in "Desktop Applications" above.
After completing the instructions in "Desktop Applications", the resulting authentication values are encrypted and written to the location specified by OAuthSettingsLocation. The default filename is OAuthSettings.txt.
Once you have successfully tested the connection, copy the OAuth settings file to your headless machine.
On the headless machine, set the following connection properties to connect to data:
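As a sketch, the headless connection might use a connection string like the following. The InitiateOAuth property name is an assumption based on standard CData connector conventions; the file path and project values are placeholders:

```
AuthScheme=OAuth;InitiateOAuth=REFRESH;OAuthSettingsLocation=C:\OAuth\OAuthSettings.txt;ProjectId=test-project;DatasetId=BusinessData;
```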
When running on a GCP virtual machine, the Sync App can authenticate using a service account tied to the virtual machine. To use this mode, set AuthScheme to GCPInstanceAccount.
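A minimal connection string for this mode might look like the following (the ProjectId and DatasetId values are placeholders):

```
AuthScheme=GCPInstanceAccount;ProjectId=test-project;DatasetId=BusinessData;
```

No OAuth properties are needed in this mode, since the credentials come from the service account attached to the virtual machine.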
When Workload Identity Federation is set up, the driver authenticates to an identity provider and provides the Google Security Token Service with an authentication token. The Google STS validates this token and produces an OAuth token that can access Google services.
The following identity providers are currently supported:
Optionally, you can configure service account impersonation by setting RequestingServiceAccount to the service account whose credentials should be impersonated.
The following sections detail Sync App settings that may be needed in advanced integrations.
Large result sets must be saved in a temporary or permanent table. You can use the following properties to control table persistence:
Enable the AllowLargeResultSets property to make the Sync App automatically create destination tables when needed. If a query result is too large to fit in the BigQuery query cache, the Sync App creates a hidden dataset within the data project and re-executes the query with a destination table in that dataset. The dataset is configured so that all tables created within it expire in 24 hours.
In some situations you may want to change the name of the dataset created by the Sync App; for example, when multiple users share the Sync App but do not have permission to write to datasets created by other users. See TempTableDataset for details on how to do this.
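For example, the two properties might be combined in a connection string as follows (a sketch; the dataset name is a placeholder):

```
AllowLargeResultSets=true;TempTableDataset=shared_temp_dataset;
```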
Set MaximumBillingTier to override your project limits on the maximum cost for any given query in a connection.
Google BigQuery provides several interfaces for operating on batches of rows. The Sync App supports these methods through the InsertMode option, each of which is specialized for different use cases:
In addition to bulk INSERTs, the Sync App also supports bulk UPDATE and DELETE operations. To do this, the Sync App uploads the filter values and the rows to set into a new table in BigQuery, performs a MERGE between the two tables, and then drops the temporary table. InsertMode determines how the rows are inserted into the temporary table, but the Streaming and DML modes are not supported.
In most cases the Sync App can determine what columns need to be part of the SET vs. WHERE clauses of a bulk update. If you receive an error like "Primary keys must be defined for bulk UPDATE support," you can use PrimaryKeyIdentifiers to tell the Sync App what columns to treat as keys. In an update the values of key columns are used only to find matching rows and cannot be updated.
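As an illustration, marking the Id column of the Accounts table as a key might look like the following. The exact PrimaryKeyIdentifiers syntax here is an assumption; consult the property's documentation for the supported format:

```
PrimaryKeyIdentifiers=Accounts=Id;
```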
This section details a selection of advanced features of the Google BigQuery Sync App.
The Sync App supports the use of user defined views, virtual tables whose contents are decided by a pre-configured user defined query. These views are useful when you cannot directly control the queries being issued to the driver. For an overview of creating and configuring custom views, see User Defined Views.
Use SSL Configuration to adjust how the Sync App handles TLS/SSL certificate negotiations. You can choose from various certificate formats. For further information, see the SSLServerCert property under "Connection String Options".
Configure the Sync App to connect through firewalls and proxies, including Windows system proxies and HTTP proxies. You can also set up tunnel connections.
For further information, see Query Processing.
By default, the Sync App attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
To authenticate to an HTTP proxy, set the following:
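A sketch of an HTTP proxy configuration follows. The Proxy* property names are assumptions based on common CData connector conventions; the host, port, and credentials are placeholders:

```
ProxyServer=192.168.1.100;ProxyPort=8080;ProxyAuthScheme=BASIC;ProxyUser=proxyuser;ProxyPassword=proxypassword;
```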
Set the following properties:
Once connected, the Sync App mimics the hierarchy in Google BigQuery by modeling each project in Google BigQuery as its own catalog. Within a catalog, the datasets in the corresponding project are modeled as individual schemas. The tables and views within a dataset are modeled as tables and views within the respective schema. Additionally, the Sync App includes a static 'CData' catalog, containing a static 'Google BigQuery' schema, which contains information found outside the Google BigQuery hierarchy.
This 'CData' catalog contains data on client-side views; details on how to use it are discussed in the next section.
The 'CData' catalog contains one static 'Google BigQuery' schema. This schema contains client-side views such as 'PartitionsList' and 'PartitionsValues'. These client-side views can be accessed by setting the catalog to 'CData' and the schema to 'Google BigQuery'. For instance:
SELECT * FROM [CData].[Google BigQuery].PartitionsList
SELECT * FROM [test-project].[BusinessData].Accounts
By setting the ProjectId and DatasetId properties, a connection can be configured to retrieve data from a specific project and dataset so these do not need to be included in the query. For instance, if ProjectId is set to 'test-project' and DatasetId is set to 'BusinessData', then the query only needs to contain the table name, as shown below.
SELECT * FROM Accounts
Views are client-side tables that cannot be modified. The Sync App uses these to report metadata about the Google BigQuery projects and datasets it is connected to. The following views are included with the Sync App:
| Table | Description |
| Datasets | Lists all the accessible datasets for a given project. |
| PartitionsList | Lists the partitioning definitions for tables. |
| PartitionsValues | Lists the partitioning ranges for tables. |
| Projects | Lists all the projects for the authorized user. |
The Sync App also supports server-side views defined within Google BigQuery. These views can be used in SELECT statements the same way as tables. However, view schemas can become out of date, and the Sync App must refresh them; see RefreshViewSchemas for details.
Stored Procedures are actions that are invoked via SQL queries. The Sync App uses these to manage Google BigQuery tables and jobs and to perform OAuth operations.
In addition to the client-side stored procedures offered by the Sync App, support is also provided for server-side stored procedures defined in Google BigQuery. The Sync App supports both CALL and EXEC using the procedure's parameter names.
Note: The Sync App only supports IN parameters and resultset return values.
CALL `psychic-valve-137816`.Northwind.MostPopularProduct()
CALL `psychic-valve-137816`.Northwind.GetStockedValue(24, 0.75)
EXEC `psychic-valve-137816`.Northwind.MostPopularProduct
EXEC `psychic-valve-137816`.Northwind.GetStockedValue productId = 24, discountRate = 0.75
Views are similar to tables in the way that data is represented; however, views are read-only.
Queries can be executed against a view as if it were a normal table.
| Name | Description |
| Datasets | Lists all the accessible datasets for a given project. |
| PartitionsList | Lists the partitioning definitions for tables. |
| PartitionsValues | Lists the partitioning ranges for tables. |
| Projects | Lists all the projects for the authorized user. |
Lists all the accessible datasets for a given project.
| Name | Type | Description |
| Id [KEY] | String | The fully qualified and unique identifier for the dataset, used internally by BigQuery to reference the dataset across projects and regions. |
| Kind | String | The type of resource this record represents. For datasets, this typically returns 'bigquery#dataset'. |
| FriendlyName | String | A human-readable, descriptive name for the dataset. This name does not need to be unique and is often used in user interfaces. |
| DatasetReference_ProjectId | String | The ID of the project that contains the dataset. This serves as the container for the dataset and its resources. |
| DatasetReference_DatasetId | String | The ID of the dataset within the specified project. This is a unique name scoped to the project, excluding the project name itself. |
| Location | String | The geographic location where the dataset resides. |
Lists the partitioning definitions for tables.
| Name | Type | Description |
| Id [KEY] | String | A unique identifier for the table partition, which typically includes the partition key and the partition value. This helps distinguish each partition within the table. |
| ProjectId | String | The ID of the Google Cloud project that owns the table containing the partitioned data. |
| DatasetId | String | The ID of the BigQuery dataset where the partitioned table is located. |
| TableName | String | The name of the BigQuery table that is partitioned. This table contains multiple partitions based on the specified column. |
| ColumnName | String | The name of the column that is used to define partitions in the table. This is typically a date or integer field. |
| ColumnType | String | The data type of the column used for partitioning. Common values include DATE, INTEGER, or TIMESTAMP depending on the partitioning strategy. |
| Kind | String | The method of partitioning applied to the table. Options include DATE (partitioned by date field), RANGE (partitioned by numeric ranges), or INGESTION (partitioned by data load time). |
| RequireFilter | Boolean | If the value is 'true', queries must include a filter on the partition column to avoid full table scans. If the value is 'false', filters are not mandatory when querying the table. |
Lists the partitioning ranges for tables.
| Name | Type | Description |
| Id | String | The unique identifier of the partition, which distinguishes it from other partitions in the same table. |
| RangeLow | String | The starting boundary of the partition’s value range. This is expressed as an integer for RANGE partitioning or a date for TIME or INGESTION partitioning. |
| RangeHigh | String | The ending boundary of the partition’s value range. This is expressed as an integer for RANGE partitioning or a date for TIME or INGESTION partitioning. |
| RangeInterval | String | The size of each partitioned range. Applies only to RANGE partitioning and defines how values are grouped into partitions. |
| DateResolution | String | The level of granularity applied to TIME or INGESTION partitioning. Valid values include DAY, HOUR, MONTH, and YEAR. |
| ProjectId | String | The ID of the Google Cloud project that owns the table associated with the partition. |
| DatasetId | String | The ID of the dataset that contains the partitioned table. |
| TableName | String | The name of the table that is partitioned and to which this partition belongs. |
Lists all the projects for the authorized user.
| Name | Type | Description |
| Id [KEY] | String | The globally unique identifier of the Google Cloud project, typically used in Application Programming Interface (API) requests and resource naming. |
| Kind | String | The type of resource represented by this entry. For example, 'bigquery#project'. |
| FriendlyName | String | The human-readable display name assigned to the project, often used for easier identification in the User Interface (UI). |
| NumericId | String | The numeric identifier automatically assigned to the project by Google Cloud. This ID is unique across all projects. |
| ProjectReference_ProjectId | String | A reference value that uniquely identifies the project, commonly used in API calls and schema definitions. |
Stored procedures are function-like interfaces that extend the functionality of the Sync App beyond simple SELECT/INSERT/UPDATE/DELETE operations with Google BigQuery.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Google BigQuery, along with an indication of whether the procedure succeeded or failed.
| Name | Description |
| CancelJob | Cancels a running BigQuery job. |
| DeleteObject | Deletes an object from a bucket. |
| DeleteTable | Deletes the specified table from Google BigQuery. |
| GetJob | Retrieves the configuration information and execution state for an existing job. |
| InsertJob | Inserts a Google BigQuery job, which can then be selected later to retrieve the query results. |
| InsertLoadJob | Inserts a Google BigQuery load job, which adds data from Google Cloud Storage into an existing table. |
Cancels a running BigQuery job.
| Name | Type | Description |
| JobId | String | The unique identifier of the BigQuery job you want to cancel. |
| Region | String | The geographic location where the job is running. Required for jobs outside the default US or EU multi-regions. |
| Name | Type | Description |
| JobId | String | The unique identifier of the job that was cancelled. |
| Region | String | The geographic location where the job was executing when it was cancelled. |
| Configuration_query_query | String | The SQL query text associated with the job that was cancelled. |
| Configuration_query_destinationTable_tableId | String | The table ID of the destination table that the cancelled job was configured to write results to. |
| Configuration_query_destinationTable_projectId | String | The project ID of the destination table that was specified in the cancelled job's configuration. |
| Configuration_query_destinationTable_datasetId | String | The dataset ID of the destination table that was specified in the cancelled job's configuration. |
| Status_State | String | The final state of the job, such as 'DONE' or 'CANCELLED'. |
| Status_errorResult_reason | String | A brief code indicating the reason the job failed or was cancelled, such as 'jobCancelled' or 'accessDenied'. |
| Status_errorResult_message | String | A detailed, human-readable message describing the error that occurred during job execution or cancellation. |
Deletes an object from a bucket.
| Name | Type | Description |
| RemotePath | String | Path from which the object will be deleted, such as 'gs://cdata_test_bucket/temp.csv'. |
| Name | Type | Description |
| Success | String | Indicator if the stored procedure was successful or not. |
Deletes the specified table from Google BigQuery.
| Name | Type | Description |
| TableId | String | Specifies the ID of the table to delete. The Project ID and Dataset ID can be sourced from the connection properties or overridden using the format projectId:datasetId.TableId. |
| Name | Type | Description |
| Success | String | Returns 'true' if the table was successfully deleted. If the deletion fails, an exception is thrown instead of returning 'false'. |
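For example, a call to this procedure might look like the following. The table identifier is a placeholder, and the EXEC syntax mirrors the examples elsewhere in this documentation:

```sql
EXEC DeleteTable TableId = 'test-project:BusinessData.Accounts'
```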
Retrieves the configuration information and execution state for an existing job.
| Name | Type | Description |
| JobId | String | Specifies the unique identifier of the BigQuery job to retrieve. This is typically assigned when the job is created. |
| Region | String | Identifies the geographic location where the job is executing. This value is required for non-US and non-EU regions. |
| Name | Type | Description |
| JobId | String | Returns the unique identifier of the retrieved job. Matches the job ID specified in the input. |
| Region | String | Returns the region where the job is or was executing. Useful for region-specific configurations and troubleshooting. |
| Configuration_query_query | String | Returns the full SQL query string that was executed by the job. |
| Configuration_query_destinationTable_tableId | String | Returns the table ID where the query results were stored, if applicable. |
| Configuration_query_destinationTable_projectId | String | Returns the project ID that contains the destination table for the job results. |
| Configuration_query_destinationTable_datasetId | String | Returns the dataset ID that contains the destination table for the job results. |
| Status_State | String | Indicates the current lifecycle state of the job. Possible values include 'PENDING', 'RUNNING', and 'DONE'. |
| Status_errorResult_reason | String | Provides a concise error code representing the reason for job failure, if an error occurred. |
| Status_errorResult_message | String | Provides a detailed message describing the error encountered during job execution, if applicable. |
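A call to this procedure might look like the following (the job ID and region values are placeholders):

```sql
EXEC GetJob JobId = 'job_abc123', Region = 'europe-west2'
```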
Obtains the OAuth access token to be used for authentication with various Google services.
NOTE: If, after running this stored procedure, the OAuthRefreshToken was not returned as part of the result set, change the Prompt value to CONSENT and run the procedure again. This forces the app to reauthenticate and send new token information.
| Name | Type | Description |
| AuthMode | String | The type of authentication mode to use.
The allowed values are APP, WEB. The default value is WEB. |
| Verifier | String | The verifier code returned by Google after permissions have been granted for the app to connect. Applies to the WEB AuthMode only. |
| Scope | String | The scope of access to Google APIs. By default, access to all APIs used by this data provider will be specified. |
| CallbackURL | String | Determines where the response is sent. The value of this parameter must exactly match one of the values registered in the APIs Console (including the http or https schemes, case, and trailing '/'). |
| Prompt | String | This field indicates the prompt to present the user. It accepts one of the following values: NONE, CONSENT, SELECT_ACCOUNT. The default is SELECT_ACCOUNT, so a given user is prompted to select the account to connect to. If it is set to CONSENT, the user sees a consent page every time, even if they have previously given consent to the application for a given set of scopes. Lastly, if it is set to NONE, no authentication or consent screens are displayed to the user.
The default value is SELECT_ACCOUNT. |
| AccessType | String | Indicates if your application needs to access a Google API when the user is not present at the browser. This parameter defaults to offline. If your application needs to refresh access tokens when the user is not present at the browser, then use offline. This will result in your application obtaining a refresh token the first time your application exchanges an authorization code for a user.
The allowed values are ONLINE, OFFLINE. The default value is OFFLINE. |
| State | String | Indicates any state which may be useful to your application upon receipt of the response. The Google Authorization Server roundtrips this parameter, so your application receives the same value it sent. Possible uses include redirecting the user to the correct resource in your site, nonces, and cross-site-request-forgery mitigations. |
| Name | Type | Description |
| OAuthAccessToken | String | The authentication token returned from Google. This can be used in subsequent calls to other operations for this particular service. |
| OAuthRefreshToken | String | A token that may be used to obtain a new access token. |
| ExpiresIn | String | The remaining lifetime on the access token. |
Obtains the OAuth authorization URL for authentication with various Google services.
| Name | Type | Description |
| Scope | String | The scope of access to Google APIs. By default, access to all APIs used by this data provider will be specified. |
| CallbackURL | String | Determines where the response is sent. The value of this parameter must exactly match one of the values registered in the APIs Console (including the http or https schemes, case, and trailing '/'). |
| Prompt | String | This field indicates the prompt to present the user. It accepts one of the following values: NONE, CONSENT, SELECT_ACCOUNT. The default is SELECT_ACCOUNT, so a given user is prompted to select the account to connect to. If it is set to CONSENT, the user sees a consent page every time, even if they have previously given consent to the application for a given set of scopes. Lastly, if it is set to NONE, no authentication or consent screens are displayed to the user.
The default value is SELECT_ACCOUNT. |
| AccessType | String | Indicates if your application needs to access a Google API when the user is not present at the browser. This parameter defaults to offline. If your application needs to refresh access tokens when the user is not present at the browser, then use offline. This will result in your application obtaining a refresh token the first time your application exchanges an authorization code for a user.
The allowed values are ONLINE, OFFLINE. The default value is OFFLINE. |
| State | String | Indicates any state which may be useful to your application upon receipt of the response. The Google Authorization Server roundtrips this parameter, so your application receives the same value it sent. Possible uses include redirecting the user to the correct resource in your site, nonces, and cross-site-request-forgery mitigations. |
| Name | Type | Description |
| URL | String | The URL to complete user authentication. |
Inserts a Google BigQuery job, which can then be selected later to retrieve the query results.
| Name | Type | Description |
| Query | String | The SQL query to execute in Google BigQuery. This can be a data retrieval query or a Data Manipulation Language (DML) operation. |
| IsDML | String | If the value is 'true', the query is treated as a DML statement, such as INSERT, UPDATE, or DELETE. If the value is 'false', the query is treated as a read-only operation.
The default value is false. |
| DestinationTable | String | The fully qualified destination table for storing the query results, using the format projectId:datasetId.tableId. This field is required when using write dispositions other than 'WRITE_EMPTY'. |
| WriteDisposition | String | Specifies how the results should be written to the destination table. Possible options include truncating the existing table, appending to it, or writing only if the table is empty.
The allowed values are WRITE_TRUNCATE, WRITE_APPEND, WRITE_EMPTY. The default value is WRITE_TRUNCATE. |
| DryRun | String | If the value is 'true', BigQuery performs a dry run to validate the query without executing it. If the value is 'false', the query runs normally. |
| MaximumBytesBilled | String | Sets an upper limit for the number of bytes BigQuery is allowed to process. If the query exceeds this limit, the job is cancelled before execution. |
| Region | String | The geographic region where the job should be executed. If not provided, defaults to the region specified in the connection or job configuration. |
| Name | Type | Description |
| JobId | String | The unique identifier assigned to the newly submitted BigQuery job. |
| Region | String | The region in which the job was submitted and is being executed. |
| Configuration_query_query | String | The SQL query text used in the job execution. |
| Configuration_query_destinationTable_tableId | String | The ID of the destination table where the query results were written. |
| Configuration_query_destinationTable_projectId | String | The ID of the Google Cloud project that contains the destination table. |
| Configuration_query_destinationTable_datasetId | String | The ID of the dataset that contains the destination table. |
| Status_State | String | The current status of the job, such as PENDING, RUNNING, or DONE. |
| Status_errorResult_reason | String | A brief error code explaining why the job failed, if applicable. |
| Status_errorResult_message | String | A detailed, human-readable error message returned by BigQuery, if the job encountered an error. |
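As a sketch, submitting a query job with a destination table might look like the following (the project, dataset, table, and column names are placeholders):

```sql
EXEC InsertJob Query = 'SELECT Region, SUM(Sales) AS Total FROM BusinessData.Accounts GROUP BY Region', DestinationTable = 'test-project:BusinessData.RegionTotals', WriteDisposition = 'WRITE_TRUNCATE'
```

The JobId returned in the result set can then be passed to GetJob to poll the job until its Status_State is DONE.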
Inserts a Google BigQuery load job, which adds data from Google Cloud Storage into an existing table.
| Name | Type | Description |
| SourceURIs | String | A space-separated list of Google Cloud Storage (GCS) Uniform Resource Identifiers (URIs) that point to the source files for the load job. Each URI must follow the format gs://bucket/path/to/file. |
| SourceFormat | String | Specifies the format of the input files, such as CSV, JSON, AVRO, or PARQUET.
The allowed values are AVRO, NEWLINE_DELIMITED_JSON, DATASTORE_BACKUP, PARQUET, ORC, CSV. |
| DestinationTable | String | The fully qualified table where the data should be loaded, formatted as projectId.datasetId.tableId. |
| DestinationTableProperties | String | A JavaScript Object Notation (JSON) object specifying metadata properties for the destination table, such as its friendly name, description, and any associated labels. |
| DestinationTableSchema | String | A JSON array defining the schema fields for the destination table. Each field includes a name, type, and mode. |
| DestinationEncryptionConfiguration | String | A JSON object containing Customer-managed Encryption Key (CMEK) settings for encrypting the destination table. |
| SchemaUpdateOptions | String | A JSON array of schema update options to apply when the destination table exists. Options may include allowing field addition or relaxing field modes. |
| TimePartitioning | String | A JSON object specifying how the destination table should be partitioned by time, including partition type and optional partitioning field. |
| RangePartitioning | String | A JSON object defining range-based partitioning for the destination table. Includes the partitioning field, start, end, and interval values. |
| Clustering | String | A JSON object listing the fields to use for clustering the destination table to improve query performance. |
| Autodetect | String | If the value is 'true', BigQuery automatically detects schema and format options for CSV and JSON files. |
| CreateDisposition | String | Specifies whether the destination table should be created if it does not already exist. Options include CREATE_IF_NEEDED and CREATE_NEVER.
The allowed values are CREATE_IF_NEEDED, CREATE_NEVER. The default value is CREATE_IF_NEEDED. |
| WriteDisposition | String | Determines how data is written to the destination table. Options include WRITE_TRUNCATE, WRITE_APPEND, and WRITE_EMPTY.
The allowed values are WRITE_TRUNCATE, WRITE_APPEND, WRITE_EMPTY. The default value is WRITE_APPEND. |
| Region | String | The region where the load job should be executed. Both the source GCS files and the destination BigQuery dataset must reside in the same region. |
| DryRun | String | If the value is 'true', BigQuery validates the job without executing it. Useful for estimating costs or checking errors.
The default value is false. |
| MaximumBadRecords | String | The number of invalid records allowed before the entire job is aborted. If this value is not set, all records must be valid.
The default value is 0. |
| IgnoreUnknownValues | String | If the value is 'true', fields in the input data that are not part of the table schema are ignored. If 'false', such fields cause errors.
The default value is false. |
| AvroUseLogicalTypes | String | If the value is 'true', Avro logical types are used when mapping Avro data to BigQuery schema types.
The default value is true. |
| CSVSkipLeadingRows | String | The number of header rows to skip at the beginning of each CSV file. |
| CSVEncoding | String | The character encoding used in the CSV files, such as UTF-8 or ISO-8859-1.
The allowed values are ISO-8859-1, UTF-8. The default value is UTF-8. |
| CSVNullMarker | String | If set, specifies the string used to represent NULL values in the CSV files. By default, NULL values are not allowed. |
| CSVFieldDelimiter | String | The character used to separate fields in the CSV files. Common values include commas (,), tabs (\t), or pipes (|).
The default value is ,. |
| CSVQuote | String | The character used to quote fields in CSV files. Set to an empty string to disable quoting.
The default value is ". |
| CSVAllowQuotedNewlines | String | If the value is 'true', quoted fields in CSV files are allowed to contain newline characters.
The default value is false. |
| CSVAllowJaggedRows | String | If the value is 'true', rows in CSV files may have fewer fields than expected. If 'false', missing fields cause an error.
The default value is false. |
| DSBackupProjectionFields | String | A JSON list of field names to import from a Cloud Datastore backup. |
| ParquetOptions | String | A JSON object containing import-specific options for Parquet files, such as whether to interpret INT96 timestamps. |
| DecimalTargetTypes | String | A JSON list specifying the order of preference for converting decimal data types to BigQuery types, such as NUMERIC or BIGNUMERIC. |
| HivePartitioningOptions | String | A JSON object describing the source-side Hive-style partitioning used in the input files. |
| Name | Type | Description |
| JobId | String | The unique identifier assigned to the newly created load job. |
| Region | String | The region where the load job was executed. |
| Configuration_load_destinationTable_tableId | String | The ID of the destination table that received the loaded data. |
| Configuration_load_destinationTable_projectId | String | The ID of the project containing the destination table for the load job. |
| Configuration_load_destinationTable_datasetId | String | The ID of the dataset containing the destination table for the load job. |
| Status_State | String | The current execution state of the job, such as PENDING, RUNNING, or DONE. |
| Status_errorResult_reason | String | A brief error code that explains why the load job failed, if applicable. |
| Status_errorResult_message | String | A detailed message describing the reason for the job failure, if any. |
Obtains the OAuth access token to be used for authentication with various Google services.
| Name | Type | Description |
| OAuthRefreshToken | String | The refresh token returned from the original authorization code exchange. |
| Name | Type | Description |
| OAuthAccessToken | String | The authentication token returned from Google. This can be used in subsequent calls to other operations for this particular service. |
| OAuthRefreshToken | String | A token that may be used to obtain a new access token. |
| ExpiresIn | String | The remaining lifetime on the access token. |
Uploads objects in a single operation. Use the SimpleUploadLimit connection property to adjust the threshold, in bytes, above which a multipart upload is performed instead.
| Name | Type | Description |
| LocalFilePath | String | The path to the file to upload, such as 'C:/temp/my_file.txt'. If this is a path to a folder, all files in the folder are uploaded to the bucket. |
| RemotePath | String | Path to where the object will be uploaded, such as 'gs://my_bucket/my_file.txt'. |
| Name | Type | Description |
| Object | String | Object name for the object that is uploaded. |
| Success | String | Indicates whether the stored procedure was successful. |
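As a sketch, the procedure can be invoked with SQL like the following (the local path and bucket name are placeholders, not values from your environment):

```sql
EXECUTE UploadObject
  LocalFilePath = 'C:/temp/my_file.txt',
  RemotePath = 'gs://my_bucket/my_file.txt'
```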
Google BigQuery allows you to create external datasets that store data in Amazon S3 regions (like aws-us-east-1) or Azure Storage regions (like azure-eastus2). The Sync App supports these datasets with two major limitations:
The Sync App maps types from the data source to the corresponding data type available in the schema. The table below documents these mappings.
| Google BigQuery | CData Schema | |
| STRING | string | |
| BYTES | binary | |
| INTEGER | long | |
| FLOAT | double | |
| NUMERIC | decimal | |
| BIGNUMERIC | decimal | |
| BOOLEAN | bool | |
| DATE | date | |
| TIME | time | |
| DATETIME | datetime | |
| TIMESTAMP | datetime | |
| STRUCT | See below | |
| ARRAY | See below | |
| GEOGRAPHY | string | |
| JSON | string | |
| INTERVAL | string |
Note that the NUMERIC type supports 38 digits of precision and the BIGNUMERIC type supports 76 digits of precision. Most platforms do not have a decimal type that supports the full precision of these values (.NET decimal supports 28 digits, and Java BigDecimal supports 38 by default). In that case, you can cast these columns to strings when querying, or configure the connection to ignore them by setting IgnoreTypes=decimal.
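For example, assuming a hypothetical table balances with a BIGNUMERIC column amount, the full value can be retrieved losslessly by casting it to a string:

```sql
/* CAST to STRING avoids truncating precision that the client-side decimal type cannot hold */
SELECT AccountId, CAST(amount AS STRING) AS amount_text FROM balances
```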
Google BigQuery supports two kinds of types for storing compound values in a single row, STRUCT and ARRAY. In some places within Google BigQuery these are also known as RECORD and REPEATED types.
A STRUCT is a fixed-size group of values that are accessed by name and can have different types.
The Sync App flattens structs so their individual fields can be accessed using dotted names.
Note that these dotted names must be quoted.
-- trade_value STRUCT<currency STRING, value FLOAT>
SELECT CONCAT([trade_value.value], ' ', NULLIF([trade_value.currency], 'USD')) FROM trades
An ARRAY is a group of values with the same type that can have any size. The Sync App treats the array as a single compound value and reports it as a JSON aggregate.
These types may be combined such that a STRUCT type contains an ARRAY field, or an ARRAY field is a list of STRUCT values.
The outer type takes precedence in how the field is processed:
/* Table contains fields:
stocks STRUCT<symbol STRING, prices ARRAY<FLOAT>>
offers: ARRAY<STRUCT<currency STRING, value FLOAT>>
*/
SELECT [stocks.symbol], /* ARRAY field can be read from STRUCT, but is converted to JSON */
[stocks.prices],
[offers] /* STRUCT fields in an ARRAY cannot be accessed */
FROM market
The Sync App represents INTERVAL types as strings. Whenever a query requires an INTERVAL type, it must specify the INTERVAL using the BigQuery SQL INTERVAL format:
YEAR-MONTH DAY HOUR:MINUTE:SECOND.FRACTION. All queries that return INTERVAL values use this format unless they appear in an ARRAY aggregate, where the format depends upon how the Sync App reads the data.
For example, the value "5 years and 11 months, minus 10 days and 3 hours and 2.5 seconds" in the correct format is:
5-11 -10 -3:0:2.5
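For example, a comparison against an INTERVAL column must pass the value as a string in this format (the jobs table and duration column below are hypothetical):

```sql
/* duration is an INTERVAL column; '0-0 0 1:30:0' is 1 hour 30 minutes */
SELECT * FROM jobs WHERE duration > '0-0 0 1:30:0'
```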
The Sync App exposes parameters on the following types. In each case the type parameters are optional; Google BigQuery uses default values for types that are not parameterized.
These parameters are primarily for restricting the data written to the table. They are included in the table metadata as the column size for STRING and BYTES, and the numeric precision and scale for NUMERIC and BIGNUMERIC.
Type parameters have no effect on queries and are not reported within query metadata.
For example, in the query below, the output of CONCAT is a plain STRING even though its inputs a and b are both declared as STRING(100).
SELECT CONCAT(a, b) FROM table_with_length_params
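Type parameters are supplied when the table is defined. A hypothetical definition for a table like the one above:

```sql
CREATE TABLE table_with_length_params (
  a STRING(100),        -- at most 100 characters accepted on write
  b STRING(100),
  price NUMERIC(10, 2)  -- precision 10, scale 2 enforced on write
)
```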
Google BigQuery supports setting descriptions on tables but the Sync App does not report these by default. Use ShowTableDescriptions to report table descriptions.
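With ShowTableDescriptions enabled, the descriptions can then be read from the system tables. This query assumes a Description column on sys_tables, following the same pattern as the sys_tablecolumns query shown later in this section:

```sql
SELECT TableName, Description FROM sys_tables WHERE SchemaName = 'Northwind'
```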
Google BigQuery does not support primary keys natively, but the Sync App allows you to define them so they can be used in environments that require primary keys to modify data. Use PrimaryKeyIdentifiers to define primary keys.
If policy tags from the Data Catalog service are defined on a table, you can retrieve them from the system tables using the PolicyTags column:
SELECT ColumnName, PolicyTags FROM sys_tablecolumns WHERE CatalogName = 'psychic-valve-137816' AND SchemaName = 'Northwind' AND TableName = 'Customers'
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
| Property | Description |
| AuthScheme | Specifies the authentication method used to connect to Google BigQuery. |
| ProjectId | Specifies the Google Cloud project used to resolve unqualified table names and execute jobs in Google BigQuery. |
| DatasetId | Specifies the dataset used to resolve unqualified table references in SQL queries. |
| Property | Description |
| AllowLargeResultSets | Specifies whether large result sets are allowed to be stored in temporary tables. |
| UseQueryCache | Specifies whether to use Google BigQuery's built-in query cache for eligible queries. |
| PageSize | Specifies the number of results to return per page from Google BigQuery when paging through query results. |
| PollingInterval | Specifies the number of seconds to wait between status checks when polling for query completion. |
| UseLegacySQL | Specifies whether to use Google BigQuery's Legacy SQL dialect instead of Standard SQL when generating queries. |
| Property | Description |
| UseStorageAPI | Specifies whether to use the Google BigQuery Storage API for bulk data reads instead of the standard REST API. |
| UseArrowFormat | Specifies whether to use the Arrow format instead of Avro when reading data through the Google BigQuery Storage API. |
| StorageThreshold | Specifies the minimum number of rows a query must return for the provider to use the Google BigQuery Storage API to read results. |
| StoragePageSize | Specifies the number of rows to buffer per page when executing queries using the Google BigQuery Storage API. |
| StorageTimeout | Specifies the maximum time, in seconds, that a Storage API connection may remain active before the provider resets the connection. |
| Property | Description |
| InsertMode | Specifies the method used to insert data into Google BigQuery. |
| WaitForBatchResults | Specifies whether the provider should wait for Google BigQuery batch load jobs to complete before returning from an INSERT operation. |
| GCSBucket | Specifies the name of the Google Cloud Storage (GCS) bucket where bulk data is uploaded for staging. |
| GCSBucketFolder | Specifies the name of the folder within the GCS bucket where bulk data is uploaded for staging. |
| TempTableDataset | Specifies the prefix of the dataset used to store temporary tables during bulk UPDATE or DELETE operations. |
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| DelegatedServiceAccounts | Specifies a space-delimited list of service account emails for delegated requests. |
| RequestingServiceAccount | Specifies a service account email to make a delegated request. |
| Property | Description |
| OAuthJWTCert | Supplies the name of the client certificate's JWT Certificate store. |
| OAuthJWTCertType | Identifies the type of key store containing the JWT Certificate. |
| OAuthJWTCertPassword | Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank. |
| OAuthJWTCertSubject | Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate. |
| OAuthJWTIssuer | The issuer of the JSON Web Token (JWT). |
| OAuthJWTSubject | The user subject for which the application is requesting delegated access. |
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
| FirewallType | Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall. |
| FirewallServer | Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources. |
| FirewallPort | Specifies the TCP port to be used for a proxy-based firewall. |
| FirewallUser | Identifies the user ID of the account authenticating to a proxy-based firewall. |
| FirewallPassword | Specifies the password of the user account authenticating to a proxy-based firewall. |
| Property | Description |
| ProxyAutoDetect | Specifies whether the provider checks your system proxy settings for existing proxy server configurations, rather than using a manually specified proxy server. |
| ProxyServer | Identifies the hostname or IP address of the proxy server through which you want to route HTTP traffic. |
| ProxyPort | Identifies the TCP port on your specified proxy server that has been reserved for routing HTTP traffic to and from the client. |
| ProxyAuthScheme | Specifies the authentication method the provider uses when authenticating to the proxy server specified in the ProxyServer connection property. |
| ProxyUser | Provides the username of a user account registered with the proxy server specified in the ProxyServer connection property. |
| ProxyPassword | Specifies the password of the user specified in the ProxyUser connection property. |
| ProxySSLType | Specifies the SSL type to use when connecting to the proxy server specified in the ProxyServer connection property. |
| ProxyExceptions | Specifies a semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the proxy server set in the ProxyServer connection property. |
| Property | Description |
| LogModules | Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged. |
| Property | Description |
| Location | Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path. |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| BrowsableCatalogs | Optional setting that restricts the catalogs reported to a subset of all available catalogs. For example, BrowsableCatalogs=CatalogA,CatalogB,CatalogC. |
| Tables | Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC. |
| Views | Optional setting that restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC. |
| RefreshViewSchemas | Specifies whether the provider should automatically refresh view schemas by querying the views directly. |
| PrimaryKeyIdentifiers | Specifies rules for assigning primary keys to tables. |
| AllowedTableTypes | Specifies which types of tables are visible when listing tables in the dataset. |
| FlattenObjects | Specifies whether STRUCT fields in Google BigQuery are flattened into individual top-level columns. |
| Property | Description |
| AllowAggregateParameters | Specifies whether raw aggregate values can be used in parameters when the QueryPassthrough connection property is enabled. |
| ApplicationName | Specifies the name of the application using the provider, in the format application/version. For example, AcmeReporting/1.0. |
| AuditLimit | Specifies the maximum number of rows that can be stored in the in-memory audit table. |
| AuditMode | Specifies which provider actions should be recorded in audit tables. |
| AWSWorkloadIdentityConfig | Configuration properties to provide when using Workload Identity Federation via AWS. |
| AzureWorkloadIdentityConfig | Configuration properties to provide when using Workload Identity Federation via Azure. |
| BigQueryOptions | Specifies a comma-separated list of custom Google BigQuery provider options. |
| EmptyArraysAsNull | Specifies whether empty arrays are represented as null or as an empty array. |
| GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
| HidePartitionColumns | Specifies whether the pseudocolumns _PARTITIONDATE and _PARTITIONTIME are hidden in partitioned tables. |
| MaximumBillingTier | Specifies the maximum billing tier for a query, represented as a positive integer multiplier of the standard cost per terabyte. |
| MaximumBytesBilled | Specifies the maximum number of bytes a Google BigQuery job is allowed to process before it is cancelled. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Other | Specifies advanced connection properties for specialized scenarios. Use this property only under the guidance of our Support team to address specific issues. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| QueryPassthrough | This option passes the query to the Google BigQuery server as is. |
| SupportCaseSensitiveTables | Specifies whether the provider distinguishes between tables and datasets with the same name but different casing. |
| TableSamplePercent | Specifies the percentage of each table to sample when generating queries using the TABLESAMPLE clause. |
| Timeout | Specifies the maximum number of seconds to wait before timing out an operation. |
| UserDefinedViews | Specifies a filepath to a JSON configuration file that defines custom views. The provider automatically detects and uses the views specified in this file. |
| WorkloadPoolId | The ID of your Workload Identity Federation pool. |
| WorkloadProjectId | The ID of the Google Cloud project that hosts your Workload Identity Federation pool. |
| WorkloadProviderId | The ID of your Workload Identity Federation pool provider. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
| Property | Description |
| AuthScheme | Specifies the authentication method used to connect to Google BigQuery. |
| ProjectId | Specifies the Google Cloud project used to resolve unqualified table names and execute jobs in Google BigQuery. |
| DatasetId | Specifies the dataset used to resolve unqualified table references in SQL queries. |
Specifies the authentication method used to connect to Google BigQuery.
Specifies the Google Cloud project used to resolve unqualified table names and execute jobs in Google BigQuery.
This property works in combination with BillingProjectId to determine how queries are billed and how table names are resolved.
The Sync App must create a Google BigQuery job to execute certain operations, including:
The job’s billing project is selected using the following priority:
SELECT FirstName, LastName FROM `psychic-valve-137816`.`Northwind`.`customers`
This query runs under the psychic-valve-137816 project.
Note: When QueryPassthrough is enabled, only rules 1 and 2 apply. Either BillingProjectId or this property must be set to execute passthrough queries.
This property also defines the default data project used to resolve unqualified table names.
In contrast to job execution (which prioritizes BillingProjectId), unqualified table references are resolved using ProjectId first.
When a table reference does not include a project, the Sync App uses the following order to determine the project:
/* Unqualified table: resolved using ProjectId */
SELECT FirstName, LastName FROM `Northwind`.`customers`

/* Fully qualified table: resolved using the specified project */
SELECT FirstName, LastName FROM `psychic-valve-137816`.`Northwind`.`customers`

/* Mixed example: 'orders' is resolved using the project from 'customers' */
SELECT * FROM `psychic-valve-137816`.`Northwind`.`customers` INNER JOIN `Northwind`.`orders` ON ...
Note: When QueryPassthrough is enabled, only this property and BillingProjectId can be used to resolve unqualified tables. All cross-project references must be fully qualified.
Set this property to your active Google Cloud project to control billing and resolve table references when queries omit full project names.
Specifies the dataset used to resolve unqualified table references in SQL queries.
When a query references a table without specifying a dataset, this property determines how the Sync App resolves the dataset. Using a defined DatasetId can reduce ambiguity and improve reliability in query parsing, particularly in passthrough scenarios.
Tables in Google BigQuery can be referenced either with or without a dataset:
/* Unqualified reference (dataset resolved from connection) */
SELECT FirstName, LastName FROM `customers`

/* Fully qualified reference */
SELECT FirstName, LastName FROM `project-id`.`Northwind`.`customers`
The Sync App uses the following rules to resolve unqualified tables:
For example, in the following query, orders is treated as part of the Northwind dataset:
SELECT * FROM `project-id`.`Northwind`.`customers` INNER JOIN `orders` ON ...
When QueryPassthrough is enabled, only the first rule applies. In passthrough mode, either set this property or qualify all table names explicitly.
Set this property when working with queries that include unqualified table names, especially if you're using passthrough or querying across multiple datasets.
This section provides a complete list of the BigQuery properties you can configure in the connection string for this provider.
| Property | Description |
| AllowLargeResultSets | Specifies whether large result sets are allowed to be stored in temporary tables. |
| UseQueryCache | Specifies whether to use Google BigQuery's built-in query cache for eligible queries. |
| PageSize | Specifies the number of results to return per page from Google BigQuery when paging through query results. |
| PollingInterval | Specifies the number of seconds to wait between status checks when polling for query completion. |
| UseLegacySQL | Specifies whether to use Google BigQuery's Legacy SQL dialect instead of Standard SQL when generating queries. |
Specifies whether large result sets are allowed to be stored in temporary tables.
When set to true, the Sync App permits queries that return large result sets to write results to a temporary table. This is required when query results exceed Google BigQuery’s default response limits.
When set to false, large result sets may cause queries to fail unless pagination or result limiting is used.
Enable this property if you expect queries to return large datasets and want the Sync App to store those results using temporary tables in Google BigQuery.
Storing large result sets in temporary tables may increase query execution time and storage usage. Enable this option only when necessary.
Specifies whether to use Google BigQuery's built-in query cache for eligible queries.
Google BigQuery automatically caches the results of recent queries. By default, if a matching cached result exists and the underlying data has not changed, Google BigQuery returns the cached result instead of re-executing the query. This improves performance and reduces cost without returning stale data since the cache is invalidated automatically when the referenced tables are modified.
When this property is set to true, the Sync App allows Google BigQuery to use cached results when available.
When set to false, the query is always executed directly against the current table data, bypassing the cache entirely.
Use this property to control whether cached results should be used for performance optimization. Disable caching for scenarios where full re-evaluation is necessary—such as benchmarking or auditing.
Specifies the number of results to return per page from Google BigQuery when paging through query results.
This property controls how many rows are returned in each page of results from Google BigQuery. A higher value reduces the number of HTTP requests by returning more data at once, but may increase response time and memory usage. A lower value returns fewer rows per page and requires more requests, which may help avoid timeouts or reduce memory usage in constrained environments.
This property has no effect when UseStorageAPI is enabled and the query is eligible to use the Google BigQuery Storage API. In that case, use StoragePageSize to control paging behavior.
Adjust this property to balance throughput and stability based on your workload and network environment.
Larger page sizes reduce request overhead, but may increase the risk of timeouts. Smaller sizes improve reliability at the cost of increased request frequency.
Specifies the number of seconds to wait between status checks when polling for query completion.
This property applies only to queries where results are stored to a table instead of streamed directly to the Sync App. Polling occurs in the following scenarios:
In these cases, the Sync App submits the query and checks periodically to determine if results are ready. PollingInterval defines how many seconds to wait between each status check.
For example: PollingInterval=5 causes the Sync App to wait 5 seconds between polling attempts.
Using a shorter interval increases the number of API requests, which may be unnecessary for longer-running queries. A longer interval reduces polling frequency, but may delay result retrieval slightly after query completion.
Specifies whether to use Google BigQuery's Legacy SQL dialect instead of Standard SQL when generating queries.
By default, the Sync App uses Standard SQL, which is the recommended and more feature-rich dialect supported by Google BigQuery.
When this property is set to true, the Sync App generates queries using Google BigQuery’s Legacy SQL dialect. Legacy SQL has different syntax and semantics and does not support certain modern features.
Key behavioral differences:
Enable this property only if your environment requires compatibility with Legacy SQL, such as when working with legacy views, tools, or scripts that depend on that dialect. Standard SQL is generally more performant and flexible and is recommended for most use cases.
This section provides a complete list of the Storage API properties you can configure in the connection string for this provider.
| Property | Description |
| UseStorageAPI | Specifies whether to use the Google BigQuery Storage API for bulk data reads instead of the standard REST API. |
| UseArrowFormat | Specifies whether to use the Arrow format instead of Avro when reading data through the Google BigQuery Storage API. |
| StorageThreshold | Specifies the minimum number of rows a query must return for the provider to use the Google BigQuery Storage API to read results. |
| StoragePageSize | Specifies the number of rows to buffer per page when executing queries using the Google BigQuery Storage API. |
| StorageTimeout | Specifies the maximum time, in seconds, that a Storage API connection may remain active before the provider resets the connection. |
Specifies whether to use the Google BigQuery Storage API for bulk data reads instead of the standard REST API.
When this property is set to true, the Sync App uses the Google BigQuery Storage API, which is optimized for high-throughput, low-latency data access.
Depending on the complexity of the query, the Sync App chooses one of two execution paths:
The Storage API typically offers better performance than the REST API but:
If this property is set to false, the Sync App uses the Google BigQuery REST API, which:
Keep this property enabled for faster and more efficient data access, especially when working with large datasets. Disable it only if you require simpler authentication or need to reduce dependency on the Storage API.
Specifies whether to use the Arrow format instead of Avro when reading data through the Google BigQuery Storage API.
This property only takes effect when UseStorageAPI is enabled. When reading data from Google BigQuery using the Storage API, the Sync App can request the result set in different formats. By default, it uses Avro, but enabling this property switches the format to Arrow.
Using Arrow can offer performance benefits for certain workloads, particularly those involving time series data or tables with many date, time, datetime, or timestamp fields. In these cases, Arrow can result in faster reads and more efficient memory usage.
For most other datasets, the difference in performance between Avro and Arrow is minimal. Enable this property when working with temporal data types or when you observe performance bottlenecks with Avro in Storage API reads.
Specifies the minimum number of rows a query must return for the provider to use the Google BigQuery Storage API to read results.
This property is only applicable when UseStorageAPI is set to true.
When UseStorageAPI is true, the Sync App attempts to use the Google BigQuery Storage API for efficient result retrieval. If a query is too complex to run directly on the Storage API, the Sync App creates a query job and stores the results in a temporary table.
This property defines the minimum number of rows the job must return for the Sync App to use the Storage API to read from that table. If the result set contains fewer rows than the specified value, the Sync App returns the results directly without using the Storage API.
Valid values range from 1 to 100,000. For example: StorageThreshold=50000
This means the Storage API is used only if the query job returns 50,000 rows or more. Setting a lower value allows more queries to use the Storage API, which may improve performance for smaller result sets but could increase API costs. Setting a higher value limits Storage API usage to large result sets, which can help control usage and cost but may result in slower performance for medium-sized queries.
This property has no effect on queries that can be executed directly on the Storage API, as those do not require query jobs. Adjust this setting based on the typical size of your query results.
Specifies the number of rows to buffer per page when executing queries using the Google BigQuery Storage API.
This property applies only when UseStorageAPI is enabled and the query is eligible to run on the Google BigQuery Storage API. It controls how many rows the Sync App retrieves and buffers from the API in each page.
Larger values typically improve performance by reducing the number of round trips to the API, but will increase memory consumption. Smaller values reduce memory usage but may slow down query execution due to more frequent network calls.
Adjust this value based on your environment’s memory capacity and performance needs. For large, high-throughput queries, increasing the value may help. For resource-constrained systems, consider lowering it.
Specifies the maximum time, in seconds, that a Storage API connection may remain active before the provider resets the connection.
Some networks, proxies, or firewalls automatically close idle connections after a period of inactivity. This can affect Storage API operations if the Sync App streams data faster than it can be consumed. While the consumer is catching up, the connection may be idle long enough to be closed externally.
To avoid connection failures, the Sync App resets the Storage API connection after it has been open for the number of seconds specified by this property. For example: StorageTimeout=600. This causes the Sync App to reset the connection after 10 minutes.
Set this value to 0 to disable automatic connection resets.
This section provides a complete list of the Uploading properties you can configure in the connection string for this provider.
| Property | Description |
| InsertMode | Specifies the method used to insert data into Google BigQuery. |
| WaitForBatchResults | Specifies whether the provider should wait for Google BigQuery batch load jobs to complete before returning from an INSERT operation. |
| GCSBucket | Specifies the name of the Google Cloud Storage (GCS) bucket where bulk data is uploaded for staging. |
| GCSBucketFolder | Specifies the name of the folder within the GCS bucket where bulk data is uploaded for staging. |
| TempTableDataset | Specifies the prefix of the dataset used to store temporary tables during bulk UPDATE or DELETE operations. |
Specifies the method used to insert data into Google BigQuery.
This property determines how data is uploaded during insert operations. Choose the insert mode based on your performance, data volume, and staging requirements.
Supported insert modes:
When UseLegacySQL is set to true, only Streaming and Upload modes are supported. The legacy SQL dialect does not support DML statements.
Use this property to control how the Sync App handles insert operations, especially for high-volume or real-time data ingestion scenarios. For detailed guidance on tuning and usage, refer to Advanced Integrations.
Specifies whether the provider should wait for Google BigQuery batch load jobs to complete before returning from an INSERT operation.
This property only applies when InsertMode is set to Upload.
By default, this property is set to true, meaning the Sync App waits until the batch load job has completed. This ensures that any errors encountered during execution are detected and reported immediately. It also helps manage Google BigQuery load job limits by preventing multiple concurrent jobs on the same connection.
If this property is set to false, the Sync App submits the load job and returns control to the application immediately without checking the final status. While this may reduce perceived latency, it introduces the risk of silent failures and requires the application to manually track job status. It also increases the chance of exceeding Google BigQuery rate limits if multiple jobs are submitted too quickly.
Leave this property enabled for more reliable insert behavior and automatic error handling. Disable it only if your application handles job monitoring and rate-limiting logic independently.
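As a sketch, the two behaviors above map to connection-string settings like the following. The semicolon-delimited key=value connection-string form and the `build_connection_string` helper are assumptions for illustration; only the property names (InsertMode, WaitForBatchResults) come from this documentation:

```python
# Illustrative sketch: assembling a connection string that disables waiting
# for batch load jobs. The semicolon-delimited key=value format is an
# assumption for illustration.
def build_connection_string(props):
    return ";".join(f"{key}={value}" for key, value in props.items())

conn_str = build_connection_string({
    "InsertMode": "Upload",
    "WaitForBatchResults": "false",  # application must track load-job status itself
})
print(conn_str)  # InsertMode=Upload;WaitForBatchResults=false
```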
Specifies the name of the Google Cloud Storage (GCS) bucket where bulk data is uploaded for staging.
This property applies only when InsertMode is set to GCSStaging. In that mode, the Sync App stages data in the specified GCS bucket before loading it into Google BigQuery.
If InsertMode is set to GCSStaging and this property is not set, bulk operations will fail.
Set this property to the name of an existing GCS bucket that your authentication method can write to. For example: GCSBucket=my-staging-bucket.
Specifies the name of the folder within the GCS bucket where bulk data is uploaded for staging.
This property applies only when InsertMode is set to GCSStaging.
If this property is not set, the Sync App uploads staged data to the root of the specified GCS bucket.
Set this property to organize staged files under a specific folder path within the bucket. This helps prevent file collisions during concurrent operations and improves data organization across environments or workflows.
For example: GCSBucketFolder=staging/datahub/temp
This setting writes staged files to: gs://<GCSBucket>/staging/datahub/temp/
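A small sketch of how the staged-object location is composed from these two properties. The `staging_prefix` helper is hypothetical; only the `gs://<GCSBucket>/<GCSBucketFolder>/` layout comes from the documentation above:

```python
# Sketch of the staging path layout described above. The exact object
# naming used by the Sync App is internal; this only illustrates how the
# bucket and folder properties combine into a GCS path.
def staging_prefix(bucket, folder=None):
    if folder:
        return f"gs://{bucket}/{folder.strip('/')}/"
    return f"gs://{bucket}/"  # no folder set: files land in the bucket root

print(staging_prefix("my-staging-bucket", "staging/datahub/temp"))
# gs://my-staging-bucket/staging/datahub/temp/
```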
Specifies the prefix of the dataset used to store temporary tables during bulk UPDATE or DELETE operations.
The Sync App uses Google BigQuery MERGE statements to perform bulk UPDATE and DELETE operations. These operations require staging the modified data in a temporary table. This property defines the prefix used to name the dataset where those temporary tables are created.
The full dataset name is derived by appending the region of the target table to the specified prefix. This ensures that the temporary and target tables reside in the same region, which is required by Google BigQuery and helps avoid cross-region data transfer charges.
For example, if this property is set to the default value (_CDataTempTableDataset), the Sync App generates region-specific datasets by appending the region name to the prefix.
/* Used for tables in the US region */
_CDataTempTableDataset_US

/* Used for tables in the asia-southeast1 region */
_CDataTempTableDataset_asia_southeast1
This ensures that temporary tables used during bulk operations are stored in the same region as the target tables. Google BigQuery requires this for MERGE operations, and it helps avoid additional latency or data transfer costs.
Each Google BigQuery region must have its own temporary dataset, based on the specified prefix.
Use this property to customize the prefix used for temporary datasets in bulk write operations. This can help align with naming conventions or avoid naming conflicts in shared environments.
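The naming scheme above can be sketched as follows. The hyphen-to-underscore normalization is inferred from the documented `_CDataTempTableDataset_asia_southeast1` example; the Sync App's actual derivation logic may differ:

```python
# Hypothetical model of the region-suffix naming: the region name is
# appended to the configured prefix, with hyphens normalized to
# underscores (inferred from the documented examples).
def temp_dataset_name(prefix, region):
    return f"{prefix}_{region.replace('-', '_')}"

print(temp_dataset_name("_CDataTempTableDataset", "US"))
# _CDataTempTableDataset_US
print(temp_dataset_name("_CDataTempTableDataset", "asia-southeast1"))
# _CDataTempTableDataset_asia_southeast1
```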
This section provides a complete list of the OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthClientId | Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication. |
| OAuthClientSecret | Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.) |
| DelegatedServiceAccounts | Specifies a space-delimited list of service account emails for delegated requests. |
| RequestingServiceAccount | Specifies a service account email to make a delegated request. |
Specifies the client ID (also known as the consumer key) assigned to your custom OAuth application. This ID is required to identify the application to the OAuth authorization server during authentication.
This property is required in two cases:
(When the driver provides embedded OAuth credentials, this value may already be provided by the Sync App and thus not require manual entry.)
OAuthClientId is generally used alongside other OAuth-related properties such as OAuthClientSecret and OAuthSettingsLocation when configuring an authenticated connection.
OAuthClientId is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can usually find this value in your identity provider’s application registration settings. Look for a field labeled Client ID, Application ID, or Consumer Key.
While the client ID is not considered a confidential value like a client secret, it is still part of your application's identity and should be handled carefully. Avoid exposing it in public repositories or shared configuration files.
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies the client secret assigned to your custom OAuth application. This confidential value is used to authenticate the application to the OAuth authorization server. (Custom OAuth applications only.)
This property (sometimes called the application secret or consumer secret) is required when using a custom OAuth application in any flow that requires secure client authentication, such as web-based OAuth, service-based connections, or certificate-based authorization flows. It is not required when using an embedded OAuth application.
The client secret is used during the token exchange step of the OAuth flow, when the driver requests an access token from the authorization server. If this value is missing or incorrect, authentication fails with either an invalid_client or an unauthorized_client error.
OAuthClientSecret is one of the key connection parameters that need to be set before users can authenticate via OAuth. You can obtain this value from your identity provider when registering the OAuth application.
Notes:
For more information on how this property is used when configuring a connection, see Establishing a Connection.
Specifies a space-delimited list of service account emails for delegated requests.
The service account emails must be specified in a space-delimited list.
Each service account must be granted the roles/iam.serviceAccountTokenCreator role on the next service account in the chain.
The last service account in the chain must be granted the roles/iam.serviceAccountTokenCreator role on the requesting service account. The requesting service account is the one specified in the RequestingServiceAccount property.
Note that for delegated requests, the requesting service account must have the permission iam.serviceAccounts.getAccessToken, which can also be granted through the serviceAccountTokenCreator role.
Specifies a service account email to make a delegated request.
Specifies the service account email of the account for which credentials are requested in a delegated request. This property is used together with the list of delegated service accounts in DelegatedServiceAccounts to make a delegated request.
You must have the IAM permission iam.serviceAccounts.getAccessToken on this service account.
This section provides a complete list of the JWT OAuth properties you can configure in the connection string for this provider.
| Property | Description |
| OAuthJWTCert | Supplies the name of the client certificate's JWT Certificate store. |
| OAuthJWTCertType | Identifies the type of key store containing the JWT Certificate. |
| OAuthJWTCertPassword | Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank. |
| OAuthJWTCertSubject | Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate. |
| OAuthJWTIssuer | The issuer of the JSON Web Token. |
| OAuthJWTSubject | The user subject for which the application is requesting delegated access. |
Supplies the name of the client certificate's JWT Certificate store.
The OAuthJWTCertType field specifies the type of the certificate store specified in OAuthJWTCert. If the store is password-protected, use OAuthJWTCertPassword to supply the password.
OAuthJWTCert is used in conjunction with the OAuthJWTCertSubject field in order to specify client certificates. If OAuthJWTCert has a value, and OAuthJWTCertSubject is set, the CData Sync App initiates a search for a certificate. For further information, see OAuthJWTCertSubject.
Designations of certificate stores are platform-dependent.
Notes
Identifies the type of key store containing the JWT Certificate.
| Value | Description | Notes |
| USER | A certificate store owned by the current user. | Only available in Windows. |
| MACHINE | A machine store. | Not available in Java or other non-Windows environments. |
| PFXFILE | A PFX (PKCS12) file containing certificates. | |
| PFXBLOB | A string (base-64-encoded) representing a certificate store in PFX (PKCS12) format. | |
| JKSFILE | A Java key store (JKS) file containing certificates. | Only available in Java. |
| JKSBLOB | A string (base-64-encoded) representing a certificate store in Java key store (JKS) format. | Only available in Java. |
| PEMKEY_FILE | A PEM-encoded file that contains a private key and an optional certificate. | |
| PEMKEY_BLOB | A string (base64-encoded) that contains a private key and an optional certificate. | |
| PUBLIC_KEY_FILE | A file that contains a PEM- or DER-encoded public key certificate. | |
| PUBLIC_KEY_BLOB | A string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate. | |
| SSHPUBLIC_KEY_FILE | A file that contains an SSH-style public key. | |
| SSHPUBLIC_KEY_BLOB | A string (base-64-encoded) that contains an SSH-style public key. | |
| P7BFILE | A PKCS7 file containing certificates. | |
| PPKFILE | A file that contains a PPK (PuTTY Private Key). | |
| XMLFILE | A file that contains a certificate in XML format. | |
| XMLBLOB | A string that contains a certificate in XML format. | |
| BCFKSFILE | A file that contains a Bouncy Castle keystore. | |
| BCFKSBLOB | A string (base-64-encoded) that contains a Bouncy Castle keystore. | |
| GOOGLEJSON | A JSON file containing the service account information. | Only valid when connecting to a Google service. |
| GOOGLEJSONBLOB | A string that contains the service account JSON. | Only valid when connecting to a Google service. |
Provides the password for the OAuth JWT certificate used to access a password-protected certificate store. If the certificate store does not require a password, leave this property blank.
This property specifies the password needed to open a password-protected certificate store. To determine if a password is necessary, refer to the documentation or configuration for your specific certificate store.
This is not required when using the GOOGLEJSON OAuthJWTCertType. Google JSON keys are not encrypted.
Identifies the subject of the OAuth JWT certificate used to locate a matching certificate in the store. Supports partial matches and the wildcard '*' to select the first certificate.
The value of this property is used to locate a matching certificate in the store. The search process works as follows:
You can set the value to '*' to automatically select the first certificate in the store. The certificate subject is a comma-separated list of distinguished name fields and values. For example: CN=www.server.com, OU=test, C=US, [email protected].
Common fields include:
| Field | Meaning |
| CN | Common Name. This is commonly a host name like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, enclose it in quotes. For example: "O=ACME, Inc.".
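The matching rules above can be modeled with a short sketch: '*' selects the first certificate, otherwise the requested subject is compared against each certificate's distinguished name, allowing partial matches. This is a simplified illustration of the documented behavior, not the Sync App's actual search code:

```python
# Simplified model of OAuthJWTCertSubject matching: '*' takes the first
# certificate in the store; any other value is matched as a substring of
# each certificate's distinguished name.
def find_certificate(subject, store_subjects):
    if subject == "*":
        return store_subjects[0] if store_subjects else None
    for dn in store_subjects:
        if subject in dn:  # partial match on the DN string
            return dn
    return None

store = ["CN=www.server.com, OU=test, C=US", "CN=other.example.com, C=US"]
print(find_certificate("CN=www.server.com", store))
# CN=www.server.com, OU=test, C=US
```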
The issuer of the JSON Web Token.
Enter the service account email address as the issuer.
This is not required when using the GOOGLEJSON OAuthJWTCertType. Google JSON keys contain a copy of the issuer account.
The user subject for which the application is requesting delegated access.
Enter the email address of the user for whom the application is requesting delegated access.
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
| Property | Description |
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
If you are using a TLS/SSL connection, use this property to specify the TLS/SSL certificate to be accepted from the server. If you specify a value for this property, all other certificates that are not trusted by the machine are rejected.
This property can take the following forms:
| Description | Example |
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space- or colon-separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space- or colon-separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
Note: It is possible to use '*' to signify that all certificates should be accepted, but due to security concerns this is not recommended.
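Since the thumbprint forms above may be written with space- or colon-separated hex bytes, comparing them requires normalization. A minimal sketch (illustrative only; the Sync App performs this matching internally):

```python
# Normalize a certificate thumbprint for comparison: strip separators
# and lowercase, so colon-, space-, and bare-hex forms compare equal.
def normalize_thumbprint(value):
    return value.replace(":", "").replace(" ", "").lower()

colon_form = "EC:AD:BD:DA:5A:15:29:C5:8A:1E:9E:09:82:8D:70:E4"
print(normalize_thumbprint(colon_form))
# ecadbdda5a1529c58a1e9e09828d70e4
```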
This section provides a complete list of the Firewall properties you can configure in the connection string for this provider.
| Property | Description |
| FirewallType | Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall. |
| FirewallServer | Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources. |
| FirewallPort | Specifies the TCP port to be used for a proxy-based firewall. |
| FirewallUser | Identifies the user ID of the account authenticating to a proxy-based firewall. |
| FirewallPassword | Specifies the password of the user account authenticating to a proxy-based firewall. |
Specifies the protocol the provider uses to tunnel traffic through a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Note: By default, the Sync App connects to the system proxy. To disable this behavior and connect to one of the following proxy types, set ProxyAutoDetect to false.
The following table provides port number information for each of the supported protocols.
| Protocol | Default Port | Description |
| TUNNEL | 80 | The port where the Sync App opens a connection to Google BigQuery. Traffic flows back and forth via the proxy at this location. |
| SOCKS4 | 1080 | The port where the Sync App opens a connection to Google BigQuery. SOCKS 4 then passes the FirewallUser value to the proxy, which determines whether the connection request should be granted. |
| SOCKS5 | 1080 | The port where the Sync App sends data to Google BigQuery. If the SOCKS 5 proxy requires authentication, set FirewallUser and FirewallPassword to credentials the proxy recognizes. |
To connect to HTTP proxies, use ProxyServer and ProxyPort. To authenticate to HTTP proxies, use ProxyAuthScheme, ProxyUser, and ProxyPassword.
Identifies the IP address, DNS name, or host name of a proxy used to traverse a firewall and relay user queries to network resources.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Specifies the TCP port to be used for a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Identifies the user ID of the account authenticating to a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
Specifies the password of the user account authenticating to a proxy-based firewall.
A proxy-based firewall (or proxy firewall) is a network security device that acts as an intermediary between user requests and the resources they access. The proxy accepts the request of an authenticated user, tunnels through the firewall, and transmits the request to the appropriate server.
Because the proxy evaluates and transfers data packets on behalf of the requesting users, the users never connect directly with the servers, only with the proxy.
This section provides a complete list of the Proxy properties you can configure in the connection string for this provider.
| Property | Description |
| ProxyAutoDetect | Specifies whether the provider checks your system proxy settings for existing proxy server configurations, rather than using a manually specified proxy server. |
| ProxyServer | Identifies the hostname or IP address of the proxy server through which you want to route HTTP traffic. |
| ProxyPort | Identifies the TCP port on your specified proxy server that has been reserved for routing HTTP traffic to and from the client. |
| ProxyAuthScheme | Specifies the authentication method the provider uses when authenticating to the proxy server specified in the ProxyServer connection property. |
| ProxyUser | Provides the username of a user account registered with the proxy server specified in the ProxyServer connection property. |
| ProxyPassword | Specifies the password of the user specified in the ProxyUser connection property. |
| ProxySSLType | Specifies the SSL type to use when connecting to the proxy server specified in the ProxyServer connection property. |
| ProxyExceptions | Specifies a semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the proxy server set in the ProxyServer connection property. |
Specifies whether the provider checks your system proxy settings for existing proxy server configurations, rather than using a manually specified proxy server.
When this connection property is set to True, the Sync App checks your system proxy settings for existing proxy server configurations (no need to manually supply proxy server details).
This connection property takes precedence over other proxy settings. If you want to configure the Sync App to connect to a specific proxy server, set ProxyAutoDetect to False.
To connect to an HTTP proxy, see ProxyServer. For other proxies, such as SOCKS or tunneling, see FirewallType.
Identifies the hostname or IP address of the proxy server through which you want to route HTTP traffic.
The Sync App only routes HTTP traffic through the proxy server specified in this connection property when ProxyAutoDetect is set to False.
If ProxyAutoDetect is set to True (the default), the Sync App instead routes HTTP traffic through the proxy server specified in your system proxy settings.
Identifies the TCP port on your specified proxy server that has been reserved for routing HTTP traffic to and from the client.
The Sync App only routes HTTP traffic through the ProxyServer port specified in this connection property when ProxyAutoDetect is set to False.
If ProxyAutoDetect is set to True (the default), the Sync App instead routes HTTP traffic through the proxy server port specified in your system proxy settings.
For other proxy types, see FirewallType.
Specifies the authentication method the provider uses when authenticating to the proxy server specified in the ProxyServer connection property.
Supported authentication types:
For all values other than NONE, you must also set the ProxyUser and ProxyPassword connection properties.
If you need to use another authentication type, such as SOCKS 5 authentication, see FirewallType.
Provides the username of a user account registered with the proxy server specified in the ProxyServer connection property.
The ProxyUser and ProxyPassword connection properties are used to connect and authenticate against the HTTP proxy specified in ProxyServer.
After selecting one of the available authentication types in ProxyAuthScheme, set this property as follows:
| ProxyAuthScheme Value | Value to set for ProxyUser |
| BASIC | The username of a user registered with the proxy server. |
| DIGEST | The username of a user registered with the proxy server. |
| NEGOTIATE | The username of a Windows user who is a valid user in the domain or trusted domain that the proxy server is part of, in the format user@domain or domain\user. |
| NTLM | The username of a Windows user who is a valid user in the domain or trusted domain that the proxy server is part of, in the format user@domain or domain\user. |
| NONE | Do not set the ProxyPassword connection property. |
Note: The Sync App only uses this username if ProxyAutoDetect is set to False. If ProxyAutoDetect is set to True (the default), the Sync App instead uses the username specified in your system proxy settings.
Specifies the password of the user specified in the ProxyUser connection property.
The ProxyUser and ProxyPassword connection properties are used to connect and authenticate against the HTTP proxy specified in ProxyServer.
After selecting one of the available authentication types in ProxyAuthScheme, set this property as follows:
| ProxyAuthScheme Value | Value to set for ProxyPassword |
| BASIC | The password associated with the proxy server user specified in ProxyUser. |
| DIGEST | The password associated with the proxy server user specified in ProxyUser. |
| NEGOTIATE | The password associated with the Windows user account specified in ProxyUser. |
| NTLM | The password associated with the Windows user account specified in ProxyUser. |
| NONE | Do not set the ProxyPassword connection property. |
For SOCKS 5 authentication or tunneling, see FirewallType.
Note: The Sync App only uses this password if ProxyAutoDetect is set to False. If ProxyAutoDetect is set to True (the default), the Sync App instead uses the password specified in your system proxy settings.
Specifies the SSL type to use when connecting to the proxy server specified in the ProxyServer connection property.
This property determines when to use SSL for the connection to the HTTP proxy specified by ProxyServer. You can set this connection property to the following values:
| Value | Description |
| AUTO | Default setting. If ProxyServer is set to an HTTPS URL, the Sync App uses the TUNNEL option. If ProxyServer is set to an HTTP URL, the component uses the NEVER option. |
| ALWAYS | The connection is always SSL enabled. |
| NEVER | The connection is not SSL enabled. |
| TUNNEL | The connection is made through a tunneling proxy. The proxy server opens a connection to the remote host and traffic flows back and forth through the proxy. |
Specifies a semicolon-separated list of destination hostnames or IPs that are exempt from connecting through the proxy server set in the ProxyServer connection property.
The ProxyServer is used for all addresses, except for addresses defined in this property. Use semicolons to separate entries.
Note: The Sync App uses the system proxy settings by default, without further configuration needed. If you want to explicitly configure proxy exceptions for this connection, set ProxyAutoDetect to False.
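A sketch of exception matching: hosts in the semicolon-separated list bypass the proxy. This is a simplified illustration; the Sync App's actual matching rules (for example, any wildcard handling) may differ:

```python
# Hypothetical model of ProxyExceptions: a destination host bypasses the
# proxy when it appears in the semicolon-separated exception list.
def bypasses_proxy(host, proxy_exceptions):
    exceptions = [h.strip().lower() for h in proxy_exceptions.split(";") if h.strip()]
    return host.lower() in exceptions

print(bypasses_proxy("localhost", "localhost;127.0.0.1"))    # True
print(bypasses_proxy("example.com", "localhost;127.0.0.1"))  # False
```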
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
| Property | Description |
| LogModules | Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged. |
Specifies the core modules to include in the log file. Use a semicolon-separated list of module names. By default, all modules are logged.
The Sync App writes details about each operation it performs into the logfile specified by the Logfile connection property.
Each logged operation is assigned to a themed category called a module, and each module has a corresponding short code used to label individual Sync App operations as belonging to that module.
When this connection property is set to a semicolon-separated list of module codes, only operations belonging to the specified modules are written to the logfile. Note that this only affects which operations are logged moving forward and doesn't retroactively alter the existing contents of the logfile. For example: INFO;EXEC;SSL;META;
By default, logged operations from all modules are included.
You can explicitly exclude a module by prefixing it with a "-". For example: -HTTP
To apply filters to submodules, identify them with the syntax <module name>.<submodule name>. For example, the following value causes the Sync App to only log actions belonging to the HTTP module, and further refines it to exclude actions belonging to the Res submodule of the HTTP module: HTTP;-HTTP.Res
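The include/exclude rules above can be modeled in a short sketch: a value like HTTP;-HTTP.Res includes HTTP operations but excludes the HTTP.Res submodule. This is a simplified illustration of the documented filtering rules, not the Sync App's implementation:

```python
# Simplified model of LogModules filtering: "-" entries exclude a module
# or submodule; remaining entries form an include list. Submodules are
# matched with a dotted-prefix check.
def is_logged(module, log_modules):
    entries = [e for e in log_modules.split(";") if e]
    includes = [e for e in entries if not e.startswith("-")]
    excludes = [e[1:] for e in entries if e.startswith("-")]
    if any(module == ex or module.startswith(ex + ".") for ex in excludes):
        return False
    if not includes:  # only exclusions given: everything else is logged
        return True
    return any(module == inc or module.startswith(inc + ".") for inc in includes)

print(is_logged("HTTP.Req", "HTTP;-HTTP.Res"))  # True
print(is_logged("HTTP.Res", "HTTP;-HTTP.Res"))  # False
print(is_logged("SQL", "HTTP;-HTTP.Res"))       # False
```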
Note that the logfile filtering triggered by the Verbosity connection property takes precedence over the filtering imposed by this connection property. This means that operations of a higher verbosity level than the level specified in the Verbosity connection property are not printed in the logfile, even if they belong to one of the modules specified in this connection property.
The available modules and submodules are:
| Module Name | Module Description | Submodules |
| INFO | General Information. Includes the connection string, product version (build number), and initial connection messages. | |
| EXEC | Query Execution. Includes execution messages for user-written SQL queries, parsed SQL queries, and normalized SQL queries. Success/failure messages for queries and query pages appear here as well. | |
| HTTP | HTTP protocol messages. Includes HTTP requests/responses (including POST messages), as well as Kerberos-related messages. | |
| WSDL | Messages pertaining to the generation of WSDL/XSD files. | — |
| SSL | SSL certificate messages. | |
| AUTH | Authentication-related failure/success messages. | |
| SQL | Includes SQL transactions, SQL bulk transfer messages, and SQL result set messages. | |
| META | Metadata cache and schema messages. | |
| FUNC | Information related to executing SQL functions. | |
| TCP | Incoming and outgoing raw bytes on TCP transport layer messages. | |
| FTP | Messages pertaining to the File Transfer Protocol. | |
| SFTP | Messages pertaining to the Secure File Transfer Protocol. | |
| POP | Messages pertaining to data transferred via the Post Office Protocol. | |
| SMTP | Messages pertaining to data transferred via the Simple Mail Transfer Protocol. | |
| CORE | Messages relating to various internal product operations not covered by other modules. | — |
| DEMN | Messages related to SQL remoting. | — |
| CLJB | Messages about bulk data uploads (cloud job). | |
| SRCE | Miscellaneous messages produced by the product that don't belong in any other module. | — |
| TRANCE | Advanced messages concerning low-level product operations. | — |
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
| Property | Description |
| Location | Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path. |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC . |
| BrowsableCatalogs | Optional setting that restricts the catalogs reported to a subset of all available catalogs. For example, BrowsableCatalogs=CatalogA,CatalogB,CatalogC . |
| Tables | Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC . |
| Views | Optional setting that restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC . |
| RefreshViewSchemas | Specifies whether the provider should automatically refresh view schemas by querying the views directly. |
| PrimaryKeyIdentifiers | Specifies rules for assigning primary keys to tables. |
| AllowedTableTypes | Specifies which types of tables are visible when listing tables in the dataset. |
| FlattenObjects | Specifies whether STRUCT fields in Google BigQuery are flattened into individual top-level columns. |
Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path.
The Location property is only needed if you want to either customize definitions (for example, change a column name, ignore a column, etc.) or extend the data model with new tables, views, or stored procedures.
If left unspecified, the default location is %APPDATA%\CData\GoogleBigQuery Data Provider\Schema, where %APPDATA% is set to the user's configuration directory:
| Platform | %APPDATA% |
| Windows | The value of the APPDATA environment variable |
| Linux | ~/.config |
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC .
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Optional setting that restricts the catalogs reported to a subset of all available catalogs. For example, BrowsableCatalogs=CatalogA,CatalogB,CatalogC .
Listing all available database catalogs can take extra time, thus degrading performance. Providing a list of catalogs in the connection string saves time and improves performance.
Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC .
Listing all available tables from some databases can take extra time, thus degrading performance. Providing a list of tables in the connection string saves time and improves performance.
If there are lots of tables available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those tables. To do this, specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.
Note: If you are connecting to a data source with multiple schemas or catalogs, you must specify each table you want to view by its fully qualified name. This avoids ambiguity between tables that may exist in multiple catalogs or schemas.
Optional setting that restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC .
Listing all available views from some databases can take extra time, thus degrading performance. Providing a list of views in the connection string saves time and improves performance.
If there are lots of views available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those views. To do this, specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.
Note: If you are connecting to a data source with multiple schemas or catalogs, you must specify each view you want to examine by its fully qualified name. This avoids ambiguity between views that may exist in multiple catalogs or schemas.
Specifies whether the provider should automatically refresh view schemas by querying the views directly.
Google BigQuery stores a static schema with each view. However, this schema is not updated when the underlying tables change. As a result, stored view schemas can become outdated, potentially causing query failures.
When this property is set to true, the Sync App queries each view to retrieve the current schema instead of relying on the stored schema. This ensures accuracy but may trigger a query job and incur additional overhead.
When set to false, the Sync App uses the stored view schema without validating it. This avoids creating query jobs, which can reduce overhead in environments where schema stability is guaranteed, but introduces the risk of failures if the view is out of sync with its base tables.
Keep this property enabled unless you're certain that your view schemas are stable or you need to avoid query jobs during schema discovery.
Specifies rules for assigning primary keys to tables.
Google BigQuery does not natively support primary keys. However, certain operations such as updates, deletes, or integrations with external tools may require primary key definitions. This property allows you to define primary keys manually using a semicolon-separated list of rules.
Each rule follows the format: <table_pattern>=<comma-separated list of columns>
For example: PrimaryKeyIdentifiers="*=key;transactions=tx_date,tx_serial;user_comments="
This defines three rules:
- *=key assigns the column key as the primary key for all tables by default.
- transactions=tx_date,tx_serial assigns a composite primary key of tx_date and tx_serial to the transactions table.
- user_comments= assigns no primary key to the user_comments table, overriding the default rule.
Rules may match just the table name, the dataset and table, or the project, dataset, and table for increasing specificity:
/* Rules with just table names use the connection ProjectId (or DataProjectId) and DatasetId.
   All these rules refer to the same table when ProjectId=someProject and DatasetId=someDataset */
someTable=a,b,c
someDataset.someTable=a,b,c
someProject.someDataset.someTable=a,b,c
You may quote table and column names using any valid SQL quoting style:
/* Any table or column name may be quoted */
`someProject`."someDataset".[someTable]=`a`,[b],"c"
If this property is not set, the Sync App uses any schema files defined through Location to determine primary keys. If neither is provided, all tables are treated as having no primary key.
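The rule format above can be sketched with a small parser. This is purely illustrative — parse_pk_rules is a hypothetical name, and quoted identifiers (backticks, brackets, double quotes) are not handled in this simplified version:

```python
# Illustrative parser for the PrimaryKeyIdentifiers rule format shown above.
# parse_pk_rules is a hypothetical name; quoting styles are not handled here.

def parse_pk_rules(value):
    """Split 'pattern=col1,col2;...' into a dict of table pattern -> key columns."""
    rules = {}
    for rule in value.split(";"):
        if not rule.strip():
            continue
        pattern, _, cols = rule.partition("=")
        # An empty column list means the matched table has no primary key.
        rules[pattern.strip()] = [c.strip() for c in cols.split(",") if c.strip()]
    return rules

print(parse_pk_rules("*=key;transactions=tx_date,tx_serial;user_comments="))
# {'*': ['key'], 'transactions': ['tx_date', 'tx_serial'], 'user_comments': []}
```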
Specifies which types of tables are visible when listing tables in the dataset.
This property accepts a comma-separated list of table type values. The Sync App includes only the table types you specify when listing tables during metadata discovery. All other table-like entities are excluded from the results.
For example, to return only standard tables and views, set this property to: TABLE,VIEW.
Use this property to filter out unnecessary table types and streamline metadata results based on your application's needs.
Specifies whether STRUCT fields in Google BigQuery are flattened into individual top-level columns.
When set to true, the Sync App flattens each field in a STRUCT column into its own column. The original STRUCT column is omitted from the results. This flattening is applied recursively for nested STRUCT fields.
For example, the following table is reported as three columns when flattening is enabled: location.coords.lat, location.coords.lon, and location.country
CREATE TABLE t(location STRUCT<coords STRUCT<lat FLOAT64, lon FLOAT64>, country STRING>);
When set to false, the Sync App returns the STRUCT column as a single column containing a JSON object. In the example above, only the location column is reported.
Enable this property to access nested STRUCT fields as individual columns. Disable it if your application prefers to handle STRUCTs as JSON values.
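The recursive flattening described above can be sketched as follows, assuming STRUCT values are represented as Python dicts. The flatten function is an illustration, not the provider's implementation:

```python
# Sketch of recursive STRUCT flattening: each nested field becomes a
# top-level column whose name joins the path with dots.

def flatten(row, prefix=""):
    flat = {}
    for key, value in row.items():
        name = prefix + key
        if isinstance(value, dict):   # nested STRUCT: recurse with dotted prefix
            flat.update(flatten(value, name + "."))
        else:                         # scalar field: emit as a top-level column
            flat[name] = value
    return flat

row = {"location": {"coords": {"lat": 40.4, "lon": -3.7}, "country": "ES"}}
print(flatten(row))
# {'location.coords.lat': 40.4, 'location.coords.lon': -3.7, 'location.country': 'ES'}
```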
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
| Property | Description |
| AllowAggregateParameters | Specifies whether raw aggregate values can be used in parameters when the QueryPassthrough connection property is enabled. |
| ApplicationName | Specifies the name of the application using the provider, in the format application/version. For example, AcmeReporting/1.0. |
| AuditLimit | Specifies the maximum number of rows that can be stored in the in-memory audit table. |
| AuditMode | Specifies which provider actions should be recorded in audit tables. |
| AWSWorkloadIdentityConfig | Configuration properties to provide when using Workload Identity Federation via AWS. |
| AzureWorkloadIdentityConfig | Configuration properties to provide when using Workload Identity Federation via Azure. |
| BigQueryOptions | Specifies a comma-separated list of custom Google BigQuery provider options. |
| EmptyArraysAsNull | Specifies whether empty arrays are represented as null or as an empty array. |
| GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
| HidePartitionColumns | Specifies whether the pseudocolumns _PARTITIONDATE and _PARTITIONTIME are hidden in partitioned tables. |
| MaximumBillingTier | Specifies the maximum billing tier for a query, represented as a positive integer multiplier of the standard cost per terabyte. |
| MaximumBytesBilled | Specifies the maximum number of bytes a Google BigQuery job is allowed to process before it is cancelled. |
| MaxRows | Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY. |
| Other | Specifies advanced connection properties for specialized scenarios. Use this property only under the guidance of our Support team to address specific issues. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'. |
| QueryPassthrough | This option passes the query to the Google BigQuery server as is. |
| SupportCaseSensitiveTables | Specifies whether the provider distinguishes between tables and datasets with the same name but different casing. |
| TableSamplePercent | Specifies the percentage of each table to sample when generating queries using the TABLESAMPLE clause. |
| Timeout | Specifies the maximum number of seconds to wait before timing out an operation. |
| UserDefinedViews | Specifies a filepath to a JSON configuration file that defines custom views. The provider automatically detects and uses the views specified in this file. |
| WorkloadPoolId | The ID of your Workload Identity Federation pool. |
| WorkloadProjectId | The ID of the Google Cloud project that hosts your Workload Identity Federation pool. |
| WorkloadProviderId | The ID of your Workload Identity Federation pool provider. |
Specifies whether raw aggregate values can be used in parameters when the QueryPassthrough connection property is enabled.
When set to false, string parameters are automatically quoted and escaped. This ensures safe query construction, but prevents the use of raw aggregate values such as arrays or structs as parameters.
/*
* If @x is set to: test value ' contains quote
*
* Result is a valid query
*/
INSERT INTO proj.data.tbl(x) VALUES ('test value \' contains quote')
/*
* If @x is set to: ['valid', ('aggregate', 'value')]
*
* Result contains string instead of aggregate:
*/
INSERT INTO proj.data.tbl(x) VALUES ('[\'valid\', (\'aggregate\', \'value\')]')
When set to true, string parameters are inserted directly into the query without quoting or escaping. This allows raw aggregate values such as arrays or structs to be passed as parameters, but it requires that all literal strings are properly escaped by the user.
/*
* If @x is set to: test value ' contains quote
*
* Result is an invalid query
*/
INSERT INTO proj.data.tbl(x) VALUES (test value ' contains quote)
/*
* If @x is set to: ['valid', ('aggregate', 'value')]
*
* Result is an aggregate
*/
INSERT INTO proj.data.tbl(x) VALUES (['valid', ('aggregate', 'value')])
Enable this property if you need to pass raw aggregate values through parameters and can ensure proper manual escaping of strings.
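The two substitution modes can be contrasted with a small sketch. The escaping shown (backslash-escaping embedded quotes) mirrors the examples above; substitute is a hypothetical helper, and the provider's actual escaping rules may differ:

```python
# Sketch contrasting safe (quoted/escaped) and raw parameter substitution.

def substitute(template, value, allow_aggregates):
    if allow_aggregates:
        # Raw insertion: the caller is responsible for escaping literal strings.
        return template.replace("@x", value)
    # Safe mode: quote the parameter and escape embedded quotes.
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return template.replace("@x", "'" + escaped + "'")

sql = "INSERT INTO proj.data.tbl(x) VALUES (@x)"
print(substitute(sql, "test value ' contains quote", False))
# INSERT INTO proj.data.tbl(x) VALUES ('test value \' contains quote')
print(substitute(sql, "['valid', ('aggregate', 'value')]", True))
# INSERT INTO proj.data.tbl(x) VALUES (['valid', ('aggregate', 'value')])
```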
Specifies the name of the application using the provider, in the format application/version. For example, AcmeReporting/1.0.
The Sync App identifies itself to Google BigQuery using a custom User-Agent header.
This header includes a fixed portion that identifies the client as a specific build of the CData Sync App, and an optional portion that reports the application name and version specified through this property.
Providing an application name helps with query attribution and monitoring in environments where multiple tools or services connect to Google BigQuery.
Set this property if you want your application name to appear in the User-Agent string sent in Google BigQuery API requests.
Specifies the maximum number of rows that can be stored in the in-memory audit table.
When auditing is enabled using the AuditMode property, AuditLimit controls how many rows are retained in the audit table at one time.
By default, this property is set to 1000, meaning only the 1000 most recent audit events are preserved. Older entries are removed as new ones are added.
To disable the limit and retain all audit rows, set the property to -1. This may significantly increase memory usage. In that case, clear the audit table periodically to manage resource consumption.
You can clear the audit table using a command like:
DELETE FROM AuditJobs#TEMP
Adjust this property based on your logging needs and available memory. Use higher values or disable the limit only if you plan to manage audit data manually.
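The retention behavior described above resembles a fixed-size buffer: once full, the oldest rows are dropped as new ones arrive. A minimal model, with AuditLimit=3 purely for illustration:

```python
# Model of a bounded in-memory audit table: a deque with maxlen drops the
# oldest entries automatically as new ones are appended.
from collections import deque

audit = deque(maxlen=3)   # AuditLimit=3 for illustration (the default is 1000)
for job_id in range(5):
    audit.append({"JobId": job_id})

print([row["JobId"] for row in audit])
# [2, 3, 4] -- only the 3 most recent audit rows are retained
```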
Specifies which provider actions should be recorded in audit tables.
The Sync App can log internal actions it performs when running queries. When this property is set, the Sync App creates temporary in-memory audit tables to track the specified actions, including the timestamp, triggering query, and other relevant details.
By default, no audit modes are enabled, and the Sync App does not log any audit information. To enable auditing, set this property to a comma-separated list of supported modes.
The following audit mode is currently available:
| Mode Name | Audit Table | Description | Columns |
| start-jobs | AuditJobs#TEMP | Records all jobs started by the Sync App | Timestamp,Query,ProjectId,Location,JobId |
For example, to track Google BigQuery jobs started by the Sync App, set this property to: start-jobs.
Use this property to gain visibility into internal operations for monitoring or troubleshooting.
Refer to AuditLimit for guidance on managing the size of audit tables.
Configuration properties to provide when using Workload Identity Federation via AWS.
The properties are formatted as a semicolon-separated list of Key=Value properties, where the value is optionally quoted.
For example, this setting authenticates in AWS using a user's root keys:
AWSWorkloadIdentityConfig="AuthScheme=AwsRootKeys;AccessKey='AKIAABCDEF123456';SecretKey=...;Region=us-east-1"
Configuration properties to provide when using Workload Identity Federation via Azure.
The properties are formatted as a semicolon-separated list of Key=Value properties, where the value is optionally quoted.
For example, this setting authenticates in Azure using client credentials:
AzureWorkloadIdentityConfig="AuthScheme=AzureServicePrincipal;AzureTenant=directory (tenant) id;OAuthClientID=application (client) id;OAuthClientSecret=client secret;AzureResource=application id uri;"
Specifies a comma-separated list of custom Google BigQuery provider options.
This property enables specialized Google BigQuery behaviors that are not exposed through standard connection settings.
Supported options:
| Option | Description |
| gbqoImplicitJoinAsUnion | Preserves implicit joins rather than rewriting them as CROSS JOINs, which is the expected SQL92 behavior. BigQuery interprets implicit joins as UNION ALL, which may be useful for supporting legacy query patterns or specific transformations. |
Use this property when you need to control specific Google BigQuery behaviors that aren’t handled through other settings.
Specifies whether empty arrays are represented as null or as an empty array.
When this property is set to true, the Sync App represents empty arrays as "null". This aligns with how the Sync App handles empty aggregates and can help simplify downstream comparisons or processing logic.
When set to false, empty arrays are represented as "[]", which mimics the behavior of the native Google BigQuery Sync App.
Enable this property to normalize the handling of empty values by treating empty arrays as "null". Disable it if your application or tools expect an explicit empty array instead.
Indicates the user preference as to when schemas should be generated and saved.
This property outputs schemas to .rsd files in the path specified by Location.
The available settings are OnUse, OnCreate, and OnStart:
When you set GenerateSchemaFiles to OnUse, the Sync App generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.
When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.
Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.
Specifies whether the pseudocolumns _PARTITIONDATE and _PARTITIONTIME are hidden in partitioned tables.
When this property is set to false, partitioned tables include the pseudocolumns _PARTITIONDATE and _PARTITIONTIME in the reported schema. These columns can help filter queries and understand partition structure.
When set to true, the Sync App hides these columns, matching the behavior of the native Google BigQuery Sync App and the Google BigQuery web console.
Enable this property to suppress internal partition columns from metadata and result sets when they are not needed by your application.
Hiding these columns does not affect query execution, but may simplify schema handling in environments where internal fields are unnecessary.
Specifies the maximum billing tier for a query, represented as a positive integer multiplier of the standard cost per terabyte.
This property limits the maximum billing tier that Google BigQuery can use when executing a query. If the query requires more resources than the specified tier allows, it fails with a "billingTierLimitExceeded" error. You are not charged for failed queries.
The billing tier is a positive integer that acts as a multiplier of the standard per-terabyte pricing. For example, setting MaximumBillingTier to 2 allows the query to consume up to twice the standard cost per TB.
If this property is not set, Google BigQuery uses the default billing tier configured for your Google Cloud project.
Use this property to control the cost exposure of complex or resource-intensive queries. If a query fails due to billing tier limits, the error message typically includes the estimated required tier.
Restricting the billing tier helps prevent runaway costs but may block queries that require higher compute capacity. Adjust the tier upward as needed based on the query’s resource demands and Google BigQuery’s cost estimate.
Specifies the maximum number of bytes a Google BigQuery job is allowed to process before it is cancelled.
This property sets a billing cap for each job. If the job attempts to process more data than the specified limit, Google BigQuery cancels the job and you are not billed.
By default, there is no cap, and jobs are billed for all bytes processed.
This property only applies when using DestinationTable or when submitting jobs via the InsertJob stored procedure. Standard query jobs do not support byte limits and ignore this setting.
For example, setting MaximumBytesBilled to 1000000000 caps the job at approximately 1 GB of processed data.
Use this property to prevent unexpected billing charges from large queries. It is especially useful in environments where cost control is a priority.
Specifies the maximum number of rows returned for queries that do not include either aggregation or GROUP BY.
The default value for this property, -1, means that no row limit is enforced unless the query explicitly includes a LIMIT clause. (When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting.)
Setting MaxRows to a whole number greater than 0 ensures that queries do not return excessively large result sets by default.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
Specifies advanced connection properties for specialized scenarios. Use this property only under the guidance of our Support team to address specific issues.
This property allows advanced users to configure hidden properties for specialized situations, with the advice of our Support team. These settings are not required for normal use cases but can address unique requirements or provide additional functionality. To define multiple properties, use a semicolon-separated list.
Note: It is strongly recommended to set these properties only when advised by the Support team to address specific scenarios or issues.
| Property | Description |
| DefaultColumnSize | Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000. |
| ConvertDateTimeToGMT=True | Converts date-time values to GMT, instead of the local time of the machine. The default value is False (use local time). |
| RecordToFile=filename | Records the underlying socket data transfer to the specified file. |
Specifies the pseudocolumns to expose as table columns, expressed as a string in the format 'TableName=ColumnName;TableName=ColumnName'.
This property allows you to define which pseudocolumns the Sync App exposes as table columns.
To specify individual pseudocolumns, use the following format:
Table1=Column1;Table1=Column2;Table2=Column3
To include all pseudocolumns for all tables use:
*=*
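The format above can be sketched with a small parser. This is illustrative only — parse_pseudo_columns is a hypothetical name, and the *=* wildcard form is not special-cased here:

```python
# Illustrative parser for the PseudoColumns format 'Table=Column;Table=Column'.

def parse_pseudo_columns(value):
    mapping = {}
    for pair in value.split(";"):
        if not pair.strip():
            continue
        table, _, column = pair.partition("=")
        mapping.setdefault(table.strip(), []).append(column.strip())
    return mapping

print(parse_pseudo_columns("Table1=Column1;Table1=Column2;Table2=Column3"))
# {'Table1': ['Column1', 'Column2'], 'Table2': ['Column3']}
```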
This option passes the query to the Google BigQuery server as is.
When this property is enabled, the Sync App sends queries to Google BigQuery as-is, without parsing or translating them.
Specifies whether the provider distinguishes between tables and datasets with the same name but different casing.
By default, the Sync App treats table and dataset names as case-insensitive when retrieving metadata. If multiple tables or datasets exist with the same name but different casing (for example: Customers, customers, and CUSTOMERS), only one of them is shown in system views such as sys_tables.
When this property is set to true, the Sync App includes all case-variant tables and datasets in metadata. To prevent name collisions, the Sync App renames duplicate entries by appending disambiguating information to their names (for example: customers becomes customers_1).
This setting affects both metadata and queries. When the Sync App disambiguates table or dataset names in metadata, those renamed versions must also be used in SQL queries. For example, if two tables exist such as Customers and customers, you may need to query them as: "SELECT * FROM Customers" and "SELECT * FROM customers_1".
Enable this property if your environment contains tables and datasets with the same name in different casing and you need all of them represented in the metadata.
Note that this property is automatically disabled when QueryPassthrough is enabled, because the two properties are incompatible with one another.
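The renaming described above can be sketched as follows. Names that collide case-insensitively receive a numeric suffix; the provider's actual suffixing scheme may differ from this illustration:

```python
# Sketch of case-variant disambiguation: the first occurrence keeps its
# name, later case-variants get an incrementing suffix.

def disambiguate(names):
    seen = {}
    result = []
    for name in names:
        key = name.lower()
        if key in seen:
            seen[key] += 1
            result.append(f"{name}_{seen[key]}")
        else:
            seen[key] = 0
            result.append(name)
    return result

print(disambiguate(["Customers", "customers", "CUSTOMERS"]))
# ['Customers', 'customers_1', 'CUSTOMERS_2']
```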
Specifies the percentage of each table to sample when generating queries using the TABLESAMPLE clause.
When this property is set to a value greater than 0, the Sync App adds a TABLESAMPLE SYSTEM (n PERCENT) clause to eligible table references during query generation.
/* Input SQL */
SELECT * FROM `tbl`

/* Generated Google BigQuery SQL when TableSamplePercent=10 */
SELECT * FROM `tbl` TABLESAMPLE SYSTEM (10 PERCENT)
This instructs Google BigQuery to return a sample of approximately the specified percentage of rows.
Use this property to limit result size during exploration or testing of large tables. Set a value between 1 and 100 to indicate the sampling percentage.
Limitations:
Specifies the maximum number of seconds to wait before timing out an operation.
This property controls how long the Sync App waits for a query or API operation to complete. If the operation does not finish within the specified time, the operation is cancelled and an exception is thrown.
If Timeout is set to 0, operations do not time out. They continue until they complete or encounter an error.
If Timeout is set to a positive number, and the operation exceeds the configured limit, the Sync App cancels the operation and returns a timeout error. For example: Timeout=600. This sets the timeout to 10 minutes.
Use this property to enforce a maximum execution time for long-running operations. Increase the value for large datasets or complex queries. Decrease it if you need to limit resource usage or responsiveness.
Specifies a filepath to a JSON configuration file that defines custom views. The provider automatically detects and uses the views specified in this file.
UserDefinedViews allows you to define and manage custom views through a JSON-formatted configuration file called UserDefinedViews.json. These views are automatically recognized by the Sync App and enable you to execute custom SQL queries as if they were standard database views. The JSON file defines each view as a root element with a child element called "query", which contains the SQL query for the view.
For example:
{
"MyView": {
"query": "SELECT * FROM [publicdata].[samples].github_nested WHERE MyColumn = 'value'"
},
"MyView2": {
"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
}
}
You can use this property to define multiple views in a single file and specify the filepath.
For example:
UserDefinedViews=C:\Path\To\UserDefinedViews.json
When you specify a view in UserDefinedViews, the Sync App only sees that view.
For further information, see User Defined Views.
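Before pointing the connection property at a file, it can help to sanity-check that the JSON is well-formed: every root key should be a view name whose value contains a "query" string. This check is illustrative, not required by the provider:

```python
# Quick structural check of a UserDefinedViews.json document.
import json

text = '''{
  "MyView": {"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"}
}'''

views = json.loads(text)
bad = [name for name, body in views.items()
       if not isinstance(body.get("query"), str)]
print(sorted(views), bad)
# ['MyView'] []
```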
The ID of your Workload Identity Federation pool.
The ID of the Google Cloud project that hosts your Workload Identity Federation pool.
The ID of your Workload Identity Federation pool provider.
LZMA from 7Zip LZMA SDK
LZMA SDK is placed in the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute the original LZMA SDK code, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
LZMA2 from XZ SDK
Version 1.9 and older are in the public domain.
Xamarin.Forms
Xamarin SDK
The MIT License (MIT)
Copyright (c) .NET Foundation Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NSIS 3.10
Copyright (C) 1999-2025 Contributors THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
"Contributor" means any person or entity that distributes the Program.
"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
"Program" means the Contributions distributed in accordance with this Agreement.
"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
3. REQUIREMENTS
A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of this Agreement; and
b) its license agreement:
i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the Program is made available in source code form:
a) it must be made available under this Agreement; and
b) a copy of this Agreement must be included with each copy of the Program.
Contributors may not remove or alter any copyright notices contained within the Program.
Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.
protobuf v. 3.5.1
Copyright 2008 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Code generated by the Protocol Buffer compiler is owned by the owner of the input file used when generating it. This code is not standalone and requires a support library to be linked with it. This support library is itself covered by the above license.
Google API Protobuf Definitions (Arrow)
v1beta1/arrow.proto
Apache License Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Google API Protobuf Definitions (Avro)
v1/avro.proto
Apache License Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.