Google BigQuery is a fully managed cloud data warehouse for analytics. Working together with Google Storage, Google BigQuery can analyze large datasets interactively, scanning terabytes in seconds and petabytes in minutes.

Google BigQuery as Analytical Storage

CALL SYSADMIN.createConnection('dwh','bigquery','projectId=XXXX,[email protected],password=$${jboss.server.config.dir}/../deployments/bigquery.p12') ;;
CALL SYSADMIN.createDatasource('dwh','bigquery','importer.schemaPattern=dataset_name,importer.defaultSchema=dataset_name,importer.useFullSchemaName=false,importer.useCatalogName=false','supportsNativeQueries=true,uploadMode=CLOUDSTORAGE,bucketName=dv_upload_bucket,useDdl=false') ;;
CALL SYSADMIN.setCurrentDWH('dwh', 'dataset_name') ;;

Please note that Google BigQuery does not support recommended indexes.

Type Name

bigquery

Connection Properties

Template name: bigquery

Appropriate translator name: bigquery

Properties:

  • projectId (mandatory)
  • transformQuery (default: TRUE) (obsolete)
  • credentialFactory (default: com.datavirtuality.dv.core.oauth.credential.BigQueryOAuthCredentialFactory)
  • allowLargeResults (obsolete)
  • largeResultsDefaultDataset
  • tableMaxResults
  • fetchSize
  • refreshToken
  • accessToken
  • expirationTimeMilliseconds
  • region
  • authCode
  • redirectUri
  • user-name (mandatory)
  • password (default: empty)
  • ClientId
  • ClientSecret
  • driver (default: bigquery)
  • driver-class (default: com.datavirtuality.jdbc.bq.BQDriver)
  • storageProjectId (default: empty)
  • storageUser (default: empty)
  • storagePassword (default: empty)
  • new-connection-sql
  • check-valid-connection-sql (default: select 1)
  • min-pool-size (default: 2)
  • max-pool-size (default: 70)
  • readTimeout (default: 200000; 0 means infinite, a negative value means the default)
  • connectTimeout (default: 200000; 0 means infinite, a negative value means the default)

readTimeout and connectTimeout connection properties available since v4.6

Here is an example:

CALL SYSADMIN.createConnection('bq2','bigquery','projectId=XXXX,[email protected],password=$${jboss.server.config.dir}/../deployments/bigquery.p12') ;;
CALL SYSADMIN.createDatasource('bq2','bigquery','importer.useFullSchemaName=false,importer.useCatalogName=false','supportsNativeQueries=true,uploadMode=CLOUDSTORAGE,bucketName=dv_upload_bucket,useDdl=false') ;;
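
The optional connection properties listed above can be appended to the same connection string. A minimal sketch, assuming a new connection named bq_tuned and purely illustrative timeout and fetch values:

CALL SYSADMIN.createConnection('bq_tuned','bigquery','projectId=XXXX,[email protected],password=$${jboss.server.config.dir}/../deployments/bigquery.p12,readTimeout=300000,connectTimeout=60000,fetchSize=1000') ;;
CALL SYSADMIN.createDatasource('bq_tuned','bigquery','importer.useFullSchemaName=false,importer.useCatalogName=false','supportsNativeQueries=true,uploadMode=CLOUDSTORAGE,bucketName=dv_upload_bucket,useDdl=false') ;;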

We strongly recommend specifying bucketName when creating a BigQuery data source for the following reasons:

  • Bucket creation limits may be exceeded while performing multiple data inserts; for details, please refer to the Google Cloud Storage documentation;
  • A region mismatch between the BigQuery dataset and the temporary bucket may cause errors on data insertion; for details, please refer to the Google BigQuery documentation.

Translator Properties

Translator Properties Shared by All JDBC Connectors

(in alphabetical order)

Name

Description

Default value

comparisonStringConversion

Sets a template to convert WHERE clauses on strings

Examples
comparisonStringConversion=%s
-- no conversion will be applied
 
comparisonStringConversion=binary %s
-- WHERE binary someStringExpression LIKE someOtherStringExpression
 
comparisonStringConversion=(%s COLLATE Latin1_General_CS_AS)
-- WHERE (someStringExpression COLLATE Latin1_General_CS_AS LIKE someOtherStringExpression)


  • MS SQL Server:
    (%s COLLATE Latin1_General_CS_AS)
  • MySQL, SingleStore:
    binary %s
  • Others:
    %s

DatabaseTimeZone

Database time zone, used when fetching date, time, or timestamp values

System default time zone

DatabaseVersion

Specific database version, used to fine-tune pushdown support

Automatically detected by the server through the data source JDBC driver, if possible

forbidNonMatchingNames

Only considered when importer.tableNamePattern is set. When set to FALSE, allows creation of tables with names that do not match the tableNamePattern. Tables with non-matching names will only be accessible until the server restarts

TRUE

ForceQuotedIdentifiers

If TRUE, all identifiers in the final query (that is being generated by a translator to be sent directly to the target DBMS) will be quoted

TRUE

MaxPreparedInsertBatchSize

Maximum size of prepared insert batch

2048

OrderByStringConversion

Sets a template to convert ORDER BY clauses on strings

Examples
OrderByStringConversion=%s
-- no conversion will be applied
 
OrderByStringConversion=(binary %s)
-- ORDER BY (binary someStringExpression)
 
OrderByStringConversion=cast((%s) as varchar(2000) ccsid 1208)
-- ORDER BY cast((someStringExpression) as varchar(2000) ccsid 1208)
  • DB2 for AS/400:
    cast((%s) as varchar(2000) ccsid 1208)
  • DB2 for zOS:
    cast((%s) as varchar(2000) ccsid ASCII)
  • MS SQL Server:
(cast(cast(%s as varchar) as varbinary))
  • MySQL, SingleStore:
    (binary %s)
  • PostgreSQL:
    (%s COLLATE \"C\")
  • Others:
    %s

supportsConvertFromClobToString

If TRUE, indicates that the translator supports the CONVERT/CAST function from clob to string

  • MySQL, PostgreSQL, Redshift, Greenplum, Oracle, MS SQL Server:
    TRUE 
  • Others:
    FALSE

supportsNativeQueries

Forces a translator to issue a native() system procedure that can be used to pass native queries directly to an underlying DBMS

Example
SELECT x.*
FROM table(
  CALL "dwh.native"("request" => 'SELECT query, pid, elapsed, substring FROM svl_qlog ORDER BY STARTTIME DESC LIMIT 200')
) w,
ARRAYTABLE(w.tuple COLUMNS query string, pid integer, elapsed string, "substring" string) x;

More usage examples

  • MySQL, PostgreSQL, Redshift, Snowflake, Oracle, MS SQL Server, Exasol, Vertica, BigQuery, SingleStore, Azure:
    TRUE
  • Others:
    FALSE

supportsOrderByAlias

If FALSE, the CData Virtuality Server does not generate aliases in the ORDER BY clause.

If TRUE, indicates that the source supports aliases in the ORDER BY clause

  • PostgreSQL:
    TRUE
  • Others:
    • if OrderByStringConversion is not set, same as supportsOrderBy
    • if OrderByStringConversion is set, default for supportsOrderByAlias is  FALSE

supportsOrderByString

If FALSE, the CData Virtuality Server does not push down the ORDER BY clause if it contains at least one expression of string type.

If TRUE, indicates that the source supports strings in the ORDER BY clause

TrimStrings

If TRUE, trims trailing whitespace from fixed-length character strings. Please note that the CData Virtuality Server only has a string (varchar) type, which treats trailing whitespace as meaningful

FALSE

UseBindVariables

If TRUE, indicates that PreparedStatements should be used and that literal values in the source query should be replaced with bind variables. If FALSE, only LOB values will trigger the use of PreparedStatements

TRUE

UseCommentsInSourceQuery

Embeds a /* comment */ leading comment with the session/request id in the source SQL query for informational purposes

FALSE

The names of the translator properties are case-sensitive.
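
Translator properties are passed in the fourth argument of SYSADMIN.createDatasource, alongside supportsNativeQueries in the examples above. A minimal sketch, assuming a connection named bq_props already exists and using illustrative property values:

CALL SYSADMIN.createDatasource('bq_props','bigquery','importer.useFullSchemaName=false,importer.useCatalogName=false','supportsNativeQueries=true,MaxPreparedInsertBatchSize=1024,TrimStrings=false') ;;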

Translator Properties Specific for Google BigQuery

Name

Description

Default value

replaceNullCharsWith

String property. If set, the translator replaces all null characters in strings before executing INSERT / UPDATE with the string specified as the property value. You may specify an empty string as well

Single space

uploadMode

Values:  CLOUDSTORAGE (default), STREAMING (preview), INSERTS (preview)

  • CLOUDSTORAGE mode uses a Google Cloud Storage bucket to upload data. The bucket can be configured via bucketName or bucketPrefix translator parameters. If no bucketName is specified, the bucket will be created;
  • STREAMING (preview) mode streams data via the BigQuery API;
  • INSERTS (preview) mode uses JDBC prepared statements for inserting data

CLOUDSTORAGE

bucketName

Value: bucket name

Only for the default (CLOUDSTORAGE) upload mode.
Specifies a bucket name to upload data files to. If specified, the specified bucket is assumed to exist (no creation/deletion operations performed). The region specified for the bucket must be the same as the region specified via the region translator property. If no bucketName is specified, a bucket will be created and deleted after the operation is finished.


bucketPrefix

Value: bucket prefix

Only for the default (CLOUDSTORAGE) upload mode.
Specifies a prefix of a bucket name to upload data files to. The name of the bucket created for uploading will start with the specified prefix. The bucket will be deleted after the operation is finished.


folderInBucket

Value: name of a folder in a bucket

Only for the default (CLOUDSTORAGE) upload mode.
Specifies the name of a folder in a bucket to upload data files to. If specified, the folder is assumed to exist within the bucket (no creation/deletion operations are performed).


useDdl

Value: boolean

If TRUE, creates and drops are performed using JDBC statements instead of the API

FALSE

setACLforBucket

If set to TRUE, ACL roles are set for the elements of the bucket used by the data source.

If set to FALSE, only IAM permissions are used. Thus, it should be set when uniform bucket-level access is configured for the bucket

TRUE

folderInBucket translator property available since v4.9
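
For example, the CLOUDSTORAGE upload mode can be combined with an existing bucket, a folder inside it, and uniform bucket-level access. A minimal sketch, assuming a connection named bq_gcs already exists and that the bucket and folder names are illustrative (folderInBucket requires v4.9 or later):

CALL SYSADMIN.createDatasource('bq_gcs','bigquery','importer.useFullSchemaName=false,importer.useCatalogName=false','supportsNativeQueries=true,uploadMode=CLOUDSTORAGE,bucketName=dv_upload_bucket,folderInBucket=dv_uploads,setACLforBucket=false') ;;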

Data Source Properties

Data Source Properties Shared by All JDBC Connectors

(in alphabetical order)

Name

Description

Default

importer.autoCorrectColumnNames

Replaces . in a column name with _ as the period character is not supported by the CData Virtuality Server in column names

TRUE

importer.catalog

Database catalogs to use. Can be used if the db connection property is omitted.

Only for Microsoft SQL Server and Snowflake:

  • Multiple catalogs (multiple databases) usage is supported;
  • '%' as a wildcard is allowed: for example, importer.catalog="foo%" will import foo, foobar, etc.
  • To specify several catalog names or/and patterns, values should be comma-separated and enclosed within double quotes: importer.catalog="schema1,schema2,pattern1%,pattern2%";
  • To use all catalogs, the value should be '%': importer.catalog="%"
  • To escape a wildcard in a catalog name, use the '\' escape character. The value should be as follows: importer.catalog="test\_tables"

Exasol:

EXA_DB

All others: empty

importer.defaultSchema

  • Only for data sources that are also supported as Analytical Storage (SQLServer, MySQL, Oracle, PostgreSQL, Redshift);
  • When the property is correctly set, SELECT INTO, CREATE, and DROP TABLE commands are enabled for that data source;
  • Must point to the original schema name in the DBMS (e.g. importer.defaultSchema=public);
  • Needs specifying where tables will be created or dropped in the source DBMS;
  • Not meant to be used with Analytical Storage data source

Please note that writing into a data source is only possible if this parameter is set.

Empty

importer.enableMetadataCache

 Turns on metadata cache for a single data source even when the global option is turned off. Together with importer.skipMetadataLoadOnStartup=true, it allows using materialized views after server restart when the original source is unavailable

FALSE

importer.excludeProcedures

Case-insensitive regular expression that will exclude a matching fully qualified procedure name from import

Empty

importer.excludeSchemas

Comma-separated list of schemas (no % or ? wildcards allowed) to exclude listed schemas from import. A schema specified in defaultSchema or schemaPattern will be imported despite being listed in excludeSchemas. Helps to speed up metadata loading

Oracle:

APEX_PUBLIC_USER,
DIP,
FLOWS_040100,
FLOWS_020100,
FLOWS_FILES,
MDDATA,
ORACLE_OCM,
SPATIAL_CSW_ADMIN_USR,
SPATIAL_WFS_ADMIN_USR,
XS$NULL,
BI,
HR,
OE,
PM,
IX,
SH,
SYS,
SYSTEM,
MDSYS,
CTXSYS

All others: empty

importer.excludeTables

Case-insensitive regular expression that will exclude a matching fully qualified table name from import. Does not speed up metadata loading. Here are some examples:

1. Excluding all tables in the (source) schemas sys and INFORMATION_SCHEMA:

importer.excludeTables=(.*[.]sys[.].*|.*[.]INFORMATION_SCHEMA[.].*)

2. Excluding all tables except the ones starting with "public.br" and "public.mk" using a negative lookahead: 

importer.excludeTables=(?!public\.(br|mk)).*

3. Excluding "tablename11" from the list ["tablename1", "tablename11", "company", "companies"]:

importer.excludeTables=.*\.(?!\btablename1\b|\bcompan).*
  • ".*\." will match the schema portion of the table name;
  • "\b" is a word boundary. It is used to define the boundary between text characters and non-character text;
  • "\btablename1\b" will only match tablename1 and not tablename11 because of the use of "\b";
  • "\bcompan" will match "company" and "companies" because we did not use "\b" at the end to delimit a word boundary;
  • ".*" at the end is necessary to match the trailing characters after the name

Empty

importer.fetchSize

Fetch size assigned to a resultset on loading metadata

No default value

importer.importApproximateIndexes

If set to TRUE, imports approximate index information

TRUE

importer.importIndexes

If set to TRUE, imports index/unique key/cardinality information

FALSE

importer.importKeys

If set to TRUE, imports primary and foreign keys

FALSE

importer.importProcedures

If set to TRUE, imports procedures and procedure columns. Overloaded procedures can be imported with an additional option useProcedureSpecificName.

Please note that it is currently not possible to import procedures which use the same name for more than one parameter (e.g. same name for IN and OUT parameters). Such procedures can be excluded from import with the parameter excludeProcedures.
Please note that it is not always possible to import procedure result set columns due to database limitations

FALSE / TRUE only for CData connector

importer.loadColumnsTableByTable

Set to TRUE to force table by table metadata processing

FALSE/TRUE only for Netsuite and SAP Advantage Database Server

importer.loadMetadataWithJdbc

If set to TRUE, turns off all custom metadata load ways

FALSE

importer.loadSourceSystemFunctions

If set to TRUE, data source-specific functions are loaded. Supported for Microsoft SQL Server and Azure

FALSE

importer.procedureNamePattern

Procedure(s) to import. If omitted, all procedures will be imported. % as a wildcard is allowed: for example, importer.procedureNamePattern=foo% will import foo, foobar, etc. Works only in combination with importProcedures

Empty

importer.quoteNameInSource

If set to FALSE, directs the CData Virtuality Server to create source queries using unquoted identifiers

TRUE

importer.renameDuplicateColumns

If set to TRUE,  renames duplicate columns caused by either mixed case collisions or autoCorrectColumnNames replacing . with _. The suffix _n where n is an integer will be added to make the name unique

TRUE

importer.renameDuplicateTables

If set to TRUE, renames duplicate tables caused by mixed case collisions. The suffix _n where n is an integer will be added to make the name unique

TRUE

importer.replaceSpecSymbsInColNames

If set to TRUE, replaces all special symbols (any symbols not in the ^A-Za-z0-9_ sequence) with the _ symbol in column names of tables

FALSE / TRUE only for BigQuery

importer.schemaPattern

Schema(s) to import. If omitted or has "" value, all schemas will be imported. % as wildcard is allowed: for example, importer.schemaPattern=foo% will import foo, foobar, etc. To specify several schema names or/and patterns, values should be comma-separated and enclosed within double quotes: importer.schemaPattern="schema1,schema2,pattern1%,pattern2%". For proper escaping of special characters depending on the type of data source, check Escaping special characters in schema names or use wildcards instead: "[schema_name]" can be rewritten as "%schema%name%". Helps to speed up metadata loading

Empty

importer.skipMetadataLoadOnStartup

If set to TRUE, allows skipping metadata loading on server startup if there is cached metadata for the data source. Together with importer.enableMetadataCache=true, it allows using materialized views after server restart when the original source is unavailable

FALSE

importer.tableNamePattern

Table(s) to import. If omitted, all tables will be imported. % as a wildcard is allowed: for example, importer.tableNamePattern=foo% will import foo, foobar, etc

Empty

importer.tableTypes

Comma-separated list (without spaces) of table types to import. Available types depend on the DBMS. Usual format: "TABLE,VIEW".

Other typical types are "SYSTEM TABLE", "GLOBAL TEMPORARY", "LOCAL TEMPORARY", "ALIAS", "SYNONYM"

Empty

importer.useCatalogName

If set to TRUE, uses any non-null/non-empty catalogue name as part of the name in source, e.g. "catalogue"."table"."column", and in the CData Virtuality Server runtime name if useFullSchemaName is TRUE. If set to FALSE, will not use the catalogue name in either the name in source or the CData Virtuality Server runtime name. Should be set to FALSE for sources that do not fully support a catalogue concept, but return a non-null catalogue name in their metadata - such as HSQL

TRUE / FALSE only for Hive and EXASOL

importer.useFullSchemaName

If set to FALSE, directs the importer to drop the source catalogue/schema from the CData Virtuality Server object name so that the CData Virtuality Server fully qualified name will be in the form of <model name>.<table name>.

Please note that this may lead to objects with duplicate names when importing from multiple schemas, which results in an exception

TRUE

importer.useProcedureSpecificName

If set to TRUE, allows the import of overloaded procedures (which will normally result in a duplicate procedure error) by using the unique procedure specific name in the CData Virtuality Server. This option will only work with JDBC 4.0 compatible drivers that report specific names

FALSE

importer.widenUnsignedTypes

If set to TRUE, converts unsigned types to the next widest type. For example, SQL Server reports tinyint as an unsigned type. With this option enabled, tinyint would be imported as a short instead of a byte

TRUE

The names of the data source properties are case-sensitive.

Escaping wildcards in importer.catalog available since v4.0.8

Default values importer.catalog='EXA_DB' and importer.useCatalogName=FALSE available since v4.4

importer.loadSourceSystemFunctions is available since v4.6

importer.importProcedures set to TRUE by default for CData connector since v4.7
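
The importer properties above are passed in the third argument of SYSADMIN.createDatasource. A minimal sketch, assuming a connection named bq_meta already exists and that the dataset, table pattern, and exclusion regex are illustrative:

CALL SYSADMIN.createDatasource('bq_meta','bigquery','importer.schemaPattern=dataset_name,importer.useFullSchemaName=false,importer.useCatalogName=false,importer.tableNamePattern=sales%,importer.excludeTables=.*[.]tmp_.*','supportsNativeQueries=true') ;;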

Creating/Dropping Tables and Inserting Data

To create/drop tables and insert data in a BigQuery data source, the importer.defaultSchema data source property must be set to the name of the target dataset. Here is an example:

CALL SYSADMIN.createConnection(name => 'bq', jbossCliTemplateName => 'bigquery', connectionOrResourceAdapterProperties => 'projectId=XXXX,[email protected],password=PATH_TO_KEY_FILE') ;;
CALL SYSADMIN.createDatasource(name => 'bq', translator => 'bigquery', modelProperties => 'importer.useCatalogName=false,importer.defaultSchema=TARGET_DATA_SET,importer.schemaPattern=SCHEMA_PATTERN_INCLUDING_TARGET_DATASET,importer.useFullSchemaName=false') ;;
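
With importer.defaultSchema set as above, tables can then be created, populated, and dropped through the bq data source. A minimal sketch with illustrative table and column names:

CREATE TABLE bq.example_table (id INTEGER, name STRING) ;;
INSERT INTO bq.example_table (id, name) VALUES (1, 'first row') ;;
SELECT * FROM bq.example_table ;;
DROP TABLE bq.example_table ;;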

Segmenting with Partitioned Tables

BigQuery partitioning is supported in both API and DDL (useDdl=TRUE) modes.

Partitioning is supported via the OPTIONS clause.

Types of Partitioning

Integer range partitioning

Tables can be partitioned based on ranges of values in a specific INTEGER column.

Time-unit column partitioning

Tables can be partitioned by a DATE or TIMESTAMP column in the table.

For a TIMESTAMP column, partitions can have hourly, daily, monthly, or yearly granularity. For a DATE column, partitions can have daily, monthly, or yearly granularity. Partition boundaries are based on UTC time.

Ingestion time partitioning

Available in DDL mode only.

An ingestion-time partitioned table has a pseudocolumn named _PARTITIONTIME. The value of this column is the ingestion time of each row, truncated to the partition boundary (such as hourly or daily). Instead of _PARTITIONTIME, you can also use _PARTITIONDATE. The _PARTITIONDATE pseudocolumn contains the UTC date corresponding to the value of the _PARTITIONTIME pseudocolumn.

Partition granularity can be hourly, daily, monthly, or yearly.
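
In DDL mode, an ingestion-time partitioned table can be created by referencing the _PARTITIONDATE pseudocolumn in the partition_by option. A minimal sketch, assuming a data source named bigquery_ddl created with useDdl=TRUE, as in the DDL examples below:

CREATE TABLE bigquery_ddl.partition_ingestion (i INTEGER, s STRING) OPTIONS (partition_by '_PARTITIONDATE') ;;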

Partitioning Options

partition_expiration_days 

When creating a table partitioned by ingestion time or a time-unit column, you can specify a partition expiration. This setting specifies how long BigQuery keeps the data in each partition. It applies to all partitions in the table but is calculated independently for each partition based on the partition time.

Accepts a FLOAT value.

require_partition_filter

Specifies whether queries against this table must include a predicate filter that filters on the partitioning column. The default value is FALSE.

partition_expression

Partition_expression is an expression that determines how to partition the table. The partition expression can contain the following values:

partition_expression 

description

partitioning type

valid for mode

_PARTITIONDATE

Partition by ingestion time with daily partitions

Ingestion time

DDL

DATE(_PARTITIONTIME)

Equivalent to _PARTITIONDATE

Ingestion time

DDL

<date_column>

Partition by the DATE column with daily partitions

Time-unit column

API/DDL

DATE( <timestamp_column> ) 

Partition by the TIMESTAMP column with daily partitions.

Time-unit column

API/DDL

DATETIME_TRUNC(<timestamp_column>, { DAY | HOUR | MONTH | YEAR }) 

Partition by the TIMESTAMP column with the specified partitioning type.

Time-unit column

API/DDL

TIMESTAMP_TRUNC(<timestamp_column>, { DAY | HOUR | MONTH | YEAR })

Partition by the TIMESTAMP column with the specified partitioning type.

Time-unit column

DDL

TIMESTAMP_TRUNC(_PARTITIONTIME, { DAY | HOUR | MONTH | YEAR })

Partition by ingestion time with the specified partitioning type.

Ingestion time

DDL

DATE_TRUNC(<date_column>, { MONTH | YEAR }) 

Partition by the DATE column with the specified partitioning type.

Time-unit column

API/DDL

RANGE_BUCKET(<int64_column>, GENERATE_ARRAY(<start>, <end>[, <interval>]))

Partition by an integer column with the specified range, where:

    • start is the start of range partitioning, inclusive.
    • end is the end of range partitioning, exclusive.
    • interval is the width of each range within the partition. Defaults to 1.

Integer range

API/DDL

Examples

Partition by a DATE column with daily granularity, keep data in each partition for 3 days, and require a filter on the partitioning column for queries against the table, i.e. a SELECT query must contain a WHERE clause on the DATE column:

CREATE TABLE bigquery.partition1 (i INTEGER, d DATE, s STRING) OPTIONS (partition_by 'd',partition_expiration_days '3', require_partition_filter 'true') ;;

Create a table partitioned on a timestamp column with monthly granularity:

CREATE TABLE bigquery_ddl.partition_ddl_date (i integer, d timestamp, s string) OPTIONS (partition_by 'DATE_TRUNC(d, MONTH)') ;;

Create a table partitioned by an integer range and require a filter on the partitioning column for queries against the table, i.e. a SELECT query must contain a WHERE clause on the INTEGER column:

CREATE TABLE bigquery_ddl.partition_ddl_integer (i integer, d timestamp, s string) OPTIONS (partition_by 'RANGE_BUCKET(i, GENERATE_ARRAY(0, 100, 10))', require_partition_filter 'true') ;;

Partitioning options and integer-range, date-unit partitioning for API mode are available since v4.10

Clustering

Clustering is supported via the OPTIONS clause:

CREATE TABLE bigquery.cluster (i integer, d date, ts timestamp, s string, b boolean) OPTIONS (cluster_by 'i,d,ts,s') ;;

Configuring 3-legged OAuth for BigQuery

To configure 3-legged OAuth for BigQuery, follow these steps:

1. Log in to the Developer Console of your BigQuery project.

2. Generate new OAuth2 credentials (a CLIENT_ID and CLIENT_SECRET key pair).

3. Replace CLIENT_ID in the link below with your own CLIENT_ID and open the link in a browser:
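
The authorization link is not included above. As an assumption for illustration only, a standard Google OAuth 2.0 authorization URL assembled from the redirect URI, scope, and state visible in step 5 would look similar to the following:

https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=CLIENT_ID&redirect_uri=http%3A%2F%2Flocalhost%3A9000%2Fredirect.html%3Fto%3Dconnect%2Fbigquery%2Foauth&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&state=security_token&access_type=offline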

4. Select 'Allow' to grant access:

5. You will be redirected to a non-existent page; this is correct. Copy the URL of the page to obtain the code:

Warning: the OAuth code is valid for 10 minutes only.

http://localhost:9000/redirect.html?to=connect/bigquery/oauth&state=security_token&code=4/cwHserUpJwJSbKWQX-1Ua-cw5xWtdJPEDBx4nfbRmJ9F0wW2NVlLEhE8umS9bE6cdXas0IK9FZfirA02ZYDeTTw&scope=https://www.googleapis.com/auth/bigquery


6. Create the data source using the CLIENT_ID and CLIENT_SECRET obtained in step 2, the AUTH_CODE obtained in step 5, and the PROJECT_ID of your BigQuery project. The redirectUri must be the same as in steps 3 and 5:

CALL SYSADMIN.createConnection('bq3legged','bigquery','authType=OAUTH,
projectId=PROJECT_ID ,
authCode=AUTH_CODE ,
redirectUri=http%3A%2F%2Flocalhost%3A9000%2Fredirect.html%3Fto%3Dconnect/bigquery/oauth,
user-name=CLIENT_ID,
password=CLIENT_SECRET') ;;
CALL SYSADMIN.createDatasource('bq3legged','bigquery','importer.schemaPattern=bq_schema_pattern,importer.defaultSchema=bq_schema,
importer.useFullSchemaName=false,importer.useCatalogName=false','supportsNativeQueries=true') ;;

See Also

Partitioning Tables in BigQuery: learn how to convert non-partitioned tables into partitioned tables to improve query performance.

Google BigQuery as analytical storage available since v4.10