InsertLoadJob
Inserts a Google BigQuery load job, which loads data from Google Cloud Storage into a BigQuery table (creating the table if necessary, depending on CreateDisposition).
Input
| Name | Type | Description |
| --- | --- | --- |
| SourceURIs | String | A space-separated list of Google Cloud Storage (GCS) Uniform Resource Identifiers (URIs) that point to the source files for the load job. Each URI must follow the format gs://bucket/path/to/file. |
| SourceFormat | String | Specifies the format of the input files. Allowed values: AVRO, NEWLINE_DELIMITED_JSON, DATASTORE_BACKUP, PARQUET, ORC, CSV. |
| DestinationTable | String | The fully qualified table where the data should be loaded, formatted as projectId.datasetId.tableId. |
| DestinationTableProperties | String | A JavaScript Object Notation (JSON) object specifying metadata properties for the destination table, such as its friendly name, description, and any associated labels. |
| DestinationTableSchema | String | A JSON array defining the schema fields for the destination table. Each field includes a name, type, and mode. |
| DestinationEncryptionConfiguration | String | A JSON object containing Customer-managed Encryption Key (CMEK) settings for encrypting the destination table. |
| SchemaUpdateOptions | String | A JSON array of schema update options to apply when the destination table exists. Options may include allowing field addition or relaxing field modes. |
| TimePartitioning | String | A JSON object specifying how the destination table should be partitioned by time, including partition type and optional partitioning field. |
| RangePartitioning | String | A JSON object defining range-based partitioning for the destination table. Includes the partitioning field, start, end, and interval values. |
| Clustering | String | A JSON object listing the fields to use for clustering the destination table to improve query performance. |
| Autodetect | String | If the value is 'true', BigQuery automatically detects schema and format options for CSV and JSON files. |
| CreateDisposition | String | Specifies whether the destination table should be created if it does not already exist. Allowed values: CREATE_IF_NEEDED, CREATE_NEVER. The default value is CREATE_IF_NEEDED. |
| WriteDisposition | String | Determines how data is written to the destination table. Allowed values: WRITE_TRUNCATE, WRITE_APPEND, WRITE_EMPTY. The default value is WRITE_APPEND. |
| Region | String | The region where the load job should be executed. Both the source GCS files and the destination BigQuery dataset must reside in the same region. |
| DryRun | String | If the value is 'true', BigQuery validates the job without executing it, which is useful for estimating costs or checking for errors. The default value is false. |
| MaximumBadRecords | String | The maximum number of invalid records tolerated before the entire job is aborted. The default value is 0, meaning all records must be valid. |
| IgnoreUnknownValues | String | If the value is 'true', fields in the input data that are not part of the table schema are ignored. If 'false', such fields cause errors. The default value is false. |
| AvroUseLogicalTypes | String | If the value is 'true', Avro logical types are used when mapping Avro data to BigQuery schema types. The default value is true. |
| CSVSkipLeadingRows | String | The number of header rows to skip at the beginning of each CSV file. |
| CSVEncoding | String | The character encoding of the CSV files. Allowed values: ISO-8859-1, UTF-8. The default value is UTF-8. |
| CSVNullMarker | String | If set, specifies the string used to represent NULL values in the CSV files. By default, NULL values are not allowed. |
| CSVFieldDelimiter | String | The character used to separate fields in the CSV files. Common values include commas (,), tabs (\t), or pipes (\|). The default value is , (comma). |
| CSVQuote | String | The character used to quote fields in CSV files. Set to an empty string to disable quoting. The default value is " (double quote). |
| CSVAllowQuotedNewlines | String | If the value is 'true', quoted fields in CSV files are allowed to contain newline characters. The default value is false. |
| CSVAllowJaggedRows | String | If the value is 'true', rows in CSV files may have fewer fields than expected. If 'false', missing fields cause an error. The default value is false. |
| DSBackupProjectionFields | String | A JSON list of field names to import from a Cloud Datastore backup. |
| ParquetOptions | String | A JSON object containing import-specific options for Parquet files, such as whether to interpret INT96 timestamps. |
| DecimalTargetTypes | String | A JSON list specifying the order of preference for converting decimal data types to BigQuery types, such as NUMERIC or BIGNUMERIC. |
| HivePartitioningOptions | String | A JSON object describing the source-side Hive-style partitioning used in the input files. |
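Several of the inputs above (DestinationTableSchema, TimePartitioning, and so on) are JSON values passed as serialized strings. A minimal sketch of how such a parameter map might be assembled; the bucket, project, dataset, table, and field names are illustrative placeholders, not values defined by this action:

```python
import json

# Hypothetical parameter map for InsertLoadJob. The keys mirror the Input
# table above; all resource names are placeholders.
params = {
    "SourceURIs": "gs://my-bucket/data/part-000.csv gs://my-bucket/data/part-001.csv",
    "SourceFormat": "CSV",
    "DestinationTable": "my-project.my_dataset.events",
    # JSON-typed inputs are supplied as serialized strings:
    "DestinationTableSchema": json.dumps([
        {"name": "event_id", "type": "STRING", "mode": "REQUIRED"},
        {"name": "ts", "type": "TIMESTAMP", "mode": "NULLABLE"},
    ]),
    "TimePartitioning": json.dumps({"type": "DAY", "field": "ts"}),
    "CreateDisposition": "CREATE_IF_NEEDED",
    "WriteDisposition": "WRITE_APPEND",
    "CSVSkipLeadingRows": "1",
    "MaximumBadRecords": "0",
}

# SourceURIs is space-separated, so it expands to a list of GCS objects:
uris = params["SourceURIs"].split()

# Round-trip a JSON-typed input to confirm it is valid JSON:
schema = json.loads(params["DestinationTableSchema"])
```

Validating the JSON strings locally (as the round-trip above does) catches malformed schema or partitioning values before the job is submitted.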
Result Set Columns
| Name | Type | Description |
| --- | --- | --- |
| JobId | String | The unique identifier assigned to the newly created load job. |
| Region | String | The region where the load job was executed. |
| Configuration_load_destinationTable_tableId | String | The ID of the destination table that received the loaded data. |
| Configuration_load_destinationTable_projectId | String | The ID of the project containing the destination table for the load job. |
| Configuration_load_destinationTable_datasetId | String | The ID of the dataset containing the destination table for the load job. |
| Status_State | String | The current execution state of the job, such as PENDING, RUNNING, or DONE. |
| Status_errorResult_reason | String | A brief error code that explains why the load job failed, if applicable. |
| Status_errorResult_message | String | A detailed message describing the reason for the job failure, if any. |
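A sketch of how a returned row might be checked for success; the flattened column names match the Result Set Columns table above, while the values themselves are illustrative placeholders:

```python
# Hypothetical result row for a completed load job; all values are
# placeholders, not output from a real job.
row = {
    "JobId": "job_abc123",
    "Region": "US",
    "Configuration_load_destinationTable_projectId": "my-project",
    "Configuration_load_destinationTable_datasetId": "my_dataset",
    "Configuration_load_destinationTable_tableId": "events",
    "Status_State": "DONE",
    "Status_errorResult_reason": None,
    "Status_errorResult_message": None,
}

# A load job has finished successfully when Status_State is DONE and the
# errorResult columns are empty:
succeeded = (
    row["Status_State"] == "DONE"
    and row["Status_errorResult_reason"] is None
)
```

Note that DONE alone does not imply success: a failed job also reaches the DONE state, with the errorResult columns populated.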