CData Python Connector for Google BigQuery

Build 24.0.9062

From Petl

The connector can be used to create ETL applications and pipelines for Google BigQuery data in Python using Petl.

Install Required Modules

Install the Petl modules using the pip utility.
pip install petl

Connecting

After you import the modules, including the CData Python Connector for Google BigQuery, you can use the module's connect function to create a connection using a valid Google BigQuery connection string. If you prefer not to use a direct connection, you can use a SQLAlchemy engine instead.
import petl as etl
import cdata.googlebigquery as mod
cnxn = mod.connect("InitiateOAuth=GETANDREFRESH;ProjectId=NameOfProject;DatasetId=NameOfDataset;")
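
As a sketch of the SQLAlchemy alternative mentioned above, the same connection properties can be passed in a database URL to SQLAlchemy's create_engine. The "googlebigquery" URL scheme is an assumption about how the connector's dialect is registered; check your installed package for the exact form.
# Hedged sketch: assumes the CData connector exposes a SQLAlchemy dialect
# under the "googlebigquery" URL scheme; verify the scheme name for your install.
from sqlalchemy import create_engine

engine = create_engine(
    "googlebigquery:///?InitiateOAuth=GETANDREFRESH"
    "&ProjectId=NameOfProject&DatasetId=NameOfDataset"
)

# petl can read from a SQLAlchemy engine just as it does from a DB-API connection.
table1 = etl.fromdb(engine, "SELECT 1")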

Extract, Transform, and Load the Google BigQuery Data

Create a SQL query string and store the query results in a petl table container.
sql = "SELECT	actor.attributes.email, repository.name FROM [publicdata].[samples].github_nested "
table1 = etl.fromdb(cnxn,sql)
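
Because petl tables are lazy views, you can chain transformations onto the query results before loading them. The sketch below uses the standard petl functions cut and select; the non-empty-email condition is only an illustrative filter, not part of the original example.
# Illustrative transform step: keep the two projected columns, then
# filter out rows with an empty email field (condition chosen for demonstration).
table2 = etl.cut(table1, 'actor.attributes.email', 'repository.name')
table3 = etl.select(table2, lambda rec: rec['actor.attributes.email'] is not None)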

Loading Data

With the query results stored in a petl table, you can load your data into any supported Petl destination. The following example loads the data into a CSV file.
etl.tocsv(table1,'output.csv')
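
Other petl writers work the same way. For example, the same table can be written to a JSON file with tojson; the output file name below is just a placeholder.
# Write the same table to a JSON file instead of CSV ('output.json' is a placeholder name).
etl.tojson(table1, 'output.json')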

Modifying Data

Insert new rows into Google BigQuery tables using Petl's appenddb function.
table1 = [['actor.attributes.email', 'repository.name'], ['EntityFramework', 'CoreCLR']]
etl.appenddb(table1, cnxn, '[publicdata].[samples].github_nested')
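
A related petl writer is todb, which replaces the contents of the target table rather than appending to it. The sketch below reuses the same connection and table name as an assumption; use it only if you intend to overwrite existing rows.
# Hedged sketch: etl.todb truncates the target table and then loads the rows,
# whereas appenddb only adds to it. Assumes the same cnxn and table name as above.
etl.todb(table1, cnxn, '[publicdata].[samples].github_nested')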

Copyright (c) 2024 CData Software, Inc. - All rights reserved.