Using Petl
The CData Python Connector for Apache Hive can be used to build ETL applications and pipelines for CSV data in Python using Petl.
Install Required Modules
Install the Petl module using the pip utility.
pip install petl
Connecting
Import the modules, including the CData Python Connector for Apache Hive. You can then use the module's connect function to create a connection using a valid Apache Hive connection string. A SQLAlchemy engine may also be used instead of a direct connection.
import petl as etl
import cdata.apachehive as mod

cnxn = mod.connect("Server=127.0.0.1;Port=10000;TransportMode=BINARY")
Extract, Transform, and Load the Apache Hive Data
Create a SQL query string and store the query results in a petl table.
sql = "SELECT City, CompanyName FROM [CData].[Default].Customers"
table1 = etl.fromdb(cnxn, sql)
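The extract step follows the standard DB-API pattern, so it can be exercised end to end with Python's built-in sqlite3 module standing in for the Hive connection. The table schema and sample rows below are illustrative assumptions, not data from this document:

```python
import sqlite3

# In-memory database standing in for the Apache Hive connection.
cnxn = sqlite3.connect(":memory:")
cnxn.execute("CREATE TABLE Customers (City TEXT, CompanyName TEXT)")
cnxn.executemany(
    "INSERT INTO Customers VALUES (?, ?)",
    [("Raleigh", "RSSBus Inc."), ("Chapel Hill", "CData Software")],
)

# Same query shape as above; fetch the results as a list of rows.
sql = "SELECT City, CompanyName FROM Customers"
rows = cnxn.execute(sql).fetchall()
print(rows)  # a list of (City, CompanyName) tuples
```

Because petl's fromdb accepts any DB-API 2.0 connection, swapping this sqlite3 connection for the cdata.apachehive connection changes nothing else in the pipeline.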
Loading Data
With the query results stored in a petl table, you can load your data into any supported Petl destination. The following example loads the data into a CSV file.
etl.tocsv(table1, 'output.csv')
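tocsv writes a header row followed by the data rows. The equivalent file can be produced with the standard-library csv module; the file name and sample rows here are illustrative:

```python
import csv

# Header row followed by data rows, the same layout petl tables use.
table1 = [["City", "CompanyName"], ["Raleigh", "RSSBus Inc."]]

# Write every row, mirroring the output of etl.tocsv.
with open("output.csv", "w", newline="") as f:
    csv.writer(f).writerows(table1)
```

The resulting output.csv contains one comma-separated line per row, starting with the City,CompanyName header.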
Modifying Data
Insert new rows into Apache Hive tables using Petl's appenddb function.
table1 = [['City', 'CompanyName'], ['John Deere', 'RSSBus Inc.']]
etl.appenddb(table1, cnxn, '[CData].[Default].Customers')
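appenddb skips the header row and issues an INSERT per data row over the same DB-API connection. Roughly the same effect can be sketched with sqlite3's executemany standing in for the Hive connection (table name and rows are illustrative stand-ins):

```python
import sqlite3

cnxn = sqlite3.connect(":memory:")
cnxn.execute("CREATE TABLE Customers (City TEXT, CompanyName TEXT)")

# Header row followed by data rows, the layout appenddb expects.
table1 = [["City", "CompanyName"], ["Raleigh", "RSSBus Inc."]]

# Skip the header and insert each data row, roughly what appenddb does.
cnxn.executemany("INSERT INTO Customers VALUES (?, ?)", table1[1:])
cnxn.commit()

count = cnxn.execute("SELECT COUNT(*) FROM Customers").fetchone()[0]
print(count)
```

Parameterized inserts like this also avoid quoting issues in the appended values.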