phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Phoenix Bulk Load With Column Overrides
Date Wed, 20 Apr 2016 14:06:49 GMT
Note that column names are case sensitive, so try upper-casing the column
names in your psql.py call.
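
For example, something like this (a sketch, not tested; it upper-cases the
names and passes the -h list as a single shell argument with no spaces after
the commas, since the shell otherwise splits the list into separate
arguments, which matches the "Don't know how to interpret argument" error
you saw):

  python psql.py -t PRODUCT \
      -h PRODUCT_CLASS_ID,PRODUCT_ID,BRAND_NAME,PRODUCT_NAME \
      localhost product.csv

One assumption here: that psql.py tolerates a CSV with more columns than the
-h list names. If it maps fields strictly, you may need to trim the extra
columns (SKU, SRP, gross_weight, net_weight) from the CSV first.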

On Wednesday, April 20, 2016, Amit Shah <amits.84@gmail.com> wrote:

> Hello,
>
> I am using Phoenix 4.6 and trying to bulk load data into a table from a
> CSV file using the psql.py utility. How do I map the table columns to the
> header values in the CSV file through the "-h" argument?
>
> For example, assume my Phoenix table's columns do not match the columns in
> the CSV. The Phoenix table looks like this:
>
> CREATE TABLE PRODUCT(
> PRODUCT_ID BIGINT NOT NULL
> ,PRODUCT_CLASS_ID BIGINT
> ,BRAND_NAME VARCHAR
> ,PRODUCT_NAME VARCHAR
> ,CONSTRAINT pk PRIMARY KEY (PRODUCT_ID)
> );
>
> while the CSV has a lot of other columns, e.g.
>
> product_class_id,product_id,brand_name,product_name,SKU,SRP,gross_weight,net_weight
>
> Note that the order of the columns is also different. Executing the
> utility with the arguments below fails with the error "Don't know how to
> interpret argument 'brand_name,'":
>
> python psql.py -t PRODUCT -h product_class_id, product_id, brand_name,
> product_name localhost product.csv
>
> How can I fix this?
>
> Thanks,
> Amit
>
