phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Inserting data in Hbase Phoenix using the thrift api
Date Thu, 08 May 2014 06:31:55 GMT
Hi Unilocal,
Yes, both salting and secondary indexing rely on the Phoenix client in
cooperation with the server.

Would it be possible for the C++ server to generate CSV files instead? Then
these could be pumped into Phoenix through our CSV bulk loader (which can be
invoked in a variety of ways). Another alternative may be our Apache Pig
integration. Or it'd be pretty easy to adapt our Pig store func to a Hive
SerDe; then you could use the Hive ODBC driver to pump in data that's
formatted in a Phoenix-compliant manner.
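As a rough illustration of the bulk-load route, here is a minimal sketch of
driving Phoenix's MapReduce-based CSV loader from Java. The class name and
options come from later Phoenix releases, and the table name, CSV path, and
ZooKeeper host are placeholders, so check them against your version before
relying on this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

    public class BulkLoadExample {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath.
            Configuration conf = HBaseConfiguration.create();
            // --table, --input and --zookeeper are the commonly documented
            // options; values here are illustrative only.
            int exitCode = ToolRunner.run(conf, new CsvBulkLoadTool(), new String[] {
                    "--table", "EVENTS",
                    "--input", "/data/events.csv",
                    "--zookeeper", "zk-host:2181"
            });
            System.exit(exitCode);
        }
    }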

If none of these are options, you could pump the data into a plain Phoenix
table and then transfer it (using Phoenix APIs) through UPSERT SELECT into a
salted table or a table with secondary indexes.
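A minimal sketch of that last approach over the Phoenix JDBC driver; the
staging and target table names and the ZooKeeper host are made up for the
example:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class UpsertSelectExample {
        public static void main(String[] args) throws Exception {
            // "zk-host" stands in for your ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement()) {
                // STAGING_EVENTS: table populated outside the Phoenix client.
                // EVENTS_SALTED: salted (and/or indexed) table maintained by Phoenix.
                stmt.executeUpdate("UPSERT INTO EVENTS_SALTED SELECT * FROM STAGING_EVENTS");
                // Phoenix connections are not auto-commit by default.
                conn.commit();
            }
        }
    }

Because the UPSERT SELECT goes through the Phoenix client, the target table's
salt bytes and secondary indexes are maintained correctly.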

Thanks,
James


On Mon, May 5, 2014 at 2:42 PM, Localhost shell <
universal.localhost@gmail.com> wrote:

> Hey Folks,
>
> I have a use case where one of our apps (a C++ server) will pump data into
> HBase.
> Since Phoenix doesn't support an ODBC API, the app will not be able to use
> the Phoenix JDBC API and will use the HBase Thrift API to insert the data.
> Note: The app that inserts the data will construct row keys the same way
> the Phoenix JDBC driver does.
>
> Currently no data resides in HBase, and the table will be freshly created
> using SQL commands (via phoenix sqlline).
> All the analysis/group-by queries will be triggered by a different app
> using the Phoenix JDBC APIs.
>
> In the above-mentioned scenario, are there any Phoenix features (for
> example, salting or secondary indexing) that will not be available because
> the Phoenix JDBC driver is not used for inserting the data?
>
> Can someone please share their thoughts on this?
>
> Hadoop Distro: CDH5
> HBase: 0.96.1
>
> --Unilocal
>
>
>
