phoenix-user mailing list archives

From Mohanraj Ragupathiraj <>
Subject Re: Read Full Phoenix Table
Date Tue, 12 Jul 2016 04:13:41 GMT
Thank you for your reply. I tried passing the PKs through an IN clause, but
the number of PKs to match between the files and the Phoenix table can
sometimes be 70 million, and I felt it would be much slower with an IN
clause. May I know how many PKs you passed through the IN clause?
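The batched IN-clause lookup discussed above can be sketched as follows. This is a minimal illustration only, using Python's built-in sqlite3 as a stand-in for a JDBC connection to Phoenix; the table name `demo` and columns `pk`/`val` are hypothetical, and an appropriate batch size for Phoenix would need tuning:

```python
import sqlite3

def batched_lookup(conn, keys, batch_size=1000):
    """Fetch rows whose primary key is in `keys`, batch_size keys per query.

    Splitting the key list into batches keeps each IN clause small,
    instead of issuing one enormous query for all keys at once.
    """
    results = []
    for i in range(0, len(keys), batch_size):
        batch = keys[i:i + batch_size]
        placeholders = ",".join("?" * len(batch))
        sql = f"SELECT pk, val FROM demo WHERE pk IN ({placeholders})"
        results.extend(conn.execute(sql, batch).fetchall())
    return results

# Demo setup: an in-memory table standing in for the Phoenix table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (pk INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO demo VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(5000)])

# Look up every even key (2500 keys) in batches of 500.
rows = batched_lookup(conn, list(range(0, 5000, 2)), batch_size=500)
print(len(rows))  # 2500
```

With tens of millions of keys, even batched point lookups may lose to a single scan-and-join; the crossover depends on how large a fraction of the table the keys cover.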

On Tue, Jul 12, 2016 at 12:09 PM, Simon Wang <> wrote:

> I actually did something similar recently. If you are joining on primary
> keys, you can do batched queries with the IN clause.
> On Jul 11, 2016, at 9:05 PM, Mohanraj Ragupathiraj <>
> wrote:
> Hi,
> Hi,
> I have a scenario in which I have to load a Phoenix table as a *whole* and
> join it with multiple files in Spark. But it takes around 30 minutes just
> to read 600 million records from the Phoenix table. I feel it is
> inappropriate to load the full table data, as HBase works best for random
> lookups.
> May I know if there is a way to read the entire Phoenix table as a
> file/files rather than loading it via JDBC or DataFrames?
> Thanks in advance!

Thanks and Regards
VISA Pte Limited, Singapore.
