phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Passing sql query to spark phoenix load method
Date Tue, 04 Aug 2015 17:18:18 GMT
The way to establish an offset in Phoenix is to use our row value
constructor mechanism: https://phoenix.apache.org/paged.html. This can be
part of a predicate, and Phoenix will use it to form a startRow on the scan
(which is essentially an offset).
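
The paged-query pattern described on that page can be sketched roughly as
follows (table and column names here are illustrative, not from the thread):

```sql
-- Fetch the next page of 20 rows. The row value constructor
-- (title, author, isbn) is compared against the last row of the
-- previous page, so Phoenix can form a startRow on the scan
-- instead of re-reading and discarding the skipped rows.
SELECT title, author, isbn, description
FROM library
WHERE (title, author, isbn) > (?, ?, ?)
ORDER BY title, author, isbn
LIMIT 20
```

The bind parameters are filled in with the key of the last row returned by
the previous page's query.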

On Tue, Aug 4, 2015 at 10:15 AM, Josh Mahonin <jmahonin@interset.com> wrote:

> Hi,
>
> The phoenix-spark module allows passing in a custom 'predicate', as long
> as no aggregation functions are used. You can see examples here:
>
> https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala#L215-L234
>
> Re: offsetting, once you have the RDD (or DataFrame), you can skip rows
> there, but there's no way to pass in an offset to the original query.
>
> Josh
>
> On Tue, Aug 4, 2015 at 11:13 AM, Hafiz Mujadid <hafizmujadid00@gmail.com>
> wrote:
>
>>
>> Hi all!
>>
>> How can we pass a custom query to the Spark Phoenix load method? Do we
>> have some way to skip the first n rows of the result, just like the
>> OFFSET keyword in SQL skips the first n records?
>>
>> Thanks
>>
>
>

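Putting the two replies together, a rough sketch of the predicate pushdown
and client-side row skipping might look like this (table name, columns, and
ZooKeeper URL are placeholder assumptions, not from the thread):

```scala
import org.apache.spark.SparkContext
import org.apache.phoenix.spark._  // adds phoenixTableAsRDD to SparkContext

val sc = new SparkContext("local", "phoenix-offset-sketch")

// Pass a custom predicate to the load; phoenix-spark pushes it into the
// Phoenix query, as long as no aggregation functions are used.
// "TABLE1", "ID"/"COL1", and the zkUrl are assumed example values.
val rdd = sc.phoenixTableAsRDD(
  "TABLE1",
  Seq("ID", "COL1"),
  predicate = Some("ID > 100"),
  zkUrl = Some("localhost:2181")
)

// There is no OFFSET pushdown, so skipping the first n rows happens
// client-side on the RDD after the load:
val n = 10
val skipped = rdd.zipWithIndex()
  .filter { case (_, idx) => idx >= n }
  .map { case (row, _) => row }
```

Note that `zipWithIndex` materializes an ordering over partitions, so this
skip is a client-side workaround rather than an efficient server-side
offset; for large skips the row value constructor approach above is the
better fit.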