phoenix-user mailing list archives

From: alexander.scherba...@yandex.com
Subject: Re: LIMIT statement when loading data in Phoenix Spark module.
Date: Wed, 07 Mar 2018 14:25:37 GMT
Is there documentation that describes which queries are propagated to the server, and how,
during data fetching with Phoenix Spark?

Thanks,
Alexandr.  
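
Short of a dedicated documentation page, one practical way to see what actually reaches the server is to print the physical plan of a query against the Phoenix-backed DataFrame: the scan node over the Phoenix relation typically lists the filters that Spark hands to the data source. A minimal sketch, assuming Spark 2.x and the phoenix-spark 4.x API; MY_TABLE, the columns ID and COL1, and the ZooKeeper URL are placeholder names:

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext
    import org.apache.phoenix.spark._  // adds phoenixTableAsDataFrame to SQLContext

    object PushdownCheck {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext("local[*]", "phoenix-pushdown-check")
        val sqlContext = new SQLContext(sc)

        // Load a Phoenix table as a DataFrame (table, columns and zkUrl are placeholders).
        val df = sqlContext.phoenixTableAsDataFrame(
          "MY_TABLE", Seq("ID", "COL1"), zkUrl = Some("phoenix-server:2181"))

        // explain(true) prints the logical and physical plans; the scan node shows
        // which filters are handed to the Phoenix data source and which stay in Spark.
        df.filter(df("COL1") === "some_value").explain(true)

        sc.stop()
      }
    }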

07.03.2018, 16:24, "Xavier Jodoin" <xavier@jodoin.me>:
> It will limit the number of rows fetched by the client.
>
> On 2018-03-07 07:54 AM, alexander.scherbatiy@yandex.com wrote:
>>  Does it work such that only a limited number of rows will be sent from each
>>  HBase Region Server to the client?
>>
>>  I just ask because I can use a WHERE statement in the same way in Spark SQL
>>  instead of passing the predicate.
>>
>>  Thanks,
>>  Alexandr.
>>
>>  07.03.2018, 15:35, "Xavier Jodoin" <xavier@jodoin.me>:
>>>  You can do it directly with Spark SQL.
>>>
>>>  Xavier
>>>
>>>  On 2018-03-07 06:38 AM, alexander.scherbatiy@yandex.com wrote:
>>>>    Hello,
>>>>
>>>>    I use the Phoenix Spark plugin to load data from HBase.
>>>>
>>>>    There is the SparkSqlContextFunctions.phoenixTableAsDataFrame() method,
>>>>    which returns a DataFrame for the given table name, columns, and a predicate.
>>>>
>>>>    Is it possible to also provide a LIMIT statement so that the number of
>>>>    retrieved rows is restricted?
>>>>
>>>>    Thanks,
>>>>    Alexander.
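
Putting Xavier's suggestion together with the original question: the predicate passed to phoenixTableAsDataFrame is handed to Phoenix as part of the query, while the row-count restriction is applied on the Spark side, either with the DataFrame API or with a LIMIT clause in Spark SQL. A minimal sketch, again with placeholder table, column and ZooKeeper names, assuming Spark 2.x (on Spark 1.x, registerTempTable replaces createOrReplaceTempView):

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext
    import org.apache.phoenix.spark._  // adds phoenixTableAsDataFrame to SQLContext

    object LimitExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext("local[*]", "phoenix-limit-example")
        val sqlContext = new SQLContext(sc)

        // The predicate string is passed through to Phoenix and becomes part of the
        // WHERE clause of the server-side query. All names here are placeholders.
        val df = sqlContext.phoenixTableAsDataFrame(
          "MY_TABLE",
          Seq("ID", "COL1"),
          predicate = Some("\"COL1\" = 'some_value'"),
          zkUrl = Some("phoenix-server:2181"))

        // Restrict the row count on the Spark side, as suggested in the thread:
        // either through the DataFrame API ...
        df.limit(10).show()

        // ... or through Spark SQL over a temporary view.
        df.createOrReplaceTempView("my_table")
        sqlContext.sql("SELECT ID, COL1 FROM my_table LIMIT 10").show()

        sc.stop()
      }
    }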
