phoenix-user mailing list archives

From Samarth Jain <samarth.j...@gmail.com>
Subject Re: ResultSet size
Date Tue, 06 Oct 2015 15:50:55 GMT
To add to what Jesse said, you can override the default scanner fetch size
programmatically via Phoenix by calling statement.setFetchSize(int).
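For instance, something like the following minimal sketch (the ZooKeeper quorum, table name, and fetch size of 1000 are just placeholders for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum and table name -- adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Fetch rows from the region servers in batches of 1000
            // instead of the default scanner fetch size.
            stmt.setFetchSize(1000);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE")) {
                while (rs.next()) {
                    // next() pages through the client-side batch; a new batch is
                    // pulled from the servers when the current one is exhausted.
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}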
On Tuesday, October 6, 2015, Jesse Yates <jesse.k.yates@gmail.com> wrote:

> So HBase (and by extension, Phoenix) does not do true "streaming" of rows
> - rows are copied into memory from the HFiles and then eventually copied
> en masse onto the wire. On the client they are pulled off in chunks and
> paged through by the client scanner. You can control the batch size (number
> of rows in each 'page') via the usual HBase client configurations.
>
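For a concrete example of the configuration route Jesse mentions: hbase.client.scanner.caching is the standard HBase knob for how many rows a scanner fetches from a region server per RPC. A sketch of one way to set it is below (the quorum and the value of 500 are placeholders, and passing it through the Phoenix connection Properties assumes it isn't overridden elsewhere; it can also be set in hbase-site.xml on the client classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ScannerCachingExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Number of rows the HBase scanner fetches per RPC; 500 is illustrative.
        props.setProperty("hbase.client.scanner.caching", "500");
        // Placeholder ZooKeeper quorum -- adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // Queries run on this connection should use the smaller scan batch size.
        }
    }
}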
> On Tue, Oct 6, 2015 at 8:38 AM Sumit Nigam <sumit_only@yahoo.com> wrote:
>
>> Hi,
>>
>> Does Phoenix buffer the result set internally? I mean, when I fire a huge
>> skip scan IN clause, the data being returned may be too large to hold in
>> memory. Ideally I'd like to stream data through the resultset.next()
>> method. So my question is: does Phoenix really stream results?
>>
>> And if so, is there a way to control how much it loads at a time on the
>> client side before its next() fetches the next batch of data from the
>> region servers to the client?
>>
>> Best regards,
>> Sumit
>>
>
