phoenix-user mailing list archives

From Flavio Pompermaier <pomperma...@okkam.it>
Subject Re: ScanningResultIterator resiliency
Date Mon, 13 Apr 2015 09:31:23 GMT
I tried to set hbase.client.scanner.caching = 1 on both client and server
side and I still get that error :(

On Mon, Apr 13, 2015 at 10:31 AM, Flavio Pompermaier <pompermaier@okkam.it>
wrote:

> Will disabling caching prevent this kind of error? Is that possible?
> Or is it equivalent to setting *hbase.client.scanner.caching = 1*?
>
> On Mon, Apr 13, 2015 at 10:25 AM, Ravi Kiran <maghamravikiran@gmail.com>
> wrote:
>
>> Hi Flavio,
>>
>>   Currently, the default scanner caching value that Phoenix runs with is
>> 1000. You can try reducing that number by updating the property
>> "*hbase.client.scanner.caching*" in your hbase-site.xml. If you are
>> doing a lot of processing for each record in your Mapper, you might still
>> see these errors.
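
Ravi's suggestion maps to a property entry in hbase-site.xml. A minimal
sketch (the value 100 is only an illustrative choice, not a recommended
setting):

```xml
<!-- hbase-site.xml: lower the scanner caching so fewer rows are fetched
     per RPC, shortening the gap between calls to the server -->
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>
```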
>>
>> Regards
>> Ravi
>>
>> On Mon, Apr 13, 2015 at 12:21 AM, Flavio Pompermaier <
>> pompermaier@okkam.it> wrote:
>>
>>> Hi to all,
>>>
>>> when running a mr job on my Phoenix table I get this exception:
>>>
>>> Caused by: org.apache.phoenix.exception.PhoenixIOException: 299364ms
>>> passed since the last invocation, timeout is currently set to 60000
>>> at
>>> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>>> at
>>> org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:52)
>>> at
>>> org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:104)
>>> at
>>> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
>>> at
>>> org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:67)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
>>> at
>>> org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:131)
>>>
>>> This is due to a long interval between two consecutive next() calls on
>>> the scan results.
>>> However, this error is not really problematic: it just tells the client
>>> that the server has closed that scanner instance. It could be fixed by
>>> regenerating a new scan that restarts from the last valid key (obviously,
>>> next() should track the last valid key on each successful call).
>>> What do you think?
>>>
>>> Best,
>>> Flavio
>>>
>>
>>
>
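
The resume-from-last-key idea described in the first message can be
sketched as follows. This is a toy simulation, not Phoenix or HBase code:
FakeScanner and ScannerTimeout are hypothetical stand-ins for an HBase
ResultScanner and its lease-expired error, and the row keys are invented.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ResilientScanSketch {
    static class ScannerTimeout extends RuntimeException {}

    // A toy "table" of sorted row keys.
    static final List<String> ROWS = Arrays.asList("a", "b", "c", "d", "e");

    // Simulated scanner: optionally times out once after serving two rows.
    static class FakeScanner {
        int pos; int served; boolean failed;
        FakeScanner(String startRow, boolean failOnce) {
            while (pos < ROWS.size() && ROWS.get(pos).compareTo(startRow) < 0) pos++;
            failed = !failOnce;  // already "failed" means it will not throw
        }
        String next() {
            if (!failed && served == 2) { failed = true; throw new ScannerTimeout(); }
            served++;
            return pos < ROWS.size() ? ROWS.get(pos++) : null;
        }
    }

    // Resilient wrapper: track the last key returned by a successful next();
    // on timeout, open a fresh scan starting just past that key.
    static List<String> scanAll() {
        List<String> out = new ArrayList<>();
        String lastKey = "";                       // "" sorts before every key
        FakeScanner scanner = new FakeScanner(lastKey, true);
        while (true) {
            String row;
            try {
                row = scanner.next();
            } catch (ScannerTimeout e) {
                // Regenerate the scan from the key after the last valid one.
                scanner = new FakeScanner(lastKey + "\0", false);
                continue;
            }
            if (row == null) break;
            out.add(row);
            lastKey = row;                         // track last valid key
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(scanAll());             // all five rows, despite the timeout
    }
}
```

The key design point is that the last valid key is updated only after a
successful next(), so the restarted scan never skips or repeats a row.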
