phoenix-user mailing list archives

From kevin <kiss.kevin...@gmail.com>
Subject Re: how does count(1) work?
Date Wed, 06 Jul 2016 04:51:13 GMT
0: jdbc:phoenix:master> select count(1) from STORE_SALES;
+------------------------------------------+
|                 COUNT(1)                 |
+------------------------------------------+
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: STORE_SALES,,1467706628930.ca35b82bd80c92d0d501c73956ef836f.: null
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:205)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1340)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1656)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1733)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1695)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1335)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3250)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31068)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2147)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:105)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
    at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.setInput(SnappyDecompressor.java:111)
    at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:104)
    at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    at org.apache.hadoop.hbase.io.compress.Compression.decompress(Compression.java:426)
    at org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext.prepareDecoding(HFileBlockDefaultDecodingContext.java:91)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.unpack(HFileBlock.java:508)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:398)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:540)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:588)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:287)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:201)
    at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:316)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:260)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:740)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:715)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:540)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:142)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4205)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4288)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4162)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4149)
    at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:284)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
    ... 12 more

    at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
    at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
    at sqlline.SqlLine.print(SqlLine.java:1653)
    at sqlline.Commands.execute(Commands.java:833)
    at sqlline.Commands.sql(Commands.java:732)
    at sqlline.SqlLine.dispatch(SqlLine.java:808)
    at sqlline.SqlLine.begin(SqlLine.java:681)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
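
The root cause is in the middle of the trace: SnappyDecompressor.setInput throws ArrayIndexOutOfBoundsException while HBase decompresses an HFile block (HFileBlock.unpack), inside the region scan that Phoenix's aggregation coprocessor drives. That points at the compressed bytes coming back from Alluxio rather than at Phoenix itself; setInput typically fails this way when the buffer it is handed is inconsistent with the recorded compressed-chunk length, e.g. after a short or corrupt read. One way to narrow it down is to take Snappy out of the picture and re-run the count. A minimal sketch, assuming the HBase 0.98/1.x client API; only the table name comes from the thread, everything else here is an assumption:

// Sketch (hypothetical diagnostic, not a fix): switch STORE_SALES's column
// families to no compression and rewrite the existing HFiles, so the next
// COUNT(1) runs without the Snappy read path.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.compress.Compression;

public class DisableSnappy {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (HBaseAdmin admin = new HBaseAdmin(conf)) {
            TableName table = TableName.valueOf("STORE_SALES");
            HTableDescriptor desc = admin.getTableDescriptor(table);
            for (HColumnDescriptor family : desc.getColumnFamilies()) {
                family.setCompressionType(Compression.Algorithm.NONE);
            }
            admin.disableTable(table);
            admin.modifyTable(table, desc);    // apply the schema change
            admin.enableTable(table);
            admin.majorCompact("STORE_SALES"); // rewrite HFiles uncompressed
        }
    }
}

If COUNT(1) succeeds after that, the problem is in reading Snappy-compressed blocks back from Alluxio; if it still fails, compression was a red herring. Note that the shell count and Phoenix decompress the same blocks, so if count really does succeed on identical files, the difference may simply be pace: the coprocessor scans a whole region as fast as the server can read, while a shell count is throttled by client round trips.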

2016-07-04 9:56 GMT+08:00 kevin <kiss.kevin119@gmail.com>:

> hi, all:
>     I created the tables through Phoenix and loaded the data through Pig,
> using org.apache.phoenix.pig.PhoenixHBaseStorage. When HBase runs on HDFS
> everything is fine (a count works from both HBase and Phoenix), but when
> HBase runs on Alluxio only HBase works well. In Phoenix I tested count(1)
> on tables a and b: a has 2880404 rows and the test failed with a
> regionserver crash; b has 1920800 rows and the test succeeded. I tried
> increasing the RAM config (export HBASE_HEAPSIZE=4096, export
> HBASE_OPTS="-XX:NewSize=1024m ...) but it still failed, and I can't find
> an error msg in the RegionServer's log. The only unusual msg is:
> 2016-07-01 17:43:53,705 WARN  [RS_LOG_REPLAY_OPS-slave1:60020-1] wal.HLogSplitter: Could not open alluxio://master:19998/hbase/WALs/slave1,60020,1467365859688-splitting/slave1%2C60020%2C1467365859688.1467365864899 for reading. File is empty
> java.io.EOFException
>     at java.io.DataInputStream.readFully(DataInputStream.java:197)
>     at java.io.DataInputStream.readFully(DataInputStream.java:169)
>     at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1848)
>     at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1813)
>     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1762)
>     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1776)
>     at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:70)
>     at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.reset(SequenceFileLogReader.java:176)
>     at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.initReader(SequenceFileLogReader.java:185)
>     at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:70)
>     at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>     at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>     at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>     at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>     at org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>     at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
> but the server didn't stop when this happened.
>
> and I did find hs_err_pid35722.log under HBASE_HOME
>
>
> 2016-07-02 2:58 GMT+08:00 Alicia Shu <ashu@hortonworks.com>:
>
>> Kevin,
>>
>> Did you upload the data into the table through HBase or through Phoenix?
>> If through HBase, there could be some meta information that was not
>> properly updated in Phoenix, and you would get exceptions. Regardless,
>> the log files will be helpful to see what exactly happened.
>>
>> Alicia
>>
>> On 7/1/16, 10:00 AM, "Josh Elser" <josh.elser@gmail.com> wrote:
>>
>> >Can you share the error that your RegionServers report in the log before
>> >they crash? It's hard to give an explanation without knowing the error
>> >you're facing.
>> >
>> >Thanks.
>> >
>> >kevin wrote:
>> >> hi, all
>> >>      I have been testing HBase running on top of Alluxio. In my HBase
>> >> there is a table A, created by Phoenix, with 2880404 rows. I can run
>> >> count "A" in the hbase shell, but not in Phoenix (select count(1) from
>> >> A): if I do, the HBase region server crashes.
>> >> So I want to know: how does count(1) work? And what is the difference
>> >> between the two ways when I do the same thing?
>> >
>>
>>
>
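
To the question the thread started with: the two counts take different paths through the same data. count "A" in the hbase shell opens a plain scan with a FirstKeyOnlyFilter and counts rows on the client, one RPC batch at a time. Phoenix instead pushes COUNT(1) down to the region servers: the UngroupedAggregateRegionObserver coprocessor (the same class visible in the stack trace above) scans each region entirely inside the region server and hands back one partial count per region for the client to sum. A rough sketch of the two shapes, assuming the HBase 0.98/1.x client and a Phoenix 4.x JDBC driver on the classpath; the table name A and the JDBC URL come from the thread:

// Sketch, not the exact code either tool runs.
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class CountTwoWays {

    // Roughly what `count "A"` in the hbase shell does: stream rows back and
    // count them on the client, one cell per row, one RPC batch at a time.
    static long clientSideCount() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        long rows = 0;
        try (HTable table = new HTable(conf, "A")) {
            Scan scan = new Scan();
            scan.setFilter(new FirstKeyOnlyFilter()); // ship only one cell per row
            scan.setCaching(1000);                    // rows fetched per RPC
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result ignored : scanner) {
                    rows++;
                }
            }
        }
        return rows;
    }

    // What Phoenix does for SELECT COUNT(1): the aggregation runs inside the
    // region servers (UngroupedAggregateRegionObserver in the trace above),
    // and the client only receives and sums one partial count per region.
    static long serverSideCount() throws Exception {
        try (java.sql.Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:master");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM A")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}

The pushdown is the practical difference: Phoenix makes the region server read an entire region's worth of blocks in one sustained pass inside postScannerOpen, so a corrupt block or the memory pressure of that read lands directly on the server, while the client-side count spreads the same work over many small RPCs.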
