phoenix-user mailing list archives

From "Wijesinghe, Dimitri" <dwijesin...@ivantagehealth.com>
Subject Re: Issue Creating Indexes in Phoenix
Date Thu, 30 Jan 2014 20:16:48 GMT
Hi James,

Thanks for your reply. I updated to 2.2.2 and the issue persisted. I went
into the master node's regionserver log and found this exception (I've
copied the entire section that mentions the index, which I had called idx):

hduser@C-Master:/usr/local/hadoop/hbase/logs$ vim hbase-hduser-regionserver-C-Master.out
14/01/30 19:52:37 INFO regionserver.HRegion: Starting compaction on _0 in
region
IDX,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1391110815630.18468269b48eec607d41e5e52fa59ffe.
14/01/30 19:52:37 INFO regionserver.Store: Starting compaction of 4 file(s)
in _0 of
IDX,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1391110815630.18468269b48eec607d41e5e52fa59ffe.
into
tmpdir=hdfs://C-Master:54310/hbase/IDX/18468269b48eec607d41e5e52fa59ffe/.tmp,
seqid=375945, totalSize=65.6m
14/01/30 19:52:37 INFO util.FSUtils: FileSystem doesn't support
getDefaultReplication
14/01/30 19:52:37 INFO util.FSUtils: FileSystem doesn't support
getDefaultBlockSize
14/01/30 19:52:37 INFO regionserver.StoreFile: Delete Family Bloom filter
type for
hdfs://C-Master:54310/hbase/IDX/18468269b48eec607d41e5e52fa59ffe/.tmp/97cf82bd088a4b03ae0d169cfeddecdf:
CompoundBloomFilterWriter
14/01/30 19:52:49 INFO regionserver.StoreFile: NO General Bloom and NO
DeleteFamily was added to HFile
(hdfs://C-Master:54310/hbase/IDX/18468269b48eec607d41e5e52fa59ffe/.tmp/97cf82bd088a4b03ae0d169cfeddecdf)
14/01/30 19:52:49 INFO regionserver.Store: Renaming compacted file at
hdfs://C-Master:54310/hbase/IDX/18468269b48eec607d41e5e52fa59ffe/.tmp/97cf82bd088a4b03ae0d169cfeddecdf
to
hdfs://C-Master:54310/hbase/IDX/18468269b48eec607d41e5e52fa59ffe/_0/97cf82bd088a4b03ae0d169cfeddecdf
14/01/30 19:52:50 INFO regionserver.Store: Completed major compaction of 4
file(s) in _0 of
IDX,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1391110815630.18468269b48eec607d41e5e52fa59ffe.
into 97cf82bd088a4b03ae0d169cfeddecdf, size=63.5m; total size for store is
63.5m
14/01/30 19:52:50 INFO compactions.CompactionRequest: completed compaction:
regionName=IDX,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1391110815630.18468269b48eec607d41e5e52fa59ffe.,
storeName=_0, fileCount=4, fileSize=65.6m, priority=3, time=1151466089681;
duration=12sec
14/01/30 19:55:19 WARN ipc.HBaseServer: IPC Server listener on 60020:
readAndProcess threw exception java.io.IOException: Connection reset by
peer. Count of bytes read: 0
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
        at sun.nio.ch.IOUtil.read(IOUtil.java:198)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
        at
org.apache.hadoop.hbase.ipc.HBaseServer.channelRead(HBaseServer.java:1796)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1179)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

Does this shed any light on the issue?
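In case it's relevant: since the reset arrives from the client side, my current guess is that the client may simply be giving up before the server finishes building the index. If so, the knobs I'd look at are the HBase RPC timeout and the Phoenix query timeout (property names are from the docs; the 600000 ms values below are only a guess on my part, not a tested fix):

```xml
<!-- hbase-site.xml on the client; values are illustrative, not a verified fix -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value> <!-- 10 minutes, up from the default -->
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```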

Thanks,
-D


On Thu, Jan 30, 2014 at 12:35 PM, James Taylor <jamestaylor@apache.org> wrote:

> Hi Dimitri,
> Please try with 2.2.2 and let us know if the problem occurs. If yes, then
> check your server logs for any exceptions and send that along as well.
> Thanks,
> James
>
>
> On Thursday, January 30, 2014, Wijesinghe, Dimitri <
> dwijesinghe@ivantagehealth.com> wrote:
>
>> Hi,
>>
>> I am trying to create an index for a table with about 100M records, with
>> 10 SALT buckets across a five-node cluster. Running HBase 0.94.12 and
>> Phoenix 2.1.2.
>>
>> When I issue the following command in sqlline.sh I get the error message
>> below after a roughly 10-minute wait.
>>
>> 0: jdbc:phoenix:C-Master,C-Slave,C-Slave2,C-S> CREATE INDEX idx ON
>> WageRate(HospID,CostCenter,JobCode,Rate,PrimaryKey);
>>
>> Error:  (state=08000,code=101)
>>
>> I believe this is just the general Phoenix error code, so I am not sure
>> what the exact problem is. I had previously tried this operation on a 1M
>> record table (the 100M table is basically 100 copies of the previous
>> table with different primary keys). The indexing on the 1M table worked
>> perfectly.
>>
>> Any help would be greatly appreciated as un-indexed operations on the
>> cluster are now taking a fairly significant amount of time.
>>
>> Thanks!
>> -D
>>
>


-- 
*Dimitri Wijesinghe*

*iVantage Health Analytics*®

Web Developer

300 Chestnut Street, Suite 101 | Needham, MA 02492

direct: 781-247-2091 | office: 781-499-5287

email: DWijesinghe@iVantageHealth.com | www.iVantageHealth.com
