phoenix-user mailing list archives

From "Fulin Sun" <>
Subject Re: Re: CANNOT connect to Phoenix in CDH 5.5.1
Date Wed, 02 Mar 2016 05:31:54 GMT
Hi, James
Wow, that works! I just set the client-side parameter in my hbase-site.xml, and now I can successfully
connect to Phoenix.
Thanks so much.

From: James Taylor
Date: 2016-03-02 12:57
To: user
Subject: Re: CANNOT connect to Phoenix in CDH 5.5.1
Hi Fulin,

I'm not sure why CDH 5.5.1 isn't working with Phoenix, but if you want to turn off the salting
of the sequence table (and assuming you're not using sequences, or indexes on views or local
indexes which rely on sequences), you can do the following:
- set client-side phoenix.sequence.saltBuckets to 0 in your hbase-site.xml (ensuring that
hbase-site.xml is on your classpath).
- open an HBase shell and disable and drop the SYSTEM.SEQUENCE table.
- start sqlline connected to your cluster. At this point, the SYSTEM.SEQUENCE table will be
re-created without being salted.
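The three steps above might look roughly like the following. The property name (phoenix.sequence.saltBuckets) and table name (SYSTEM.SEQUENCE) come from this thread; the file paths, the ZooKeeper host, and the exact sqlline invocation are illustrative assumptions that will vary by installation:

```shell
# 1. In the client-side hbase-site.xml (must be on the Phoenix client's
#    classpath), turn off salting of the sequence table:
#
#      <property>
#        <name>phoenix.sequence.saltBuckets</name>
#        <value>0</value>
#      </property>

# 2. In an HBase shell, disable and drop the existing salted table:
hbase shell <<'EOF'
disable 'SYSTEM.SEQUENCE'
drop 'SYSTEM.SEQUENCE'
EOF

# 3. Reconnect with sqlline; Phoenix re-creates SYSTEM.SEQUENCE without
#    salting. (The ZooKeeper quorum host below is a placeholder.)
sqlline.py zookeeper-host:2181
```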

FWIW, we changed the default of phoenix.sequence.saltBuckets to 0 in 4.6 as this was causing
issues in smaller clusters.

HTH. Thanks,


On Tue, Mar 1, 2016 at 8:28 PM, Fulin Sun <> wrote:
I am trying to use Phoenix with CDH 5.5.1 but have had no luck getting it running. Neither the Cloudera Labs
Phoenix nor a Maven-built Phoenix against CDH 5.5 works.
All of them cause the region servers to go down, and sqlline hangs there without any response.
My Linux version is CentOS 7 with CDH 5.5.1.

From the Cloudera community, I gather that the SYSTEM.SEQUENCE table may have over 224 regions,
which killed all the region servers when I ran the sqlline command for
the first time. Then sqlline printed the following error message. Can some expert explain
this to me? Or is there some workaround to resolve it?

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36,
Wed Mar 02 12:14:44 CST 2016, null, callTimeout=60000, callDuration=70087:
row 'SYSTEM.SEQUENCE,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740,
hostname=dev-1,60020,1456891735323, seqNum=0

at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(
at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(
... 31 more
Caused by: callTimeout=60000, callDuration=70087: row 'SYSTEM.SEQUENCE,,00000000000000'
on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=dev-1,60020,1456891735323,
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: Connection refused
at Method)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(
... 4 more
