phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: CANNOT connect to Phoenix in CDH 5.5.1
Date Wed, 02 Mar 2016 04:57:24 GMT
Hi Fulin,

I'm not sure why CDH 5.5.1 isn't working with Phoenix, but if you want to
turn off the salting of the sequence table (and assuming you're not using
sequences directly, or features that rely on them such as indexes on views
or local indexes), you can do the following:
- Set phoenix.sequence.saltBuckets to 0 in your client-side hbase-site.xml
(making sure that hbase-site.xml is on your classpath).
- Open an HBase shell, then disable and drop the SYSTEM.SEQUENCE table.
- Start sqlline connected to your cluster. At this point, the
SYSTEM.SEQUENCE table will be re-created without being salted.
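The steps above might look like the following. This is a sketch, not an exact recipe: the sqlline launcher name and your ZooKeeper quorum will vary by installation, and the table name assumes the default SYSTEM.SEQUENCE.

```shell
# Step 1: client-side hbase-site.xml (must be on the Phoenix client classpath):
#   <property>
#     <name>phoenix.sequence.saltBuckets</name>
#     <value>0</value>
#   </property>

# Step 2: disable and drop the salted sequence table from the HBase shell:
echo "disable 'SYSTEM.SEQUENCE'
drop 'SYSTEM.SEQUENCE'" | hbase shell

# Step 3: reconnect; Phoenix re-creates SYSTEM.SEQUENCE, now unsalted
# (replace zk-host:2181 with your ZooKeeper quorum):
sqlline.py zk-host:2181
```

Dropping SYSTEM.SEQUENCE discards any current sequence values, which is why this is only safe if you are not using sequences.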

FWIW, we changed the default of phoenix.sequence.saltBuckets to 0 in Phoenix
4.6, as the salted table was causing issues on smaller clusters.

HTH. Thanks,

    James

On Tue, Mar 1, 2016 at 8:28 PM, Fulin Sun <sunfl@certusnet.com.cn> wrote:

> Hi,
> I am trying to use Phoenix with CDH 5.5.1 but have had no luck getting it
> running. Neither the Cloudera Labs Phoenix nor a Maven build of Phoenix
> against CDH 5.5 would work.
> Both cause the regionservers to go down, and sqlline hangs
> without any response. My Linux version is CentOS 7 with CDH 5.5.1.
>
> From the Cloudera community I gather that the SYSTEM.SEQUENCE table
> may have over 224 regions, which killed all the regionservers when sqlline
> was run
> for the first time. sqlline then printed the following error message. Can
> some expert explain this to me? Or is there some workaround for resolving
> this?
>
>
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
> Wed Mar 02 12:14:44 CST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70087: row 'SYSTEM.SEQUENCE,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=dev-1,60020,1456891735323, seqNum=0
> at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:225)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:63)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
> at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:397)
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:358)
> at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:190)
> at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
> at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:608)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:852)
> ... 31 more
> Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=70087: row 'SYSTEM.SEQUENCE,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=dev-1,60020,1456891735323, seqNum=0
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
> at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:404)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:710)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:890)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:859)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1193)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:371)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:345)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 4 more
>
