I am using the Hortonworks distribution and it comes with Phoenix :)

No idea about the patch.
You could try posting the error in the CDH forum; it might help.

All the best!


On 1 March 2016 at 13:15, Amit Shah <amits.84@gmail.com> wrote:
No, it doesn't work for Phoenix 4.6. Attached is the error I get when I execute 'sqlline.py <zookeeper-server-name>:2181'.

Can you please give more details about the patch?


On Tue, Mar 1, 2016 at 10:39 AM, Divya Gehlot <divya.htconex@gmail.com> wrote:
Hi Amit,
Is it working?
No, mine is Phoenix 4.4.


On 1 March 2016 at 13:00, Amit Shah <amits.84@gmail.com> wrote:
Hi Divya,

Thanks for the patch. Is this for Phoenix version 4.6? Are the changes made to make Phoenix work with CDH 5.5.2?


On Tue, Mar 1, 2016 at 10:08 AM, Divya Gehlot <divya.htconex@gmail.com> wrote:
Hi Amit,
Extract the attached jar and try placing it in your HBase classpath.

P.S. Please remove the 'x' from the jar extension.
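Something like the following should do it; note the jar file name and the HBase lib path here are only examples (I am on HDP, so adjust the paths for your CDH cluster):

```shell
# Example only -- the jar name and HBase lib directory are assumptions;
# substitute the actual attachment name and your cluster's paths.
jar_x="phoenix-4.4.0-HBase-1.0-server.jarx"
jar="${jar_x%x}"     # strip the trailing 'x' to restore the .jar extension
echo "$jar"          # phoenix-4.4.0-HBase-1.0-server.jar
# Then copy the jar into the HBase lib directory on every master and
# region server and restart HBase, for example:
#   scp "$jar" <region-server>:/usr/lib/hbase/lib/
```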
Hope this helps.


On 26 February 2016 at 20:44, Amit Shah <amits.84@gmail.com> wrote:

I have been trying to install Phoenix on my Cloudera HBase cluster. The Cloudera version is CDH 5.5.2 and the HBase version is 1.0.

I copied the server & core jars (version 4.6-HBase-1.0) onto the master and region servers and restarted the HBase cluster. I copied the corresponding client jar into my SQuirreL client, but I get an exception on connect, pasted below. The connection URL is "jdbc:phoenix:<zookeeper-server-name>:2181".
I even tried compiling the source with the Cloudera dependencies added, as suggested on this post, but didn't succeed.

Any suggestions to make this work?



Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1319)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11715)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7388)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1776)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1758)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:1016)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1092)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1266)
    ... 10 more
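If I decode the method descriptor in the error correctly, this looks like a binary incompatibility between the Phoenix server jar and CDH's HBase rather than a missing class (the parcel path below is only my guess at the CDH layout):

```shell
# Decoding the descriptor from the NoSuchMethodError:
#   (Z)                                    -> takes one boolean argument
#   Lorg/apache/hadoop/hbase/client/Scan;  -> returns a Scan object
expected_sig="org.apache.hadoop.hbase.client.Scan setRaw(boolean)"
echo "Phoenix was compiled against: $expected_sig"
# To check what the cluster's own hbase-client jar actually provides
# (the parcel path is an assumption -- adjust to the real CDH install):
#   javap -cp /opt/cloudera/parcels/CDH/jars/hbase-client-*.jar \
#       org.apache.hadoop.hbase.client.Scan | grep setRaw
# If setRaw there returns void instead of Scan, that would explain why
# the coprocessor only blows up at runtime.
```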

P.S. - The full stack trace is attached to the mail.