phoenix-user mailing list archives

From Nick Dimiduk <ndimi...@gmail.com>
Subject Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
Date Thu, 05 Mar 2015 23:47:07 GMT
You need to update the Phoenix jar on the servers to match the client
version. Client and server should be on the same version for now, at
least until our backward-compatibility story is more reliable.
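
As a concrete sketch (the paths and jar name below are assumptions based on a
typical HDP layout, so check where HBase and Phoenix actually live on your
nodes), on each RegionServer you would put the 4.1 server jar on HBase's
classpath, drop the old one, and restart:

  # Copy the Phoenix 4.1 server jar into HBase's lib directory
  # (jar name and location are assumptions -- use the server jar shipped
  #  with the same 4.1 distribution your client came from)
  cp /tmp/phoenix-4.1.0-server.jar /usr/lib/hbase/lib/

  # Remove the old 4.0.0 Phoenix jar so only one version is on the classpath
  rm /usr/lib/hbase/lib/phoenix-*4.0.0*.jar

  # Restart the RegionServer (or do the restart through Ambari)
  /usr/lib/hbase/bin/hbase-daemon.sh restart regionserver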

Basically, the new client wrote new metadata into the HBase schema, and the
old server jars don't have the classes that metadata references at runtime.
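
If you want to see what the 4.1 client actually attached to your table, a
quick sketch like this (run from a node with the hbase client installed;
'MY_TABLE' is just a placeholder for the table you created the view on) will
print the table descriptor, and the coprocessor$N attributes should list
org.apache.hadoop.hbase.regionserver.LocalIndexSplitter:

  # One-off, non-interactive hbase shell invocation to dump the descriptor
  echo "describe 'MY_TABLE'" | hbase shell

Ripping those attributes back out with alter's table_att_unset is possible
once the servers are healthy again, but putting the matching server jar in
place is the simpler and safer fix.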

On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <anilgupta84@gmail.com> wrote:

> Hi All,
>
> I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the RegionServers. I was running
> the Phoenix 4.1 client because I could not find a tar file for
> "Phoenix 4.0.0-incubating".
> I tried to create a view on an existing table and then my entire cluster went
> down (all the RegionServers went down; the Master is still up).
>
>
> This is the exception I am seeing:
>
> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
>
> We tried to restart the cluster. It died again. It seems it is stuck at this point looking
> for the LocalIndexSplitter class. How can I resolve this error? We can't do anything in the
> cluster until we fix it.
>
> I was thinking of disabling those tables, but none of the RegionServers is coming up. Can
> anyone suggest how I can bail out of this bad situation?
>
>
> --
> Thanks & Regards,
> Anil Gupta
>
