phoenix-user mailing list archives

From anil gupta <anilgupt...@gmail.com>
Subject Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
Date Fri, 06 Mar 2015 22:33:02 GMT
The 4.0.1 build failed with a JUnit failure:
Failed tests:   testSkipScan(org.apache.phoenix.end2end.VariableLengthPKIT)
Tests run: 1115, Failures: 1, Errors: 0, Skipped: 4

I skipped the JUnit tests in the build and it completed successfully.
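For anyone reproducing this, skipping the test suite in a Maven build is typically done with the standard Surefire flag; the exact goal below is a guess at how the branch is built, so adjust to your setup:

```shell
# Build the checked-out Phoenix branch without executing the JUnit suite.
# -DskipTests compiles the tests but does not run them (standard Maven
# Surefire flag); -Dmaven.test.skip=true would skip compiling them as well.
mvn clean package -DskipTests
```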


On Fri, Mar 6, 2015 at 1:49 PM, anil gupta <anilgupta84@gmail.com> wrote:

> Update: I checked out the 4.0.1 branch from git and a local build is underway.
>
> On Fri, Mar 6, 2015 at 12:50 PM, anil gupta <anilgupta84@gmail.com> wrote:
>
>> Hi James/Mujtaba,
>>
>> I am giving a tech talk on HBase on Monday morning and wanted to demo
>> Phoenix as part of it. Installation of the 4.0.0 jars can only be done
>> during office hours because I depend on another team to do it. If I can
>> get the jar in 1-2 hours, I would really appreciate it.
>>
>> Thanks,
>> Anil Gupta
>>
>>
>> On Thu, Mar 5, 2015 at 10:10 PM, James Taylor <jamestaylor@apache.org>
>> wrote:
>>
>>> Mujtaba - do you know where our 4.0.0-incubating artifacts are?
>>>
>>> On Thu, Mar 5, 2015 at 9:58 PM, anil gupta <anilgupta84@gmail.com>
>>> wrote:
>>> > Hi Ted,
>>> >
>>> > This morning I downloaded 4.1 from the link you provided. The problem
>>> > is that I was unable to find 4.0.0-incubating release artifacts, so I
>>> > decided to use 4.1 as my client (thinking 4.1 would be a minor,
>>> > compatible upgrade over 4.0).
>>> > IMO, we should also host the 4.0.0-incubating artifacts, since it's
>>> > the version compatible with HDP 2.1.5 (a 6-month-old release of HDP).
>>> >
>>> > Thanks,
>>> > Anil Gupta
>>> >
>>> > On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yuzhihong@gmail.com> wrote:
>>> >>
>>> >> Ani:
>>> >> You can find Phoenix release artifacts here:
>>> >> http://archive.apache.org/dist/phoenix/
>>> >>
>>> >> e.g. for 4.1.0:
>>> >> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
>>> >>
>>> >> Cheers
>>> >>
>>> >> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <anilgupta84@gmail.com>
>>> >> wrote:
>>> >>
>>> >> > @James: Could you point me to a place where I can find the tar file
>>> >> > of the Phoenix-4.0.0-incubating release? All the links on this page
>>> >> > are broken:
>>> >> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
>>> >> >
>>> >> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <anilgupta84@gmail.com>
>>> >> > wrote:
>>> >> >
>>> >> > > I have tried to disable the table, but since none of the RS are
>>> >> > > coming up, I am unable to do it. Am I missing something?
>>> >> > > On the server side we were using "4.0.0-incubating". It seems my
>>> >> > > only option is to upgrade the server to 4.1, at least to get the
>>> >> > > HBase cluster back up. I just want my cluster to come up so that
>>> >> > > I can disable the table that has a Phoenix view.
>>> >> > > What would be the possible side effects of using Phoenix 4.1 with
>>> >> > > HDP 2.1.5? And if the problem is not fixed even after updating to
>>> >> > > Phoenix 4.1, what is the next alternative?
>>> >> > >
>>> >> > >
>>> >> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <ndimiduk@gmail.com>
>>> >> > > wrote:
>>> >> > >
>>> >> > >> Hi Anil,
>>> >> > >>
>>> >> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version
>>> >> > >> shipped, or trying out a newer version? As James says, the
>>> >> > >> upgrade must be servers first, then client. Also, Phoenix
>>> >> > >> versions tend to be picky about their underlying HBase version.
>>> >> > >>
>>> >> > >> You can also try altering the now-broken Phoenix tables via the
>>> >> > >> HBase shell, removing the Phoenix coprocessor. I've tried this in
>>> >> > >> the past with other coprocessor-loading woes and had mixed
>>> >> > >> results. Try: disable table, alter table, enable table. There are
>>> >> > >> still sharp edges around coprocessor-based deployment.
>>> >> > >>
>>> >> > >> Keep us posted, and sorry for the mess.
>>> >> > >>
>>> >> > >> -n
>>> >> > >>
>>> >> > >> [0]:
>>> >> > >>
>>> >> >
>>> >> >
>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
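[For reference, the disable/alter/enable sequence described above can be sketched roughly as follows in the HBase shell. The table name is a placeholder, and the `coprocessor$1` attribute key is an assumption; run `describe` first to find the real key on your table.]

```shell
hbase shell <<'EOF'
# Inspect the table to find the coprocessor attribute key (e.g. 'coprocessor$1').
describe 'MY_TABLE'
# Take the table offline, unset the Phoenix coprocessor attribute,
# then bring the table back online.
disable 'MY_TABLE'
alter 'MY_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
enable 'MY_TABLE'
EOF
```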
>>> >> > >>
>>> >> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <anilgupta84@gmail.com>
>>> >> > >> wrote:
>>> >> > >>
>>> >> > >>> Unfortunately, we ran out of luck on this one because we are
>>> >> > >>> not running the latest version of HBase. This property was only
>>> >> > >>> introduced recently:
>>> >> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>>> >> > >>> Thanks, Vladimir.
>>> >> > >>>
>>> >> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
>>> >> > >>> vladrodionov@gmail.com> wrote:
>>> >> > >>>
>>> >> > >>>> Try the following:
>>> >> > >>>>
>>> >> > >>>> Update the hbase-site.xml config, setting
>>> >> > >>>>
>>> >> > >>>> hbase.coprocessor.enabled=false
>>> >> > >>>>
>>> >> > >>>> or:
>>> >> > >>>>
>>> >> > >>>> hbase.coprocessor.user.enabled=false
>>> >> > >>>>
>>> >> > >>>> sync the config across the cluster,
>>> >> > >>>>
>>> >> > >>>> restart the cluster,
>>> >> > >>>>
>>> >> > >>>> then update your table's settings in the hbase shell.
>>> >> > >>>>
>>> >> > >>>> -Vlad
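[As an hbase-site.xml fragment, the change suggested above looks roughly like the following; property names are taken from the HBase configuration reference, and the sketch simply prints the fragment to copy into hbase-site.xml on every node before restarting.]

```shell
# Print the hbase-site.xml fragment that disables loading of user (table)
# coprocessors; copy it into hbase-site.xml cluster-wide, then restart.
cat <<'EOF'
<property>
  <name>hbase.coprocessor.user.enabled</name>
  <value>false</value>
</property>
EOF
```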
>>> >> > >>>>
>>> >> > >>>>
>>> >> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <anilgupta84@gmail.com>
>>> >> > >>>> wrote:
>>> >> > >>>>
>>> >> > >>>>> Hi All,
>>> >> > >>>>>
>>> >> > >>>>> I am using HDP 2.1.5 with Phoenix 4.0.0 installed on the
>>> >> > >>>>> region servers. I was running the Phoenix 4.1 client because
>>> >> > >>>>> I could not find a tar file for "Phoenix 4.0.0-incubating".
>>> >> > >>>>> I tried to create a view on an existing table, and then my
>>> >> > >>>>> entire cluster went down (all the RS went down; the Master is
>>> >> > >>>>> still up).
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> This is the exception I am seeing:
>>> >> > >>>>>
>>> >> > >>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>>> >> > >>>>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>> >> > >>>>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>> >> > >>>>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>> >> > >>>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> >> > >>>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>> >> > >>>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>> >> > >>>>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>> >> > >>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >> > >>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >> > >>>>>         at java.lang.Thread.run(Thread.java:744)
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> We tried to restart the cluster and it died again. It seems
>>> >> > >>>>> it is stuck at this point looking for the LocalIndexSplitter
>>> >> > >>>>> class. How can I resolve this error? We can't do anything in
>>> >> > >>>>> the cluster until we fix it.
>>> >> > >>>>>
>>> >> > >>>>> I was thinking of disabling those tables, but none of the RS
>>> >> > >>>>> is coming up. Can anyone suggest how I can bail out of this
>>> >> > >>>>> BAD situation?
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> --
>>> >> > >>>>> Thanks & Regards,
>>> >> > >>>>> Anil Gupta
>>> >> > >>>>>
>>> >> > >>>>
>>> >> > >>>>
>>> >> > >>>
>>> >> > >>>
>>> >> > >>> --
>>> >> > >>> Thanks & Regards,
>>> >> > >>> Anil Gupta
>>> >> > >>>
>>> >> > >>
>>> >> > >>
>>> >> > >
>>> >> > >
>>> >> > > --
>>> >> > > Thanks & Regards,
>>> >> > > Anil Gupta
>>> >> > >
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> > Thanks & Regards,
>>> >> > Anil Gupta
>>> >> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thanks & Regards,
>>> > Anil Gupta
>>>
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>



-- 
Thanks & Regards,
Anil Gupta
