phoenix-user mailing list archives

From Sanooj Padmakumar <p.san...@gmail.com>
Subject Re: Kerberos and bulkload
Date Tue, 26 Apr 2016 03:23:49 GMT
Apologies if it's against convention to re-open an old discussion.

The exception was resolved after I added
"conf.set("fs.permissions.umask-mode", "000");", as per Gabriel's
suggestion.
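
For reference, this is roughly where the setting goes in my loader. A
minimal sketch, assuming the job is driven programmatically through
ToolRunner and Phoenix's CsvBulkLoadTool (the class and argument handling
may differ in your setup):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

    public class BulkLoadDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Relax the umask so the generated HFiles are readable by the
            // hbase user during the LoadIncrementalHFiles step (PHOENIX-976).
            conf.set("fs.permissions.umask-mode", "000");
            // Delegate argument parsing and job submission to the Phoenix tool.
            System.exit(ToolRunner.run(conf, new CsvBulkLoadTool(), args));
        }
    }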

However, the same exception has resurfaced in a new Kerberized cluster
setup, and I am unable to get it going. One difference between the earlier
attempt and the current one is that I was on Phoenix 4.3.0 then and am on
4.5.2 now. I tried the same program on a non-Kerberized cluster with
Phoenix 4.5.2 and it works.

I have already tried setting the file permissions as described above, as
well as running the job as the hbase user, but neither helped.
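
In case it helps, this is the kind of check I added before kicking off the
load, just to confirm which user/principal the job actually runs as. A
small sketch using the standard Hadoop UserGroupInformation API, nothing
Phoenix-specific:

    import java.io.IOException;
    import org.apache.hadoop.security.UserGroupInformation;

    public class WhoAmICheck {
        public static void main(String[] args) throws IOException {
            // Print the effective user and whether Kerberos credentials are
            // present, to verify the job runs as the intended principal.
            UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
            System.out.println("Current user: " + ugi.getUserName());
            System.out.println("Kerberos credentials: " + ugi.hasKerberosCredentials());
        }
    }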

Pasting the exception below.

16/04/25 11:25:02 INFO mapreduce.LoadIncrementalHFiles: Trying to load
hfile=hdfs://nameservice1/tmp/output/account7/ACCOUNT/0/f26e809c916a45198e19cd63a4971d65
first=<<masked>> last=<<masked>>
16/04/25 11:26:11 INFO client.RpcRetryingCaller: Call exception, tries=10,
retries=35, started=68213 ms ago, cancelled=false, msg=row '' on table
'ACCOUNT' at
region=ACCOUNT,,1461124051363.cbb334b331f00e23a6b83b85685cf793.,
hostname=<<masked>>,60020,1461243982079, seqNum=10
16/04/25 11:26:31 INFO client.RpcRetryingCaller: Call exception, tries=11,
retries=35, started=88415 ms ago, cancelled=false, msg=row '' on table
'ACCOUNT' at
region=ACCOUNT,,1461124051363.cbb334b331f00e23a6b83b85685cf793.,
hostname=<<masked>>,60020,1461243982079, seqNum=10


Any pointers for debugging this would help.
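
For what it's worth, this is how I am checking what permissions end up on
the generated HFiles (a sketch using the plain Hadoop FileSystem API, with
the output path taken from the log above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class HFilePermissionCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Output directory written by the bulk load job (from the log above).
            Path outputDir = new Path("hdfs://nameservice1/tmp/output/account7/ACCOUNT");
            FileSystem fs = outputDir.getFileSystem(conf);
            // Walk the generated HFiles and print owner, group and permissions,
            // to see whether the hbase user can actually read and move them.
            RemoteIterator<LocatedFileStatus> it = fs.listFiles(outputDir, true);
            while (it.hasNext()) {
                LocatedFileStatus status = it.next();
                System.out.println(status.getPath() + "  " + status.getOwner()
                        + ":" + status.getGroup() + "  " + status.getPermission());
            }
        }
    }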

Thanks
Sanooj

On Wed, Nov 18, 2015 at 8:26 PM, Gabriel Reid <gabriel.reid@gmail.com>
wrote:

> Re-adding the user list, which I accidentally left off.
>
> On Wed, Nov 18, 2015 at 3:55 PM, Gabriel Reid <gabriel.reid@gmail.com>
> wrote:
> > Yes, I believe that's correct, if you change the umask you make the
> > HFiles readable to all during creation.
> >
> > I believe that the alternate solutions listed on the jira ticket
> > (running the tool as the hbase user or using the alternate HBase
> > coprocessor for loading HFiles) won't have this drawback.
> >
> > - Gabriel
> >
> >
> >> On Wed, Nov 18, 2015 at 3:49 PM, Sanooj Padmakumar <p.sanooj@gmail.com>
> >> wrote:
> >> Thank you Gabriel..
> >>
> >> Does it mean that the generated hfile be read/modified by any user other
> >> than "hbase" user?
> >>
> >> Regards
> >> Sanooj
> >>
> >> On 18 Nov 2015 02:39, "Gabriel Reid" <gabriel.reid@gmail.com> wrote:
> >>>
> >>> Hi Sanooj,
> >>>
> >>> Yes, I think that should do it, or you can pass that config parameter
> >>> as a command line parameter.
> >>>
> >>> - Gabriel
> >>>
> >>> On Tue, Nov 17, 2015 at 8:16 PM, Sanooj Padmakumar <p.sanooj@gmail.com>
> >>> wrote:
> >>> > Hi Gabriel
> >>> >
> >>> > Thank you so much
> >>> >
> >>> > I set the below property and it worked now.. I hope this is the
> >>> > correct thing to do ?
> >>> >
> >>> >  conf.set("fs.permissions.umask-mode", "000");
> >>> >
> >>> >
> >>> > Thanks Again
> >>> >
> >>> > Sanooj
> >>> >
> >>> > On Wed, Nov 18, 2015 at 12:29 AM, Gabriel Reid <gabriel.reid@gmail.com>
> >>> > wrote:
> >>> >>
> >>> >> Hi Sanooj,
> >>> >>
> >>> >> I believe that this is related to the issue described in PHOENIX-976
> >>> >> [1]. In that case, it's not strictly related to Kerberos, but instead
> >>> >> to file permissions (could it be that your dev environment also
> >>> >> doesn't have file permissions turned on?)
> >>> >>
> >>> >> If you look at the comments on that jira ticket, there are a couple of
> >>> >> things that you could try doing to resolve this (running the import
> >>> >> job as the hbase user, or using custom file permissions, or using an
> >>> >> alternate incremental load coprocessor).
> >>> >>
> >>> >> - Gabriel
> >>> >>
> >>> >>
> >>> >> 1. https://issues.apache.org/jira/browse/PHOENIX-976
> >>> >>
> >>> >> On Tue, Nov 17, 2015 at 7:14 PM, Sanooj Padmakumar <p.sanooj@gmail.com>
> >>> >> wrote:
> >>> >> > Hello -
> >>> >> >
> >>> >> > I am using the bulkload of Phoenix on a cluster secured with
> >>> >> > Kerberos. The mapper runs fine, reducer runs fine .. and then the
> >>> >> > counters are printed fine.. finally the LoadIncrementalHFiles step
> >>> >> > fails.. A portion of the log is given below..
> >>> >> >
> >>> >> >
> >>> >> > 15/11/17 09:44:48 INFO mapreduce.LoadIncrementalHFiles: Trying to
> >>> >> > load hfile=hdfs://..........<<masked>>>
> >>> >> > 15/11/17 09:45:56 INFO client.RpcRetryingCaller: Call exception,
> >>> >> > tries=10, retries=35, started=68220 ms ago, cancelled=false,
> >>> >> > msg=row '' on table 'TABLE1' at region=TABLE1,<<<masked>>>>, seqNum=26
> >>> >> > 15/11/17 09:46:16 INFO client.RpcRetryingCaller: Call exception,
> >>> >> > tries=11, retries=35, started=88315 ms ago, cancelled=false,
> >>> >> > msg=row '' on table 'TABLE1' at region=TABLE1,<<<masked>>>>, seqNum=26
> >>> >> >
> >>> >> > Is there any setting I should make in order to make the program
> >>> >> > work on a Kerberos-secured environment?
> >>> >> >
> >>> >> > Please note, our DEV environment doesn't use Kerberos and things
> >>> >> > are working just fine
> >>> >> >
> >>> >> > --
> >>> >> > Thanks in advance,
> >>> >> > Sanooj Padmakumar
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Thanks,
> >>> > Sanooj Padmakumar
>



-- 
Thanks,
Sanooj Padmakumar
