phoenix-user mailing list archives

From "Bulvik, Noam" <Noam.Bul...@teoco.com>
Subject RE: mapreduce.LoadIncrementalHFiles: Trying to load hfile... hang till we set permission on the tmp file
Date Wed, 28 Oct 2015 06:21:21 GMT
Thanks Matt,
Is this a known issue in the CSV Bulk Load Tool? Do we need to open a JIRA so it will be fixed?



From: Matt Kowalczyk [mailto:mattk@cloudability.com]
Sent: Wednesday, October 28, 2015 1:01 AM
To: user@phoenix.apache.org
Subject: Re: mapreduce.LoadIncrementalHFiles: Trying to load hfile... hang till we set permission on the tmp file

There might be a better way, but my fix for this same problem was to modify CsvBulkLoadTool.java to perform the following,

    // Needs: org.apache.hadoop.fs.{FileSystem, RemoteIterator, LocatedFileStatus}
    // and org.apache.hadoop.fs.permission.{FsPermission, FsAction}
    FileSystem fs = FileSystem.get(conf);
    // Recursively list every HFile under the bulk-load output directory.
    RemoteIterator<LocatedFileStatus> ri = fs.listFiles(outputPath, true);
    while (ri.hasNext()) {
        LocatedFileStatus fileStatus = ri.next();
        // Open up the column-family directory containing the HFile...
        LOG.info("chmod a+rwx on {}", fileStatus.getPath().getParent().toString());
        fs.setPermission(fileStatus.getPath().getParent(),
                         new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
        // ...and the HFile itself, so the HBase user can read and move it.
        LOG.info("chmod a+rwx on {}", fileStatus.getPath().toString());
        fs.setPermission(fileStatus.getPath(),
                         new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
    }

right before the call to loader.doBulkLoad(outputPath, htable).
This unfortunately requires that you modify the source. I'd be interested in a solution that doesn't require patching Phoenix.
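
One stopgap that might avoid patching the source entirely (untested sketch): since the load proceeds as soon as the permissions change, the same chmod logic could be run as a standalone tool from a second shell while LoadIncrementalHFiles is stuck retrying. The class name and the output-path argument below are just placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class OpenHFilePermissions {
        public static void main(String[] args) throws Exception {
            // args[0] = bulk-load output directory, e.g. /tmp/<output-dir> (placeholder)
            Path outputPath = new Path(args[0]);
            FsPermission allRwx = new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL);
            FileSystem fs = FileSystem.get(new Configuration());
            // Same traversal as the patch above: open up each HFile and its
            // parent (column-family) directory.
            RemoteIterator<LocatedFileStatus> ri = fs.listFiles(outputPath, true);
            while (ri.hasNext()) {
                LocatedFileStatus fileStatus = ri.next();
                fs.setPermission(fileStatus.getPath().getParent(), allRwx);
                fs.setPermission(fileStatus.getPath(), allRwx);
            }
        }
    }

With the class compiled and on HADOOP_CLASSPATH, something like "hadoop OpenHFilePermissions /tmp/<output-dir>" run while the loader waits should unblock it, same as chmod'ing by hand.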
-Matt

On Tue, Oct 27, 2015 at 1:06 PM, Bulvik, Noam <Noam.Bulvik@teoco.com> wrote:
Hi,
We are running the CSV bulk loader on Phoenix 4.5 with CDH 5.4 and it works fine, but with one problem: the loading task hangs at "mapreduce.LoadIncrementalHFiles: Trying to load hfile..." until we give the directory holding the HFiles (under /tmp on HDFS) write permissions.

We set the umask to 000, but that did not help.

Any idea how this should be fixed?

thanks

Noam

