There might be a better way, but my fix for this same problem was to modify the source to perform the following:

    FileSystem fs = FileSystem.get(conf);
    RemoteIterator<LocatedFileStatus> ri = fs.listFiles(outputPath, true);
    while (ri.hasNext()) {
        LocatedFileStatus fileStatus =;
        // open up the column-family directory holding the HFile
        // ("LOG" here is whatever SLF4J logger the class already has)"chmod a+rwx on {}", fileStatus.getPath().getParent().toString());
                         new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
        // open up the HFile itself"chmod a+rwx on {}", fileStatus.getPath().toString());
        fs.setPermission(fileStatus.getPath(),
                         new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));

right before the call to loader.doBulkLoad(outputPath, htable).

This unfortunately requires that you modify the source. I'd be interested in a solution that doesn't require patching Phoenix.
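If you can't patch the source, something like the sketch below might work as an out-of-band stopgap: run it against the bulk load staging directory (the /tmp path the loader prints) while LoadIncrementalHFiles is still retrying, so the HFiles become readable and the load can proceed. The class name and the way it's invoked are just mine for illustration; it's the same chmod pass done with the plain Hadoop FileSystem API, and I haven't verified it avoids any races on CDH 5.4.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class OpenUpHFilePermissions {
        public static void main(String[] args) throws Exception {
            // args[0] = the bulk load output directory on HDFS, e.g. the /tmp/... staging path
            Path outputPath = new Path(args[0]);
            FileSystem fs = FileSystem.get(new Configuration());
            FsPermission allRwx = new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL);

            // recurse over every HFile and open up both the file and its parent directory
            RemoteIterator<LocatedFileStatus> ri = fs.listFiles(outputPath, true);
            while (ri.hasNext()) {
                LocatedFileStatus fileStatus =;
                fs.setPermission(fileStatus.getPath().getParent(), allRwx);
                fs.setPermission(fileStatus.getPath(), allRwx);
            // also open up the top-level output directory itself
            fs.setPermission(outputPath, allRwx);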


On Tue, Oct 27, 2015 at 1:06 PM, Bulvik, Noam <> wrote:


We are running the CSV bulk loader on Phoenix 4.5 with CDH 5.4 and it works fine, but with one problem: the loading task hangs on "mapreduce.LoadIncrementalHFiles: Trying to load hfile .." until we give the directory holding the HFile (under /tmp on HDFS) write permissions.


We set the umask to 000 but it does not work.
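For reference, this is roughly how we applied it (assuming the client-side Hadoop configuration is what the loader picks up; the same property could equally go in core-site.xml on the machine running the tool):

    // in the Configuration used to launch the bulk load job
    Configuration conf = new Configuration();
    conf.set("fs.permissions.umask-mode", "000");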


Any idea how it should be fixed?





