Hi Ravi,

I see in your output that the final upload of the created HFiles is failing due to the number of HFiles created per region. I also just noticed that you're supplying the hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily config parameter.

Could you post the exact, complete command that you're using to run this import?
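
For reference, a CsvBulkLoadTool run is typically kicked off with something along these lines (the jar name, table name, input path, and ZooKeeper quorum here are just placeholders, not taken from your setup):

    HADOOP_CLASSPATH=$(hbase mapredcp) hadoop jar phoenix-<version>-client.jar \
        org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table MY_TABLE \
        --input /tmp/my_input.csv \
        --zookeeper zk1,zk2,zk3:2181

Seeing your full invocation together with its console output should make it clearer at which step the load is stopping.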

Also, be aware that overriding the max hfiles per region setting is probably not the best way to get around this -- the fact that you've got so many HFiles per region probably indicates that you should have more regions. See this discussion in an earlier thread[1] for more info.
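
If you do recreate the table with more regions, the simplest way in Phoenix is usually to pre-split it in the DDL, either with an explicit SPLIT ON clause or via salting. A rough sketch (the column names and bucket count below are made up and would need to be tuned for your data and cluster):

    CREATE TABLE MY_TABLE (
        ID BIGINT NOT NULL PRIMARY KEY,
        VAL VARCHAR
    ) SALT_BUCKETS = 32;

Salting pre-splits the table into one region per bucket, so the HFiles written by the bulk load get spread across many regions rather than piling up in a few.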

- Gabriel

1. https://lists.apache.org/list.html?user@phoenix.apache.org:lte=3M:CsvBulkLoadTool%20with%20%7E75GB%20file

On Thu, Sep 29, 2016 at 5:16 PM, Ravi Kumar Bommada <bravikumar@juniper.net> wrote:

Hi Gabriel,

 

Please find the logs attached.

 

R’s

Ravi Kumar B

 

From: Gabriel Reid [mailto:gabriel.reid@gmail.com]
Sent: Wednesday, September 28, 2016 5:51 PM
To: user@phoenix.apache.org
Subject: Re: Loading via MapReduce, Not Moving HFiles to HBase

 

Hi Ravi,

 

It looks like those log file entries you posted are from a MapReduce task. Could you post the output of the command that you're using to start the actual job (i.e. the console output of "hadoop jar ...")?

 

- Gabriel

 

On Wed, Sep 28, 2016 at 1:49 PM, Ravi Kumar Bommada <bravikumar@juniper.net> wrote:

Hi All,

 

I’m trying to load data via Phoenix MapReduce, as shown in the screenshot below:

 

 

HFiles are getting created (176 HFiles, each around 300 MB), but after that the files are not being moved into HBase, i.e. when I query HBase I'm not able to see the data. According to the logs below, the data commit is successful.

 

Please suggest if I’m missing any configuration.

 

Provided:

 

Using property: -Dhbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily=1024

 

Last Few Logs:

2016-09-27 07:27:35,845 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]

2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]

2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]

2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.mapred.Merger: Merging 64 intermediate segments out of a total of 127

2016-09-27 07:28:21,238 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 64 segments left of total size: -40111574372 bytes

2016-09-27 07:30:24,933 INFO [main] org.apache.hadoop.mapred.Merger: Merging 179 sorted segments

2016-09-27 07:30:24,965 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 4736 bytes

2016-09-27 07:30:24,967 INFO [main] org.apache.hadoop.mapred.Merger: Merging 179 sorted segments

2016-09-27 07:30:24,999 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 4736 bytes

2016-09-27 07:30:25,000 INFO [main] org.apache.hadoop.mapred.Merger: Merging 179 sorted segments

2016-09-27 07:30:25,033 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 4736 bytes

2016-09-27 07:30:25,035 INFO [main] org.apache.hadoop.mapred.Merger: Merging 179 sorted segments

2016-09-27 07:30:25,068 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 4736 bytes

2016-09-27 07:30:25,723 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1467713708066_29809_m_000016_0 is done. And is in the process of committing

2016-09-27 07:30:25,788 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1467713708066_29809_m_000016_0' done.

 

 

Regards,

 

Ravi Kumar B