phoenix-user mailing list archives

From Gabriel Reid <>
Subject Re: Using Phoenix Bulk Upload CSV to upload 200GB data
Date Sat, 12 Sep 2015 18:16:44 GMT
Around 1400 mappers sounds about normal to me -- I assume your block
size on HDFS is 128 MB, which works out to roughly 1500 mappers for
200 GB of input data.
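The arithmetic behind that estimate can be sketched as follows (assuming the default of one map task per 128 MB HDFS block; the exact count depends on whether sizes are quoted in decimal or binary units):

```python
import math

GB = 1000 ** 3          # decimal units, as disk/input sizes are usually quoted
MB = 1000 ** 2

input_size = 200 * GB   # total input size from the original post
block_size = 128 * MB   # assumed HDFS block size

# With the default input format, one split (and hence one map task) per block.
mappers = math.ceil(input_size / block_size)
print(mappers)  # → 1563, in the same ballpark as the ~1400 observed
```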

To add to what Krishna asked, can you be a bit more specific on what
you're seeing (in log files or elsewhere) which leads you to believe
the data nodes are running out of capacity? Are map tasks failing?

If this is indeed a capacity issue, one thing you should ensure is
that map output compression is enabled. This doc from Cloudera explains
this (and the same information applies whether you're using CDH or
not) -
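As a sketch, map output compression can be enabled per-job with the standard Hadoop `-D` properties when launching the bulk load (the jar name, table, and input path below are placeholders; SnappyCodec also requires the native Snappy libraries to be installed on the cluster):

```shell
# Hypothetical invocation -- adjust jar, table, and input path to your setup.
# mapreduce.map.output.compress / .compress.codec are the standard Hadoop
# properties controlling compression of intermediate map output.
hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  --table MY_TABLE \
  --input /data/input.csv
```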

In any case, apart from that there isn't any basic setting that
you're likely missing, so any additional information you can supply
about what you're running into would be useful.

- Gabriel

On Sat, Sep 12, 2015 at 2:17 AM, Krishna <> wrote:
> 1400 mappers on 9 nodes is about 155 mappers per datanode which sounds high
> to me. There are very few specifics in your mail. Are you using YARN? Can
> you provide details like table structure, # of rows & columns, etc. Do you
> have an error stack?
> On Friday, September 11, 2015, Gaurav Kanade <>
> wrote:
>> Hi All
>> I am new to Apache Phoenix (and relatively new to MR in general) but I am
>> trying a bulk insert of a 200GB tab-separated file into an HBase table. This
>> seems to start off fine and kicks off about ~1400 mappers and 9 reducers (I
>> have 9 data nodes in my setup).
>> At some point I seem to be running into problems with this process as it
>> seems the data nodes run out of capacity (from what I can see my data nodes
>> have 400GB local space). It does seem that certain reducers eat up most of
>> the capacity on these - thus slowing down the process to a crawl and
>> ultimately leading to Node Managers complaining that Node Health is bad
>> (log-dirs and local-dirs are bad)
>> Is there some inherent setting I am missing that I need to set up for the
>> particular job?
>> Any pointers would be appreciated
>> Thanks
>> --
>> Gaurav Kanade,
>> Software Engineer
>> Big Data
>> Cloud and Enterprise Division
>> Microsoft
