Another option is to create HFiles using the CSV bulk loader on one cluster, transfer them to the backup cluster, and run LoadIncrementalHFiles(...) there.
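A rough sketch of that flow, assuming an HBase/Phoenix setup of that era (table name, paths, jar version, and cluster addresses below are placeholders; also note that depending on the Phoenix version, CsvBulkLoadTool may complete the load on the local cluster itself after writing the HFiles, so verify the files actually remain under --output before copying):

```shell
# Generate HFiles on the primary cluster with Phoenix's CsvBulkLoadTool.
# --table, --input, --output, and the client jar version are placeholders.
hadoop jar phoenix-4.5.0-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_TABLE \
  --input /data/input.csv \
  --output /tmp/hfiles

# Copy the generated HFiles to the backup cluster.
hadoop distcp /tmp/hfiles hdfs://backup-cluster:8020/tmp/hfiles

# On the backup cluster, load the HFiles into the target table.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /tmp/hfiles MY_TABLE
```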

On Tue, Sep 1, 2015 at 11:53 AM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
Hi Gaurav,

Bulk load bypasses the WAL, that's correct. It's true for Phoenix, and it's true for HBase (outside of Phoenix).

If you have replication activated, you will have to bulk load the data into both clusters. Transfer your CSV files to the other side too and bulk load from there.
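Since bulk-loaded cells never hit the WAL and therefore never reach the replication stream, the idea above is simply to repeat the same load on each cluster. A minimal sketch (cluster addresses, paths, and the table name are placeholders, not from the thread):

```shell
# Copy the source CSVs from the primary to the backup cluster.
hadoop distcp hdfs://primary:8020/data/input.csv \
              hdfs://backup:8020/data/input.csv

# Then run the same Phoenix bulk load job on the backup cluster,
# exactly as it was run on the primary.
hadoop jar phoenix-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_TABLE \
  --input /data/input.csv
```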

JM

2015-09-01 14:51 GMT-04:00 Gaurav Agarwal <gaurav130403@gmail.com>:
Hello

We are using the Phoenix MapReduce CSV uploader to load data into HBase. From the documentation on the Phoenix site, I understand it only creates HFiles and writes no WAL entries. Please confirm whether this understanding is correct.

We have to use HBase replication across clusters in a master-master scenario. Will replication work in that case, or do we need to use CopyTable to replicate?

thanks