Thank you all for your answers, definitely helpful!

I ended up using Pig for tables with static columns and HBase snapshots for tables with dynamic columns to copy the data to a remote cluster. I tried to use Pig alone, but unfortunately it does not support Phoenix dynamic columns at the moment; I hope this feature will be added in the future.


On Fri, Jul 15, 2016 at 11:10 PM, Jonathan Leech <> wrote:
If the table is small, you can export it to a flat file, copy the file over, then import it, all using the Phoenix command-line utilities.
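A rough sketch of that flat-file round trip, assuming the stock Phoenix utilities (sqlline.py for the export, psql.py for the CSV import); MY_TABLE, the host names, and the ZooKeeper quorum strings are all placeholders:

```shell
# Export on the source cluster: dump the table to CSV via sqlline.
# Note: !record mirrors console output to the file, so you may need to
# trim prompt/header lines from the result afterwards.
sqlline.py source-zk <<'EOF'
!outputformat csv
!record /tmp/my_table.csv
SELECT * FROM MY_TABLE;
!record
!quit
EOF

# Copy the file to a host on the target cluster (plain scp here).
scp /tmp/my_table.csv target-host:/tmp/

# Import on the target cluster with psql.py; the table must already
# exist there with the same schema. Run on target-host:
#   psql.py -t MY_TABLE target-zk /tmp/my_table.csv
```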

If there is connectivity between the clusters and the schema is identical, then for small to mid-size tables you can set up HBase replication and run UPSERT INTO x SELECT * FROM x. You can also replicate secondary indexes.
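Sketched out, that replication-plus-upsert approach could look like the following. Table X, the quorum names, and the peer id are placeholders; the column family '0' is Phoenix's default name for the first column family, yours may differ:

```shell
# In the source cluster's hbase shell: register the target cluster as a
# replication peer and enable replication on the table's column family.
hbase shell <<'EOF'
add_peer '1', CLUSTER_KEY => "target-zk:2181:/hbase"
disable 'X'
alter 'X', {NAME => '0', REPLICATION_SCOPE => '1'}
enable 'X'
EOF

# Replication only ships new WAL edits, so rewrite every row once to
# push the pre-existing data across the wire:
sqlline.py source-zk <<'EOF'
UPSERT INTO X SELECT * FROM X;
!quit
EOF
```

From then on, new writes to X on the source replicate to the target on their own.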

For larger tables, use HBase snapshots; again, you need connectivity between the clusters.
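For reference, a snapshot-based copy might look like this (MY_TABLE, the snapshot name, and the NameNode address are placeholders):

```shell
# On the source cluster: take a snapshot of the backing HBase table.
hbase shell <<'EOF'
snapshot 'MY_TABLE', 'my_table_snap'
EOF

# Ship the snapshot's HFiles to the target cluster's HDFS.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_table_snap \
  -copy-to hdfs://target-nn:8020/hbase \
  -mappers 16

# On the target cluster: materialize the snapshot as a table.
hbase shell <<'EOF'
clone_snapshot 'my_table_snap', 'MY_TABLE'
EOF
```

Since the snapshot copies raw HFiles, the Phoenix schema (CREATE TABLE) must already exist or be created on the target so Phoenix can see the data.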

You can use a combination of the above to seed one cluster from another and keep the data in sync continuously, in either one direction or both.

Hope this helps,

On Jul 15, 2016, at 11:27 AM, Pariksheet Barapatre <> wrote:

You can use a Pig script as well. Just four lines of code and you're done, as Phoenix has both a Pig loader and a storage handler.
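Roughly those four lines, wrapped in a small driver script. MY_TABLE and the quorum names are placeholders; PhoenixHBaseLoader and PhoenixHBaseStorage are the Pig classes Phoenix ships:

```shell
cat > copy_table.pig <<'EOF'
-- Load all rows from the table on the source cluster.
rows = LOAD 'hbase://table/MY_TABLE'
       USING org.apache.phoenix.pig.PhoenixHBaseLoader('source-zk:2181');
-- Upsert them into the same table on the target cluster.
STORE rows INTO 'hbase://MY_TABLE'
      USING org.apache.phoenix.pig.PhoenixHBaseStorage('target-zk', '-batchSize 1000');
EOF
pig -x mapreduce copy_table.pig
```

The table must already exist on the target, since the storage handler upserts through Phoenix rather than creating the schema.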


On 15 July 2016 at 22:26, Li Gao <> wrote:
You can use Spark with access to both clusters. DistCp would also work.
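As a sketch, the DistCp variant copies the table's files at the HDFS level. Paths are placeholders, and note that copying live HFiles directly is only consistent while writes are paused; the copied files also still need to be registered on the target (e.g. via bulk load), which is why snapshot export is often the safer file-level route:

```shell
# Copy the backing HFiles of MY_TABLE to a staging directory on the
# target cluster's HDFS; adjust the HBase root dir and namespace to
# your setup. Only safe while the table is not taking writes.
hadoop distcp \
  hdfs://source-nn:8020/hbase/data/default/MY_TABLE \
  hdfs://target-nn:8020/staging/MY_TABLE
```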

On Fri, Jul 15, 2016 at 9:37 AM, Otmane K. <> wrote:

What is the best way to copy a Phoenix table from one cluster to another?

Thank you,


Otmane E. Kouaihi

Disce quasi semper victurus, vive quasi cras moriturus......
Learn as if you were going to live forever, live as if you were going to die tomorrow.....