phoenix-user mailing list archives

From Kristoffer Sjögren <sto...@gmail.com>
Subject Re: Copy table between hbase clusters
Date Fri, 11 Dec 2015 13:29:33 GMT
My plan is to try using asynchbase to read the raw data and then upsert
it using Phoenix SQL.

However, when I read the old table, the data types for the row key
don't add up.

CREATE TABLE T1 (C1 INTEGER NOT NULL, C2 INTEGER NOT NULL, C3 BIGINT
NOT NULL, C4 BIGINT NOT NULL, C5 CHAR(2) NOT NULL, V BIGINT CONSTRAINT
PK PRIMARY KEY ( C1, C2, C3, C4, C5 ))

That's 4 + 4 + 8 + 8 + 2 = 26 bytes for the key. But the actual key
that I read from HBase is only 23 bytes.

[0, -128, 0, 0, 0, -44, 4, 123, -32, -128, 0, 0, 10, -128, 0, 0, 0, 0,
0, 0, 0, 32, 32]
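For reference, the mismatch can be checked mechanically. A minimal sketch, assuming the fixed widths listed on the Phoenix site (INTEGER = 4 bytes, BIGINT = 8 bytes, CHAR(n) = n bytes):

```python
# Assumed fixed widths per the Phoenix data-type docs; whether they
# match the 2.2.3 on-disk format is exactly the open question here.
WIDTHS = {"INTEGER": 4, "BIGINT": 8, "CHAR(2)": 2}

# Primary key columns of T1, in declaration order (C1..C5).
pk = ["INTEGER", "INTEGER", "BIGINT", "BIGINT", "CHAR(2)"]
expected = sum(WIDTHS[t] for t in pk)

# The raw key as read from HBase (Java signed bytes).
key = [0, -128, 0, 0, 0, -44, 4, 123, -32, -128, 0, 0, 10,
       -128, 0, 0, 0, 0, 0, 0, 0, 32, 32]

print(expected, len(key))  # 26 vs 23: three bytes short
```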

Maybe the data type definitions described on the Phoenix site have
changed since version 2.2.3? Or maybe some data type is variable in
size?
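One thing that helps when eyeballing those raw bytes: Phoenix serializes fixed-width signed integers big-endian with the sign bit flipped, so that unsigned byte-order comparison matches numeric order (which is why the key is full of -128s). A hedged decoder sketch, assuming that encoding also holds for 2.2.3:

```python
def decode_signed(raw):
    """Decode a Phoenix fixed-width signed integer (INTEGER: 4 bytes,
    BIGINT: 8 bytes) from a list of Java-style signed bytes."""
    b = bytearray(x & 0xFF for x in raw)  # signed bytes -> unsigned 0..255
    b[0] ^= 0x80                          # undo the sign-bit flip
    return int.from_bytes(b, "big", signed=True)

# [-128, 0, 0, 0] is how INTEGER 0 is encoded under this scheme:
print(decode_signed([-128, 0, 0, 0]))     # -> 0
# A 4-byte slice of the key above, decoded as an INTEGER for illustration
# (no claim about which column, if any, it belongs to):
print(decode_signed([-44, 4, 123, -32]))  # -> 1409580000
```

Decoding candidate slices this way may make it easier to spot where the expected 26-byte layout and the actual 23-byte key diverge.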


On Thu, Dec 10, 2015 at 4:49 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> Hi
>
> We're in the process of upgrading from Phoenix 2.2.3 / HBase 0.96 to
> Phoenix 4.4.0 / HBase 1.1.2 and wanted to know the simplest/easiest
> way to copy data from the old table to the new one.
>
> The tables contain only a few hundred million rows, so it's OK to
> export locally and then upsert.
>
> Cheers,
> -Kristoffer
