To be specific, can I use the following methods?

TWO_BYTE_QUALIFIERS.decode()
TWO_BYTE_QUALIFIERS.encode()

Are there any additional transformations (e.g. storage formats) to be considered while reading from HBase?
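For what it's worth, the two-byte scheme appears to write the column's qualifier counter as a big-endian short with the sign bit flipped, which is why the first non-PK column (counter value 11) shows up as \x80\x0B in the scan output below. A minimal stand-alone sketch of that encoding, without pulling in the Phoenix jars -- treat the exact semantics as an assumption and verify against QualifierEncodingScheme in your Phoenix version:

```java
import java.util.Arrays;

// Sketch of Phoenix's TWO_BYTE_QUALIFIERS scheme (assumption: the qualifier
// counter is stored as a big-endian short with the sign bit flipped, so the
// bytes sort correctly when compared as unsigned values).
public class TwoByteQualifierSketch {

    // Encode a qualifier counter (e.g. 11 for the first non-PK column)
    // into the two-byte HBase column qualifier.
    public static byte[] encode(int counter) {
        int flipped = counter ^ 0x8000;               // flip the sign bit
        return new byte[] { (byte) (flipped >>> 8), (byte) flipped };
    }

    // Decode a two-byte HBase qualifier back to the counter value.
    public static int decode(byte[] qualifier) {
        int raw = ((qualifier[0] & 0xFF) << 8) | (qualifier[1] & 0xFF);
        return raw ^ 0x8000;                          // undo the sign-bit flip
    }

    public static void main(String[] args) {
        byte[] q = encode(11);
        System.out.println(Arrays.toString(q));       // [-128, 11], i.e. 0x80 0x0B
        System.out.println(decode(q));                // 11
    }
}
```

If this matches what Phoenix's own TWO_BYTE_QUALIFIERS.encode()/decode() produce on your cluster, the same bytes can be used directly as HBase column qualifiers in a Get or Scan.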

Thanks and regards.


On Tue, 8 Jan 2019 at 11:46, Anil <anilklce@gmail.com> wrote:
Hi Thomas,

I have checked the HBase SYSTEM.CATALOG table, and the COLUMN_QUALIFIER value is not encoded. From the internal code, I understood that the default encoding scheme used for columns is QualifierEncodingScheme.TWO_BYTE_QUALIFIERS.

Can I use this encoding to get the values from HBase? Thanks.

Regards,
Anil

On Tue, 8 Jan 2019 at 11:24, Thomas D'Silva <tdsilva@salesforce.com> wrote:
There isn't an existing utility that does that. You would have to look up the COLUMN_QUALIFIER for the columns you are interested in from SYSTEM.CATALOG and then use it to create a Scan.
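For reference, the lookup Thomas describes might look like this against SYSTEM.CATALOG (a sketch: COLUMN_QUALIFIER holds the raw encoded qualifier bytes and is null for primary-key columns, so those are filtered out):

```sql
-- Look up the encoded HBase qualifier bytes for each non-PK column
SELECT COLUMN_NAME, COLUMN_FAMILY, COLUMN_QUALIFIER
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'TST_TEMP'
  AND COLUMN_QUALIFIER IS NOT NULL;
```

The returned bytes can then be passed to Scan.addColumn() with the column family (here "0") in the HBase job.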

On Mon, Jan 7, 2019 at 9:22 PM Anil <anilklce@gmail.com> wrote:
Hi Team,

Is there any utility to read, using the HBase APIs, data in tables that were created by Phoenix with column name encoding?

The idea is to keep all the performance and disk usage improvements achieved with the Phoenix column name encoding feature while using our existing HBase jobs for our data analysis.

Thanks,
Anil

On Tue, 11 Dec 2018 at 14:02, Anil <anilklce@gmail.com> wrote:
Thanks.

On Tue, 11 Dec 2018 at 11:51, Jaanai Zhang <cloud.poster@gmail.com> wrote:
The difference is because encoded column names are used, which is supported since version 4.10 (see PHOENIX-1598).
You can set the COLUMN_ENCODED_BYTES property in the CREATE TABLE SQL to keep the original column names, for example:

create table test (
    id varchar primary key,
    col varchar
) COLUMN_ENCODED_BYTES = 0;



----------------------------------------
   Jaanai Zhang
   Best regards!



Anil <anilklce@gmail.com> wrote on Tue, Dec 11, 2018 at 1:24 PM:
HI,

We have upgraded from Phoenix 4.7 to Phoenix-4.11.0-cdh5.11.2.

Problem - When a table is created in Phoenix, the underlying HBase column names and the Phoenix column names are different. Tables created in version 4.7 look fine. For example:

CREATE TABLE TST_TEMP (TID VARCHAR PRIMARY KEY ,PRI VARCHAR,SFLG VARCHAR,PFLG VARCHAR,SOLTO VARCHAR,BILTO VARCHAR) COMPRESSION = 'SNAPPY';

0: jdbc:phoenix:dq-13.labs.> select TID,PRI,SFLG from TST_TEMP limit 2;
+-------------+------------+-----------+
|   TID       |    PRI     |    SFLG   |
+-------------+------------+-----------+
| 0060189122  | 0.00       |           |
| 0060298478  | 13390.26   |           |
+-------------+------------+-----------+


hbase(main):011:0> scan 'TST_TEMP', {LIMIT => 2}
ROW                                      COLUMN+CELL
 0060189122                              column=0:\x00\x00\x00\x00, timestamp=1544296959236, value=x
 0060189122                              column=0:\x80\x0B, timestamp=1544296959236, value=0.00
 0060298478                              column=0:\x00\x00\x00\x00, timestamp=1544296959236, value=x
 0060298478                              column=0:\x80\x0B, timestamp=1544296959236, value=13390.26


The HBase column names are completely different from the Phoenix column names. This change is observed only after the upgrade: all existing tables created in earlier versions look fine, and ALTER statements on existing tables also look fine.

Is there any workaround to avoid this difference? We could not run HBase MapReduce jobs on HBase tables created by Phoenix. Thanks.

Thanks