Hi,
You can override the behavior by setting the property phoenix.schema.dropMetaData to FALSE. For further reading: http://phoenix.incubator.apache.org/language/index.html#drop_table.
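For example, a minimal hbase-site.xml entry (a sketch; the property name comes from the docs linked above, but whether it must be set on the client or the server side is an assumption to verify there):

  <property>
    <name>phoenix.schema.dropMetaData</name>
    <value>false</value>
  </property>

With this in place, DROP TABLE removes only the Phoenix metadata and leaves the underlying HBase table intact.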
Regards
Ravi

On Thu, Apr 10, 2014 at 1:06 AM, Kleiton Silva <kleiton.contato@gmail.com> wrote:
Another question: in this case, is it possible to drop the table without deleting the table in HBase?
Thank you,
Atte.,

On Wed, Apr 9, 2014 at 11:57 AM, Kleiton Silva <kleiton.contato@gmail.com> wrote:

It works. Result:

PK      a        b        c
row1    value1
row2             value2
row3                      value3

Thank you very much Jerry.
Kleiton

On Wed, Apr 9, 2014 at 10:12 AM, Jerry Lam <chilinglam@gmail.com> wrote:
You can try:

CREATE TABLE "test" ( pk VARCHAR PRIMARY KEY, "cf"."a" VARCHAR, "cf"."b" VARCHAR, "cf"."c" VARCHAR );

On Wed, Apr 9, 2014 at 12:40 PM, Kleiton Silva <kleiton.contato@gmail.com> wrote:
Firas,
I have the following table in HBase:

hbase(main):032:0> scan 'test'
ROW                   COLUMN+CELL
 row1                 column=cf:a, timestamp=1397068853016, value=value1
 row2                 column=cf:b, timestamp=1397068857098, value=value2
 row3                 column=cf:c, timestamp=1397068861755, value=value3

When I try to create the table using Phoenix, I use the command:

CREATE TABLE "test" ( pk VARCHAR PRIMARY KEY, "cf"."val" VARCHAR );

The result is:

PK      val
row1    <null>
row2    <null>
row3    <null>

Could you help me?
Thank you.
Kleiton

On Tue, Apr 8, 2014 at 6:19 AM, Firas Khasawneh <Firas.Khasawneh@sas.com> wrote:
Thanks Pankaj!
From: Pankaj kr [mailto:pankaj.kr@huawei.com]
Sent: Tuesday, April 08, 2014 12:50 AM
Hi Firas,
Phoenix doesn’t work with Accumulo.
Kindly check the link below for more details:
http://phoenix.incubator.apache.org/
Cheers,
Pankaj
From: Firas Khasawneh [mailto:Firas.Khasawneh@sas.com]
Sent: 07 April 2014 20:46
To: user@phoenix.incubator.apache.org
Subject: RE: Basic mapping to HBase table
Hi all,
Does phoenix work only with HBase or does it also work with Accumulo?
Thanks,
Firas
From: Pankaj kr [mailto:pankaj.kr@huawei.com]
Sent: Monday, April 07, 2014 9:15 AM
To: user@phoenix.incubator.apache.org
Subject: RE: Basic mapping to HBase table
Hi Daniel,
You mapped the HBase table using the below statement,
CREATE VIEW "t1" ( pk VARCHAR PRIMARY KEY, "f1".val VARCHAR );
By default, Phoenix upper-cases unquoted identifiers, so here the second column of the view is mapped to the qualifier "f1:VAL".
But you inserted the records in HBase under "f1:val" instead of "f1:VAL", as:
r1 column=f1:val, timestamp=1396558762590, value=a
So a NULL value is displayed in the VAL column on the Phoenix side.
You can map the column as "f1"."val" to resolve this, or insert the records under "f1:VAL".
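For example, either of the following should work (a sketch based on Daniel's statements below; the table, family, and qualifier names are his):

CREATE VIEW "t1" ( pk VARCHAR PRIMARY KEY, "f1"."val" VARCHAR );
-- quoting "val" stops Phoenix from upper-casing it, so the view reads the existing f1:val cells

or, keeping the original view definition, write the data under the upper-cased qualifier instead:

hbase> put 't1', 'r1', 'f1:VAL', 'a'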
Cheers,
Pankaj
From: Daniel Rodriguez [mailto:df.rodriguez143@gmail.com]
Sent: 04 April 2014 06:18
To: user
Subject: Basic mapping to HBase table
Hi all,
I spent a couple of hours today trying Phoenix for the first time; it looks amazing.
My final objective is to run SQL on a big HBase table that has a composite key. I decided to start slow: I was able to create a table in Phoenix, upsert values, and see them in HBase, but I am not able to do the opposite: map (using a view) values from an existing HBase table to a Phoenix table; I always get "null" values.
Here is a basic example copied from the docs:
HBASE:
> create 't1', {NAME => 'f1', VERSIONS => 5}
PHOENIX:
> CREATE VIEW "t1" ( pk VARCHAR PRIMARY KEY, "f1".val VARCHAR );
> select * from "t1";
+------------+------------+
| PK | VAL |
+------------+------------+
+------------+------------+
Works fine since there is no data.
I add data to HBase:
> put 't1','r1','f1','a'
> scan 't1'
ROW COLUMN+CELL
r1 column=f1:, timestamp=1396558806334, value=a
But if I try to select from Phoenix, I get only null values:
> select * from "t1";
+------------+------------+
| PK | VAL |
+------------+------------+
| r1 | null |
+------------+------------+
I also tried saving it in a specific column in the column family:
> scan 't1'
ROW COLUMN+CELL
r1 column=f1:, timestamp=1396558806334, value=a
r1 column=f1:val, timestamp=1396558762590, value=a
I also tried changing from VARCHAR to INTEGER and inserting numbers, but I got the same result in both cases.
I am using Phoenix 2.2.0 on EMR.
Any help you can give me is appreciated.
Thanks,
Daniel