phoenix-user mailing list archives

From Kristoffer Sjögren <sto...@gmail.com>
Subject Re: Migrate data between 2.2 and 4.0
Date Mon, 30 Jun 2014 08:34:28 GMT
Maybe this is a big no-no, but if I try to update COLUMN_FAMILY in
system.catalog from '0' to '_0' I get an exception.

upsert into system.catalog (TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY)
select TABLE_NAME, COLUMN_NAME, '_0' from system.catalog
where TABLE_NAME = 'T' AND COLUMN_NAME = 'V';

java.lang.ArrayIndexOutOfBoundsException: 6
    at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:463)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:408)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:399)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:224)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:919)
    at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
    at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
    at sqlline.SqlLine.dispatch(SqlLine.java:821)
    at sqlline.SqlLine.begin(SqlLine.java:699)
    at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
    at sqlline.SqlLine.main(SqlLine.java:424)



On Sun, Jun 29, 2014 at 10:27 AM, Kristoffer Sjögren <stoffe@gmail.com> wrote:

> Thanks, that's good to know.
>
>
> On Sun, Jun 29, 2014 at 10:20 AM, James Taylor <jamestaylor@apache.org> wrote:
>
>> FWIW, the upgrade script does not touch your data - it only modifies
>> the metadata in your SYSTEM.CATALOG table.
>>
>> On Sun, Jun 29, 2014 at 10:18 AM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
>> > Thanks James for the clarification. I haven't tried the upgrade
>> > procedure yet. I just wanted to migrate our existing data first to
>> > CDH5 and try out a few things on HBase 0.96.
>> >
>> >
>> > On Sun, Jun 29, 2014 at 9:50 AM, James Taylor <jamestaylor@apache.org> wrote:
>> >>
>> >> The default column family (i.e. the name of the column family used for
>> >> your table when one is not explicitly specified) was changed from _0
>> >> to 0 between 2.2 and 3.0/4.0. You can override this in your CREATE
>> >> TABLE statement through the DEFAULT_COLUMN_FAMILY property. The
>> >> upgrade script modifies this property dynamically for any table being
>> >> upgraded.
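>> >>
>> >> For example, a table created along these lines keeps the old 2.2
>> >> layout (a minimal sketch; the table and column names are just
>> >> placeholders):
>> >>
>> >> CREATE TABLE T (K VARCHAR PRIMARY KEY, V VARCHAR)
>> >>     DEFAULT_COLUMN_FAMILY='_0';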
>> >>
>> >> Was the upgrade script not working for you?
>> >>
>> >> Thanks,
>> >> James
>> >>
>> >> On Fri, Jun 27, 2014 at 6:44 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
>> >> > After copying the data from column family _0 to column family 0,
>> >> > Phoenix is able to read the data (didn't try the catalog trick).
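>> >> >
>> >> > For reference, a sketch of how such a copy could be done with
>> >> > HBase's CopyTable and its family-renaming option (untested in this
>> >> > exact form; TABLE is a placeholder name):
>> >> >
>> >> > $ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --families=_0:0 TABLE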
>> >> >
>> >> > I suppose this is the way Phoenix does the upgrade internally as
>> >> > well (moving data between column families)?
>> >> >
>> >> > On Wed, Jun 25, 2014 at 9:48 PM, Jeffrey Zhong <jzhong@hortonworks.com> wrote:
>> >> >>
>> >> >> No, you have to create a column family named 0 and copy the data
>> >> >> from column family "_0" to it, or (personally I didn't try the
>> >> >> following) you might be able to change the metadata in
>> >> >> system.catalog to use column family "_0" instead.
>> >> >>
>> >> >> For a normal upgrade, you should follow the instructions at
>> >> >> http://phoenix.apache.org/upgrade_from_2_2.html
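>> >> >>
>> >> >> A minimal sketch of the first step in the hbase shell (assuming
>> >> >> the destination table is named TABLE, as in the error below):
>> >> >>
>> >> >> hbase> alter 'TABLE', {NAME => '0'}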
>> >> >>
>> >> >> On 6/25/14 7:11 AM, "Kristoffer Sjögren" <stoffe@gmail.com> wrote:
>> >> >>
>> >> >>>Yes, the import worked after creating the column family, and I can
>> >> >>>see all the rows when doing scans.
>> >> >>>
>> >> >>>But I got nothing when using the Phoenix 4.0 client, so after
>> >> >>>comparing the old and new tables I saw that 4.0 tables have column
>> >> >>>family name 0 instead of _0.
>> >> >>>
>> >> >>>Now as far as I know there is no way to rename a column family,
>> >> >>>right? So I can't simply remove the 0 column family and rename _0
>> >> >>>to 0, right?
>> >> >>>
>> >> >>>On Wed, Jun 25, 2014 at 3:08 AM, Jeffrey Zhong <jzhong@hortonworks.com> wrote:
>> >> >>>>
>> >> >>>>
>> >> >>>> You can try to use the hbase shell to manually add the "_0"
>> >> >>>> column family to your destination hbase table. Phoenix 4.0 from
>> >> >>>> Apache can't work on HBase 0.96. You can check the discussion in
>> >> >>>> https://issues.apache.org/jira/browse/PHOENIX-848 to see if your
>> >> >>>> HBase is good for Phoenix 4.0.
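>> >> >>>>
>> >> >>>> For example, assuming the destination table is named TABLE:
>> >> >>>>
>> >> >>>> hbase> alter 'TABLE', {NAME => '_0'}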
>> >> >>>>
>> >> >>>> Thanks,
>> >> >>>> -Jeffrey
>> >> >>>>
>> >> >>>> On 6/24/14 5:32 AM, "Kristoffer Sjögren" <stoffe@gmail.com> wrote:
>> >> >>>>
>> >> >>>>>Hi
>> >> >>>>>
>> >> >>>>>We're currently running Phoenix 2.2 on HBase 0.94 (CDH 4.4) and
>> >> >>>>>slowly preparing to move to Phoenix 4 and HBase 0.96 (CDH 5).
>> >> >>>>>
>> >> >>>>>For my first tests I wanted to simply copy the data from 0.94 to
>> >> >>>>>0.96, which works fine for regular HBase tables using the
>> >> >>>>>following commands:
>> >> >>>>>
>> >> >>>>>$ hbase org.apache.hadoop.hbase.mapreduce.Export table /tmp/table
>> >> >>>>>$ hadoop distcp hftp://hbase94:50070/tmp/table hdfs://hbase96/tmp/table
>> >> >>>>>$ hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import table /tmp/table
>> >> >>>>>
>> >> >>>>>This approach fails on the import for Phoenix tables though (see
>> >> >>>>>below), where I create an identical table in 0.96 using the
>> >> >>>>>Phoenix 4.0 sqlline and then run the commands mentioned above. As
>> >> >>>>>I understand it, the _0 column family was used to allow empty
>> >> >>>>>HBase rows.
>> >> >>>>>
>> >> >>>>>Are there any tricks that would allow copying data between these
>> >> >>>>>two installations?
>> >> >>>>>
>> >> >>>>>Cheers,
>> >> >>>>>-Kristoffer
>> >> >>>>>
>> >> >>>>>
>> >> >>>>>2014-06-18 13:31:09,633 INFO  [main] mapreduce.Job: Task Id : attempt_1403015236309_0015_m_000004_1, Status : FAILED
>> >> >>>>>Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 6900 actions:
>> >> >>>>>org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family _0 does not exist in region
>> >> >>>>>TABLE,\x1F\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1403090967713.51a3eefa9b92568a87223aff7878cdcf.
>> >> >>>>>in table 'TABLE', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|',
>> >> >>>>>coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
>> >> >>>>>coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
>> >> >>>>>coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
>> >> >>>>>coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|1073741823|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>> >> >>>>>{NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'true', DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
>> >> >>>>>at org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4056)
>> >> >>>>>at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3361)
>> >> >>>>>at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3265)
>> >> >>>>>at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26935)
>> >> >>>>>at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
>> >> >>>>>at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
>> >> >>>>>: 6900 times,
>> >> >>>>>at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
>> >> >>>>>at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
>> >> >>>>>at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:882)
>> >> >>>>>at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:940)
>> >> >>>>>at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:903)
>> >> >>>>>at org.apache.hadoop.hbase.client.HTable.put(HTable.java:864)
>> >> >>>>>at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
>> >> >>>>>at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:87)
>> >> >>>>>at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
>> >> >>>>>at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>> >> >>>>>at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>> >> >>>>>at org.apache.hadoop.hbase.mapreduce.Import$Importer.writeResult(Import.java:167)
>> >> >>>>>at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:136)
>> >> >>>>>at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:118)
>> >> >>>>>at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> >> >>>>>at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> >> >>>>>at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> >> >>>>>at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> >> >>>>>at java.security.AccessController.doPrivileged(Native Method)
>> >> >>>>>at javax.security.auth.Subject.doAs(Subject.java:422)
>> >> >>>>>at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> >> >>>>>at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)