phoenix-user mailing list archives

From Kristoffer Sjögren <sto...@gmail.com>
Subject Re: Migrate data between 2.2 and 4.0
Date Wed, 25 Jun 2014 14:11:29 GMT
Yes, the import worked after creating the column family, and I can see
all the rows when doing scans.

But I got nothing when using the Phoenix 4.0 client, so after comparing
the old and new tables I saw that the 4.0 tables have the column family
name 0 instead of _0.

Now, as far as I know there is no way to rename a column family in
HBase, right? So I can't simply remove the 0 family and rename _0 to 0,
right?
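
One possible workaround (just a sketch, I haven't verified it on 0.96)
might be to let CopyTable rewrite the family while copying, since its
--families option accepts sourceFamily:destFamily pairs. Something like
the following, where TABLE_COPY is only a placeholder target name:

$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --families=_0:0 --new.name=TABLE_COPY TABLE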



On Wed, Jun 25, 2014 at 3:08 AM, Jeffrey Zhong <jzhong@hortonworks.com> wrote:
>
>
> You can try using the hbase shell to manually add the "_0" column family
> to your destination HBase table. Phoenix 4.0 from Apache can't work on
> HBase 0.96. You can check the discussion in
> https://issues.apache.org/jira/browse/PHOENIX-848 to see whether your
> HBase version will work with Phoenix 4.0.
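>
> A minimal sketch of that (substitute your table name for 'TABLE';
> depending on your cluster settings you may need to disable the table
> first):
>
>   hbase> disable 'TABLE'
>   hbase> alter 'TABLE', {NAME => '_0'}
>   hbase> enable 'TABLE'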
>
> Thanks,
> -Jeffrey
>
> On 6/24/14 5:32 AM, "Kristoffer Sjögren" <stoffe@gmail.com> wrote:
>
>>Hi
>>
>>We're currently running Phoenix 2.2 on HBase 0.94 (CDH 4.4) and slowly
>>preparing to move to Phoenix 4.0 and HBase 0.96 (CDH 5).
>>
>>For my first tests I wanted to simply copy data from 0.94 to 0.96,
>>which works fine for regular HBase tables using the following commands:
>>
>>$ hbase org.apache.hadoop.hbase.mapreduce.Export table /tmp/table
>>$ hadoop distcp hftp://hbase94:50070/tmp/table hdfs://hbase96/tmp/table
>>$ hbase -Dhbase.import.version=0.94 \
>>    org.apache.hadoop.hbase.mapreduce.Import table /tmp/table
>>
>>This approach fails on the import step for Phoenix tables though (see
>>below). Here I create an identical table in 0.96 using Phoenix 4.0
>>sqlline and then run the commands above. As I understand it, the _0
>>column family was used by Phoenix to allow empty rows in HBase.
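>>
>>To illustrate with a hypothetical table created through sqlline:
>>
>>CREATE TABLE EXAMPLE (PK VARCHAR PRIMARY KEY, V VARCHAR);
>>
>>-- every upserted row gets an empty marker cell in the default column
>>-- family, so the row exists in HBase even when V is null.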
>>
>>Are there any tricks that would allow copying data between these two
>>installations?
>>
>>Cheers,
>>-Kristoffer
>>
>>
>>2014-06-18 13:31:09,633 INFO  [main] mapreduce.Job: Task Id :
>>attempt_1403015236309_0015_m_000004_1, Status : FAILED
>>Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
>>Failed 6900 actions:
>>org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException:
>>Column family _0 does not exist in region
>>TABLE,\x1F\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1403090967713.51a3eefa9b92568a87223aff7878cdcf.
>>in table 'TABLE', {TABLE_ATTRIBUTES => {
>>  coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|',
>>  coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
>>  coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
>>  coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
>>  coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|1073741823|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>>{NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false',
>> KEEP_DELETED_CELLS => 'true', DATA_BLOCK_ENCODING => 'FAST_DIFF',
>> COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0',
>> BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
>>  at org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4056)
>>  at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3361)
>>  at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3265)
>>  at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26935)
>>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
>>  at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
>>: 6900 times,
>>  at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
>>  at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
>>  at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:882)
>>  at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:940)
>>  at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:903)
>>  at org.apache.hadoop.hbase.client.HTable.put(HTable.java:864)
>>  at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
>>  at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:87)
>>  at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
>>  at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>>  at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>>  at org.apache.hadoop.hbase.mapreduce.Import$Importer.writeResult(Import.java:167)
>>  at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:136)
>>  at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:118)
>>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>  at java.security.AccessController.doPrivileged(Native Method)
>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
