phoenix-user mailing list archives

From Kristoffer Sjögren <sto...@gmail.com>
Subject Re: Problems upgrading 2.2.3-incubating to 3.2.1
Date Mon, 19 Jan 2015 10:53:58 GMT
Yes, all our test cases pass so I think we're good.

Maybe there isn't a problem in sqlline after all. It behaves the same even if
I create 3.2.2 tables from a clean installation.

I realized that it calibrates the column widths to the console window and
chops everything else off. This makes the output below look broken. Is it
possible to wrap lines instead of truncating them (like mysql does, for
example)?


0: jdbc:phoenix:localhost> !tables
+------------------------------------------+-----------------------------------+
|                TABLE_CAT                 |            TABLE_SCHEM            |
+------------------------------------------+-----------------------------------+
| null                                     | SYSTEM                            |
| null                                     | SYSTEM                            |
| null                                     | SYSTEM                            |
| null                                     | null                              |
+------------------------------------------+-----------------------------------+


0: jdbc:phoenix:localhost> !describe TABLE
+------------------------------------------+-----------------------------------+
|                TABLE_CAT                 |            TABLE_SCHEM            |
+------------------------------------------+-----------------------------------+
| null                                     | null                              |
| null                                     | null                              |
| null                                     | null                              |
| null                                     | null                              |
| null                                     | null                              |
| null                                     | null                              |
+------------------------------------------+-----------------------------------+


0: jdbc:phoenix:localhost> select * from TABLE;
+------------------------------------------+-----------------------------------+
|                    TA                    |                TS                 |
+------------------------------------------+-----------------------------------+
| 1                                        | 1                                 |
| 2                                        | 1                                 |
| 3                                        | 1                                 |
+------------------------------------------+-----------------------------------+




On Thu, Jan 15, 2015 at 10:16 AM, James Taylor <jamestaylor@apache.org>
wrote:

> Yes, you can delete the SYSTEM.TABLE through the hbase shell. You can
> also remove any coprocessors that no longer exist from existing
> tables. But I wouldn't do that until you're sure everything is
> functioning correctly. It's worrisome that sqlline isn't displaying
> the metadata properly. Other than that, do other things work? Please
> make sure you take snapshots of the SYSTEM.TABLE, though - I don't
> want you to be in a position where you wouldn't be able to revert
> back.
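>
> For example, something along these lines in the hbase shell should cover
> the snapshot and, later, the coprocessor cleanup (the snapshot name is a
> placeholder; check with 'describe' which coprocessor$N entry points at the
> stale class before unsetting it):
>
>   snapshot 'SYSTEM.TABLE', 'SYSTEM_TABLE_pre_upgrade'
>   disable 'SYSTEM.TABLE'
>   alter 'SYSTEM.TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$4'
>   enable 'SYSTEM.TABLE'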
>
> On Wed, Jan 14, 2015 at 7:10 AM, Kristoffer Sjögren <stoffe@gmail.com>
> wrote:
> > Yes, single quotes for the default column family works.
> >
> > CREATE TABLE TABLE (
> >     C1 INTEGER NOT NULL,
> >     C2 INTEGER NOT NULL,
> >     C3 BIGINT NOT NULL,
> >     C4 BIGINT NOT NULL,
> >     C5 CHAR(2) NOT NULL,
> >     V BIGINT,
> >     CONSTRAINT PK PRIMARY KEY (
> >         C1,
> >         C2,
> >         C3,
> >         C4,
> >         C5
> >     )
> > )
> > DEFAULT_COLUMN_FAMILY='_0',
> > SALT_BUCKETS = 1;
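> >
> > As a sanity check that the recreated table really maps onto the old HBase
> > data, a quick count should do - the number should match what the table
> > held before the upgrade:
> >
> > SELECT COUNT(*) FROM TABLE;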
> >
> > But I can still see [2] (below) in the logs periodically. Maybe this
> > originates from the old table? Can I delete that metadata somehow to get
> > rid of it?
> >
> > Also, sqlline 3.2.2 looks totally broken. No metadata is displayed for
> > any of the tables, and select * from table only displays 2 columns.
> >
> > I had to apply this fix manually to get it working at all.
> >
> > https://issues.apache.org/jira/browse/PHOENIX-1513
> >
> > [2]
> >
> > 15/01/14 15:54:33 WARN coprocessor.MetaDataRegionObserver:
> > ScheduledBuildIndexTask failed!
> > org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column
> > family 0 does not exist in region
> > SYSTEM.TABLE,,1421246940729.e9f67581ef059c5703b27a2e649dccf8. in table
> > {NAME => 'SYSTEM.TABLE',
> > SPLIT_POLICY => 'org.apache.phoenix.schema.MetaDataSplitPolicy',
> > coprocessor$7 => '|org.apache.phoenix.coprocessor.MetaDataRegionObserver|2|',
> > coprocessor$5 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
> > coprocessor$6 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|1|',
> > coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
> > coprocessor$4 => '|org.apache.phoenix.join.HashJoiningRegionObserver|1|',
> > coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|',
> > UpgradeTo21 => 'true',
> > coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
> > UpgradeTo20 => 'true', UpgradeTo22 => 'true',
> > FAMILIES => [{NAME => '_0', ENCODE_ON_DISK => 'true', BLOOMFILTER => 'NONE',
> > VERSIONS => '1000', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'true',
> > DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2147483647', COMPRESSION => 'NONE',
> > MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536',
> > REPLICATION_SCOPE => '0'}]}
> >     at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:5341)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1744)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1722)
> >     at org.apache.phoenix.coprocessor.MetaDataRegionObserver$BuildIndexScheduleTask.run(MetaDataRegionObserver.java:174)
> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >     at java.lang.Thread.run(Thread.java:745)
> > 15/01/14 15:54:39 DEBUG hfile.LruBlockCache: Stats: total=1.99 MB,
> > free=239.68 MB, max=241.67 MB, blocks=4, accesses=242, hits=238,
> > hitRatio=98.34%, , cachingAccesses=242, cachingHits=238,
> > cachingHitsRatio=98.34%, , evictions=0, evicted=0, evictedPerRun=NaN
> > 15/01/14 15:54:43 WARN coprocessor.MetaDataRegionObserver:
> > ScheduledBuildIndexTask failed!
> > org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column
> > family 0 does not exist in region
> > SYSTEM.TABLE,,1421246940729.e9f67581ef059c5703b27a2e649dccf8. in table
> > [... same table descriptor and stack trace as above ...]
> >
> >
> >
> >
> > On Tue, Jan 13, 2015 at 10:02 PM, James Taylor <jamestaylor@apache.org>
> > wrote:
> >>
> >> The warning for [1] can be ignored, but [2] is problematic. Since you're
> >> coming from a very old version (our first incubator release ever),
> >> it's going to be difficult to figure out where the issue is.
> >>
> >> One alternative means of upgrading might be to manually rerun your
> >> CREATE TABLE statements on top of 3.2.2. You'd want to add the
> >> following to the end of each statement: DEFAULT_COLUMN_FAMILY="_0".
> >> The tables will map to your existing HBase data in that case. Just
> >> remove the old jar from the client and server, add the new jar, and
> >> bounce your cluster beforehand. Try this in your test environment and
> >> let us know if that works.
> >>
> >> Thanks,
> >> James
> >>
> >> On Tue, Jan 13, 2015 at 2:53 AM, Kristoffer Sjögren <stoffe@gmail.com>
> >> wrote:
> >> > Hi James
> >> >
> >> > I tried the upgrade path you mentioned and it worked as far as I can
> >> > tell. Inserting into and querying existing tables works, at least.
> >> >
> >> > The only thing that worries me is an exception thrown at region server
> >> > startup [1] and frequent periodic exceptions complaining about building
> >> > the index [2] at runtime. I followed the upgrade procedure multiple
> >> > times and always seem to end up in this state.
> >> >
> >> > What could be the cause of these exceptions? HashJoiningRegionObserver
> >> > indeed does not exist in any Phoenix 3+ version.
> >> >
> >> > Cheers,
> >> > -Kristoffer
> >> >
> >> > [1]
> >> >
> >> > 15/01/13 10:27:27 WARN regionserver.RegionCoprocessorHost: attribute
> >> > 'coprocessor$4' has invalid coprocessor specification
> >> > '|org.apache.phoenix.join.HashJoiningRegionObserver|1|'
> >> > 15/01/13 10:27:27 WARN regionserver.RegionCoprocessorHost:
> >> > java.io.IOException: No jar path specified for
> >> > org.apache.phoenix.join.HashJoiningRegionObserver
> >> >     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:185)
> >> >     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:190)
> >> >     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:154)
> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:473)
> >> >     at sun.reflect.GeneratedConstructorAccessor12.newInstance(Unknown Source)
> >> >     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> >     at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4070)
> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4253)
> >> >     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:329)
> >> >     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:100)
> >> >     at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:171)
> >> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >> >     at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> > [2]
> >> >
> >> > 15/01/13 10:27:47 WARN coprocessor.MetaDataRegionObserver:
> >> > ScheduledBuildIndexTask failed!
> >> > org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column
> >> > family 0 does not exist in region
> >> > SYSTEM.TABLE,,1421139725748.a46320eb144712e231b1dd8ab3da30aa. in table
> >> > {NAME => 'SYSTEM.TABLE',
> >> > SPLIT_POLICY => 'org.apache.phoenix.schema.MetaDataSplitPolicy',
> >> > coprocessor$7 => '|org.apache.phoenix.coprocessor.MetaDataRegionObserver|2|',
> >> > coprocessor$5 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
> >> > coprocessor$6 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|1|',
> >> > coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
> >> > coprocessor$4 => '|org.apache.phoenix.join.HashJoiningRegionObserver|1|',
> >> > coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|',
> >> > UpgradeTo21 => 'true',
> >> > coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
> >> > UpgradeTo20 => 'true', UpgradeTo22 => 'true',
> >> > FAMILIES => [{NAME => '_0', ENCODE_ON_DISK => 'true', BLOOMFILTER => 'NONE',
> >> > VERSIONS => '1000', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'true',
> >> > DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => '2147483647', COMPRESSION => 'NONE',
> >> > MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536',
> >> > REPLICATION_SCOPE => '0'}]}
> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:5341)
> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1744)
> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1722)
> >> >     at org.apache.phoenix.coprocessor.MetaDataRegionObserver$BuildIndexScheduleTask.run(MetaDataRegionObserver.java:174)
> >> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >> >     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> >> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> >> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> >> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >> >     at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> > On Mon, Jan 12, 2015 at 7:05 PM, James Taylor <jamestaylor@apache.org>
> >> > wrote:
> >> >>
> >> >> Hi Kristoffer,
> >> >> You'll need to upgrade first from 2.2.3 to 3.0.0-incubating, and then
> >> >> to each minor version (3.1 and then 3.2.2) to trigger the upgrade for
> >> >> each release. You can access previous releases from the "Download
> >> >> Previous Releases" link here: http://phoenix.apache.org/download.html.
> >> >> We'll improve this in future releases such that you can go directly to
> >> >> any minor release within the same major release in a single step.
> >> >>
> >> >> When you upgrade, follow these steps:
> >> >> - Remove the old client and server jar
> >> >> - Replace both the client jar and the server jar with the new one
> >> >> - Bounce your cluster
> >> >> - Establish a connection from the client to the server (i.e. bring up
> >> >>   sqlline, for example). This is what actually triggers the upgrade.
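> >> >>
> >> >> For example, a single connection is enough to kick the upgrade off
> >> >> (the zookeeper host below is a placeholder):
> >> >>
> >> >>   bin/sqlline.py localhost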
> >> >>
> >> >> FWIW, since you're going through the trouble of upgrading, you may
> >> >> want to consider moving to our 4.x releases and upgrading your cluster
> >> >> to HBase 0.98. I believe the 0.94 HBase releases are close to
> >> >> end-of-life, and the upcoming 3.3 release of Phoenix will be the last
> >> >> release in the 3.x series.
> >> >>
> >> >> Thanks,
> >> >> James
> >> >>
> >> >> On Mon, Jan 12, 2015 at 5:38 AM, Kristoffer Sjögren <stoffe@gmail.com>
> >> >> wrote:
> >> >> > Hi
> >> >> >
> >> >> > I'm trying to upgrade phoenix 2.2.3-incubating to phoenix 3.2.2 on
> >> >> > my local computer first in order to gain confidence that it will
> >> >> > work on the production cluster. We use HBase 0.94.6 CDH 4.4.0.
> >> >> >
> >> >> > 1) My first question is which release to pick? There is no phoenix
> >> >> > 3.2.2 jar in maven central (only a 3.2.1 jar) and no 3.2.1 tar.gz
> >> >> > distribution available from phoenix.apache.org (only 3.2.2).
> >> >> >
> >> >> > Anyway, I tried replacing phoenix-2.2.3-incubating.jar with
> >> >> > phoenix-core-3.2.2.jar (from phoenix-3.2.2-bin.tar.gz) in
> >> >> > $HBASE_HOME/lib and restarted HBase (commands below).
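> >> >> >
> >> >> > The commands were roughly these (paths adjusted to my layout; a CDH
> >> >> > install may use service scripts instead):
> >> >> >
> >> >> >   rm $HBASE_HOME/lib/phoenix-2.2.3-incubating.jar
> >> >> >   cp phoenix-core-3.2.2.jar $HBASE_HOME/lib/
> >> >> >   $HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh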
> >> >> >
> >> >> > This triggered warnings in HBase log [1], which is understandable
> >> >> > since phoenix-core-3.2.2.jar does not include
> >> >> > org.apache.phoenix.join.HashJoiningRegionObserver.
> >> >> >
> >> >> >
> >> >> > 2) Next, I updated the client from phoenix-2.2.3-incubating.jar to
> >> >> > phoenix-core-3.2.1.jar and added the following to hbase-site.xml.
> >> >> >
> >> >> >   <configuration>
> >> >> >     <property>
> >> >> >       <name>phoenix.client.autoUpgradeWhiteList</name>
> >> >> >       <value>*</value>
> >> >> >     </property>
> >> >> >   </configuration>
> >> >> >
> >> >> > I think this triggered the upgrade process as soon as the client
> >> >> > contacted HBase. But all tables are inaccessible after this process
> >> >> > [2]. Now there are also warnings occurring periodically in HBase
> >> >> > log [3].
> >> >> >
> >> >> > I also tried to install 3.2.2 manually into maven and run the same
> >> >> > client and server version, but this did not change the behavior.
> >> >> >
> >> >> > I'm not sure what has gone wrong.
> >> >> >
> >> >> > Cheers,
> >> >> > -Kristoffer
> >> >> >
> >> >> >
> >> >> > [1]
> >> >> >
> >> >> > 15/01/12 14:25:12 WARN regionserver.RegionCoprocessorHost: attribute
> >> >> > 'coprocessor$6' has invalid coprocessor specification
> >> >> > '|org.apache.hbase.index.Indexer|1073741823|org.apache.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'
> >> >> > 15/01/12 14:25:12 WARN regionserver.RegionCoprocessorHost:
> >> >> > java.io.IOException: No jar path specified for
> >> >> > org.apache.hbase.index.Indexer
> >> >> > 15/01/12 14:25:12 WARN regionserver.RegionCoprocessorHost: attribute
> >> >> > 'coprocessor$4' has invalid coprocessor specification
> >> >> > '|org.apache.phoenix.join.HashJoiningRegionObserver|1|'
> >> >> > 15/01/12 14:25:12 WARN regionserver.RegionCoprocessorHost:
> >> >> > java.io.IOException: No jar path specified for
> >> >> > org.apache.phoenix.join.HashJoiningRegionObserver
> >> >> >
> >> >> > [2]
> >> >> >
> >> >> > Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR
> >> >> > 1012 (42M03): Table undefined. tableName=TRACKING_COUNTER
> >> >> >
> >> >> >
> >> >> > [3]
> >> >> >
> >> >> >
> >> >> > 15/01/12 14:25:24 WARN coprocessor.MetaDataRegionObserver:
> >> >> > ScheduledBuildIndexTask failed!
> >> >> > org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException:
> >> >> > Column family 0 does not exist in region
> >> >> > SYSTEM.TABLE,,1421069004810.6b41a24a11a4f106b85d6ae76334baf6. in
> >> >> > table {NAME => 'SYSTEM.TABLE',
> >> >> > SPLIT_POLICY => 'org.apache.phoenix.schema.MetaDataSplitPolicy',
> >> >> > UpgradeTo21 => 'true', UpgradeTo20 => 'true',
> >> >> > coprocessor$7 => '|org.apache.phoenix.coprocessor.MetaDataRegionObserver|2|',
> >> >> > UpgradeTo22 => 'true',
> >> >> > coprocessor$6 => '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|1|',
> >> >> > coprocessor$5 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
> >> >> > coprocessor$4 => '|org.apache.phoenix.join.HashJoiningRegionObserver|1|',
> >> >> > coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
> >> >> > coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
> >> >> > coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|1|',
> >> >> > FAMILIES => [{NAME => '_0', DATA_BLOCK_ENCODING => 'FAST_DIFF',
> >> >> > BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1000',
> >> >> > COMPRESSION => 'NONE', TTL => '2147483647', MIN_VERSIONS => '0',
> >> >> > KEEP_DELETED_CELLS => 'true', BLOCKSIZE => '65536',
> >> >> > ENCODE_ON_DISK => 'true', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
> >> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:5341)
> >> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1744)
> >> >> >     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1722)
> >> >> >     at org.apache.phoenix.coprocessor.MetaDataRegionObserver$BuildIndexScheduleTask.run(MetaDataRegionObserver.java:174)
> >> >> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >> >> >     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> >> >> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> >> >> >     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >> >> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >> >     at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >
> >
>
