phoenix-user mailing list archives

From "Saurabh Agarwal (BLOOMBERG/ 731 LEX)" <sagarwal...@bloomberg.net>
Subject Re: Phoenix table is inaccessible...
Date Mon, 14 Mar 2016 19:56:44 GMT
Thanks James. I will look into upgrading the Phoenix version and see if that helps.

From: user@phoenix.apache.org At: Mar 14 2016 13:02:51
To: Saurabh Agarwal (BLOOMBERG/ 731 LEX), user@phoenix.apache.org
Subject: Re: Phoenix table is inaccessible...

Saurabh,
Is your table write-once, append-only data (as it looks to be based on the primary key constraint)? If that's the case, I'd recommend adding IMMUTABLE_ROWS=true to the end of your CREATE TABLE statement as this will improve performance.

For the INDEX hint, it needs to be /*+ INDEX(<tableName> <indexName>) */ like this:

    select /*+INDEX("WeatherData" targetMaxIndex)*/

You need to double quote WeatherData because you double quoted it when you created it, but you should not double quote targetMaxIndex since you didn't double quote it when you created it.
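
Putting that together with your earlier query, the hinted statement would look like this:

    select /*+ INDEX("WeatherData" targetMaxIndex) */ * from "WeatherData" where "maxTemperature" > 35;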

Would it be possible to try your existing test on the newly released 4.7.0 and see if the problem occurs under the same conditions (i.e. same CREATE TABLE statement, same 120,000 rows, same index)?

Thanks,
James

On Mon, Mar 14, 2016 at 9:36 AM, Saurabh Agarwal (BLOOMBERG/ 731 LEX) <sagarwal144@bloomberg.net> wrote:

Sorry for duplicate messages. Ignore the previous ones. 

From: user@phoenix.apache.org At: Mar 14 2016 12:26:57
To: user@phoenix.apache.org
Subject: Re: Phoenix table is inaccessible...

Hi James, 

Here are the answers to your questions -

- What has been the life cycle of the table and index? Are you mapping to an existing HBase table, or was the data all created through Phoenix APIs? Did you do any manual deletions of rows from the SYSTEM.CATALOG table?
>>>> We created the Phoenix table first. There is no mapping to an existing HBase table. Data was ingested with UPSERT commands using psql. We did not delete any rows from the SYSTEM.CATALOG table.
- When you initially created the index, did it function correctly or did you immediately run into this issue?
>>>> It did not function correctly. It started giving errors right away.
- How much data was already in the table when you created the secondary index?
>>>> around 120,000 rows
- Does bouncing your cluster and client have any impact?
>>>> No. It is a test environment.

Here is the table's creation DDL -
create table "WeatherData"
  (
    "domain" varchar not null,
    "source" varchar not null,
    "model" varchar not null,
    "type" varchar not null,
    "frequency" varchar not null,
    "locationIdentifier" varchar not null,
    "publicationDate" time not null,
    "targetDate" time not null,
    "maxTemperature" double,
    "minTemperature" double,
    "meanTemperature" double,
    "heatingDegreeDay" double,
    "coolingDegreeDay" double,
    "precipitation" double,
    "precipitation3Hour" double,
    "precipitation6Hour" double,
    "precipitation12Hour" double,
    "precipitation24Hour" double,
    "cloudCover" integer,
    "solarRadiation" double,
    "humidity" double,
    "windSpeed" double,
    "windDirection" integer,
    "uvIndex" integer,
    "dewPoint" double,
    "comfortLevel" double,
    "skyDescriptor" varchar,
    "precipitationDescriptor" varchar,
    "temperatureDescriptor" varchar,
    "airDescriptor" varchar,
    "visibility" double,
    "snowfall" double,
    "probabilityOfPrecipitation" double,
    "growingDegreeDay" double
    constraint pk primary key(
      "domain", "source", "model", "type", "frequency", "locationIdentifier", "publicationDate", "targetDate"
    )
  );

Here is what I tried for creating indexes -

create index targetMaxIndex on "WeatherData"("maxTemperature");

and used that as a hint in my select statement:

select /*+INDEX("targetMaxIndex")*/ * from "WeatherData" where "maxTemperature" > 35;

Phoenix was not using targetMaxIndex, so I was experimenting with including all columns:

create index idx on "WeatherData"("maxTemperature") include( "domain", "source", "model", "type", "frequency", "locationIdentifier", "publicationDate", "targetDate", "minTemperature", "meanTemperature", "heatingDegreeDay", "coolingDegreeDay", "precipitation", "precipitation3Hour", "precipitation6Hour", "precipitation12Hour", "precipitation24Hour", "cloudCover", "solarRadiation", "solarRadiation", "humidity", "windSpeed", "windDirection", "uvIndex", "dewPoint", "comfortLevel", "skyDescriptor", "precipitationDescriptor", "temperatureDescriptor", "airDescriptor", "visibility", "snowfall", "probabilityOfPrecipitation", "growingDegreeDay");

After creating this index, the table became inaccessible.


From: user@phoenix.apache.org At: Mar 12 2016 14:18:00
To: user@phoenix.apache.org
Subject: Re: Phoenix table is inaccessible...

Saurabh,
Based on the stack trace and the ArrayIndexOutOfBoundsException, you're definitely hitting some kind of bug. It looks like when the server is attempting to load the metadata for the index on your table, it's unable to do so. Would it be possible to get your CREATE TABLE statement and your CREATE INDEX statement to see if we can reproduce this on our recent 4.7.0 release? It looks like you're three releases behind, being on 4.4 - that's a lot of bug fixes you're missing.

A couple of questions to see if we can narrow down the issue:
- What has been the life cycle of the table and index? Are you mapping to an existing HBase table, or was the data all created through Phoenix APIs? Did you do any manual deletions of rows from the SYSTEM.CATALOG table?
- When you initially created the index, did it function correctly or did you immediately run into this issue?
- How much data was already in the table when you created the secondary index?
- Does bouncing your cluster and client have any impact?

If this isn't a transient issue, to recover, you may need to drop your SYSTEM.CATALOG table from the HBase shell and re-run your DDL statements. This won't impact your data - when you re-issue your CREATE TABLE statement, Phoenix will bind to the existing HBase table it finds. Note that it will try to create new empty KeyValues for each row, though (which could take some time depending on how much data you have, but won't hurt anything). To prevent this, you can set the CurrentSCN property in the URL or as a connection property to a long, Epoch timestamp value that's smaller than any of your existing data on the connection where you're issuing the CREATE TABLE statement. See this[1] FAQ for more on that. Your indexes will then need to be recreated - I'd recommend just giving them different names so that the existing index tables aren't re-used.
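
For example, once you've dropped SYSTEM.CATALOG from the HBase shell and re-issued your original CREATE TABLE statement, the index could be recreated under a new name - something like this (targetMaxIndex2 is just an illustrative name):

    create index targetMaxIndex2 on "WeatherData"("maxTemperature");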

HTH,

    James

[1] https://phoenix.apache.org/faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API

On Sat, Mar 12, 2016 at 9:20 AM, Anil Gupta <anilgupta84@gmail.com> wrote:

Yes, global indexes are stored in a separate HBase table and their region locations are not related to the main table's regions.

Sent from my iPhone

On Mar 12, 2016, at 4:34 AM, Saurabh Agarwal (BLOOMBERG/ 731 LEX) <sagarwal144@bloomberg.net> wrote:


Thanks. I will try that. 

Having regions of the secondary indexes on different region servers than the regions of the main table is common. These are global indexes. Correct me if I am wrong. I am wondering why this would cause an issue. 

Sent from Bloomberg Professional for iPhone 

----- Original Message -----
From: Jonathan Leech <jonathaz@gmail.com>
To: user@phoenix.apache.org
CC: SAURABH AGARWAL
At: 12-Mar-2016 00:28:17


I've seen these kinds of errors when the regions for the secondary index end up on a different region server than the main table. Make sure the configuration is all correct and also look for regions stuck in transition, etc. Try bouncing HBase, then drop all secondary indexes on the table, as well as its HBase table if necessary.
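
For example, a sketch against the "Weather" table from your query, with a placeholder index name since I don't know what yours are called:

    drop index if exists myWeatherIndex on "Weather";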


> On Mar 11, 2016, at 6:29 PM, Sergey Soldatov <sergeysoldatov@gmail.com> wrote:
>
> The system information about all Phoenix tables is located in the HBase
> SYSTEM.CATALOG table. So, if you recreate the catalog you will need to
> recreate all tables as well. I'm not sure whether there is any other way
> to fix it.
>
> On Fri, Mar 11, 2016 at 4:25 PM, Saurabh Agarwal (BLOOMBERG/ 731 LEX)
> <sagarwal144@bloomberg.net> wrote:
>> Thanks. I will try that. Question: I am able to access other tables fine.
>> If SYSTEM.CATALOG got corrupted, wouldn't it impact all tables?
>>
>> Also, how do I restore the SYSTEM.CATALOG table without restarting sqlline?
>>
>>
>> Sent from Bloomberg Professional for iPhone
>>
>>
>> ----- Original Message -----
>> From: Sergey Soldatov <sergeysoldatov@gmail.com>
>> To: SAURABH AGARWAL, user@phoenix.apache.org
>> CC: ANIRUDHA JADHAV
>> At: 11-Mar-2016 19:07:31
>>
>> Hi Saurabh,
>> It seems that your SYSTEM.CATALOG got corrupted somehow. Usually you
>> need to disable and drop 'SYSTEM.CATALOG' in the hbase shell. After that,
>> restart sqlline (it will automatically recreate the system catalog) and
>> recreate all user tables. The table data is usually not affected, but
>> just in case, make a backup of your HBase first.
>>
>> Possibly someone has better advice.
>>
>> Thanks,
>> Sergey
>>
>> On Fri, Mar 11, 2016 at 3:05 PM, Saurabh Agarwal (BLOOMBERG/ 731 LEX)
>> <sagarwal144@bloomberg.net> wrote:
>>> Hi,
>>>
>>> I had been experimenting with different indexes on a Phoenix table to get
>>> the desired performance.
>>>
>>> After creating a secondary index that indexes one column and includes the
>>> rest of the fields, it started throwing the following exceptions whenever I
>>> accessed the table.
>>>
>>> Can you point me to what might have gone wrong here?
>>>
>>> We are using HDP 2.3 - HBase 1.1.2.2.3.2.0-2950,
>>> phoenix-4.4.0.2.3.2.0-2950
>>>
>>> 0: jdbc:phoenix:> select count(*) from "Weather";
>>> 16/03/11 17:37:32 WARN ipc.CoprocessorRpcChannel: Call failed on
>>> IOException
>>> org.apache.hadoop.hbase.DoNotRetryIOException:
>>> org.apache.hadoop.hbase.DoNotRetryIOException: com.bloomb
>>> erg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at
>>>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> at
>>>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.j
>>> ava:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> at
>>>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>> at
>>>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:325)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1622)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.ja
>>> va:92)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.ja
>>> va:89)
>>> at
>>>
>>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcC
>>> hannel.java:95)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getTable(MetaDat
>>> aProtos.java:10665)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:
>>> 1292)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:
>>> 1279)
>>> at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by:
>>>
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOExc
>>> eption): org.apache.hadoop.hbase.DoNotRetryIOException:
>>> com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMe
>>> thod(AbstractRpcClient.java:287)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execServic
>>> e(ClientProtos.java:32675)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1618)
>>> ... 13 more
>>> 16/03/11 17:37:32 WARN client.HTable: Error calling coprocessor service
>>> org.apache.phoenix.coprocessor.g enerated.MetaDataProtos$MetaDataService
>>> for
>>> row \x00\x00com.bloomberg.ds.WeatherSmallSalt
>>> java.util.concurrent.ExecutionException:
>>> org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoo
>>> p.hbase.DoNotRetryIOException: com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>> at
>>> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1763)
>>> at
>>> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1719)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryS
>>> ervicesImpl.java:1026)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryS
>>> ervicesImpl.java:1006)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.jav
>>> a:1278)
>>> at
>>>
>>> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:415)
>>> at
>>>
>>> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:358)
>>> at
>>>
>>> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:354)
>>> at
>>>
>>> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:4
>>> 13)
>>> at
>>>
>>> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:28
>>> 8)
>>> at
>>>
>>> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:189)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStateme
>>> nt.java:358)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStateme
>>> nt.java:339)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:247)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:242)
>>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:241)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1257)
>>> at sqlline.Commands.execute(Commands.java:822)
>>> at sqlline.Commands.sql(Commands.java:732)
>>> at sqlline.SqlLine.dispatch(SqlLine.java:808)
>>> at sqlline.SqlLine.begin(SqlLine.java:681)
>>> at sqlline.SqlLine.start(SqlLine.java:398)
>>> at sqlline.SqlLine.main(SqlLine.java:292)
>>> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
>>> org.apache.hadoop.hbase.DoNotRetryIOException:
>>> com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at
>>>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> at
>>>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.j
>>> ava:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> at
>>>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>> at
>>>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:325)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1622)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.ja
>>> va:92)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.ja
>>> va:89)
>>> at
>>>
>>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcC
>>> hannel.java:95)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getTable(MetaDat
>>> aProtos.java:10665)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:
>>> 1292)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:
>>> 1279)
>>> at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by:
>>>
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOExc
>>> eption): org.apache.hadoop.hbase.DoNotRetryIOException:
>>> com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMe
>>> thod(AbstractRpcClient.java:287)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execServic
>>> e(ClientProtos.java:32675)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1618)
>>> ... 13 more
>>> Error: org.apache.hadoop.hbase.DoNotRetryIOException:
>>> com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more (state=08000,code=101)
>>> org.apache.phoenix.exception.PhoenixIOException:
>>> org.apache.hadoop.hbase.DoNotRetryIOException: com.bloo
>>> mberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at
>>>
>>> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryS
>>> ervicesImpl.java:1043)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryS
>>> ervicesImpl.java:1006)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.jav
>>> a:1278)
>>> at
>>>
>>> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:415)
>>> at
>>>
>>> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:358)
>>> at
>>>
>>> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:354)
>>> at
>>>
>>> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:4
>>> 13)
>>> at
>>>
>>> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:28
>>> 8)
>>> at
>>>
>>> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:189)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStateme
>>> nt.java:358)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStateme
>>> nt.java:339)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:247)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:242)
>>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:241)
>>> at
>>>
>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1257)
>>> at sqlline.Commands.execute(Commands.java:822)
>>> at sqlline.Commands.sql(Commands.java:732)
>>> at sqlline.SqlLine.dispatch(SqlLine.java:808)
>>> at sqlline.SqlLine.begin(SqlLine.java:681)
>>> at sqlline.SqlLine.start(SqlLine.java:398)
>>> at sqlline.SqlLine.main(SqlLine.java:292)
>>> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
>>> org.apache.hadoop.hbase.DoNotRetryIOException:
>>> com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at
>>>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> at
>>>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.j
>>> ava:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> at
>>>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>> at
>>>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:325)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1622)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.ja
>>> va:92)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.ja
>>> va:89)
>>> at
>>>
>>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcC
>>> hannel.java:95)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getTable(MetaDat
>>> aProtos.java:10665)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:
>>> 1292)
>>> at
>>>
>>> org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:
>>> 1279)
>>> at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>>
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by:
>>>
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOExc
>>> eption): org.apache.hadoop.hbase.DoNotRetryIOException:
>>> com.bloomberg.ds.WeatherSmallSalt: 35
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:447)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataPr
>>> otos.java:10505)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:187
>>> 5)
>>> at
>>>
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Cl
>>> ientProtos.java:32209)
>>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 35
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:354)
>>> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:276)
>>> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:265)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:826)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java
>>> :526)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:803)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:462)
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1696
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1643
>>> )
>>> at
>>>
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:430)
>>> ... 10 more
>>>
>>> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>>> at
>>>
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMe
>>> thod(AbstractRpcClient.java:287)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execServic
>>> e(ClientProtos.java:32675)
>>> at
>>>
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1618)
>>> ... 13 more
>


Mime
View raw message