phoenix-user mailing list archives

From Sergey Soldatov <sergeysolda...@gmail.com>
Subject Re: ArrayIndexOutOfBounds exception
Date Wed, 19 Jul 2017 22:40:09 GMT
Hi Siddharth,
The problem described in PHOENIX-3196 (as well as in PHOENIX-930 and
several others) is that we sent metadata updates for the table before
checking for duplicate column names. I don't think you hit the same
problem if you are using ALTER TABLE. Could you check whether it's
possible to create simple steps to reproduce it?

Thanks,
Sergey
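
A minimal sketch of the kind of repro steps Sergey is asking for, with
hypothetical table and column names; whether the duplicate ALTER actually
triggers the PHOENIX-3196 failure will depend on the Phoenix version:

    -- Hypothetical names, for illustration only
    CREATE TABLE EVENT_LOG (ID BIGINT NOT NULL PRIMARY KEY, PAYLOAD VARCHAR);
    ALTER TABLE EVENT_LOG ADD CREATED_AT TIMESTAMP;
    -- Re-adding the same column should fail with a clean duplicate-column
    -- error; on affected versions the metadata update may already have been
    -- sent before the duplicate is detected (PHOENIX-3196).
    ALTER TABLE EVENT_LOG ADD CREATED_AT TIMESTAMP;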

On Tue, Jul 11, 2017 at 3:02 AM, Siddharth Ubale <siddharth.ubale@syncoms.com> wrote:

> Hi,
>
> We are using a Phoenix table whose structure we constantly upgrade by
> altering the table and adding new columns.
>
> Almost every 3 days we see that the table becomes unusable via Phoenix
> after some ALTER commands have run against it.
>
> Earlier, based on online discussions, we were under the impression that a
> duplicate column gets created, causing a metadata issue that Phoenix is
> unable to manage at this stage.
>
> Also, there is an unresolved JIRA below about the same issue:
>
> https://issues.apache.org/jira/browse/PHOENIX-3196
>
> Can anyone tell me if they are facing the same issue and what they have
> done to check for this occurrence? (A SYSTEM.CATALOG check is sketched
> after this message.)
>
> Please find the stack trace for my problem below:
>
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: DATAWAREHOUSE3: null
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:546)
>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6001)
>         at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3510)
>         at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3492)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30950)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2109)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
>
> SQLState:  08000
>
> ErrorCode: 101
>
> Thanks,
>
> Siddharth Ubale,
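
One way to check for the duplicate-column condition described above is to
query SYSTEM.CATALOG directly. A sketch, assuming the affected table is
DATAWAREHOUSE3 in the default schema (the exact catalog columns can vary
across Phoenix versions):

    -- Look for column names recorded more than once for the table
    SELECT COLUMN_NAME, COLUMN_FAMILY, COUNT(*) AS OCCURRENCES
    FROM SYSTEM.CATALOG
    WHERE TABLE_NAME = 'DATAWAREHOUSE3'
      AND COLUMN_NAME IS NOT NULL
    GROUP BY COLUMN_NAME, COLUMN_FAMILY
    HAVING COUNT(*) > 1;

As a guard against issuing the same ALTER twice, Phoenix also supports
ALTER TABLE ... ADD IF NOT EXISTS, which is a no-op when the column is
already present.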
