phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Can't drop table, help!
Date Mon, 30 Jun 2014 14:36:06 GMT
Russell,
What version of Phoenix are you using? If you can produce a unit test
for this, then we'll get a fix for you in our next patch release.
Thanks,
James

On Fri, Jun 20, 2014 at 12:22 AM, Jody Landreneau
<jodylandreneau@gmail.com> wrote:
> For your item 3, I am trying the following, which seems to work from the
> SQuirreL client. It seems easier than the hbase CLI.
>
> delete from system.catalog where table_name = 'my_bad_table'
>
>
> On Thu, Jun 19, 2014 at 3:01 PM, Russell Jurney <russell.jurney@gmail.com>
> wrote:
>>
>> Jody: yes, you must:
>>
>> 1) disable table in hbase shell
>> 2) drop table
>> 3) scan SYSTEM.TABLE and remove all entries with that table name
>>
>> I automate item 3 with a script, but it's hard to get right owing to weird
>> column wrapping of hbase data.
>>
>>
>> On Thursday, June 19, 2014, Jody Landreneau <jodylandreneau@gmail.com>
>> wrote:
>>>
>>> We've experienced this issue as well. Should we file a ticket? Is the
>>> recommended solution to remove the entries from SYSTEM.CATALOG?
>>>
>>> thanks --
>>>
>>>
>>> On Sun, Jun 15, 2014 at 7:59 PM, Russell Jurney
>>> <russell.jurney@gmail.com> wrote:
>>>>
>>>> The problem turned out to be an error in the script that removes entries
>>>> from SYSTEM.TABLE.
>>>>
>>>>
>>>> On Sunday, June 15, 2014, Abhilash L L <abhilash@capillarytech.com>
>>>> wrote:
>>>>>
>>>>> Same thing happened to us as well.
>>>>> We deleted the info from the system table for the corresponding table
>>>>> and then dropped the table from hbase shell.
>>>>>
>>>>> We are on Phoenix 2.2.1 and HBase 0.94.7
>>>>>
>>>>> Since it throws an invalid DDL exception anyway, it shouldn't be adding
>>>>> anything to the meta table
>>>>>
>>>>> On Jun 16, 2014 5:12 AM, "Ravi Kiran" <maghamravikiran@gmail.com>
>>>>> wrote:
>>>>>
Hi Russell,
>>>>>    When recreating the table, does it complain of a TABLE_ALREADY_EXIST
>>>>> exception?
>>>>>
>>>>>     If possible, can you please confirm whether you see the table
>>>>> 'DEV_HET_MEF' from the ZooKeeper client (zkCli.sh):
>>>>>       a)   hbase zkcli -server <host>:<port>
>>>>>       b)   ls /hbase/table/
>>>>>       If so, you can remove it by running: rmr /hbase/table/DEV_HET_MEF
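Put together, a sketch of that check-and-remove (the /hbase parent znode is HBase's default, and zkhost:2181 is a placeholder connect string; substitute your quorum):

```shell
# Sketch: the two zkcli invocations from the steps above, built as
# strings so they can be inspected before running against a cluster.
# ZK_CONN and the /hbase parent znode are assumed defaults.
ZK_CONN=zkhost:2181
TABLE=DEV_HET_MEF
LS_CMD="hbase zkcli -server $ZK_CONN ls /hbase/table"
RM_CMD="hbase zkcli -server $ZK_CONN rmr /hbase/table/$TABLE"
echo "$LS_CMD"   # run this first and confirm the table znode exists
echo "$RM_CMD"   # then remove the stale znode
```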
>>>>>
>>>>> Also, can you please let me know which version of Phoenix you are
>>>>> using, so I can try to reproduce the problem.
>>>>>
>>>>> Regards
>>>>> Ravi
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Jun 15, 2014 at 12:58 PM, Russell Jurney
>>>>> <russell.jurney@gmail.com> wrote:
>>>>>
>>>>> We created a table with duplicate fields by mistake.  Now we are unable
>>>>> to drop the table:
>>>>>
>>>>>
>>>>> 0: jdbc:phoenix:hiveapp1> drop table DEV_HET_MEF;
>>>>> Error: org.apache.hadoop.hbase.DoNotRetryIOException: DEV_HET_MEF: 28
>>>>> at
>>>>> com.salesforce.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:83)
>>>>> at
>>>>> com.salesforce.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:578)
>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> at
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>> at
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>> at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:5490)
>>>>> at
>>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3720)
>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> at
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>> at
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>> at
>>>>> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
>>>>> at
>>>>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1428)
>>>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 28
>>>>> at com.salesforce.phoenix.schema.PTableImpl.init(PTableImpl.java:213)
>>>>> at com.salesforce.phoenix.schema.PTableImpl.<init>(PTableImpl.java:181)
>>>>> at
>>>>> com.salesforce.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:176)
>>>>> at
>>>>> com.salesforce.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:381)
>>>>> at
>>>>> com.salesforce.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:215)
>>>>> at
>>>>> com.salesforce.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:595)
>>>>> at
>>>>> com.salesforce.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:558)
>>>>> ... 12 more (state=08000,code=101)
>>>>>
>>>>>
>>>>> So I go to hbase shell and drop the table there...
>>>>>
>>>>> disable 'DEV_HET_MEF'
>>>>>
>>>>> drop 'DEV_HET_MEF'
>>>>>
>>>>>
>>>>> Finally, I scan 'SYSTEM.TABLE' and remove all rows corresponding to
>>>>> 'DEV_HET_MEF'.
>>>>>
>>>>> Normally this works, and the table should be gone! But this time, the
>>>>> table can't be recreated. What should I do? I tried restarting HBase,
>>>>> no effect.
>>>>>
>>>>> Thanks!
>>>>> --
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Russell Jurney twitter.com/rjurney russell.jurney@gmail.com
>>>> datasyndrome.com
>>>
>>>
>>
>>
>> --
>> Russell Jurney twitter.com/rjurney russell.jurney@gmail.com
>> datasyndrome.com
>
>
