phoenix-user mailing list archives

From James Taylor <>
Subject Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)
Date Fri, 15 Jan 2016 02:28:58 GMT
Hi Anil,
This error occurs if you're performing an update that takes a long time on
a mutable table that has a secondary index. Before the update, we make an
RPC that sends index metadata to the region server, which it uses for the
duration of the update to generate the secondary index rows from the data
rows. In your case, the cache entry is expiring before the update (i.e.
your MR job) completes. Try
increasing phoenix.coprocessor.maxServerCacheTimeToLiveMs in the region
server's hbase-site.xml. See our Tuning page[1] for more info.
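
For reference, a sketch of what that looks like in hbase-site.xml on each
region server (the 1800000 ms value here is just an illustrative choice;
size it to comfortably exceed your job's runtime):

```xml
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <!-- Raise beyond the expected duration of your longest
       index-maintaining update; 1800000 ms = 30 minutes. -->
  <value>1800000</value>
</property>
```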

FWIW, 500K rows would be much faster to insert via our standard UPSERT
statement.
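
As a sketch of what that could look like (the table and column names below
are made up for illustration, not taken from your schema), from any JDBC
client connected to Phoenix:

```sql
-- Hypothetical table; substitute your actual schema.
CREATE TABLE IF NOT EXISTS bi.example (
    id  BIGINT NOT NULL PRIMARY KEY,
    val VARCHAR
);
CREATE INDEX IF NOT EXISTS example_val_idx ON bi.example (val);

-- Issue one of these per row from your client, committing
-- in batches (e.g. every 1000 rows); Phoenix maintains the
-- secondary index for you on commit.
UPSERT INTO bi.example (id, val) VALUES (1, 'foo');
```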


On Sun, Jan 10, 2016 at 10:18 PM, Anil Gupta <> wrote:

> Bump..
> Can secondary index committers/experts provide any insight into this? This
> is one of the features that encouraged us to use Phoenix.
> IMO, a global secondary index should be handled as an inverted index table,
> so I'm unable to understand why it's failing on region splits.
> Sent from my iPhone
> On Jan 6, 2016, at 11:14 PM, anil gupta <> wrote:
> Hi All,
> I am using Phoenix 4.4, and I have created a global secondary index on one
> table. I am running a MapReduce job with 20 reducers to load data into this
> table (maybe I'm doing 50 writes/second/reducer). The dataset is around 500K
> rows only. My MapReduce job is failing due to this exception:
> Caused by: org.apache.phoenix.execute.CommitException:
> java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
> metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached
> index metadata.  key=-413539871950113484
> region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d.
> Index update failed
>     at org.apache.phoenix.execute.MutationState.commit(
>     at org.apache.phoenix.jdbc.PhoenixConnection$
>     at org.apache.phoenix.jdbc.PhoenixConnection$
>     at
>     at org.apache.phoenix.jdbc.PhoenixConnection.commit(
>     at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(
>     ... 14 more
> It seems like I am hitting
>, but I don't have a heavy
> write or read load like wuchengzhi. I haven't done any tweaking in
> Phoenix/HBase conf yet.
> What is the root cause of this error? What are the recommended changes in
> conf for this?
> --
> Thanks & Regards,
> Anil Gupta
