Hi All,
I am using Phoenix 4.4. I have created a global secondary index on one table,
and I am running a MapReduce job with 20 reducers to load data into that
table (roughly 50 writes/second/reducer). The dataset is only around 500K
rows. The job is failing with this exception:
Caused by: org.apache.phoenix.execute.CommitException: java.sql.SQLException:
ERROR 2008 (INT10): Unable to find cached index metadata. ERROR 2008 (INT10):
ERROR 2008 (INT10): Unable to find cached index metadata. key=-413539871950113484
region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d.
Index update failed
    at org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
    at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
    at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
    at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:84)
    ... 14 more
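For reference, this is roughly how the job is wired up. A minimal sketch of
the setup, with placeholder column names (my real schema differs):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

public class LoadJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "phoenix-load");
        job.setNumReduceTasks(20);
        // Reducer output goes through Phoenix, so each commit ends up in
        // PhoenixRecordWriter.write(), which is where the trace above originates.
        PhoenixMapReduceUtil.setOutput(job, "BI.TABLE", "PK,COL1,COL2");
        // ... mapper/reducer classes, input format, etc. omitted
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}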
It looks like I am hitting
https://issues.apache.org/jira/browse/PHOENIX-1718, but I don't have a heavy
write or read load like wuchengzhi did. I haven't done any tweaking of the
Phoenix/HBase configuration yet.
What is the root cause of this error, and what configuration changes are
recommended to address it?
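For what it's worth, the discussion on that JIRA suggests the server-side
index metadata cache is expiring before the commit reaches the region server,
and that the cache TTL can be raised in hbase-site.xml on the region servers.
Is something like the following the right direction? (The 180000 value is
only my guess, not a tested recommendation.)

<!-- hbase-site.xml on each region server -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <!-- default is 30000 ms; raising it so cached index metadata
       survives longer than a slow commit -->
  <value>180000</value>
</property>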
--
Thanks & Regards,
Anil Gupta