phoenix-user mailing list archives

From Alexander Batyrshin <0x62...@gmail.com>
Subject Re: java.io.IOException: Added a key not lexically larger than previous
Date Thu, 15 Aug 2019 19:37:06 GMT
I'm using a global index.
HBase-1.4.10
Phoenix-4.14.2

I have been getting this issue constantly today, after increasing the write load.
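
For reference, the distinction Josh asks about below comes down to the index DDL. A minimal sketch via the Phoenix JDBC driver, assuming a hypothetical ZooKeeper quorum, index names and indexed column (only the table name TBL_C comes from the log below):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexDdlSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL; IDX_C_GLOBAL, IDX_C_LOCAL and column P are stand-ins.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Global index (the setup here): maintained as a separate HBase table
            // that is updated on every write to the data table.
            stmt.execute("CREATE INDEX IDX_C_GLOBAL ON TBL_C (P)");
            // Local index (what Josh asks about): the index data lives inside the
            // data table's own regions rather than in a separate table.
            stmt.execute("CREATE LOCAL INDEX IDX_C_LOCAL ON TBL_C (P)");
        }
    }
}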

> On 15 Aug 2019, at 21:27, Josh Elser <elserj@apache.org> wrote:
> 
> Are you using a local index? Can you please share the basics (HBase and Phoenix versions)?
> 
> I'm not seeing if you've shared this previously on this or another thread. Sorry if you have.
> 
> Short answer: it's possible that something around secondary indexing in Phoenix causes this, but it's not possible to say definitively in a vacuum.
> 
> On 8/15/19 1:19 PM, Alexander Batyrshin wrote:
>> Is it possible that Phoenix is the cause of this problem?
>>> On 20 Jun 2019, at 04:16, Alexander Batyrshin <0x62ash@gmail.com> wrote:
>>> 
>>> Hello,
>>> Are there any ideas where this problem comes from and how to fix it?
>>> 
>>> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,348 WARN  [MemStoreFlusher.0] regionserver.HStore: Failed flushing store file, retrying num=9
>>> Jun 18 21:38:05 prod022 hbase[148581]: java.io.IOException: Added a key not lexically larger than previous. Current cell = \x0D100395583733fW+,WQ/d:p/1560882798036/DeleteColumn/vlen=0/seqid=30023231, lastCell = \x0D100395583733fW+,WQ/d:p/1560882798036/Put/vlen=29/seqid=30023591
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:204)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:279)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1053)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:139)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:969)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2484)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2622)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2352)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2314)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2200)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2125)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:512)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:482)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at java.lang.Thread.run(Thread.java:748)
>>> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,373 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: ABORTING region server prod022,60020,1560521871613: Replay of WAL required. Forcing server shutdown
>>> Jun 18 21:38:05 prod022 hbase[148581]: org.apache.hadoop.hbase.DroppedSnapshotException: region: TBL_C,\x0D04606203096428+jaVbx.,1558885224779.b4633aee06956663b05e8322ce34b0a3.
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2675)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2352)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2314)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2200)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2125)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:512)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:482)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at java.lang.Thread.run(Thread.java:748)
>>> Jun 18 21:38:05 prod022 hbase[148581]: Caused by: java.io.IOException: Added a key not lexically larger than previous. Current cell = \x0D100395583733fW+,WQ/d:p/1560882798036/DeleteColumn/vlen=0/seqid=30023231, lastCell = \x0D100395583733fW+,WQ/d:p/1560882798036/Put/vlen=29/seqid=30023591
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:204)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:279)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1053)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:139)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:969)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2484)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2622)
>>> Jun 18 21:38:05 prod022 hbase[148581]:         ... 9 more
>>> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,373 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.phoenix.coprocessor.ScanRegionObserver...
>>> 
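
The two cells in the exception share the same row, column (d:p) and timestamp (1560882798036); they differ only in type (Put vs. DeleteColumn) and sequence id. A minimal sketch against the HBase 1.4 client API, with an illustrative plain-text row key and value standing in for the binary row key from the log, showing why the HFile writer rejects that ordering:

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class FlushOrderSketch {
    public static void main(String[] args) {
        // Illustrative stand-ins; the real row key starts with the binary prefix \x0D.
        byte[] row = Bytes.toBytes("100395583733");
        byte[] family = Bytes.toBytes("d");
        byte[] qualifier = Bytes.toBytes("p");
        long ts = 1560882798036L;

        // lastCell from the log: a Put already appended to the store file.
        KeyValue put = new KeyValue(row, family, qualifier, ts,
                KeyValue.Type.Put, Bytes.toBytes("v"));
        // Current cell from the log: a DeleteColumn for the same row, column and timestamp.
        KeyValue deleteColumn = new KeyValue(row, family, qualifier, ts,
                KeyValue.Type.DeleteColumn);

        // With row, family, qualifier and timestamp equal, the comparator orders
        // delete markers before Puts, so this prints a negative number: the
        // DeleteColumn sorts before the Put, and appending it after the Put is
        // exactly the out-of-order append that checkKey() rejects during the flush.
        System.out.println(KeyValue.COMPARATOR.compare(deleteColumn, put));
    }
}

Whether the Phoenix secondary-index write path is what left the memstore in that state is the open question above.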

