phoenix-user mailing list archives

From ychernyatin@gmail.com
Subject Problem with query with: limit, offset and order by
Date Fri, 25 May 2018 13:36:36 GMT
Hi everyone.

   I ran into a problem executing a query. We have a table with 150 rows; if you run a query that combines a huge offset, an order by, and a limit, Phoenix crashes with the following error:


Example query:

SELECT * FROM table ORDER BY col1, col2 LIMIT 1000 OFFSET 15677558;

Caused by:

Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException:
TABLE,,1524572088462.899bce582428250714db99a6b679e435.: Requested memory of 4201719544 bytes
is larger than global pool of 272655974 bytes.
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
        at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:264)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 4201719544
bytes is larger than global pool of 272655974 bytes.
        at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:66)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:89)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:95)
        at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getTopNScanner(NonAggregateRegionScannerFactory.java:315)
        at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:163)
        at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:72)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
        ... 8 more (state=08000,code=101)
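A back-of-the-envelope check (my own interpretation, not taken from the Phoenix source) suggests the requested allocation is roughly proportional to OFFSET + LIMIT, i.e. the TopN scanner appears to reserve memory for every row up to and including the offset, not just the 1000 rows returned:

```python
# Numbers taken from the query and the error message above.
limit, offset = 1000, 15677558
requested = 4201719544  # bytes Phoenix tried to allocate

# If the server buffers offset + limit rows, the implied per-row cost is:
per_row = requested / (limit + offset)
print(round(per_row))  # roughly 268 bytes per buffered row (assumed interpretation)
```

That the division comes out to a plausible per-row size is consistent with the server trying to materialize all ~15.7M skipped rows for the sort, which would explain why the 272 MB global pool is nowhere near enough.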


But if you remove either the "order by" or the "limit", the problem goes away.
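One possible workaround (an untested sketch; it assumes the table has a usable, unique sort key) is keyset pagination with a row value constructor instead of a huge OFFSET, so the server never has to buffer the skipped rows:

```sql
-- Hypothetical sketch: page by remembering the last seen key instead of OFFSET.
-- (col1, col2) is assumed to be a unique sort key; :last_col1 / :last_col2 are
-- the values from the final row of the previous page.
SELECT * FROM table
WHERE (col1, col2) > (:last_col1, :last_col2)
ORDER BY col1, col2
LIMIT 1000;
```

With this shape the scan can seek directly to the continuation point, so memory use stays bounded by the page size rather than by the offset.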


Versions:
Phoenix:  4.13.1
Hbase: 1.3
Hadoop : 2.6.4
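If the large offset cannot be avoided, raising Phoenix's global memory pool on the region servers might at least push the limit out (hedged: I have not verified this resolves this exact case). The pool is sized by `phoenix.query.maxGlobalMemoryPercentage` in the server-side hbase-site.xml, as a percentage of the region server heap (default 15):

```xml
<!-- Hypothetical tuning example; restart region servers after changing. -->
<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>30</value>
</property>
```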
