phoenix-user mailing list archives

From: Siddharth Ubale <siddharth.ub...@syncoms.com>
Subject: RE: Hi
Date: Wed, 19 Nov 2014 07:07:21 GMT
Hi Maryann,

Thanks for replying.
I tried the suggested fix of adding the "/*+ NO_STAR_JOIN */" hint right after SELECT in
the query, but I am still getting the same error. This time, however, sqlline printed a full
stack trace, which I am sharing below:

Wed Nov 19 12:10:12 IST 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@46b1f633, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory
of 256498316 bytes is larger than global pool of 152046796 bytes.
                at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
                at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
                at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5689)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378)
                at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29929)
                at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
                at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
                at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
                at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
                at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 256498316
bytes is larger than global pool of 152046796 bytes.
                at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:72)
                at org.apache.phoenix.memory.GlobalMemoryManager.access$300(GlobalMemoryManager.java:32)
                at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:142)
                at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:112)
                at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:81)
                at org.apache.phoenix.join.HashCacheFactory.newCache(HashCacheFactory.java:74)
                at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:87)
                at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:75)
                ... 10 more

Wed Nov 19 12:10:32 IST 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@46b1f633, java.io.IOException:
[second retry attempt; stack trace identical to the first attempt above]


                at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:129)
                at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
                at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:96)
                at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:51)
                at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
                at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
                at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
                at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1542)
                ... 4 more
Caused by: java.io.IOException: java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 256498316 bytes is larger than global pool of 152046796 bytes.
                at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
                at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
                at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5689)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378)
                at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29929)
                at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
                at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
                at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
                at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
                at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 256498316
bytes is larger than global pool of 152046796 bytes.
                at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:72)
                at org.apache.phoenix.memory.GlobalMemoryManager.access$300(GlobalMemoryManager.java:32)
                at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:142)
                at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:112)
                at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:81)
                at org.apache.phoenix.join.HashCacheFactory.newCache(HashCacheFactory.java:74)
                at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:87)
                at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:75)
                ... 10 more

                at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source)
                at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
                at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
                at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
                at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:285)
                at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1567)
                at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93)
                at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
                at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
                ... 11 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException:
org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 256498316 bytes
is larger than global pool of 152046796 bytes.
                at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
                at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
                at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5689)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396)
                at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378)
                at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29929)
                at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
                at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
                at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
                at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
                at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 256498316
bytes is larger than global pool of 152046796 bytes.
                at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:72)
                at org.apache.phoenix.memory.GlobalMemoryManager.access$300(GlobalMemoryManager.java:32)
                at org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:142)
                at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:112)
                at org.apache.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:81)
                at org.apache.phoenix.join.HashCacheFactory.newCache(HashCacheFactory.java:74)
                at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:87)
                at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:75)
                ... 10 more

                at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
                at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1656)
                at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1714)
                at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:29924)
                at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1563)
                ... 14 more


It reports "org.apache.phoenix.memory.InsufficientMemoryException" every time. Following the
Phoenix joins reference page, which recommends tuning Phoenix for join queries, I have already
changed the default values of the following properties (a sanity check on the resulting
numbers follows the list):

1. <property>
     <name>phoenix.query.maxServerCacheBytes</name>
     <!-- plain integer; comma separators are not valid in Hadoop config values -->
     <value>409715200</value>
   </property>

2. <property>
     <name>phoenix.query.maxGlobalMemoryPercentage</name>
     <value>39</value>
   </property>

3. <property>
     <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
     <value>60000</value>
   </property>
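
One thing I notice (assuming the default phoenix.query.maxGlobalMemoryPercentage of 15 for
this version): the trace still reports a global pool of 152,046,796 bytes, which works out
to the default percentage of a roughly 1 GB region server heap, not the 39% I configured:

    152046796 / 0.15  ≈ 1013645306 bytes   (implied region server heap, ~0.94 GB)
    1013645306 * 0.39 ≈  395321669 bytes   (expected pool at 39%, ~377 MB)

Since the requested 256,498,316 bytes would fit in a 39% pool, it looks like either the new
values are not being picked up (they are server-side settings, so they belong in
hbase-site.xml on every region server, followed by a restart), or the region server heap is
simply too small for the cache this join needs.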



Please let me know how I can resolve this issue.

Thanks,

Siddharth Ubale

From: Maryann Xue [mailto:maryann.xue@gmail.com]
Sent: Tuesday, November 18, 2014 10:12 PM
To: user@phoenix.apache.org
Subject: FW: Hi

Hi Siddharth,

It's not clear which inner exception you are getting; it would be helpful if you could post
the entire stack trace. One possible cause is an insufficient-memory error, because the query
is by default executed as a star join, which means the latter three tables are all hash-cached
on the server side at the same time. That is probably not the optimal choice in your case:
after the WHERE clause is applied to the first table, the result might be quite small. So you
can try disabling the star join by inserting the hint "/*+ NO_STAR_JOIN */" right after
SELECT, as in the sketch below.
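
For example, with the query you posted below, the hinted version would look something like
this (table and column names taken from your earlier message):

select /*+ NO_STAR_JOIN */ * from "Customers_3" as c
inner join "Address_3" as a on a."A_C_Id" = c."C_Id"
inner join "Orders_3" as o on o."O_C_Id" = c."C_Id"
inner join "Order_Details_3" as od on od."O_No" = o."O_No"
where c."C_Name" = 'Amit500';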

Thanks,
Maryann

On Tuesday, November 18, 2014, Siddharth Ubale <siddharth.ubale@syncoms.com> wrote:
Hi,

The query for which I get this :

select * from "Customers_3" as c
inner join "Address_3" as a on a."A_C_Id" = c."C_Id"
inner join "Orders_3" as o on o."O_C_Id" = c."C_Id"
inner join "Order_Details_3" as od on od."O_No" = o."O_No"
where c."C_Name" = 'Amit500';

Thanks,
Siddharth

From: Siddharth Ubale
Sent: Tuesday, November 18, 2014 5:28 PM
To: user@phoenix.apache.org
Subject: Hi

HI All,

Can anyone explain why we get this error while running joins on many tables in Phoenix :

Error: Encountered exception in sub plan [2] execution. (state=,code=0)

Thanks,
Siddharth Ubale,
Synchronized Communications
#43, Velankani Tech Park, Block No. II,
3rd Floor, Electronic City Phase I,
Bangalore – 560 100
Tel : +91 80 3202 4060
Web: www.syncoms.com
London|Bangalore|Orlando

we innovate, plan, execute, and transform the business



--
Thanks,
Maryann