phoenix-user mailing list archives

From: Maryann Xue <maryann....@gmail.com>
Subject: Re: Getting InsufficientMemoryException
Date: Fri, 26 Sep 2014 19:56:31 GMT
Yes, you should make your modification on each region server, since this is
a server-side configuration.
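
For reference, a minimal sketch of what that looks like in each region
server's hbase-site.xml (the value below is illustrative; pick one that
fits your heap):

    <property>
      <!-- Illustrative value: percentage of the region server heap that
           Phoenix may use for its global memory pool (the default is 15). -->
      <name>phoenix.query.maxGlobalMemoryPercentage</name>
      <value>25</value>
    </property>

The region servers then need to be restarted for the new value to be
picked up.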


On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <gsvijayraajaa@gmail.com>
wrote:

> Hi Xue,
>
>           Thanks for replying. I did modify hbase-site.xml by
> increasing the default value of phoenix.query.maxGlobalMemoryPercentage,
> and I also increased the region server heap memory. The change didn't
> take effect, and I still get the error indicating a "global pool of
> 319507660 bytes". Should I modify hbase-site.xml on every region
> server, or just the file present in the class path of the Phoenix
> client?
>
> Regards,
> Vijay Raajaa G S
>
> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <maryann.xue@gmail.com>
> wrote:
>
>> Hi Vijay,
>>
>> I think the query plan here is scanning table CUSTOMER_30000 while
>> joining the other two tables at the same time, which means the region
>> server memory for Phoenix should be large enough to hold those two
>> tables together, and you should also expect some memory expansion from
>> Java object overhead.
>>
>> Do you mean that after you modified the parameters you mentioned, you
>> were still getting the same error message with exactly the same
>> numbers, i.e., "global pool of 319507660 bytes"? Did you make sure the
>> parameters actually took effect after the modification?
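>>
>> As a rough sanity check, assuming the pool is sized as heap size *
>> phoenix.query.maxGlobalMemoryPercentage / 100: a pool of 319507660
>> bytes at the default 15% corresponds to a heap of about 2 GB, so
>> fitting the requested 446623727 bytes would take either a heap of
>> roughly 3 GB at 15%, or a percentage of at least about 21% on the
>> current heap.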
>>
>>
>> Thanks,
>> Maryann
>>
>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>> gsvijayraajaa@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>     I am trying to do a join of three tables using the following query:
>>>
>>> select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>> c.c_first_name;
>>>
>>> The CUSTOMER_30000 table is 4.1 GB with 30 million records.
>>>
>>> I get the following error:
>>>
>>> ./psql.py 10.10.5.55 test.sql
>>> java.sql.SQLException: Encountered exception in hash plan [0] execution.
>>> at
>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>> at
>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>> at
>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>> at
>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>> at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>> Caused by: java.sql.SQLException:
>>> java.util.concurrent.ExecutionException:
>>> java.lang.reflect.UndeclaredThrowableException
>>> at
>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>> at
>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>> at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException:
>>> java.lang.reflect.UndeclaredThrowableException
>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>> at
>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>> ... 8 more
>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>> at $Proxy10.addServerCache(Unknown Source)
>>> at
>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>> at
>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>> ... 5 more
>>> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
>>> Failed after attempts=14, exceptions:
>>> Tue Sep 23 00:25:53 CDT 2014,
>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>> java.io.IOException: java.io.IOException:
>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>> [... 13 more retries of the same InsufficientMemoryException
>>> ("Requested memory of 446623727 bytes is larger than global pool of
>>> 319507660 bytes"), Tue Sep 23 00:26:02 through 00:33:35 CDT 2014 ...]
>>>
>>> at
>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>> at
>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>> ... 8 more
>>>
>>> Trials:
>>>
>>> I tried increasing the region server heap space, and I also modified
>>> phoenix.query.maxGlobalMemoryPercentage.
>>>
>>> I am still not able to increase the global memory pool.
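>>>
>>> (The heap change was made in hbase-env.sh on the region servers, along
>>> the lines of the following, with an illustrative value:
>>>
>>> # HBase daemon heap size in MB; the default is 1000
>>> export HBASE_HEAPSIZE=4000
>>>
>>> and phoenix.query.maxGlobalMemoryPercentage was raised in
>>> hbase-site.xml.)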
>>>
>>> Regards,
>>> Vijay Raajaa
>>>
>>
>>
>>
>> --
>> Thanks,
>> Maryann
>>
>
>


-- 
Thanks,
Maryann
