phoenix-user mailing list archives

From Maryann Xue <maryann....@gmail.com>
Subject Re: Getting InsufficientMemoryException
Date Fri, 10 Oct 2014 02:07:43 GMT
You are welcome, Vijay!


On Thu, Oct 9, 2014 at 12:48 AM, G.S.Vijay Raajaa <gsvijayraajaa@gmail.com>
wrote:

> Modifying the phoenix.coprocessor.maxServerCacheTimeToLiveMs parameter, which
> defaults to *30,000* ms, solved the problem.
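> A minimal sketch of that change in hbase-site.xml (this is a server-side
> setting, so it belongs on each region server; the value here is illustrative,
> not necessarily the one actually used):
>
> <property>
>       <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
>       <value>300000</value>
> </property>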
>
> *Thanks !!*
>
> On Wed, Oct 8, 2014 at 10:25 AM, G.S.Vijay Raajaa <gsvijayraajaa@gmail.com
> > wrote:
>
>> Hi Maryann,
>>
>>                  It's the same query:
>>
>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>> c.c_first_name;*
>>
>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records.
>> CUSTOMER_DEMOGRAPHICS contains 2M records, and CUSTOMER_ADDRESS contains
>> 50,000 records.*
>>
>> *Regards,*
>> *Vijay Raajaa G S*
>>
>> On Tue, Oct 7, 2014 at 9:39 PM, Maryann Xue <maryann.xue@gmail.com>
>> wrote:
>>
>>> Hi Ashish,
>>>
>>> The warning you got shows exactly why you finally got that error: one of
>>> the join table queries had taken too long, so the cache for the other join
>>> tables expired and got invalidated. Again, could you please share your
>>> query and the size of the tables used in it? Instead of changing the
>>> parameters to get around the problem, it might be much more efficient just
>>> to adjust the query itself. And if doable, the query will most likely run
>>> faster as well.
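>>> For instance, as a generic, hypothetical sketch (made-up table and column
>>> names, and assuming your Phoenix version supports derived tables in
>>> joins), projecting only the needed columns and pre-filtering the
>>> build-side table shrinks the hash cache that gets shipped to the region
>>> servers:
>>>
>>> select b.col1, s.col2
>>> from BIG_TABLE b
>>> join (select pk, col2 from SMALL_TABLE where col3 = 'x') s
>>>   on b.fk = s.pk;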
>>>
>>> Besides, you might find this document helpful:
>>> http://phoenix.apache.org/joins.html
>>>
>>>
>>> Thanks,
>>> Maryann
>>>
>>>
>>> On Tue, Oct 7, 2014 at 11:01 AM, ashish tapdiya <ashishtapdiya@gmail.com
>>> > wrote:
>>>
>>>> Maryann,
>>>>
>>>> hbase-site.xml was not on the CLASSPATH, and that was the issue. Thanks
>>>> for the help. I appreciate it.
>>>>
>>>> ~Ashish
>>>>
>>>>
>>>>
>>>> On Sat, Oct 4, 2014 at 3:40 PM, Maryann Xue <maryann.xue@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> The "phoenix.query.maxServerCacheBytes" is a client parameter while
>>>>> the other two are server parameters. But it looks like the configuration
>>>>> change did not take effect at your client side. Could you please make
sure
>>>>> that this is the only configuration that goes to the CLASSPATH of your
>>>>> phoenix client execution environment?
>>>>>
>>>>> Another thing is the exception you got was a different problem from
>>>>> Vijay's. It happened in an even earlier stage. Could you please also
share
>>>>> you query? We could probably re-write it so that it can better fit the
>>>>> hash-join scheme. (Since table stats are not used in joins yet, we
>>>>> currently have to do it manually.)
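>>>>> (As a sanity check, assuming a typical layout where the client
>>>>> hbase-site.xml lives in /etc/hbase/conf; the paths and jar name below
>>>>> are illustrative:
>>>>>
>>>>> java -cp /etc/hbase/conf:phoenix-3.1.0-client.jar:. Query
>>>>>
>>>>> The directory containing the client hbase-site.xml has to be on the
>>>>> classpath, ideally ahead of any jar that might bundle another copy.)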
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>> On Tue, Sep 30, 2014 at 1:22 PM, ashish tapdiya <
>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>
>>>>>> Here it is,
>>>>>>
>>>>>> java.sql.SQLException: Encountered exception in hash plan [1]
>>>>>> execution.
>>>>>>         at
>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>         at
>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
>>>>>>         at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
>>>>>>         at Query.main(Query.java:25)
>>>>>> Caused by:
>>>>>> org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash
>>>>>> cache (104857684 bytes) exceeds the maximum allowed size (104857600
>>>>>> bytes)
>>>>>>         at
>>>>>> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
>>>>>>         at
>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>>>>>>         at
>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>         at
>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>>         at java.lang.Thread.run(Thread.java:744)
>>>>>>
>>>>>> I am setting the HBase heap to 4 GB, and the Phoenix properties are set
>>>>>> as below:
>>>>>>
>>>>>> <property>
>>>>>>       <name>phoenix.query.maxServerCacheBytes</name>
>>>>>>       <value>2004857600</value>
>>>>>> </property>
>>>>>> <property>
>>>>>>       <name>phoenix.query.maxGlobalMemoryPercentage</name>
>>>>>>       <value>40</value>
>>>>>> </property>
>>>>>> <property>
>>>>>>       <name>phoenix.query.maxGlobalMemorySize</name>
>>>>>>       <value>1504857600</value>
>>>>>> </property>
>>>>>>
>>>>>> Thanks,
>>>>>> ~Ashish
>>>>>>
>>>>>> On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <maryann.xue@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Ashish,
>>>>>>>
>>>>>>> Could you please let us see your error message?
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <
>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hey Maryann,
>>>>>>>>
>>>>>>>> Thanks for your input. I tried both the properties, but no luck.
>>>>>>>>
>>>>>>>> ~Ashish
>>>>>>>>
>>>>>>>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <maryann.xue@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Hi Ashish,
>>>>>>>>>
>>>>>>>>> The global cache size is set to either
>>>>>>>>> "phoenix.query.maxGlobalMemorySize" or
>>>>>>>>> "phoenix.query.maxGlobalMemoryPercentage * heapSize" (sorry about the
>>>>>>>>> mistake I made earlier). The "phoenix.query.maxServerCacheBytes" is a
>>>>>>>>> client parameter and is most likely NOT the thing you should worry
>>>>>>>>> about. So you can try adjusting
>>>>>>>>> "phoenix.query.maxGlobalMemoryPercentage" and the heap size in the
>>>>>>>>> region server configurations and see how it works.
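>>>>>>>>> (A rough worked example: with the default
>>>>>>>>> phoenix.query.maxGlobalMemoryPercentage of 15 and a region server
>>>>>>>>> heap of about 2 GB, the pool comes out to roughly
>>>>>>>>> 0.15 * 2,130,051,067 ≈ 319,507,660 bytes, which matches the figure in
>>>>>>>>> the error below. The heap size is inferred here, so treat the numbers
>>>>>>>>> as illustrative.)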
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> I have tried that as well... but "phoenix.query.maxServerCacheBytes"
>>>>>>>>>> remains at the default value of 100 MB. I see that when the join
>>>>>>>>>> fails.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> ~Ashish
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <
>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Ashish,
>>>>>>>>>>>
>>>>>>>>>>> The global cache size is set to either
>>>>>>>>>>> "phoenix.query.maxServerCacheBytes" or
>>>>>>>>>>> "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever is
>>>>>>>>>>> *smaller*. You can try setting
>>>>>>>>>>> "phoenix.query.maxGlobalMemoryPercentage" instead, which is
>>>>>>>>>>> recommended, and see how it goes.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Maryann,
>>>>>>>>>>>>
>>>>>>>>>>>> I am having the same issue, where a star join is failing with
>>>>>>>>>>>> MaxServerCacheSizeExceededException. I set
>>>>>>>>>>>> phoenix.query.maxServerCacheBytes to 1 GB in both the client and
>>>>>>>>>>>> the server hbase-site.xml's. However, it does not take effect.
>>>>>>>>>>>>
>>>>>>>>>>>> Phoenix 3.1
>>>>>>>>>>>> HBase 0.94
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> ~Ashish
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <
>>>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Yes, you should make your modification on each region server,
>>>>>>>>>>>>> since this is a server-side configuration.
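>>>>>>>>>>>>> (For example, with illustrative hostnames and paths, push the
>>>>>>>>>>>>> file to every region server and then restart them so the new
>>>>>>>>>>>>> values are picked up:
>>>>>>>>>>>>>
>>>>>>>>>>>>> for h in rs1 rs2 rs3; do
>>>>>>>>>>>>>   scp hbase-site.xml $h:/etc/hbase/conf/
>>>>>>>>>>>>> done)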
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Xue,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>           Thanks for replying. I did modify the hbase-site.xml
>>>>>>>>>>>>>> by increasing the default value of
>>>>>>>>>>>>>> phoenix.query.maxGlobalMemoryPercentage. I also increased the
>>>>>>>>>>>>>> region server heap memory. The change didn't get reflected, and
>>>>>>>>>>>>>> I still get the error, which still reports a "global pool of
>>>>>>>>>>>>>> 319507660 bytes". Should I modify the hbase-site.xml on every
>>>>>>>>>>>>>> region server, or just the file present on the class path of the
>>>>>>>>>>>>>> Phoenix client?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Vijay Raajaa G S
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <
>>>>>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000*
>>>>>>>>>>>>>>> while joining the other two tables at the same time, which
>>>>>>>>>>>>>>> means the region server memory for Phoenix should be large
>>>>>>>>>>>>>>> enough to hold the 2 tables together, and you also need to
>>>>>>>>>>>>>>> expect some memory expansion for Java objects.
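>>>>>>>>>>>>>>> (You can confirm which tables end up in the server-side hash
>>>>>>>>>>>>>>> cache by prefixing the query with EXPLAIN; the exact plan
>>>>>>>>>>>>>>> output will vary with your data and version:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> EXPLAIN
>>>>>>>>>>>>>>> select c.c_first_name, ca.ca_city, cd.cd_education_status
>>>>>>>>>>>>>>> from CUSTOMER_30000 c
>>>>>>>>>>>>>>> join CUSTOMER_DEMOGRAPHICS_1 cd
>>>>>>>>>>>>>>>   on c.c_current_cdemo_sk = cd.cd_demo_sk
>>>>>>>>>>>>>>> join CUSTOMER_ADDRESS_1 ca
>>>>>>>>>>>>>>>   on c.c_current_addr_sk = ca.ca_address_sk
>>>>>>>>>>>>>>> group by ca.ca_city, cd.cd_education_status, c.c_first_name;)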
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>>>>>>>> mentioned, you were still getting the same error message with
>>>>>>>>>>>>>>> exactly the same numbers, as in "global pool of 319507660
>>>>>>>>>>>>>>> bytes"? Did you make sure that the parameters actually took
>>>>>>>>>>>>>>> effect after modification?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     I am trying to do a join of three tables using the
>>>>>>>>>>>>>>>> following query:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status
>>>>>>>>>>>>>>>> from CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on
>>>>>>>>>>>>>>>> c.c_current_cdemo_sk = cd.cd_demo_sk join CUSTOMER_ADDRESS_1
>>>>>>>>>>>>>>>> ca on c.c_current_addr_sk = ca.ca_address_sk group by
>>>>>>>>>>>>>>>> ca.ca_city, cd.cd_education_status, c.c_first_name;*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million
>>>>>>>>>>>>>>>> records.*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *I get the following error:*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>>>>>>>> java.sql.SQLException: Encountered
exception in hash plan
>>>>>>>>>>>>>>>> [0] execution.
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>>>>>>>> ... 5 more
>>>>>>>>>>>>>>>> Caused by:
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException:
>>>>>>>>>>>>>>>> Failed after attempts=14, exceptions:
>>>>>>>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested
>>>>>>>>>>>>>>>> memory of 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Trials:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I tried to increase the region server heap space and modified
>>>>>>>>>>>>>>>> phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am still not able to increase the global memory pool.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Maryann
>>>
>>
>>
>


-- 
Thanks,
Maryann
