phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Out of memory problem
Date Mon, 24 Feb 2014 21:12:29 GMT
Hi Maarten,
Which version of Phoenix are you using, and how many distinct values are
in your dst column versus your src column? In Phoenix 3.0 we spill to
disk when aggregating over a GROUP BY with many distinct values (you'll
take a perf hit for this, of course), while in Phoenix 2.0 we don't do
this. To work around it in Phoenix 2.0, you can decrease the degree of
parallelization used for a query, which brings down the amount of memory
required. See this[1] blog for more on that, and adjust down the
phoenix.query.targetConcurrency and phoenix.query.maxConcurrency parameters.
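For example, a minimal sketch of what that might look like in a client-side hbase-site.xml on the Phoenix client's classpath (the specific values here are illustrative, not recommendations; tune them to your cluster):

```xml
<!-- Hypothetical client-side hbase-site.xml fragment: lowering Phoenix's
     query parallelization so each query holds less aggregation state
     in the region server's global memory pool at once. -->
<configuration>
  <property>
    <name>phoenix.query.targetConcurrency</name>
    <value>8</value>  <!-- target number of parallel scans per query -->
  </property>
  <property>
    <name>phoenix.query.maxConcurrency</name>
    <value>12</value> <!-- hard cap; keep >= targetConcurrency -->
  </property>
</configuration>
```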

Thanks,
James

[1] http://phoenix-hbase.blogspot.com/2013/02/phoenix-knobs-dials.html


On Mon, Feb 24, 2014 at 4:27 AM, Maarten Wullink <maarten.wullink@sidn.nl>wrote:

>
> Some more info: the problem only occurs when the query engine filters
> on the row key (SERVER FILTER BY FIRST KEY ONLY).
>
>
> 0: jdbc:phoenix:localhost> explain select src, count(*) as qt from
> "queries" group by src order by qt desc limit 10;
> +------------+
> |    PLAN    |
> +------------+
> | CLIENT PARALLEL 30-WAY FULL SCAN OVER queries |
> |     SERVER FILTER BY FIRST KEY ONLY |
> |     SERVER AGGREGATE INTO DISTINCT ROWS BY [SRC] |
> | CLIENT MERGE SORT |
> | CLIENT TOP 10 ROWS SORTED BY [COUNT(1) DESC] |
> +------------+
>
>
> This query runs into trouble after about 10 seconds.
> When using another query:
>
> 0: jdbc:phoenix:localhost> explain select dst, count(*) as qt from
> "queries" group by dst order by qt desc limit 10;
> +------------+
> |    PLAN    |
> +------------+
> | CLIENT PARALLEL 30-WAY FULL SCAN OVER queries |
> |     SERVER AGGREGATE INTO DISTINCT ROWS BY [DST] |
> | CLIENT MERGE SORT |
> | CLIENT TOP 10 ROWS SORTED BY [COUNT(1) DESC] |
> +------------+
>
>
> no such error occurs.
>
>
> Maarten Wullink wrote on 24-02-14 11:05:
> > Hi,
> >
> > I am testing Phoenix in a Cloudera VirtualBox environment with 8 GB
> > of total RAM; the region server has been assigned 4 GB. I have a
> > table with 1,146,117 rows. When I try to do aggregations I get the
> > following exception. It seems to me that there is sufficient free
> > memory (41859900 bytes) available, so why am I getting this exception?
> >
> > Cheers,
> >
> > Maarten
> >
> > java.lang.RuntimeException:
> > com.salesforce.phoenix.exception.PhoenixIOException:
> > com.salesforce.phoenix.exception.PhoenixIOException:
> > org.apache.hadoop.hbase.DoNotRetryIOException:
> > queries,\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1392995316456.7476821ab4c43f23c3b4fd1224c454c9.:
> > Requested memory of 2025000 bytes could not be allocated from
> > remaining memory of 41859900 bytes from global pool of 42500096
> > bytes after waiting for 10000ms.
> >
> >
>
> --
> Maarten Wullink | Research Engineer
> SIDN | Meander 501 | 6825 MD | Postbus 5022 | 6802 EA | ARNHEM
> T +31 (0)26 352 55 45 | M +31 (0)6 21 26 87 55 | F +31 (0)26 352 55 05
> maarten.wullink@sidn.nl | www.sidn.nl
>
