livy-user mailing list archives

From "Rabe, Jens" <jens.r...@iwes.fraunhofer.de>
Subject RE: about LIVY-424
Date Mon, 12 Nov 2018 06:55:19 GMT
Do you run Spark in local mode or on a cluster? If on a cluster, try increasing executor memory.
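
A minimal sketch (not part of the original thread) of how one might request larger executor and driver memory when creating a Livy session through its REST API; the Livy URL and memory sizes below are assumptions to adjust for your cluster:

import json
import requests

LIVY_URL = "http://localhost:8998"  # assumed Livy server address

payload = {
    "kind": "spark",            # Scala session, matching the spark.sql snippet below
    "driverMemory": "4g",       # driver heap requested from YARN
    "executorMemory": "4g",     # larger executor memory, per the suggestion above
}

resp = requests.post(LIVY_URL + "/sessions",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
print(resp.json())              # returns the new session's id and state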

From: lk_hadoop <lk_hadoop@163.com>
Sent: Monday, November 12, 2018 7:53 AM
To: user <user@livy.incubator.apache.org>; lk_hadoop <lk_hadoop@163.com>
Subject: Re: about LIVY-424

I'm using livy-0.5.0 with spark-2.3.0. I started a session with 4 GB of memory for the driver, and I ran the following code several times:

var tmp1 = spark.sql("use tpcds_bin_partitioned_orc_2"); var tmp2 = spark.sql("select count(1) from tpcds_bin_partitioned_orc_2.store_sales").show

The table has 5,760,749 rows. After about 10 runs, the driver's physical memory grows beyond 4.5 GB and the container is killed by YARN. I can see the old-generation memory keep growing, and GC cannot release it.
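
For reference, a minimal sketch (assumptions: Livy at localhost:8998 and an already-running interactive session with id 0) of re-submitting the same Scala statement repeatedly through Livy's statements REST endpoint, roughly the way the runs described above could be driven:

import json
import time
import requests

LIVY_URL = "http://localhost:8998"    # assumed Livy server address
SESSION_ID = 0                        # assumed id of the running session

code = ('var tmp1 = spark.sql("use tpcds_bin_partitioned_orc_2"); '
        'var tmp2 = spark.sql("select count(1) '
        'from tpcds_bin_partitioned_orc_2.store_sales").show')

for i in range(10):                   # roughly the ~10 runs mentioned above
    resp = requests.post(LIVY_URL + "/sessions/%d/statements" % SESSION_ID,
                         data=json.dumps({"code": code}),
                         headers={"Content-Type": "application/json"})
    statement = resp.json()
    # poll until the statement finishes before submitting the next one
    while statement["state"] not in ("available", "error", "cancelled"):
        time.sleep(1)
        statement = requests.get(LIVY_URL + "/sessions/%d/statements/%d"
                                 % (SESSION_ID, statement["id"])).json()
    print(i, statement["state"])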

2018-11-12
________________________________
lk_hadoop
________________________________
发件人:"lk_hadoop"<lk_hadoop@163.com<mailto:lk_hadoop@163.com>>
发送时间:2018-11-12 09:37
主题:about LIVY-424
收件人:"user"<user@livy.incubator.apache.org<mailto:user@livy.incubator.apache.org>>
抄送:

hi, all:
        I am hitting this issue: https://issues.apache.org/jira/browse/LIVY-424 . Does anybody know how to resolve it?
2018-11-12
________________________________
lk_hadoop