flume-user mailing list archives

From Gonzalo Herreros <gherre...@gmail.com>
Subject Re: How to increase NUMBER of Spark Executors ?
Date Thu, 25 Feb 2016 08:36:50 GMT
Local mode will only ever run one executor; local[4] gives that single
executor 4 cores.
That affects the number of tasks the executor can run concurrently, not
the number of executors.
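To illustrate the distinction, a minimal sketch (the YARN lines below are an assumption for illustration, not part of the original job):

```properties
# local[4]: one JVM, a single executor, 4 worker threads
spark.master                local[4]

# To get multiple executors you need a cluster manager, e.g. YARN
# (hypothetical values -- adjust to your cluster):
# spark.master              yarn
# spark.executor.instances  4
```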

Please note this is the Flume distribution list, not the Spark one

Gonzalo
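If switching off local mode is an option, the submit command might look roughly like this (a sketch only: the paths and class name are from the original mail, while the YARN flags are my assumption and depend on your cluster):

```shell
# Hypothetical YARN submission; on YARN, --num-executors controls
# how many executors are requested
nohup spark-submit --master yarn --num-executors 4 \
  --properties-file /hadoop_common/airwaveApList.properties \
  --class airwaveApList \
  /hadoop_common/airwaveApList-1.0.jar &
```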


On 25 February 2016 at 02:22, Sutanu Das <sd2302@att.com> wrote:

> Community,
>
>
>
> How can I increase the NUMBER OF EXECUTORS for my Streaming job running in Local mode?
>
>
>
> We have tried *spark.master = local[4]* but it is not starting 4
> executors and our job keeps getting queued – do we need to make a code
> change to increase the number of executors?
>
>
>
> This job's jar file reads from a Kafka stream with 2 partitions and sends
> data to Cassandra.
>
>
>
> Please advise – thanks again, community.
>
>
>
> *Here is How we start the job:*
>
>
>
> nohup spark-submit --properties-file
> /hadoop_common/airwaveApList.properties --class airwaveApList
> /hadoop_common/airwaveApList-1.0.jar
>
>
>
> *Properties file for the Streaming Job:*
>
>
>
> spark.cassandra.connection.host       cass_host
>
> spark.cassandra.auth.username         cass_app
>
> spark.cassandra.auth.password         xxxxx
>
> spark.topic                           ap_list_spark_streaming
>
> spark.app.name                        ap-status
>
> spark.metadata.broker.list            server.corp.net:6667
>
> spark.zookeeper.connect               server.net:2181
>
> spark.group.id                        airwave_activation_status
>
> spark.zookeeper.connection.timeout.ms 1000
>
> spark.cassandra.sql.keyspace          enterprise
>
> *spark.master                          local[4]*
>
> spark.batch.size.seconds              120
>
> spark.driver.memory                   12G
>
> spark.executor.memory                 12G
>
> spark.akka.frameSize                  512
>
> spark.local.dir                       /prod/hadoop/spark/airwaveApList_temp
>
> spark.history.kerberos.keytab none
>
> spark.history.kerberos.principal none
>
> spark.history.provider
> org.apache.spark.deploy.yarn.history.YarnHistoryProvider
>
> spark.history.ui.port 18080
>
> spark.yarn.historyServer.address has-dal-0001.corp.wayport.net:18080
>
> spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
>
>
>
