phoenix-user mailing list archives

From: Mujtaba Chohan <mujt...@apache.org>
Subject: Re: Spark & UpgradeInProgressException: Cluster is being concurrently upgraded from 4.11.x to 4.12.x
Date: Fri, 10 Nov 2017 19:36:44 GMT
Probably being hit by https://issues.apache.org/jira/browse/PHOENIX-4335.
Please upgrade to 4.13.0, which will be available by EOD today.
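
In the meantime, the exception text itself says "Please retry establishing
connection", so one stopgap is to retry the connect with a short backoff.
A rough sketch - connectWithRetry is a hypothetical helper, and the JDBC
URL, attempt count, and backoff are placeholders, not recommendations:

    import java.sql.{Connection, DriverManager, SQLException}

    // Hypothetical stopgap: retry on SQLException (UpgradeInProgressException
    // is a SQLException) with a fixed backoff between attempts.
    def connectWithRetry(url: String, attemptsLeft: Int = 5): Connection =
      try DriverManager.getConnection(url)
      catch {
        case _: SQLException if attemptsLeft > 1 =>
          Thread.sleep(2000L) // back off before the next attempt
          connectWithRetry(url, attemptsLeft - 1)
      }

    // e.g. val conn = connectWithRetry("jdbc:phoenix:zk-host:2181")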

On Fri, Nov 10, 2017 at 8:37 AM, Stepan Migunov <Stepan.Migunov@firstlinesoftware.com> wrote:

> Hi,
>
> I have just upgraded my cluster to Phoenix 4.12 and got an issue with
> tasks running on Spark 2.2 (YARN cluster mode). Any attempt to use the
> method phoenixTableAsDataFrame to load data from an existing database
> causes an exception (see below).
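>
> For context, the load is made with the standard phoenix-spark call,
> roughly like this (table name, columns, and ZooKeeper quorum below are
> placeholders):
>
>     import org.apache.spark.sql.SparkSession
>     import org.apache.phoenix.spark._ // adds phoenixTableAsDataFrame to SQLContext
>
>     val spark = SparkSession.builder().appName("phoenix-load").getOrCreate()
>     val df = spark.sqlContext.phoenixTableAsDataFrame(
>       "MY_TABLE", Seq("ID", "COL1"), zkUrl = Some("zk-host:2181"))
>     df.show()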
>
> The tasks worked fine on version 4.11. I have checked the connection with
> sqlline - it works and shows that the version is now 4.12. Moreover, I
> have noticed that if I limit the number of executors to one, the Spark
> task executes successfully too!
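>
> (Limiting executors is just the usual Spark setting; a sketch, assuming
> YARN as above - "spark.executor.instances" is the property behind
> --num-executors:)
>
>     // One executor means only one Phoenix connection attempt at a time.
>     val spark = org.apache.spark.sql.SparkSession.builder()
>       .appName("phoenix-load")
>       .config("spark.executor.instances", "1")
>       .getOrCreate()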
>
> It looks like executors running in parallel "interfere" with each other
> and cannot acquire the version mutex.
>
> Any suggestions please?
>
> // The failing call inside PhoenixInputFormat.getQueryPlan:
> final Connection connection =
>     ConnectionUtil.getInputConnection(configuration, overridingProps);
>
> User class threw exception: org.apache.spark.SparkException: Job aborted
> due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent
> failure: Lost task 0.3 in stage 0.0 (TID 36, n7701-hdp005, executor 26):
> java.lang.RuntimeException:
> org.apache.phoenix.exception.UpgradeInProgressException: Cluster is being
> concurrently upgraded from 4.11.x to 4.12.x. Please retry establishing
> connection.
>     at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:201)
>     at org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:76)
>     at org.apache.spark.rdd.NewHadoopRDD$$anon$1.liftedTree1$1(NewHadoopRDD.scala:180)
>     at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:179)
>     at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:134)
>     at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
>     at org.apache.phoenix.spark.PhoenixRDD.compute(PhoenixRDD.scala:64)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>     at org.apache.spark.scheduler.Task.run(Task.scala:108)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.exception.UpgradeInProgressException: Cluster
> is being concurrently upgraded from 4.11.x to 4.12.x. Please retry
> establishing connection.
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:3173)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2567)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2440)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
>     at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
>     at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>     at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>     at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>     at java.sql.DriverManager.getConnection(DriverManager.java:664)
>     at java.sql.DriverManager.getConnection(DriverManager.java:208)
>     at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:98)
>     at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:57)
>     at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:176)
> ... 30 more
>
