phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Spark & UpgradeInProgressException: Cluster is being concurrently upgraded from 4.11.x to 4.12.x
Date Sat, 11 Nov 2017 18:51:08 GMT
Hi Stepan,
We discussed whether or not we should continue with Phoenix releases for
HBase 1.1, but no one showed any interest in being the release manager
[1], so we concluded that we would stop doing them. It's important to
remember that the ASF is a volunteer effort and anyone can step up and take
on this responsibility. That's essentially how contributors build merit to
become committers and eventually PMC members and the project continues to
grow. If you're interested, I suggest you start a new DISCUSS thread on the
dev list and volunteer. Here's what would need to be done:
- cherry-pick changes from master between 4.12.0 and 4.13.0 release to
4.x-HBase-1.1 branch
- create a pull request with the above and get a +1 from a committer
- monitor the Jenkins job that'll run with these changes keeping a lookout
for any test failures
- assuming there are no test failures, follow the directions here [2] to
perform a release

Thanks,
James


[1]
https://lists.apache.org/thread.html/ae13def3c024603ce3cdde871223cbdbae0219b4efe93ed4e48f55d5@%3Cdev.phoenix.apache.org%3E
[2] https://phoenix.apache.org/release.html

On Sat, Nov 11, 2017 at 1:02 AM, stepan.migunov@firstlinesoftware.com <
stepan.migunov@firstlinesoftware.com> wrote:

>
>
> On 2017-11-10 22:36, Mujtaba Chohan <mujtaba@apache.org> wrote:
> > Probably being hit by https://issues.apache.org/jira/browse/PHOENIX-4335
> .
> > Please upgrade to 4.13.0 which will be available by EOD today.
> >
> > On Fri, Nov 10, 2017 at 8:37 AM, Stepan Migunov <
> > Stepan.Migunov@firstlinesoftware.com> wrote:
> >
> > > Hi,
> > >
> > > I have just upgraded my cluster to Phoenix 4.12 and hit an issue with
> > > tasks running on Spark 2.2 (yarn cluster mode). Any attempt to use the
> > > method phoenixTableAsDataFrame to load data from an existing database
> > > causes an exception (see below).
> > >
> > > The tasks worked fine on version 4.11. I have checked the connection
> > > with sqlline - it works and shows that the version is 4.12. Moreover, I
> > > have noticed that if I limit the number of executors to one, the Spark
> > > task executes successfully too!
> > >
> > > It looks like the executors running in parallel "interfere" with each
> > > other and cannot acquire the version mutex.
> > >
> > > Any suggestions please?
> > >
> > > final Connection connection =
> > >     ConnectionUtil.getInputConnection(configuration, overridingProps);
> > >
> > > User class threw exception: org.apache.spark.SparkException: Job aborted
> > > due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent
> > > failure: Lost task 0.3 in stage 0.0 (TID 36, n7701-hdp005, executor 26):
> > > java.lang.RuntimeException:
> > > org.apache.phoenix.exception.UpgradeInProgressException: Cluster is
> > > being concurrently upgraded from 4.11.x to 4.12.x. Please retry
> > > establishing connection.
> > >     at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:201)
> > >     at org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:76)
> > >     at org.apache.spark.rdd.NewHadoopRDD$$anon$1.liftedTree1$1(NewHadoopRDD.scala:180)
> > >     at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:179)
> > >     at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:134)
> > >     at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
> > >     at org.apache.phoenix.spark.PhoenixRDD.compute(PhoenixRDD.scala:64)
> > >     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> > >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> > >     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> > >     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> > >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> > >     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> > >     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> > >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> > >     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> > >     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> > >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> > >     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> > >     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> > >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> > >     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> > >     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> > >     at org.apache.spark.scheduler.Task.run(Task.scala:108)
> > >     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
> > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > >     at java.lang.Thread.run(Thread.java:745)
> > > Caused by: org.apache.phoenix.exception.UpgradeInProgressException:
> > > Cluster is being concurrently upgraded from 4.11.x to 4.12.x. Please
> > > retry establishing connection.
> > >     at org.apache.phoenix.query.ConnectionQueryServicesImpl.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:3173)
> > >     at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2567)
> > >     at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2440)
> > >     at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
> > >     at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> > >     at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
> > >     at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> > >     at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
> > >     at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> > >     at java.sql.DriverManager.getConnection(DriverManager.java:664)
> > >     at java.sql.DriverManager.getConnection(DriverManager.java:208)
> > >     at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:98)
> > >     at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:57)
> > >     at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:176)
> > >     ... 30 more
> > >
> Thank you for the reply! Do you know when version 4.13 for HBase 1.1 is
> expected?
>
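[Editor's note] The exception message in the trace above explicitly says "Please retry establishing connection," and the thread suggests the failure is transient (executors racing for the upgrade mutex). A minimal sketch of such a retry wrapper is below; it is not code from this thread, and `TransientException` is a stand-in for `org.apache.phoenix.exception.UpgradeInProgressException` so the example stays self-contained:

```java
import java.util.concurrent.Callable;

public class RetryDemo {
    // Stand-in for org.apache.phoenix.exception.UpgradeInProgressException.
    static class TransientException extends RuntimeException {}

    // Retry an action up to maxAttempts times, sleeping between attempts
    // with a simple linear backoff; rethrow once attempts are exhausted.
    static <T> T withRetries(Callable<T> action, int maxAttempts, long backoffMillis)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return action.call();
            } catch (TransientException e) {
                if (attempt >= maxAttempts) throw e;
                Thread.sleep(backoffMillis * attempt);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a connection attempt that fails twice before succeeding,
        // as an executor might while another holds the upgrade mutex.
        final int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new TransientException();
            return "connected";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "connected after 3 attempts"
    }
}
```

In a real Spark job the wrapped action would be the `ConnectionUtil.getInputConnection(configuration, overridingProps)` call shown above; upgrading to 4.13.0 as advised earlier in the thread addresses the underlying issue (PHOENIX-4335), so the retry is only a workaround.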
