phoenix-user mailing list archives

From NaHeon Kim <honey.and...@gmail.com>
Subject Re: write Dataframe to phoenix
Date Mon, 20 Mar 2017 07:53:07 GMT
Did you check that your project has a dependency on the phoenix-spark jar? : )
See the Spark setup guide at http://phoenix.apache.org/phoenix_spark.html
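In case it helps, here's a minimal sketch of pulling in the jar with sbt. The
artifact coordinates are real, but the version below is an assumption; match it
to whatever Phoenix/HBase your cluster runs:

    // build.sbt (sketch): add the phoenix-spark integration module.
    // The version string here is hypothetical; use the one matching your install.
    libraryDependencies += "org.apache.phoenix" % "phoenix-spark" % "4.10.0-HBase-1.2"

A "No suitable driver found for jdbc:phoenix:..." error usually means the executor
JVMs can't see the Phoenix JDBC driver, so the setup page also has you add the
phoenix client jar to spark.executor.extraClassPath and spark.driver.extraClassPath
in spark-defaults.conf.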

Regards,
NaHeon

2017-03-20 15:31 GMT+09:00 Sateesh Karuturi <sateesh.karuturi9@gmail.com>:

>
> I am trying to write Dataframe to Phoenix.
>
> Here is my code:
>
>
>    df.write.format("org.apache.phoenix.spark")
>      .mode(SaveMode.Overwrite)
>      .options(collection.immutable.Map(
>        "zkUrl" -> "localhost:2181/hbase-unsecure",
>        "table" -> "TEST"))
>      .save()
>
> and I am getting the following exception:
>
>
>    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0
>    failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 411, ip-xxxxx-xx-xxx.ap-southeast-1.compute.internal):
>    java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
>            at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
>            at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1030)
>            at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
>            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>            at org.apache.spark.scheduler.Task.run(Task.scala:88)
>            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
