phoenix-user mailing list archives

From Ricardo Crespo <ricardo.e.cre...@gmail.com>
Subject Re: [Error] Spark - Save to Phoenix
Date Mon, 11 Apr 2016 09:33:15 GMT
Hi Divya,


I am assuming you are using YARN. You have to put the Phoenix jar on your
Spark classpath; you can do that with this parameter:

--jars path_to_phoenix_jar
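For example, a minimal sketch (the jar path is the HDP symlink taken from your
own command line and may differ on your cluster):

spark-shell --master yarn-client \
  --jars /usr/hdp/current/phoenix-client/phoenix-client.jar

--jars also ships the jar to the executors, which matters here because the
"No suitable driver found" error in your log is raised on the executor side.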

You can check other options here:
http://spark.apache.org/docs/latest/running-on-yarn.html


Another option is to include Phoenix in your jar with the Maven assembly plugin:

http://maven.apache.org/plugins/maven-assembly-plugin/
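For reference, a minimal sketch of that plugin in a pom.xml (jar-with-dependencies
is a stock descriptorRef; adjust the phase and bindings to your build):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- bundles all compile-scope dependencies, Phoenix included, into one fat jar -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Running mvn package then produces a single jar with Phoenix bundled, so nothing
extra needs to go on the Spark classpath.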

Best regards,

Ricardo



2016-04-11 11:17 GMT+02:00 Divya Gehlot <divya.htconex@gmail.com>:

> Hi,
> I am getting the below error when I try to save data to Phoenix.
> Below are the cluster configuration and the steps I followed:
> *Cluster Configuration:*
> Hortonworks distribution 2.3.4 version
> Spark 1.5.2
> Phoenix 4.4
>
>
> *Table created in Phoenix*
> CREATE TABLE TEST
> (
>    RPT_DATE varchar(100) PRIMARY KEY
> )
> ;
>
> *Spark Scala Script*
> val dfLCR = readTable(sqlContext, "", "TEST")
> val schemaL = dfLCR.schema
> val lcrReportPath = "/TestDivya/Spark/Results/TestData/"
> val dfReadReport = sqlContext.read
>   .format("com.databricks.spark.csv")
>   .option("header", "true")
>   .schema(schemaL)
>   .load(lcrReportPath)
> dfReadReport.show()
> val dfWidCol = dfReadReport.withColumn("RPT_DATE", lit("2015-01-01"))
> val dfSelect = dfWidCol.select("RPT_DATE")
>
> dfSelect.write.format("org.apache.phoenix.spark")
>   .mode(SaveMode.Overwrite)
>   .options(collection.immutable.Map(
>     "zkUrl" -> "localhost",
>     "table" -> "TEST"))
>   .save()
>
> *Command line to run the Spark Scala job*
> spark-shell --properties-file /TestDivya/Spark/Phoenix.properties \
>   --jars /usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/phoenix-client.jar,/usr/hdp/2.3.4.0-3485/phoenix/phoenix-server.jar \
>   --driver-class-path /usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/hbase/lib/phoenix-server.jar,/usr/hdp/2.3.4.0-3485/hbase/lib/phoenix-client-4.4.0.jar \
>   --conf "spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client.jar" \
>   --packages com.databricks:spark-csv_2.10:1.4.0 \
>   --master yarn-client \
>   -i /TestDivya/Spark/WriteToPheonix.scala
>
>
> *Error :*
>
> 16/04/11 09:03:43 INFO TaskSetManager: Lost task 2.3 in stage 3.0 (TID
> 409) on executor ip-xxx-xx-xx-xxx.ap-southeast-1.compute.internal:
> java.lang.RuntimeException (java.sql.SQLException: No suitable driver found
> for jdbc:phoenix:localhost:2181:/hbase-unsecure;) [duplicate 7]
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
> in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage
> 3.0 (TID 408, ip-172-31-22-135.ap-southeast-1.compute.internal):
> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found
> for jdbc:phoenix:localhost:2181:/hbase-unsecure;
>
> Could somebody help me resolve this error?
> What am I missing?
>
> Thanks,
> Divya
>
>
