phoenix-user mailing list archives

From Sergey Soldatov <sergeysolda...@gmail.com>
Subject Re: write dataframe to phoenix
Date Mon, 27 Mar 2017 20:05:13 GMT
Hi Sateesh,
You only need the -client.jar. I noticed a problem in your sample: zkUrl is the
ZooKeeper URL, not a JDBC connection string, so remove the 'jdbc:phoenix:' prefix.
Also check that you provide the correct zk parent in the connection string
(if it differs from /hbase).
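To illustrate the zkUrl point, here is a minimal, hypothetical Java helper (the class and method names are mine, not part of the Phoenix connector) that derives a bare ZooKeeper quorum from a full JDBC connection string:

```java
// Hypothetical helper: the phoenix-spark connector's "zkUrl" option expects
// a bare ZooKeeper quorum such as host:port[:/parent], NOT a JDBC string.
public class ZkUrlFix {
    static String toZkUrl(String url) {
        // Strip the JDBC prefix if a full connection string was passed.
        String prefix = "jdbc:phoenix:";
        return url.startsWith(prefix) ? url.substring(prefix.length()) : url;
    }

    public static void main(String[] args) {
        // The value from the original code sample in this thread:
        System.out.println(toZkUrl("jdbc:phoenix:localhost:2181"));  // localhost:2181
        // A quorum with a non-default zk parent is passed through unchanged:
        System.out.println(toZkUrl("localhost:2181:/hbase-unsecure"));
    }
}
```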

I would also recommend adding it to spark.executor.extraClassPath
and spark.driver.extraClassPath in spark-defaults.conf.
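For reference, those entries would look something like this in spark-defaults.conf (the jar path below is an assumption; adjust it to wherever the client jar lives on your nodes):

```
spark.executor.extraClassPath  /opt/phoenix/phoenix-4.8.0-HBase-1.1-client.jar
spark.driver.extraClassPath    /opt/phoenix/phoenix-4.8.0-HBase-1.1-client.jar
```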

Thanks,
Sergey

On Mon, Mar 27, 2017 at 12:18 PM, Sateesh Karuturi <
sateesh.karuturi9@gmail.com> wrote:

> Hello Modi,
>
> Thanks for the response.
>
> I am running the code via the spark-submit command, and I have included the
> following jars in the Spark classpath, but I am still getting the exception.
>
> phoenix-4.8.0-HBase-1.1-client.jar
> phoenix-spark-4.8.0-HBase-1.1.jar
> phoenix-core-4.8.0-HBase-1.1.jar
>
> On Mon, Mar 27, 2017 at 10:30 PM, Dhaval Modi <dhavalmodi24@gmail.com>
> wrote:
>
>> Hi Sateesh,
>>
>> If you are running from spark shell, then please include Phoenix spark
>> jar in classpath.
>>
>> Kindly refer to the URL that Sandeep provided.
>>
>>
>> Regards,
>> Dhaval
>>
>>
>> On Mar 27, 2017 21:20, "Sateesh Karuturi" <sateesh.karuturi9@gmail.com>
>> wrote:
>>
>> Thanks Sandeep for your response.
>>
>> This is the exception I am getting:
>>
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage
>> 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 411, ip-xxxxx-xx-xxx.ap-southeast-1.compute.internal):
>> java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
>>         at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
>>         at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1030)
>>         at org.apache.spark.rdd.PairRDDFunctions$anonfun$saveAsNewAPIHadoopDataset$1$anonfun$12.apply(PairRDDFunctions.scala:1014)
>>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>>         at org.apache.spark.scheduler.Task.run(Task.scala:88)
>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>
>>
>> On Mon, Mar 27, 2017 at 8:17 PM, Sandeep Nemuri <nhsandeep6@gmail.com>
>> wrote:
>>
>>> What is the error you are seeing ?
>>>
>>> Ref: https://phoenix.apache.org/phoenix_spark.html
>>>
>>> df.write \
>>>   .format("org.apache.phoenix.spark") \
>>>   .mode("overwrite") \
>>>   .option("table", "TABLE1") \
>>>   .option("zkUrl", "localhost:2181") \
>>>   .save()
>>>
>>>
>>>
>>> On Mon, Mar 27, 2017 at 10:19 AM, Sateesh Karuturi <
>>> sateesh.karuturi9@gmail.com> wrote:
>>>
>>>> Can anyone please help me with how to write a DataFrame to Phoenix in Java?
>>>>
>>>> Here is my code:
>>>>
>>>> pos_offer_new_join.write().format("org.apache.phoenix.spark")
>>>>         .mode(SaveMode.Overwrite)
>>>>         .options(ImmutableMap.of("driver", "org.apache.phoenix.jdbc.PhoenixDriver",
>>>>                 "zkUrl", "jdbc:phoenix:localhost:2181",
>>>>                 "table", "RESULT"))
>>>>         .save();
>>>>
>>>> But I am not able to write data to Phoenix.
>>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *  Regards*
>>> *  Sandeep Nemuri*
>>>
>>
>>
>>
>
