phoenix-user mailing list archives

From Luqman Ghani <lgsa...@gmail.com>
Subject Fwd: Apache Spark Integration
Date Mon, 17 Jul 2017 15:50:07 GMT
---------- Forwarded message ----------
From: Luqman Ghani <lgsahaf@gmail.com>
Date: Sat, Jul 15, 2017 at 2:38 PM
Subject: Apache Spark Integration
To: user@phoenix.apache.org


Hi,

I am evaluating which approach to use for integrating Phoenix with Spark:
plain JDBC or the phoenix-spark module. I have a question about the
following point listed under Limitations in the Apache Spark Integration
<https://phoenix.apache.org/phoenix_spark.html> section:
"The Data Source API does not support passing custom Phoenix settings in
configuration; you must create the DataFrame or RDD directly if you need
fine-grained configuration."

Can someone point me to or give an example on how to give such
configuration?
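
For context, here is the kind of thing I expect I would have to write — a
sketch based on my reading of the phoenix-spark API, where the table name,
column names, and the specific settings are all hypothetical:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.phoenix.spark._

// Build a Hadoop Configuration carrying the custom Phoenix/HBase settings.
// The property names and values below are illustrative assumptions.
val conf = new Configuration()
conf.set("hbase.zookeeper.quorum", "zk-host:2181")
conf.set("phoenix.query.timeoutMs", "120000")

// Create the DataFrame directly (rather than through the Data Source API)
// so the custom Configuration can be passed in. "MY_TABLE", "ID", and
// "COL1" are hypothetical names.
val df = sqlContext.phoenixTableAsDataFrame(
  "MY_TABLE", Seq("ID", "COL1"), conf = conf)
```

Is this the intended way to supply fine-grained configuration, or is there
another mechanism?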

Also, the docs
<https://phoenix.apache.org/phoenix_spark.html#Saving_DataFrames> say
there is a 'save' function for saving a DataFrame to a table, but I cannot
find one. Instead, 'saveToPhoenix' shows up in my IntelliJ IDE
suggestions. I'm using phoenix-4.11.0-HBase-1.2 and Spark 2.0.2. Is this
an error in the docs?
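
For reference, the two write paths I am comparing look roughly like this
(a sketch with a hypothetical table name and ZooKeeper URL; I have not
confirmed that both compile against 4.11.0):

```scala
import org.apache.phoenix.spark._

// Data Source API write, as I understood the docs to describe it:
df.write
  .format("org.apache.phoenix.spark")
  .mode("overwrite")
  .option("table", "OUTPUT_TABLE")   // hypothetical table name
  .option("zkUrl", "zk-host:2181")   // hypothetical ZooKeeper quorum
  .save()

// The method IntelliJ actually suggests, from the phoenix-spark implicits:
df.saveToPhoenix("OUTPUT_TABLE", zkUrl = Some("zk-host:2181"))
```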

Thanks,
Luqman
