phoenix-user mailing list archives

From: Ankit Singhal <ankitsingha...@gmail.com>
Subject: Re: Apache Spark Integration
Date: Mon, 17 Jul 2017 18:20:27 GMT
You can take a look at our IT tests for the phoenix-spark module:
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
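
On the configuration question, a minimal sketch, assuming a table TABLE1
with columns ID and COL1, a ZooKeeper quorum at phoenix-server:2181 (all
placeholder names), and that sc is your SparkContext. Creating the
DataFrame or RDD directly lets you hand phoenix-spark a Hadoop
Configuration, which the Data Source API path does not:

import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._

// Build a Configuration carrying whatever custom Phoenix/HBase
// settings you need.
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum", "phoenix-server:2181")
configuration.set("hbase.client.scanner.caching", "1000") // example setting

val sqlContext = new SQLContext(sc)

// DataFrame created directly, with the custom Configuration applied:
val df = sqlContext.phoenixTableAsDataFrame(
  "TABLE1", Array("ID", "COL1"), conf = configuration)

// The RDD variant accepts the same Configuration:
val rdd = sc.phoenixTableAsRDD(
  "TABLE1", Seq("ID", "COL1"), conf = configuration)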

On Mon, Jul 17, 2017 at 9:20 PM, Luqman Ghani <lgsahaf@gmail.com> wrote:

>
> ---------- Forwarded message ----------
> From: Luqman Ghani <lgsahaf@gmail.com>
> Date: Sat, Jul 15, 2017 at 2:38 PM
> Subject: Apache Spark Integration
> To: user@phoenix.apache.org
>
>
> Hi,
>
> I am evaluating which approach to use for integrating Phoenix with Spark:
> JDBC or phoenix-spark. I have a question about the following point, listed
> under Limitations on the Apache Spark Integration
> <https://phoenix.apache.org/phoenix_spark.html> page:
> "
>
>    - The Data Source API does not support passing custom Phoenix settings
>    in configuration, you must create the DataFrame or RDD directly if you need
>    fine-grained configuration.
>
> "
>
> Can someone point me to, or give, an example of how to pass such
> configuration?
>
> Also, the docs
> <https://phoenix.apache.org/phoenix_spark.html#Saving_DataFrames> say that
> there is a 'save' function for saving a DataFrame to a table, but there is
> none. Instead, 'saveToPhoenix' shows up in my IntelliJ IDE suggestions. I'm
> using phoenix-4.11.0-HBase-1.2 and Spark 2.0.2. Is this an error in the docs?
>
> Thanks,
> Luqman
>
>
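
On the 'save' question quoted above: DataFrame.save was a Spark 1.x method
that was removed in Spark 2.0, so the docs example likely predates your
Spark version rather than 'save' being a phoenix-spark API. A minimal
sketch of the two write paths that should work with Spark 2.0.2 and
phoenix-4.11.0 (table name and quorum are placeholders; df is the
DataFrame to persist):

import org.apache.phoenix.spark._ // adds saveToPhoenix to DataFrame
import org.apache.spark.sql.SaveMode

// The implicit function you saw in IntelliJ:
df.saveToPhoenix("OUTPUT_TABLE", zkUrl = Some("phoenix-server:2181"))

// Or the standard Spark 2.x DataFrameWriter API; phoenix-spark only
// accepts SaveMode.Overwrite (rows are upserted, the table is not
// truncated):
df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)
  .option("table", "OUTPUT_TABLE")
  .option("zkUrl", "phoenix-server:2181")
  .save()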
