In Spark 1.4 this worked via JDBC; I'm fairly sure it would work in 1.6 / 2.0 without issues.

Here's some sample code I used (it fetched the data in parallel across 24 partitions):

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

import org.apache.spark.rdd.JdbcRDD
import java.sql.{Connection, DriverManager, ResultSet}


def createConnection() = {
  DriverManager.getConnection("jdbc:phoenix:hd101.lps.stage,hd102.lps.stage,hd103.lps.stage") // the Zookeeper quorum
}

def extractValues(r: ResultSet) = {
  (r.getLong(1),   // datum
   r.getInt(2),    // pg
   r.getString(3)) // HID
}

val data = new JdbcRDD(sc, createConnection,
  "SELECT DATUM, PG, HID, ..... WHERE DATUM >= ? * 1000 AND DATUM <= ? * 1000 AND PG = <a value>",
  lowerBound = 1364774400, upperBound = 1384774400, numPartitions = 24, mapRow = extractValues)
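For anyone wondering about the two `?` placeholders: they aren't literal values. JdbcRDD splits the inclusive [lowerBound, upperBound] range into numPartitions sub-ranges and binds one (start, end) pair per partition, so each executor runs its own copy of the query. A minimal sketch of that splitting logic (it mirrors what Spark's JdbcRDD does internally; the helper name here is mine):

```scala
// Sketch of JdbcRDD's partition-range computation: splits the inclusive
// range [lowerBound, upperBound] into numPartitions sub-ranges, as evenly
// as possible. Each (start, end) pair is bound to the query's two `?`s.
def partitionBounds(lowerBound: Long, upperBound: Long, numPartitions: Int): Seq[(Long, Long)] = {
  val length = BigInt(1) + upperBound - lowerBound
  (0 until numPartitions).map { i =>
    val start = lowerBound + ((BigInt(i) * length) / numPartitions)
    val end   = lowerBound + ((BigInt(i + 1) * length) / numPartitions) - 1
    (start.toLong, end.toLong)
  }
}

// The bounds from the example above, split 24 ways:
val bounds = partitionBounds(1364774400L, 1384774400L, 24)
println(bounds.head) // first partition's (start, end)
println(bounds.last) // last partition ends exactly at upperBound
```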



2016-10-07 15:20 GMT+02:00 Ted Yu <>:
JIRA on hbase side:


On Fri, Oct 7, 2016 at 6:07 AM, Josh Mahonin <> wrote:
Hi Mich,

There's an open ticket about this issue here:

Long story short, Spark changed their API (again), breaking the existing integration. I'm not sure of the level of effort to get it working with Spark 2.0, but based on examples from other projects, it looks like there's a fair bit of Maven module work to support both Spark 1.x and Spark 2.x concurrently in the same project. Patches are very welcome!
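The usual pattern other projects use for this is Maven build profiles that pin the Spark version (and often a per-version compatibility module). A hypothetical sketch of the profile part only; the profile ids and version numbers below are illustrative, not from the Phoenix pom:

```xml
<!-- Illustrative only: select a Spark version at build time,
     e.g. `mvn -Pspark20 package` -->
<profiles>
  <profile>
    <id>spark16</id>
    <properties>
      <spark.version>1.6.2</spark.version>
    </properties>
  </profile>
  <profile>
    <id>spark20</id>
    <properties>
      <spark.version>2.0.1</spark.version>
    </properties>
  </profile>
</profiles>
```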



On Fri, Oct 7, 2016 at 8:33 AM, Mich Talebzadeh <> wrote:

Has anyone managed to read a Phoenix table in Spark 2, by any chance?


Dr Mich Talebzadeh



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.