You can get the data types from the Phoenix metadata, then use them to encode/decode the data when writing and reading. I think this approach is effective, FYI :)
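
For example, here is a rough sketch in Scala that reads the column types through standard JDBC metadata (the table name and ZooKeeper quorum below are placeholders, not from this thread):

import java.sql.DriverManager

object PhoenixColumnTypes {
  def main(args: Array[String]): Unit = {
    // Phoenix JDBC URL format: jdbc:phoenix:<zookeeper quorum>
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
    try {
      // DatabaseMetaData.getColumns is plain JDBC; Phoenix answers it
      // from its own catalog, so you get the Phoenix type of each column.
      val rs = conn.getMetaData.getColumns(null, null, "MY_TABLE", null)
      while (rs.next()) {
        val name     = rs.getString("COLUMN_NAME")
        val typeName = rs.getString("TYPE_NAME") // e.g. VARCHAR, BIGINT
        println(s"$name -> $typeName")
      }
    } finally {
      conn.close()
    }
  }
}

With the types in hand, you can pick the matching encode/decode for each column before writing to or reading from HBase.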


----------------------------------------
   Best regards,
   Yun Zhang


2018-08-04 21:43 GMT+08:00 Brandon Geise <brandongeise@gmail.com>:

Good morning,

I’m looking at using a combination of HBase, Phoenix, and Spark for a project, and I’ve read that writing through the Spark-Phoenix plugin directly is more efficient than going through JDBC. However, it wasn’t entirely clear from the examples whether writing a DataFrame performs an upsert, or what fine-grained options are available for controlling that upsert. Any information someone can share would be greatly appreciated!
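
For concreteness, the kind of write I have in mind looks something like this (the table name, columns, and zkUrl are placeholders):

import org.apache.spark.sql.{SaveMode, SparkSession}

object PhoenixWriteExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("phoenix-write").getOrCreate()
    import spark.implicits._

    val df = Seq((1L, "foo"), (2L, "bar")).toDF("ID", "COL1")

    // The phoenix-spark docs show writes going through SaveMode.Overwrite,
    // but it isn't clear to me whether this truncates or upserts row by row.
    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)
      .option("table", "OUTPUT_TABLE")   // placeholder table
      .option("zkUrl", "localhost:2181") // placeholder ZK quorum
      .save()
  }
}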

Thanks,

Brandon