phoenix-user mailing list archives

From Jonathan Leech <jonat...@gmail.com>
Subject Re: Phoenix hbase question
Date Tue, 23 May 2017 20:01:14 GMT
There is a Phoenix / MapReduce integration. If you bypass HBase you will need to take care
not to miss edits that are only in memory and in the WAL.

If you bypass both Phoenix and HBase, you will have to write code that can interpret both
formats... Possible, yes, but not a good use of your time.

Is there some machine learning algorithm you want to use that isn't included in Spark, or
that you wouldn't be able to integrate with either Spark or a MapReduce job?
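
For reference, reading a Phoenix table into a Spark DataFrame via the phoenix-spark plugin looks roughly like the sketch below (a minimal example against Spark 1.x; "MY_TABLE" and the ZooKeeper quorum "zkhost:2181" are placeholders for your own setup, and the phoenix-spark jar needs to be on the Spark classpath):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("phoenix-read"))
    val sqlContext = new SQLContext(sc)

    // Read through Phoenix/HBase, so edits still in the memstore/WAL are not
    // missed (unlike reading the HFiles directly from HDFS).
    val df = sqlContext.read
      .format("org.apache.phoenix.spark")
      .options(Map("table" -> "MY_TABLE", "zkUrl" -> "zkhost:2181"))
      .load()

    df.printSchema()
    df.show(10)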

> On May 23, 2017, at 1:39 PM, Ash N <742000@gmail.com> wrote:
> 
> Thanks, Jonathan.
> 
> But I am looking to access the data directly from HDFS, not go through Phoenix/HBase for access.

> 
> Is this possible? 
> 
> 
> Best regards 
> 
> On May 23, 2017 3:35 PM, "Jonathan Leech" <jonathaz@gmail.com> wrote:
> I think you would use Spark for that, via the Phoenix Spark plugin.
> 
> > On May 23, 2017, at 12:24 PM, Ash N <742000@gmail.com> wrote:
> >
> > Hi All,
> >
> > This may be a silly question. We are storing data through Apache Phoenix.
> > Is there anything special we have to do so that machine learning and other analytics
> > workloads can access this data from the HDFS layer?
> >
> > Considering that HBase stores its data in HDFS.
> >
> >
> > thanks,
> > -ash
> 
