phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: Phoenix Pig Storage Error
Date Thu, 13 Feb 2014 01:23:41 GMT
Phoenix will work with either Hadoop1 or Hadoop2. The
phoenix-<version>-client.jar bundles the Hadoop1 jars, so if you want to
use Hadoop2, don't use that jar. Instead you can use the
phoenix-<version>.jar and include any other required jars on the classpath.
On the client side, Phoenix depends on antlr and opencsv (if you're doing
bulk loading).
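One way to check which jars actually win on the classpath is to dump it from the same JVM the job uses. A minimal sketch (the class name `ClasspathDump` is illustrative, not from this thread):

```java
// Dump the JVM classpath so you can see whether the Hadoop1 classes
// bundled in phoenix-<version>-client.jar or your standalone Hadoop2
// jars are the ones actually being picked up.
import java.io.File;

public class ClasspathDump {
    // Split java.class.path into its individual entries, in load order.
    static String[] entries() {
        return System.getProperty("java.class.path").split(File.pathSeparator);
    }

    public static void main(String[] args) {
        for (String entry : entries()) {
            System.out.println(entry);
        }
    }
}
```

Since the first matching entry wins, if phoenix-&lt;version&gt;-client.jar appears before your Hadoop2 jars, its bundled Hadoop1 classes will shadow them.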

Thanks,
James


On Wed, Feb 12, 2014 at 5:10 PM, Russell Jurney <russell.jurney@gmail.com> wrote:

> I am using CDH 4.4, with HBase hbase-0.94.6+132 and pig-0.11.0+33. My
> Hadoop client lib is hadoop-2.0.0+1475.
>
> So it looks like my Pig is MR2, but Phoenix is expecting MR1?
>
> I'm not really sure how to go about resolving this issue. CDH is a bit of
> a black box - I don't know if their Pig is using MR1/2. And I don't have
> source to recompile it.
>
> It looks like my Pig is using
>
>
> On Tue, Feb 11, 2014 at 11:12 PM, Prashant Kommireddi <
> pkommireddi@salesforce.com> wrote:
>
>> Yup, that seems like a classpath issue. Also, make sure to compile pig
>> with the correct hadoop version if you are using the fat jar.
>>
>>
>> On Tue, Feb 11, 2014 at 9:05 PM, Skanda <skanda.ganapathy@gmail.com> wrote:
>>
>>> Hi Russell,
>>>
>>> Which version of HBase and Hadoop are you using? The reason for this
>>> issue is that TaskAttemptContext is an interface in Hadoop 2.x but is a
>>> class in Hadoop 1.x.
>>>
>>> Regards,
>>> Skanda
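
The interface-vs-class difference described above can be confirmed against whatever jars are on your classpath with a one-line reflection check. A minimal sketch (Hadoop is not assumed on this sketch's classpath, so `java.util.List` / `java.util.ArrayList` stand in for the two kinds of types):

```java
// IncompatibleClassChangeError appears when code compiled against Hadoop1
// (where TaskAttemptContext is a class) runs against Hadoop2 (where it is
// an interface). This checks, at runtime, which kind a named type is.
public class BinaryCompatCheck {
    static boolean isInterface(String className) throws ClassNotFoundException {
        return Class.forName(className).isInterface();
    }

    public static void main(String[] args) throws Exception {
        // On a real cluster classpath you would pass
        // "org.apache.hadoop.mapreduce.TaskAttemptContext" instead.
        System.out.println(isInterface("java.util.List"));      // an interface
        System.out.println(isInterface("java.util.ArrayList")); // a class
    }
}
```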
>>>
>>>
>>> On Wed, Feb 12, 2014 at 10:06 AM, James Taylor <jamestaylor@apache.org> wrote:
>>>
>>>> This is beyond my knowledge of Pig, but Prashant may know as he
>>>> contributed our Pig integration.
>>>>
>>>> Thanks,
>>>> James
>>>>
>>>>
>>>> On Tue, Feb 11, 2014 at 4:34 PM, Russell Jurney <
>>>> russell.jurney@gmail.com> wrote:
>>>>
>>>>> I am trying to store data into this table:
>>>>>
>>>>> CREATE TABLE IF NOT EXISTS BEACONING_ACTIVITY (
>>>>>
>>>>> EVENT_TIME VARCHAR NOT NULL,
>>>>> C_IP VARCHAR NOT NULL,
>>>>> CS_HOST VARCHAR NOT NULL,
>>>>> SLD VARCHAR NOT NULL,
>>>>> CONFIDENCE DOUBLE NOT NULL,
>>>>> RISK DOUBLE NOT NULL,
>>>>> ANOMALY DOUBLE NOT NULL,
>>>>> INTERVAL DOUBLE NOT NULL,
>>>>>
>>>>> CONSTRAINT PK PRIMARY KEY (EVENT_TIME, C_IP, CS_HOST)
>>>>> );
>>>>>
>>>>>
>>>>> Using this Pig:
>>>>>
>>>>> hosts_and_risks = FOREACH hosts_and_anomaly GENERATE hour, c_ip,
>>>>> cs_host, sld, confidence, (confidence * anomaly) AS risk:double, anomaly,
>>>>> interval;
>>>>> --hosts_and_risks = ORDER hosts_and_risks BY risk DESC;
>>>>> --STORE hosts_and_risks INTO '/tmp/beacons.txt';
>>>>> STORE hosts_and_risks into 'hbase://BEACONING_ACTIVITY' using
>>>>> com.salesforce.phoenix.pig.PhoenixHBaseStorage('hiveapp1','-batchSize
>>>>> 5000');
>>>>>
>>>>> And the most helpful error message I get is this:
>>>>>
>>>>> 2014-02-11 16:24:13,831 FATAL org.apache.hadoop.mapred.Child: Error running
>>>>> child : java.lang.IncompatibleClassChangeError: Found interface
>>>>> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>>>>> 	at com.salesforce.phoenix.pig.hadoop.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:75)
>>>>> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
>>>>> 	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:597)
>>>>> 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:444)
>>>>> 	at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>>>>> 	at java.security.AccessController.doPrivileged(Native Method)
>>>>> 	at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>> 	at org.apache.hadoop.mapred.Child.main(Child.java:262)
>>>>>
>>>>>
>>>>> What am I to do?
>>>>>
>>>>>
>>>>> --
>>>>> Russell Jurney twitter.com/rjurney russell.jurney@gmail.com
>>>>> datasyndrome.com
>>>>>
>>>>
>>>>
>>>
>>
>
>
> --
> Russell Jurney twitter.com/rjurney russell.jurney@gmail.com
> datasyndrome.com
>
