phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: CsvBulkUpload not working after upgrade to 4.6
Date Wed, 09 Dec 2015 19:30:38 GMT
Zack,
Have you asked Hortonworks through your support channel? This sounds like
an issue with the HDP version you have: you would need to confirm with them
that upgrading to Phoenix 4.6.0 will work, and whether there are any extra
steps you need to take.

Thanks,
James



On Wed, Dec 9, 2015 at 10:41 AM, Riesland, Zack <Zack.Riesland@sensus.com>
wrote:

> Thanks Samarth,
>
>
>
> I’m running hbase 0.98.4.2.2.8.0-3150 and phoenix 4.6.0-HBase-0.98
>
>
>
> The hbase stuff is there via the HDP 2.2.8 install. It worked before
> upgrading to 4.6.
>
>
>
> *From:* Samarth Jain [mailto:samarth@apache.org]
> *Sent:* Wednesday, December 09, 2015 1:29 PM
> *To:* user@phoenix.apache.org
> *Subject:* Re: CsvBulkUpload not working after upgrade to 4.6
>
>
>
> Zack,
>
>
>
> What version of HBase are you running? And which version of Phoenix
> (specifically, the 4.6-0.98 or the 4.6-1.x build)? FWIW, I don't see the
> MetaRegionTracker.java file in HBase branches 1.x and master. Maybe you
> don't have the right hbase-client jar in place?
>
>
>
> - Samarth
>
>
>
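
A quick way to act on the jar question above is to search the jars on the node for the protobuf class named in the error. This is a sketch only; the paths follow the HDP layout mentioned in the thread and may differ on your install.

```shell
#!/bin/sh
# Convert the class named in the NoSuchMethodError into a jar entry path,
# then report every jar that bundles it. Any 4.2-era phoenix client jar or
# stale hbase-protocol jar that shows up here is a shadowing candidate.
CLASS='org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer'
ENTRY="$(printf '%s' "$CLASS" | tr '.' '/').class"

for jar in /usr/hdp/current/hbase-master/lib/*.jar \
           /usr/hdp/current/phoenix-client/*.jar; do
  [ -f "$jar" ] || continue
  if unzip -l "$jar" 2>/dev/null | grep -qF "$ENTRY"; then
    echo "contains $CLASS: $jar"
  fi
done
```

If more than one jar prints, classpath order decides which copy the JVM loads, which is exactly the situation that produces a NoSuchMethodError at runtime.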
> On Wed, Dec 9, 2015 at 4:30 AM, Riesland, Zack <Zack.Riesland@sensus.com>
> wrote:
>
> This morning I tried running the same operation from a data node as well
> as a name node, where phoenix 4.2 is completely gone, and I get the exact
> same error.
>
>
>
>
>
>
>
> *From:* Riesland, Zack
> *Sent:* Tuesday, December 08, 2015 8:42 PM
> *To:* user@phoenix.apache.org
> *Subject:* CsvBulkUpload not working after upgrade to 4.6
>
>
>
> I upgraded our cluster from 4.2.2 to 4.6.
>
>
>
> After a few hiccups, everything seems to be working: I can connect to and
> query the DB with Aqua Studio, my web code that queries Phoenix works with
> the new client jar, and my Java code that connects to and interacts with
> the DB works with the new client jar as well.
>
>
>
> With one exception: Csv Bulk Upload does not work with the new client jar
> – only with the old one (4.2.2.blah).
>
>
>
> On the edge node where I run this from, I upgraded Phoenix using the same
> script. /usr/hdp/current/phoenix-client now points to a folder full of 4.6
> artifacts, and permissions all seem to be correct. However, the command
> below fails with the error below.
>
>
>
> If I replace the reference to the (4.6) phoenix-client.jar with an explicit
> path to the old 4.2 client jar, the upload works.
>
>
>
> Any ideas or suggestions?
>
>
>
> Thanks!
>
>
>
>
>
>
>
> HADOOP_CLASSPATH=/usr/hdp/current/hbase-master/conf/:/usr/hdp/current/hbase-master/lib/hbase-protocol.jar hadoop jar /usr/hdp/current/phoenix-client/phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -Dfs.permissions.umask-mode=000 --z <server info> --table <table> --input <input>
>
>
>
> 15/12/08 15:37:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=<connect string> sessionTimeout=120000 watcher=hconnection-0x7728c6760x0, quorum=<stuff>
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: Opening socket connection to .... Will not attempt to authenticate using SASL (unknown error)
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: Socket connection established to ..., initiating session
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: Session establishment complete on server..., sessionid = 0x25110076f7fbf25, negotiated timeout = 40000
> 15/12/08 15:37:01 INFO client.HConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
> 15/12/08 15:37:01 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x25110076f7fbf25
> 15/12/08 15:37:01 INFO zookeeper.ZooKeeper: Session: 0x25110076f7fbf25 closed
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: EventThread shut down
> 15/12/08 15:37:01 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x35102a35baf2c0c
> 15/12/08 15:37:01 INFO zookeeper.ZooKeeper: Session: 0x35102a35baf2c0c closed
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: EventThread shut down
>
> Exception in thread "main" java.sql.SQLException: ERROR 2006 (INT08): Incompatible jars detected between client and server. Ensure that phoenix.jar is put on the classpath of HBase in every region server: org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer.hasState()Z
>         at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
>         at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1000)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:879)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1225)
>         at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:113)
>         at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2013)
>         at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:785)
>         at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:319)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:311)
>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:309)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1368)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1929)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>         at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>         at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>         at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>         at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>         at java.sql.DriverManager.getConnection(DriverManager.java:571)
>         at java.sql.DriverManager.getConnection(DriverManager.java:187)
>         at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:301)
>         at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:292)
>         at org.apache.phoenix.mapreduce.CsvBulkLoadTool.loadData(CsvBulkLoadTool.java:211)
>         at org.apache.phoenix.mapreduce.CsvBulkLoadTool.run(CsvBulkLoadTool.java:184)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>         at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:99)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer.hasState()Z
>         at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.getMetaRegionState(MetaRegionTracker.java:219)
>         at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:204)
>         at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1157)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1249)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1160)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1117)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:958)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:439)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:953)
>         ... 33 more
>
>
>
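
For what it's worth, the "Incompatible jars" message in the trace above is Phoenix wrapping the underlying NoSuchMethodError: the JVM resolved ZooKeeperProtos$MetaRegionServer from a jar too old to define hasState(). Since HADOOP_CLASSPATH entries and the bundled client jar are combined by the `hadoop` launcher, one way to look for an old jar shadowing the 4.6 client is to print the effective classpath order. A sketch (the grep filter is illustrative):

```shell
# Print the classpath the `hadoop` launcher hands to the JVM, one entry per
# line and numbered, so it is easy to see whether an old hbase/phoenix jar
# precedes the 4.6 phoenix-client.jar. Earlier entries win class resolution.
hadoop classpath | tr ':' '\n' | nl | grep -iE 'phoenix|hbase'
```

Running this on the edge node and on a working node, then diffing the two listings, is a quick way to spot a leftover 4.2 jar.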
