phoenix-user mailing list archives

From "Riesland, Zack" <Zack.Riesl...@sensus.com>
Subject RE: CsvBulkUpload not working after upgrade to 4.6
Date Mon, 14 Dec 2015 14:51:21 GMT
Thanks Gabriel,

That makes sense.

Unfortunately, it looks like the code references a field on org.apache.hadoop.hbase.regionserver.SplitTransaction (useZKForAssignment) that doesn't appear to exist in the HBase version shipped with HDP 2.2.8.

I replaced the "hbase.version" build property in the main pom.xml with "0.98.4-hadoop2", but
I encountered the error below.
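
A quick way to double-check that against the HBase that actually ships with HDP is javap (the jar path below is a guess based on the usual HDP layout; adjust for your install):

  # list SplitTransaction's members and look for the field the compiler wants
  javap -p -classpath /usr/hdp/current/hbase-client/lib/hbase-server-*.jar \
      org.apache.hadoop.hbase.regionserver.SplitTransaction | grep useZKForAssignment

No output there would line up with the "cannot find symbol" error below.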

Has anyone in this community been able to use Phoenix 4.6 (or even 4.5) with HDP 2.2.8?



[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] /Z:/phoenix-4.6/src/phoenix-4.6.0-HBase-0.98-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexSplitter.java:[96,23] cannot find symbol
  symbol:   variable useZKForAssignment
  location: variable st of type org.apache.hadoop.hbase.regionserver.SplitTransaction
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix .................................... SUCCESS [  3.703 s]
[INFO] Phoenix Core ...................................... FAILURE [ 57.801 s]
[INFO] Phoenix - Flume ................................... SKIPPED
[INFO] Phoenix - Pig ..................................... SKIPPED
[INFO] Phoenix Query Server Client ....................... SKIPPED
[INFO] Phoenix Query Server .............................. SKIPPED
[INFO] Phoenix - Pherf ................................... SKIPPED
[INFO] Phoenix - Spark ................................... SKIPPED
[INFO] Phoenix Assembly .................................. SKIPPED
[INFO] Phoenix - Tracing Web Application ................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:01 min
[INFO] Finished at: 2015-12-14T09:47:37-05:00
[INFO] Final Memory: 100M/2988M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project phoenix-core: Compilation failure
[ERROR] /Z:/phoenix-4.6/src/phoenix-4.6.0-HBase-0.98-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexSplitter.java:[96,23] cannot find symbol
[ERROR] symbol:   variable useZKForAssignment
[ERROR] location: variable st of type org.apache.hadoop.hbase.regionserver.SplitTransaction
[ERROR] -> [Help 1]

-----Original Message-----
From: Gabriel Reid [mailto:gabriel.reid@gmail.com] 
Sent: Sunday, December 13, 2015 1:11 AM
To: user@phoenix.apache.org
Subject: Re: CsvBulkUpload not working after upgrade to 4.6

This looks like an incompatibility between HBase versions (i.e.
between the version that Phoenix is built against, and the version that you've got installed
on your cluster).

The reason that the bulk loader and fat client are causing issues is that they bundle the HBase jars they were linked against inside their own jar, so the newer, incompatible HBase classes get used when you run these tools. The server jar doesn't do this, which is why it still works.
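
If you want to confirm that, you can list what's bundled in the fat client (jar name taken from your earlier mail; ZooKeeperProtos is just the class from your stack trace):

  # any hits here mean the client carries its own copy of the HBase classes
  jar tf phoenix-4.6.0-HBase-0.98-client.jar | grep ZooKeeperProtos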

Probably the easiest thing to do here (if you're up for it) is recompile the phoenix jars
(at least the fat client jar) against the specific version of HBase that you've got on your
cluster. Assuming that all compiles, it should resolve this issue.
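
Something along these lines should do it - the version string is the one from your cluster, and depending on how the HDP artifacts are published you may also need the Hortonworks maven repository in your settings.xml:

  # build Phoenix against the cluster's exact HBase version
  mvn clean package -DskipTests -Dhbase.version=0.98.4.2.2.8.0-3150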

- Gabriel


On Fri, Dec 11, 2015 at 2:01 PM, Riesland, Zack <Zack.Riesland@sensus.com> wrote:
> Thanks James,
>
>
>
> I did. Unfortunately, Hortonworks only supports software that they 
> provide in their stack, and they only provide 4.2.2 for our version of 
> the HDP stack. Even if we upgrade to the latest HDP, we would only get 
> support for Phoenix 4.4.
>
>
>
> FWIW, I am also unable to connect Aqua Data Studio to Phoenix using 
> the new client jar (phoenix-4.6.0-HBase-0.98-client.jar).
>
>
>
> When I try, I get the stack trace below.
>
>
>
> However, I am still able to connect using the old 4.2.2 jar.
>
>
>
>
>
> java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
>                 at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
>                 at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:298)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:179)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1919)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>                 at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>                 at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>                 at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>                 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>                 at \\...\\ .\\हिñçêČάй語简 ?한\\.Jᠻꐎꎂᢋ 9.KX(Unknown Source)
>                 at \\...\\ .\\हिñçêČάй語简 ?한\\.Jᠻꐎꎂᢋ 9.au(Unknown Source)
>                 at \\...\\ .\\हिñçêČάй語简 ?한\\.Jᠻꐎꎂᢋ 9.getConnection(Unknown Source)
>                 at \\...\\ .\\हिñçêČάй語简 ?한\\.Jᠻꐎꎂᢋ 9.getConnection(Unknown Source)
>                 at com.aquafold.datastudio.mainframe.dialog.connection.diagnostics.yꑅꀱꏓᜪimport.dv(Unknown Source)
>                 at com.aquafold.datastudio.mainframe.dialog.connection.diagnostics.yꑅꀱꏓᜪimport.b(Unknown Source)
>                 at \\...\\ .\\हिñçêČάй語简 ?한\\.bᡲꐢꐟꄦ 5 5.d(Unknown Source)
>                 at \\...\\ .\\हिñçêČάй語简 ?한\\.bᡲꐢꐟꄦ 5 5.b(Unknown Source)
>                 at com.aquafold.datastudio.mainframe.dialog.connection.diagnostics.PingDialog$3.runTest(Unknown Source)
>                 at com.aquafold.datastudio.mainframe.dialog.connection.diagnostics.PingDialog$2.run(Unknown Source)
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>                 at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:426)
>                 at org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:319)
>                 at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:292)
>                 at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:296)
>                 ... 18 more
>
>
>
>
>
>
>
> From: James Taylor [mailto:jamestaylor@apache.org]
> Sent: Wednesday, December 09, 2015 2:31 PM
> To: user
>
>
> Subject: Re: CsvBulkUpload not working after upgrade to 4.6
>
>
>
> Zack,
>
> Have you asked Hortonworks through your support channel? This sounds 
> like an issue related to the HDP version you have - you need to 
> confirm with them that upgrading to Phoenix 4.6.0 will work (and if 
> there are any extra steps you need to take).
>
>
>
> Thanks,
>
> James
>
>
>
>
>
>
>
> On Wed, Dec 9, 2015 at 10:41 AM, Riesland, Zack <Zack.Riesland@sensus.com> wrote:
>
> Thanks Samarth,
>
>
>
> I’m running hbase 0.98.4.2.2.8.0-3150 and phoenix 4.6.0-HBase-0.98
>
>
>
> The hbase stuff is there via the HDP 2.2.8 install. It worked before 
> upgrading to 4.6.
>
>
>
> From: Samarth Jain [mailto:samarth@apache.org]
> Sent: Wednesday, December 09, 2015 1:29 PM
> To: user@phoenix.apache.org
> Subject: Re: CsvBulkUpload not working after upgrade to 4.6
>
>
>
> Zack,
>
>
>
> What version of HBase are you running? And which version of Phoenix 
> (specifically 4.6-0.98 version or 4.6-1.x version)? FWIW, I don't see 
> the MetaRegionTracker.java file in HBase branches 1.x and master. 
> Maybe you don't have the right hbase-client jar in place?
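>
> A quick way to check would be something like this (paths are a guess based on the standard HDP layout):
>
>   # which hbase-client jar the edge node resolves to
>   ls -l /usr/hdp/current/hbase-client/lib/hbase-client-*.jar
>   # and what the local hbase installation reports
>   hbase version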
>
>
>
> - Samarth
>
>
>
> On Wed, Dec 9, 2015 at 4:30 AM, Riesland, Zack <Zack.Riesland@sensus.com> wrote:
>
> This morning I tried running the same operation from a data node as 
> well as a name node, where phoenix 4.2 is completely gone, and I get 
> the exact same error.
>
>
>
>
>
>
>
> From: Riesland, Zack
> Sent: Tuesday, December 08, 2015 8:42 PM
> To: user@phoenix.apache.org
> Subject: CsvBulkUpload not working after upgrade to 4.6
>
>
>
> I upgraded our cluster from 4.2.2 to 4.6.
>
>
>
> After a few hiccups, everything seems to be working: I can connect and 
> interact with the DB using Aqua Studio. My web stuff that queries 
> Phoenix works, using the new client jar. My java code to connect and 
> interact with the DB works, using the new client jar, etc.
>
>
>
> With one exception: Csv Bulk Upload does not work with the new client 
> jar – only with the old one (4.2.2.blah).
>
>
>
> On my edge node, where I run this from, I upgraded phoenix using the 
> same script. /usr/hdp/current/phoenix-client now points to a folder 
> full of 4.6 stuff. Permissions all seem to be correct. However, the 
> command below fails with the error below.
>
>
>
> If I replace the reference to the (4.6) phoenix-client.jar and point 
> explicitly to the old 4.2 client jar, it still works.
>
>
>
> Any ideas or suggestions?
>
>
>
> Thanks!
>
>
>
>
>
>
>
> HADOOP_CLASSPATH=/usr/hdp/current/hbase-master/conf/:/usr/hdp/current/hbase-master/lib/hbase-protocol.jar hadoop jar /usr/hdp/current/phoenix-client/phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -Dfs.permissions.umask-mode=000 --z <server info> --table <table> --input <input>
>
>
>
> 15/12/08 15:37:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=<connect string> sessionTimeout=120000 watcher=hconnection-0x7728c6760x0, quorum=<stuff>
>
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: Opening socket connection to .... Will not attempt to authenticate using SASL (unknown error)
>
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: Socket connection established to ..., initiating session
>
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: Session establishment complete on server..., sessionid = 0x25110076f7fbf25, negotiated timeout = 40000
>
> 15/12/08 15:37:01 INFO client.HConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
>
> 15/12/08 15:37:01 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x25110076f7fbf25
>
> 15/12/08 15:37:01 INFO zookeeper.ZooKeeper: Session: 0x25110076f7fbf25 closed
>
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: EventThread shut down
>
> 15/12/08 15:37:01 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x35102a35baf2c0c
>
> 15/12/08 15:37:01 INFO zookeeper.ZooKeeper: Session: 0x35102a35baf2c0c closed
>
> 15/12/08 15:37:01 INFO zookeeper.ClientCnxn: EventThread shut down
>
> Exception in thread "main" java.sql.SQLException: ERROR 2006 (INT08): Incompatible jars detected between client and server. Ensure that phoenix.jar is put on the classpath of HBase in every region server: org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer.hasState()Z
>                 at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
>                 at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1000)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:879)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1225)
>                 at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:113)
>                 at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2013)
>                 at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:785)
>                 at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
>                 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:319)
>                 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:311)
>                 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>                 at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:309)
>                 at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1368)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1929)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>                 at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>                 at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>                 at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>                 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>                 at java.sql.DriverManager.getConnection(DriverManager.java:571)
>                 at java.sql.DriverManager.getConnection(DriverManager.java:187)
>                 at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:301)
>                 at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:292)
>                 at org.apache.phoenix.mapreduce.CsvBulkLoadTool.loadData(CsvBulkLoadTool.java:211)
>                 at org.apache.phoenix.mapreduce.CsvBulkLoadTool.run(CsvBulkLoadTool.java:184)
>                 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>                 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>                 at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:99)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                 at java.lang.reflect.Method.invoke(Method.java:606)
>                 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>                 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer.hasState()Z
>                 at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.getMetaRegionState(MetaRegionTracker.java:219)
>                 at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:204)
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58)
>                 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1157)
>                 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1249)
>                 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1160)
>                 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1117)
>                 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:958)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:439)
>                 at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:953)
>                 ... 33 more