phoenix-user mailing list archives

From Naga Vijayapuram <Naga_Vijayapu...@gap.com>
Subject Re: CsvBulkLoadTool - Exception for Local Index
Date Fri, 30 Jan 2015 18:10:03 GMT
Hi Rajeshbabu,

I have ensured that both the server-side and client-side jars are from the patched bundle.

. Server-side, in the HBase lib dir: phoenix-5.0.0-SNAPSHOT-server.jar
. Client-side, on the command line: hadoop jar phoenix-5.0.0-SNAPSHOT-client.jar …
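
To double-check that the server jar on the region servers really matches the one from the patched
bundle, comparing checksums is a quick way; a minimal sketch, assuming a hypothetical HBase lib
path for this sandbox:

  # on the node where the patched bundle was unpacked
  md5sum phoenix-5.0.0-SNAPSHOT-server.jar

  # on each region server (lib path is an assumption; adjust to the actual HBase install)
  md5sum /usr/hdp/2.2.0.0-2041/hbase/lib/phoenix-5.0.0-SNAPSHOT-server.jar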

I did the test after bouncing the cluster.

Naga

On Jan 30, 2015, at 9:47 AM, Rajeshbabu Chintaguntla <chrajeshbabu32@gmail.com>
wrote:

It's not a problem with the patch. The problem is incompatible jars between the client and the server.
Make sure the jars on the client and the servers are the same, and restart the region servers after replacing them.

Exception in thread "main" java.sql.SQLException: ERROR 2006 (INT08): Incompatible jars detected
between client and server. Ensure that phoenix.jar is put on the classpath of HBase in every
region server:
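
A rough sketch of what replacing the jar could look like (host names, the lib path, and the restart
command are assumptions; a managed install would typically restart through its cluster manager):

  # copy the patched server jar to every region server (hosts are placeholders)
  for rs in rs1 rs2 rs3; do
    scp phoenix-5.0.0-SNAPSHOT-server.jar $rs:/usr/hdp/2.2.0.0-2041/hbase/lib/
  done

  # then restart each region server, e.g.
  # /usr/hdp/2.2.0.0-2041/hbase/bin/hbase-daemon.sh restart regionserver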

On Thu, Jan 29, 2015 at 9:14 PM, Naga Vijayapuram <Naga_Vijayapuram@gap.com>
wrote:
Hi Rajeshbabu,

I picked up the patch and tried it on the Hortonworks Sandbox (our cluster is tied up with lots
of other critical jobs, so there is no scope to test there) and got this …
(I ensured phoenix-5.0.0-SNAPSHOT-server.jar is in the HBase lib dir) ...
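
For reference, the invocation was along these lines (the table name and input path below are
placeholders, not the exact arguments used):

  hadoop jar phoenix-5.0.0-SNAPSHOT-client.jar \
      org.apache.phoenix.mapreduce.CsvBulkLoadTool \
      --table TABLE1 \
      --input /tmp/data.csv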

15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.2.0.0-2041/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.0.0-2041/hadoop/lib/native
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.1.3.el6.x86_64
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:user.name=root
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root/nv125m
15/01/30 04:56:46 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181
sessionTimeout=30000 watcher=hconnection-0x246e78aa, quorum=localhost:2181, baseZNode=/hbase-unsecure
15/01/30 04:56:46 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181.
Will not attempt to authenticate using SASL (unknown error)
15/01/30 04:56:46 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181,
initiating session
15/01/30 04:56:46 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181,
sessionid = 0x14b3920fde00026, negotiated timeout = 30000
15/01/30 04:56:48 INFO metrics.Metrics: Initializing metrics system: phoenix
15/01/30 04:56:48 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
15/01/30 04:56:48 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
15/01/30 04:56:48 INFO impl.MetricsSystemImpl: phoenix metrics system started
15/01/30 04:56:50 INFO query.ConnectionQueryServicesImpl: Found quorum: localhost:2181
15/01/30 04:56:50 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2a3709d7
connecting to ZooKeeper ensemble=localhost:2181
15/01/30 04:56:50 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181
sessionTimeout=30000 watcher=hconnection-0x2a3709d7, quorum=localhost:2181, baseZNode=/hbase-unsecure
15/01/30 04:56:50 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181.
Will not attempt to authenticate using SASL (unknown error)
15/01/30 04:56:50 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181,
initiating session
15/01/30 04:56:50 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181,
sessionid = 0x14b3920fde00027, negotiated timeout = 30000
15/01/30 04:56:50 INFO client.HConnectionManager$HConnectionImplementation: Closing master
protocol: MasterService
15/01/30 04:56:50 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper
sessionid=0x14b3920fde00027
15/01/30 04:56:50 INFO zookeeper.ClientCnxn: EventThread shut down
15/01/30 04:56:50 INFO zookeeper.ZooKeeper: Session: 0x14b3920fde00027 closed
15/01/30 04:56:51 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper
sessionid=0x14b3920fde00026
15/01/30 04:56:51 INFO zookeeper.ZooKeeper: Session: 0x14b3920fde00026 closed
15/01/30 04:56:51 INFO zookeeper.ClientCnxn: EventThread shut down
Exception in thread "main" java.sql.SQLException: ERROR 2006 (INT08): Incompatible jars detected
between client and server. Ensure that phoenix.jar is put on the classpath of HBase in every
region server: org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer.hasState()Z
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:350)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:966)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:845)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1173)
at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:111)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1599)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:556)
at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:175)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:279)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:271)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:269)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1051)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$10.call(ConnectionQueryServicesImpl.java:1819)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$10.call(ConnectionQueryServicesImpl.java:1788)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1788)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:126)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at org.apache.phoenix.mapreduce.CsvBulkLoadTool.run(CsvBulkLoadTool.java:183)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos$MetaRegionServer.hasState()Z
at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.getMetaRegionState(MetaRegionTracker.java:219)
at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:204)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1147)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1239)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1150)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1107)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:948)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:410)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:919)
... 30 more

Naga


On Jan 28, 2015, at 4:25 PM, Rajeshbabu Chintaguntla <chrajeshbabu32@gmail.com>
wrote:

Hi Naga,

I have uploaded a patch to PHOENIX-1248; please apply the patch and try it now if possible.
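
In case it helps, applying the patch against a Phoenix source checkout would look roughly like this
(the patch file name and build options are assumptions):

  # from the root of the Phoenix source tree
  git apply PHOENIX-1248.patch        # or: patch -p1 < PHOENIX-1248.patch
  mvn clean package -DskipTests       # rebuilds the client and server jars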

Thanks,
Rajeshbabu.

On Thu, Jan 22, 2015 at 8:57 PM, Naga Vijayapuram <Naga_Vijayapuram@gap.com>
wrote:
Thanks Rajeshbabu !

On Jan 22, 2015, at 8:50 PM, Rajeshbabu Chintaguntla <chrajeshbabu32@gmail.com>
wrote:

Hi Naga Vijayapuram,

Sorry, I had missed that. I can fix it next week.

Thanks,
Rajeshbabu.

On Fri, Jan 23, 2015 at 9:37 AM, Naga Vijayapuram <Naga_Vijayapuram@gap.com>
wrote:
I have hit upon https://issues.apache.org/jira/browse/PHOENIX-1248

Any idea when it will be fixed?

Thanks
Naga


On Jan 22, 2015, at 8:01 PM, Naga Vijayapuram <Naga_Vijayapuram@gap.com>
wrote:

Hello,

Any idea why this exception shows up when running CsvBulkLoadTool?

15/01/22 22:16:50 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LOCAL_INDEX_ON_TABLE1
failed due to exception:java.lang.IllegalArgumentException: No regions passed

What does the “No regions passed” message mean, and how can it be overcome?

Thanks
Naga







