phoenix-user mailing list archives

From <Deepak_Gatt...@Dell.com>
Subject RE: Kerberos Secure cluster and phoenix
Date Tue, 02 Sep 2014 00:11:42 GMT
Hi Anil,

I did try that; in fact, that is the error I forwarded to you earlier.

[deepak_gattala@ausgtmhadoop10 ~]cat /etc/hbase/conf.cloudera.hbase1/jaas.conf

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true;
};

Thanks for letting me know that it’s not possible to do what I was attempting with Phoenix. I
think currently I am left with only the option of passing the principal and keytab
file.


I just want to give what I was doing one last try, if you can help me understand what the
error below means:

Error: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop08.us-poclab.dellpoc.com/192.168.1.102:60000 failed on local exception: java.io.EOFException (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop08.us-poclab.dellpoc.com/192.168.1.102:60000 failed on local exception: java.io.EOFException
        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:817)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1107)
        at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:114)
        at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1315)
        at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:445)
        at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:256)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:248)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:246)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:960)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1519)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1489)
        at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1489)
        at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
        at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:129)
        at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
        at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4650)
        at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4701)
        at sqlline.SqlLine$Commands.connect(SqlLine.java:3942)
        at sqlline.SqlLine$Commands.connect(SqlLine.java:3851)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
        at sqlline.SqlLine.dispatch(SqlLine.java:817)
        at sqlline.SqlLine.initArgs(SqlLine.java:633)
        at sqlline.SqlLine.begin(SqlLine.java:680)
        at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
        at sqlline.SqlLine.main(SqlLine.java:424)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop08.us-poclab.dellpoc.com/192.168.1.102:60000 failed on local exception: java.io.EOFException
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1650)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1676)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1884)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:2671)
        at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:397)
        at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:402)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:746)
        ... 31 more
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop08.us-poclab.dellpoc.com/192.168.1.102:60000 failed on local exception: java.io.EOFException
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1687)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1596)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1622)
        ... 37 more
Caused by: java.io.IOException: Call to ausgtmhadoop08.us-poclab.dellpoc.com/192.168.1.102:60000 failed on local exception: java.io.EOFException
        at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1485)
        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
        ... 42 more
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1072)
        at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:728)

Thanks
Deepak Gattala


From: anil gupta [mailto:anilgupta84@gmail.com]
Sent: Monday, September 1, 2014 7:04 PM
To: user@phoenix.apache.org
Subject: Re: Kerberos Secure cluster and phoenix

Hi Deepak,
The current feature only supports connecting using a keytab and principal.
IMO, you have the following options now:
1. Get the keytab generated and use the OOTB feature.
2. Try to inherit the secure session. In this case the Phoenix OOTB secure-connection feature will
not play any role at all.
3. Enhance Phoenix to support your use case.
At present, #2 seems like the quickest thing to try out.
Can you try using a jaas.conf file with a classpath similar to the one I specified earlier? In this
case just use "<zk>:<zk_port>:<root_dir>" to invoke sqlline.
Make sure "useTicketCache=true;" is set in your jaas.conf file. Also, make sure that you only have
one jaas.conf file in your classpath.
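Something like this untested sketch (the jaas.conf path and ZK quorum are placeholders for your
environment; the classpath is abbreviated, reuse the full one I specified earlier, where
$phoenix_client_jar is the Phoenix client jar):

---------------------------------------------------------------------------------------
#!/bin/bash
# Sketch: invoke sqlline against the secure cluster while inheriting the
# already-kinit'ed session from the ticket cache. This relies on
# useTicketCache=true in your jaas.conf; no keytab is passed anywhere.
jaas_conf=/etc/hbase/conf.cloudera.hbase1/jaas.conf

java -Djava.security.auth.login.config=$jaas_conf \
  -Dsun.security.krb5.debug=true \
  -cp "/etc/hbase/conf:../sqlline-1.1.2.jar:../jline-2.11.jar:$phoenix_client_jar" \
  sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver \
  -u "jdbc:phoenix:<zk>:<zk_port>:<root_dir>"
---------------------------------------------------------------------------------------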

~Anil

On Mon, Sep 1, 2014 at 4:48 PM, <Deepak_Gattala@dell.com> wrote:
Hello Anil, sorry for the confusion. I think the details below will give you some visibility;
if you need any further information, let me know.

I just log in to the edge node as deepak_gattala.

And it talks to my AD and figures out who I am, so when I do klist it already knows
me; I do not have to do kinit -kt hbase…….

[deepak_gattala@ausgtmhadoop10 ~]klist
Ticket cache: FILE:/tmp/krb5cc_134810
Default principal: Deepak_Gattala@AMER.DELL.COM

Valid starting     Expires            Service principal
09/01/14 15:37:40  09/02/14 01:37:40  krbtgt/AMER.DELL.COM@AMER.DELL.COM
09/01/14 15:37:40  09/02/14 01:37:40  krbtgt/DELL.COM@AMER.DELL.COM
09/01/14 15:37:41  09/02/14 01:37:40  krbtgt/DELLPOC.COM@DELL.COM
09/01/14 15:37:41  09/02/14 01:37:40  krbtgt/US-POCLAB.DELLPOC.COM@DELLPOC.COM
09/01/14 15:37:41  09/02/14 01:37:40  AUSGTMHADOOP10$@US-POCLAB.DELLPOC.COM

After that I can just start the hbase shell and run whatever I ran. I do not have to use any hbase.keytab
file to do so.

The goal is that we run things as the actual NT/AD users, like deepak_gattala, anil_kumar,
james_tylor, etc.

I hope this helps; we do not want to use the keytab files of the services.

Thanks
Deepak Gattala

From: anil gupta [mailto:anilgupta84@gmail.com]
Sent: Monday, September 1, 2014 6:43 PM

To: user@phoenix.apache.org
Subject: Re: Kerberos Secure cluster and phoenix

Hi Deepak,
You need to use the following command to invoke sqlline when you want to use the OOTB feature:
sqlline.sh <zk>:<zk_port>:<root_dir>:<principal>:<keytab>
The Phoenix client does the authentication using the keytab and principal.
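
For example (the keytab path here is only illustrative; substitute your own):

sqlline.sh ausgtmhadoop10:2181:/hbase:Deepak_Gattala@AMER.DELL.COM:/home/deepak_gattala/deepak_gattala.keytab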
Do you do kinit before running "hbase shell"? I am assuming you have a keytab file on this box.
Can you provide the entire log from when you try to invoke Phoenix?
Thanks,
Anil Gupta


On Mon, Sep 1, 2014 at 4:33 PM, <Deepak_Gattala@dell.com> wrote:
Hi Anil,

I am logging in to the edge node as deepak_gattala and I am able to do this.

[deepak_gattala@ausgtmhadoop10 ~]hbase shell
14/09/01 18:31:11 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.1-cdh5.1.0, rUnknown, Sat Jul 12 08:20:49 PDT 2014

hbase(main):001:0> scan 'weblog'
ROW                COLUMN+CELL
 row1              column=stats:daily, timestamp=1405608596314, value=test-daily-value
 row1              column=stats:monthly, timestamp=1405608606261, value=test-monthly-value
 row1              column=stats:weekly, timestamp=1405608606216, value=test-weekly-value
 row2              column=stats:weekly, timestamp=1405609101013, value=test-weekly-value
 row3              column=stats:daily, timestamp=1406661440128, value=test-daily-value
3 row(s) in 5.9620 seconds

hbase(main):002:0> quit
[deepak_gattala@ausgtmhadoop10 ~]hadoop fs -ls /
Found 5 items
drwxrwxr-x   - hbase hbase               0 2014-09-01 17:38 /hbase
drwxrwxr-x   - solr  solr                0 2014-07-18 17:38 /solr
drwxrwxr-x   - hdfs  supergroup          0 2013-12-26 23:53 /system
drwxrwxrwt   - hdfs  supergroup          0 2014-09-01 18:20 /tmp
drwxrwxr-x   - hdfs  supergroup          0 2014-08-29 10:13 /user

Am I still required to pass the keytab file when I am calling sqlline, as you
mentioned below:

Use following command to invoke sqlline:
sqlline.sh <zk>:<zk_port>:<root_dir>:<principal>:<keytab>

Or can I just call it like sqlline <zk>:2181:/hbase?

Can you please clarify?

Thanks
Deepak Gattala

From: anil gupta [mailto:anilgupta84@gmail.com]
Sent: Monday, September 1, 2014 6:27 PM

To: user@phoenix.apache.org
Subject: Re: Kerberos Secure cluster and phoenix

Hi Deepak,
AFAIK, jaas.conf is not required when using the OOTB feature for connecting to a secure cluster.
It seems like the user connecting to the secure HBase cluster does not have the proper permissions
set up for the ZK znode. Can you check the permissions on the "/hbase" znode and make sure the
user has proper access?
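
One quick way to check is with the ZooKeeper CLI (a sketch; on CDH the wrapper is usually called
zookeeper-client, while plain ZooKeeper installs ship it as zkCli.sh):

# Inspect the ACLs and children of the HBase root znode.
zookeeper-client -server <zk>:2181 getAcl /hbase
zookeeper-client -server <zk>:2181 ls /hbase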
Thanks,
Anil Gupta

On Mon, Sep 1, 2014 at 4:21 PM, <Deepak_Gattala@dell.com> wrote:
Hi Anil,

Thanks for sharing the details,

I am now getting the error like.

14/09/01 18:17:33 ERROR client.HConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = AuthFailed for /hbase

Are you familiar with this? It looks like it's having issues communicating with ZooKeeper.
Any thoughts?

I think you need a jaas.conf file; is that not required any more? I don’t see one in your
script.

Thanks
Deepak Gattala

From: anil gupta [mailto:anilgupta84@gmail.com]
Sent: Monday, September 1, 2014 6:12 PM
To: user@phoenix.apache.org

Subject: Re: Kerberos Secure cluster and phoenix

Hi Deepak,
What version of Phoenix are you using? Phoenix 3.1 and 4.1 support connecting to a secure Hadoop/HBase
cluster out of the box (PHOENIX-19). Are you running HBase on a fully distributed cluster?
I would recommend using the phoenix-*-client-without-hbase.jar file.

Use the following command to invoke sqlline:
sqlline.sh <zk>:<zk_port>:<root_dir>:<principal>:<keytab>
Last week I used the 3.1 release to connect to a secure HBase cluster running CDH 4.6. Here is
the bash script with the modified classpath:
---------------------------------------------------------------------------------------
#!/bin/bash
current_dir=$(cd $(dirname $0);pwd)
phoenix_jar_path="$current_dir/.."
phoenix_client_jar=$(find $phoenix_jar_path/phoenix-*-client-without-hbase.jar)


if [ -z "$1" ]
  then echo -e "Zookeeper not specified. \nUsage: sqlline.sh <zookeeper> <optional_sql_file> \nExample: \n 1. sqlline.sh localhost \n 2. sqlline.sh localhost ../examples/stock_symbol.sql";
  exit;
fi

if [ "$2" ]
  then sqlfile="--run=$2";
fi

echo Phoenix_Client_Jar=$phoenix_client_jar

java -cp "/etc/hbase/conf:.:../sqlline-1.1.2.jar:../jline-2.11.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-0.94.15-cdh4.6.0-security.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop/*:/opt/cloudera/parcels/CDH/lib/hadoop/lib/*:../phoenix-core-3.1.0.jar:$phoenix_client_jar" \
  -Dlog4j.configuration=file:$current_dir/log4j.properties \
  sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver \
  -u jdbc:phoenix:$1 -n none -p none --color=true --fastConnect=false --verbose=true \
  --isolation=TRANSACTION_READ_COMMITTED $sqlfile
---------------------------------------------------------------------------------------
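
For example, if you save the above as (say) sqlline-secure.sh, usage mirrors the stock sqlline.sh:

./sqlline-secure.sh zkhost1 ../examples/stock_symbol.sql

(The script name, ZK host, and sql file above are just placeholders.)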
Just modify the above script for the CDH 5.1 release and your environment setup.
Let us know if it doesn't work.

Thanks,
Anil Gupta

On Mon, Sep 1, 2014 at 3:12 PM, Alex Kamil <alex.kamil@gmail.com> wrote:
Deepak,

Also, I'd first check whether hbase is working and accessible in secure mode with the same kerberos
principal you use for the phoenix client:
- start the hbase shell and see if you can run some commands in secure mode
- verify hbase, hadoop, and zookeeper are running in secure mode; are there any exceptions in the
server logs?
- can you execute commands in the hdfs shell and with the zookeeper client?
- run kinit as shown in the CDH security guide for hbase; what do you see when you run klist?
- enable kerberos debug mode in sqlline.py, with something like

kerberos="-Djava.security.auth.login.config=/myapp/phoenix/bin/zk-jaas.conf -Dsun.security.krb5.debug=true -Djava.security.krb5.realm=MYDOMAIN -Djava.security.krb5.kdc=MYKDC -Djava.security.krb5.conf=/etc/krb5.conf"

java_cmd = 'java ' + kerberos + ' -classpath ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.phoenix_client_jar + \

Alex

On Mon, Sep 1, 2014 at 6:09 PM, James Taylor <jamestaylor@apache.org> wrote:
Please try with the 4.1 jars in our binary distribution here:

http://phoenix.apache.org/download.html

Make sure to use the jars for the client and server in the hadoop2 directory.

Then follow the directions that Alex posted here:

http://bigdatanoob.blogspot.com/2013/09/connect-phoenix-to-secure-hbase-cluster.html

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/CDH5-Security-Guide.html

It sounds to me like there's a mismatch between your client and server jars.

Thanks,
James

On Mon, Sep 1, 2014 at 2:43 PM, <Deepak_Gattala@dell.com> wrote:
> I am getting the following error; I'd really appreciate any comments, please.
>
> Error: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop10.us-poclab.dellpoc.com/192.168.1.100:60000 failed on local exception: java.io.EOFException (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop10.us-poclab.dellpoc.com/192.168.1.100:60000 failed on local exception: java.io.EOFException
>         at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:101)
>        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:846)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1057)
>         at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1156)
>         at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:422)
>         at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:226)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:908)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1452)
>         at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:131)
>         at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:112)
>         at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4650)
>         at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4701)
>         at sqlline.SqlLine$Commands.connect(SqlLine.java:3942)
>         at sqlline.SqlLine$Commands.connect(SqlLine.java:3851)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
>         at sqlline.SqlLine.dispatch(SqlLine.java:817)
>         at sqlline.SqlLine.initArgs(SqlLine.java:633)
>         at sqlline.SqlLine.begin(SqlLine.java:680)
>         at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>         at sqlline.SqlLine.main(SqlLine.java:424)
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop10.us-poclab.dellpoc.com/192.168.1.100:60000 failed on local exception: java.io.EOFException
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1650)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1676)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1884)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:2671)
>         at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:397)
>         at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:402)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:772)
>         ... 23 more
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Call to ausgtmhadoop10.us-poclab.dellpoc.com/192.168.1.100:60000 failed on local exception: java.io.EOFException
>         at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
>         at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
>         at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1687)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1596)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1622)
>         ... 29 more
> Caused by: java.io.IOException: Call to ausgtmhadoop10.us-poclab.dellpoc.com/192.168.1.100:60000 failed on local exception: java.io.EOFException
>         at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1485)
>         at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
>         at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
>         ... 34 more
> Caused by: java.io.EOFException
>         at java.io.DataInputStream.readInt(DataInputStream.java:392)
>         at org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1072)
>         at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:728)
> sqlline version 1.1.2
>
> -----Original Message-----
> From: James Taylor [mailto:jamestaylor@apache.org]
> Sent: Monday, September 1, 2014 4:35 PM
> To: user
> Subject: Re: Kerberos Secure cluster and phoenix
>
> In addition to the above, in our 3.1/4.1 release, you can pass through the principal
> and keytab file on the connection URL to connect to different secure clusters, like this:
>
> DriverManager.getConnection("jdbc:phoenix:h1,h2,h3:2181:user/principal:/user.keytab");
>
> The full URL is now of the form
> jdbc:phoenix:<quorum>:<port>:<rootNode>:<principal>:<keytabFile>
>
> where <port> and <rootNode> may be absent. We determine that <port>
> is present if it's a number and <rootNode> if it begins with a '/'.
>
> One other useful feature from this work, not related to connecting to a secure cluster:
> you may specify only the <principal>, which would cause a different HConnection to be
> used (one per unique principal per cluster). In this way, you can pass through different HBase
> properties that apply to the HConnection (such as timeout parameters).
>
> For example:
>
> DriverManager.getConnection("jdbc:phoenix:h1:longRunning", props);
>
> where props would contain the HBase config parameters and values for timeouts in a "longRunning"
> connection, which could be completely different from the connection obtained through this URL:
>
> DriverManager.getConnection("jdbc:phoenix:h1:shortRunning", props);
>
> Thanks,
> James
>
> On Mon, Sep 1, 2014 at 2:13 PM, Alex Kamil <alex.kamil@gmail.com<mailto:alex.kamil@gmail.com>>
wrote:
>> see
>> http://bigdatanoob.blogspot.com/2013/09/connect-phoenix-to-secure-hbase-cluster.html
>>
>> and
>> http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/CDH5-Security-Guide.html
>>
>>
>> On Mon, Sep 1, 2014 at 5:02 PM, <Deepak_Gattala@dell.com> wrote:
>>>
>>> Hi all,
>>>
>>>
>>>
>>> Has anyone had success making a Phoenix connection to a secure HBase
>>> Hadoop cluster? If yes, can you please let me know the steps
>>> taken. I am on a recent version of Phoenix, using Cloudera CDH 5.1 with
>>> HBase 0.98.
>>>
>>>
>>>
>>> Appreciate your help.
>>>
>>>
>>>
>>> Thanks
>>>
>>> Deepak Gattala
>>
>>




--
Thanks & Regards,
Anil Gupta


