phoenix-user mailing list archives

From Mallieswari Dineshbabu <dmalliesw...@gmail.com>
Subject Re: Phoenix client failed when used HACluster name on hbase.rootdir property
Date Wed, 25 Oct 2017 07:03:18 GMT
Hi Rafa,

The DFS nameservice issue with Phoenix was resolved after putting the Hadoop and
HBase configuration directories on the classpath. This can be done by setting the
HADOOP_HOME and HBASE_HOME environment variables on the respective machines.
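
In case it helps anyone else, this is roughly what we set on each machine before
starting the Query Server (the install paths below are only placeholders, not our
actual directories):

export HADOOP_HOME=/opt/hadoop   # placeholder path; lets the Hadoop config (hdfs-site.xml with the nameservice) be picked up
export HBASE_HOME=/opt/hbase     # placeholder path; lets the HBase config (hbase-site.xml) be picked up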

Thanks for your support.

Regards,
Mallieswari

On Thu, Oct 12, 2017 at 5:26 PM, rafa <rafa13@gmail.com> wrote:

> You cannot use "hacluster" if that hostname does not resolve to an IP. That is
> what I tried to explain in my last mail.
>
> Use the IP of the machine that is running the query server, or its hostname.
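>
> For example (the host below is just a placeholder for the machine actually
> running the Query Server):
>
> python sqlline-thin.py http://queryserver-host.example.com:8765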
>
> Regards
> Rafa
>
> On 12 Oct 2017 at 6:19, "Mallieswari Dineshbabu" <dmallieswari@gmail.com>
> wrote:
>
>> Hi Rafa,
>>
>> I still get "UnknownHostException: hacluster" when I start the query
>> server with the cluster name 'hacluster' and try to connect the Phoenix client
>> as shown below.
>>
>>
>>
>> bin>python sqlline-thin.py http://hacluster:8765
>>
>> Setting property: [incremental, false]
>> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>> issuing: !connect jdbc:phoenix:thin:url=http://hacluster:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
>> Connecting to jdbc:phoenix:thin:url=http://hacluster:8765;serialization=PROTOBUF
>>
>> java.lang.RuntimeException: java.net.UnknownHostException: hacluster
>>         at org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl.send(AvaticaCommonsHttpClientImpl.java:169)
>>         at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:45)
>>         at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
>>         at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:176)
>>         at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>>         at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>>         at sqlline.Commands.connect(Commands.java:1064)
>>         at sqlline.Commands.connect(Commands.java:996)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>         at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>>         at sqlline.SqlLine.dispatch(SqlLine.java:809)
>>         at sqlline.SqlLine.initArgs(SqlLine.java:588)
>>         at sqlline.SqlLine.begin(SqlLine.java:661)
>>         at sqlline.SqlLine.start(SqlLine.java:398)
>>         at sqlline.SqlLine.main(SqlLine.java:291)
>>
>> Regards,
>>
>> Mallieswari D
>>
>>
>>
>> On Wed, Oct 11, 2017 at 5:53 PM, rafa <rafa13@gmail.com> wrote:
>>
>>> Hi Mallieswari,
>>>
>>> The hbase.rootdir is a filesystem resource. If you have an HA NameNode
>>> and a configured nameservice, it will point to the active NameNode
>>> automatically. As far as I know it is not related to HBase Master HA.
>>>
>>> The "hacluster" used in this command: python sqlline-thin.py http://hacluster:8765
>>> is a hostname. What do you obtain from nslookup hacluster?
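>>>
>>> For illustration only (the NameNode hostnames below are placeholders): a
>>> nameservice such as "hacluster" is normally defined only inside hdfs-site.xml,
>>> which is why HDFS clients can resolve it while DNS/nslookup cannot.
>>>
>>> <property>
>>>   <name>dfs.nameservices</name>
>>>   <value>hacluster</value>
>>> </property>
>>> <property>
>>>   <name>dfs.ha.namenodes.hacluster</name>
>>>   <value>nn1,nn2</value>
>>> </property>
>>> <property>
>>>   <name>dfs.namenode.rpc-address.hacluster.nn1</name>
>>>   <value>nn1.example.com:8020</value>
>>> </property>
>>> <property>
>>>   <name>dfs.namenode.rpc-address.hacluster.nn2</name>
>>>   <value>nn2.example.com:8020</value>
>>> </property>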
>>>
>>> To have several Phoenix Query Servers and achieve HA at that layer, you
>>> would need a load balancer (software or hardware) configured to balance
>>> across all your available query servers.
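>>>
>>> For illustration only, such a balancer could look roughly like the following
>>> haproxy-style sketch (pqs1/pqs2 and their hostnames are placeholders):
>>>
>>> frontend phoenix_pqs
>>>     mode http
>>>     bind *:8765
>>>     default_backend pqs_nodes
>>>
>>> backend pqs_nodes
>>>     mode http
>>>     balance roundrobin
>>>     server pqs1 pqs1.example.com:8765 check
>>>     server pqs2 pqs2.example.com:8765 check
>>>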
>>> Regards,
>>>
>>> rafa
>>>
>>> On Wed, Oct 11, 2017 at 1:30 PM, Mallieswari Dineshbabu <
>>> dmallieswari@gmail.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> I am trying to integrate Phoenix into a high-availability-enabled
>>>> Hadoop-HBase cluster. I have used the nameservice ID
>>>> <https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Configuration_overview>
>>>> instead of the HMaster's hostname in the following property, so that the
>>>> active HMaster will be identified automatically in case of failover:
>>>>
>>>> <property>
>>>>   <name>hbase.rootdir</name>
>>>>   <value>hdfs://hacluster:9000/HBase</value>
>>>> </property>
>>>>
>>>> Similarly, I tried connecting to the Query Server that is *running on one
>>>> of the HMaster nodes* from my thin client with the following URL, but
>>>> I get the error, “No suitable driver found for http://hacluster:8765"
>>>>
>>>>
>>>>
>>>> python sqlline-thin.py http://hacluster:8765
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *Please tell me what configuration needs to be done to connect the Query
>>>> Server with the nameservice ID.*
>>>>
>>>>
>>>>
>>>> Note: The same works fine when I specify the HMaster's IP address in both
>>>> my HBase configuration and the sqlline connection string.
>>>>
>>>>
>>>> --
>>>> Thanks and regards
>>>> D.Mallieswari
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks and regards
>> D.Mallieswari
>>
>


-- 
Thanks and regards
D.Mallieswari
