phoenix-user mailing list archives

From Josh Elser <josh.el...@gmail.com>
Subject Re: Re: Re: Re: Can query server run with hadoop ha mode?
Date Thu, 08 Sep 2016 22:14:28 GMT
I was going to say that
https://issues.apache.org/jira/browse/PHOENIX-3223 might be related,
but it looks like the HADOOP_CONF_DIR is already put on the classpath.
Glad to see you got this working :)

On Thu, Sep 8, 2016 at 5:56 AM, F21 <f21.groups@gmail.com> wrote:
> Glad you got it working! :)
>
> Cheers,
> Francis
>
>
> On 8/09/2016 7:11 PM, zengbaitang wrote:
>
>
> I found the reason: I had not set the HADOOP_CONF_DIR environment variable.
> Once I set it, the problem was solved.
>
> Thank you F21, thank you very much!
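>
> For reference, the fix amounts to exporting HADOOP_CONF_DIR (the directory
> containing core-site.xml and hdfs-site.xml) before launching the query server,
> so it can resolve the HA nameservice. A minimal sketch, assuming the Hadoop
> config path shown in the quoted log below and that queryserver.py is driven
> with its start/stop actions:
>
> export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.1/etc/hadoop
> ./queryserver.py stop
> ./queryserver.py start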
>
>
> ------------------ Original Message ------------------
> From: "F21" <f21.groups@gmail.com>
> Sent: Thursday, September 8, 2016, 3:33 PM
> To: "user" <user@phoenix.apache.org>
> Subject: Re: Re: Re: Can query server run with hadoop ha mode?
>
> From the response of your curl, it appears that the query server started
> correctly and is running. The next thing to check is whether it can talk to
> the HBase servers properly.
>
> Add phoenix.queryserver.serialization to the hbase-site.xml for the query
> server and set the value to JSON.
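>
> For example, the entry would look roughly like this (assuming it goes into the
> same hbase-site.xml the query server already reads):
>
>     <property>
>         <name>phoenix.queryserver.serialization</name>
>         <value>JSON</value>
>     </property>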
>
> Then try sending a CatalogsRequest to the query server using curl or wget.
> See here for how to set up the request:
> https://calcite.apache.org/docs/avatica_json_reference.html#catalogsrequest
>
> Before sending the CatalogsRequest, remember to send an
> OpenConnectionRequest first:
> https://calcite.apache.org/docs/avatica_json_reference.html#openconnectionrequest
>
> In your case, the `info` key of the OpenConnectionRequest can be omitted.
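>
> A rough sketch of the two requests with curl (the connectionId is just an
> arbitrary client-chosen string, and the host/port are taken from your earlier
> curl, so adjust as needed):
>
> curl -XPOST -H 'Content-Type: application/json' \
>   -d '{"request":"openConnection","connectionId":"conn-1"}' http://tnode02:8765
> curl -XPOST -H 'Content-Type: application/json' \
>   -d '{"request":"getCatalogs","connectionId":"conn-1"}' http://tnode02:8765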
>
> Cheers,
> Francis
>
> On 8/09/2016 4:12 PM, zengbaitang wrote:
>
> yes, the query server runs on one of the regionservers
>
> and when I exec curl 'http://tnode02:8765', the terminal returns:
> <HTML>
> <HEAD>
> <TITLE>Error 404 - Not Found</TITLE>
> <BODY>
> <H2>Error 404 - Not Found.</H2>
> No context on this server matched or handled this request.<BR>Contexts known
> to this server are: <ul></ul><hr><a href="http://eclipse.org/jetty"><img
> border=0 src="/favicon.ico"/></a>&nbsp;<a
> href="http://eclipse.org/jetty">Powered by Jetty:// Java Web Server</a><hr/>
>
> </BODY>
> </HTML>
>
>
>
> ------------------ Original Message ------------------
> From: "F21" <f21.groups@gmail.com>
> Sent: Thursday, September 8, 2016, 2:01 PM
> To: "user" <user@phoenix.apache.org>
> Subject: Re: Re: Can query server run with hadoop ha mode?
>
> Your logs do not seem to show any errors.
>
> You mentioned that you have 2 hbase-site.xml. Are the Phoenix query servers
> running on the same machine as the HBase servers? If not, the hbase-site.xml
> for the phoenix query servers also needs the zookeeper configuration.
>
> Did you also try to use curl or wget to get
> http://your-phoenix-query-server:8765 to see if there's a response?
>
> Cheers,
> Francis
>
> On 8/09/2016 3:54 PM, zengbaitang wrote:
>
> hi F21, I am sure hbase-site.xml was configured properly.
>
> here is my hbase-site.xml (HBase side):
> <configuration>
>     <property>
>         <name>hbase.rootdir</name>
>         <value>hdfs://stage-cluster/hbase</value>
>     </property>
>
>     <property>
>         <name>hbase.cluster.distributed</name>
>         <value>true</value>
>     </property>
>     <property>
>         <name>hbase.zookeeper.quorum</name>
>         <value>tnode01,tnode02,tnode03</value>
>     </property>
>     <property>
>         <name>zookeeper.znode.parent</name>
>         <value>/hbase</value>
>     </property>
>     <property>
>         <name>dfs.support.append</name>
>         <value>true</value>
>     </property>
>     <property>
>         <name>zookeeper.session.timeout</name>
>         <value>180000</value>
>     </property>
>     <property>
>         <name>hbase.rpc.timeout</name>
>         <value>120000</value>
>     </property>
>     <property>
>         <name>hbase.hregion.memstore.flush.size</name>
>         <value>67108864</value>
>     </property>
>     <property>
>         <name>hfile.block.cache.size</name>
>         <value>0.1</value>
>     </property>
>
>     <!-- phoenix conf -->
>     <property>
>         <name>phoenix.schema.isNamespaceMappingEnabled</name>
>         <value>true</value>
>     </property>
>
>     <property>
>         <name>hbase.regionserver.wal.codec</name>
>         <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>     </property>
>
>     <property>
>         <name>hbase.region.server.rpc.scheduler.factory.class</name>
>         <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
>         <description>Factory to create the Phoenix RPC Scheduler that uses
>             separate queues for index and metadata updates</description>
>     </property>
>
>     <property>
>         <name>hbase.rpc.controllerfactory.class</name>
>         <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
>         <description>Factory to create the Phoenix RPC Scheduler that uses
>             separate queues for index and metadata updates</description>
>     </property>
>
>
> </configuration>
>
> and the following is the Phoenix-side hbase-site.xml:
> <configuration>
>   <property>
>     <name>hbase.regionserver.wal.codec</name>
>     <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>   </property>
>
>   <property>
>     <name>phoenix.schema.isNamespaceMappingEnabled</name>
>     <value>true</value>
>   </property>
>
> </configuration>
>
> and the following is the query server log:
>
> 2016-09-08 13:33:03,218 INFO org.apache.phoenix.queryserver.server.Main:
> env:PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/usr/local/hadoop-2.7.1/bin:/usr/local/hbase-1.1.2/bin:/usr/local/apache-hive-1.2.1-bin/bin:/usr/local/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/bin
> 2016-09-08 13:33:03,219 INFO org.apache.phoenix.queryserver.server.Main:
> env:HISTCONTROL=ignoredups
> 2016-09-08 13:33:03,219 INFO org.apache.phoenix.queryserver.server.Main:
> env:HCAT_HOME=/usr/local/apache-hive-1.2.1-bin/hcatalog
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:HISTSIZE=1000
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:JAVA_HOME=/usr/local/java/latest
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:TERM=xterm
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:LANG=en_US.UTF-8
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:G_BROKEN_FILENAMES=1
> 2016-09-08 13:33:03,220 INFO org.apache.phoenix.queryserver.server.Main:
> env:SELINUX_LEVEL_REQUESTED=
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:SELINUX_ROLE_REQUESTED=
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:MAIL=/var/spool/mail/hadoop
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:LOGNAME=hadoop
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:PWD=/usr/local/apache-phoenix-4.8.0-HBase-1.1-bin/bin
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:KYLIN_HOME=/usr/local/apache-kylin-1.5.1-bin
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:_=./queryserver.py
> 2016-09-08 13:33:03,221 INFO org.apache.phoenix.queryserver.server.Main:
> env:LESSOPEN=|/usr/bin/lesspipe.sh %s
> 2016-09-08 13:33:03,222 INFO org.apache.phoenix.queryserver.server.Main:
> env:SHELL=/bin/bash
> 2016-09-08 13:33:03,222 INFO org.apache.phoenix.queryserver.server.Main:
> env:SELINUX_USE_CURRENT_RANGE=
> 2016-09-08 13:33:03,222 INFO org.apache.phoenix.queryserver.server.Main:
> env:QTINC=/usr/lib64/qt-3.3/include
> 2016-09-08 13:33:03,222 INFO org.apache.phoenix.queryserver.server.Main:
> env:CVS_RSH=ssh
> 2016-09-08 13:33:03,222 INFO org.apache.phoenix.queryserver.server.Main:
> env:SSH_TTY=/dev/pts/0
> 2016-09-08 13:33:03,222 INFO org.apache.phoenix.queryserver.server.Main:
> env:SSH_CLIENT=172.18.100.27 51441 22
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:HIVE_HOME=/usr/local/apache-hive-1.2.1-bin
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:OLDPWD=/usr/local/hadoop-2.7.1/etc/hadoop
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:USER=hadoop
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:SSH_CONNECTION=172.18.100.27 51441 172.23.201.49 22
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:HOSTNAME=tnode02
> 2016-09-08 13:33:03,223 INFO org.apache.phoenix.queryserver.server.Main:
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-09-08 13:33:03,224 INFO org.apache.phoenix.queryserver.server.Main:
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-09-08 13:33:03,224 INFO org.apache.phoenix.queryserver.server.Main:
> env:HADOOP_HOME=/usr/local/hadoop-2.7.1
> 2016-09-08 13:33:03,224 INFO org.apache.phoenix.queryserver.server.Main:
> env:HBASE_HOME=/usr/local/hbase-1.1.2
> 2016-09-08 13:33:03,224 INFO org.apache.phoenix.queryserver.server.Main:
> env:LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
> 2016-09-08 13:33:03,225 INFO org.apache.phoenix.queryserver.server.Main:
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-09-08 13:33:03,225 INFO org.apache.phoenix.queryserver.server.Main:
> env:HOME=/home/hadoop
> 2016-09-08 13:33:03,225 INFO org.apache.phoenix.queryserver.server.Main:
> env:SHLVL=1
> 2016-09-08 13:33:03,225 INFO org.apache.phoenix.queryserver.server.Main:
> env:ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.8
> 2016-09-08 13:33:03,228 INFO org.apache.phoenix.queryserver.server.Main:
> vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation,
> vmVersion=25.65-b01
> 2016-09-08 13:33:03,229 INFO org.apache.phoenix.queryserver.server.Main:
> vmInputArguments=[-Dproc_phoenixserver,
> -Dlog4j.configuration=file:/usr/local/apache-phoenix-4.8.0-HBase-1.1-bin/bin/log4j.properties,
> -Dpsql.root.logger=INFO,DRFA, -Dpsql.log.dir=/var/log/hbase/logs,
> -Dpsql.log.file=hadoop-queryserver.log]
> 2016-09-08 13:33:03,444 WARN org.apache.hadoop.util.NativeCodeLoader: Unable
> to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2016-09-08 13:33:03,709 INFO
> org.apache.calcite.avatica.metrics.MetricsSystemLoader: No metrics
> implementation available on classpath. Using No-op implementation
> 2016-09-08 13:33:03,736 INFO
> org.apache.phoenix.shaded.org.eclipse.jetty.util.log: Logging initialized
> @1458ms
> 2016-09-08 13:33:04,129 INFO
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server:
> jetty-9.2.z-SNAPSHOT
> 2016-09-08 13:33:04,194 INFO
> org.apache.phoenix.shaded.org.eclipse.jetty.server.ServerConnector: Started
> ServerConnector@131ef10{HTTP/1.1}{0.0.0.0:8765}
> 2016-09-08 13:33:04,195 INFO
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server: Started @1922ms
> 2016-09-08 13:33:04,195 INFO org.apache.calcite.avatica.server.HttpServer:
> Service listening on port 8765.
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=tnode02
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.8.0_65
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Oracle Corporation
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/usr/local/java/jdk1.8.0_65/jre
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/usr/local/hbase-1.1.2/conf:/etc/hadoop/conf:/usr/local/apache-phoenix-4.8.0-HBase-1.1-bin/bin/../phoenix-4.8.0-HBase-1.1-client.jar:/usr/local/apache-phoenix-4.8.0-HBase-1.1-bin/bin/../phoenix-4.8.0-HBase-1.1-queryserver.jar
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=2.6.32-431.el6.x86_64
> 2016-09-08 13:33:36,903 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=hadoop
> 2016-09-08 13:33:36,904 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/hadoop
> 2016-09-08 13:33:36,904 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/
> 2016-09-08 13:33:36,904 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Initiating client
> connection, connectString=tnode01:2181,tnode02:2181,tnode03:2181
> sessionTimeout=180000 watcher=hconnection-0x47e590f30x0,
> quorum=tnode01:2181,tnode02:2181,tnode03:2181, baseZNode=/hbase
> 2016-09-08 13:33:36,925 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxn: Opening socket
> connection to server tnode02/172.23.201.49:2181. Will not attempt to
> authenticate using SASL (unknown error)
> 2016-09-08 13:33:36,927 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxn: Socket connection
> established to tnode02/172.23.201.49:2181, initiating session
> 2016-09-08 13:33:36,950 INFO
> org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxn: Session
> establishment complete on server tnode02/172.23.201.49:2181, sessionid =
> 0x25702951a6e001b, negotiated timeout = 40000
>
>
>
> ------------------ Original Message ------------------
> From: "F21" <f21.groups@gmail.com>
> Sent: Thursday, September 8, 2016, 11:58 AM
> To: "user" <user@phoenix.apache.org>
> Subject: Re: Can query server run with hadoop ha mode?
>
> I have a test cluster running HDFS in HA mode with HBase + Phoenix on Docker,
> and it runs successfully.
>
> Can you check if you have a properly configured hbase-site.xml that is
> available to your Phoenix query server? Make sure hbase.zookeeper.quorum and
> zookeeper.znode.parent are present. If ZooKeeper does not run on port 2181, you
> will also need hbase.zookeeper.property.clientPort.
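>
> A minimal sketch of those entries for the query server's hbase-site.xml (the
> hostnames are placeholders, and the clientPort property is only needed when
> ZooKeeper listens on a non-default port):
>
> <configuration>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>zk1,zk2,zk3</value>
>   </property>
>   <property>
>     <name>zookeeper.znode.parent</name>
>     <value>/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.clientPort</name>
>     <value>2181</value>
>   </property>
> </configuration>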
>
> As a quick test, can you wget or curl http://your-phoenix-server:8765 to see
> if it has any response? Finally, if you could post the logs from the query
> server, that would be great too.
>
> Cheers,
> Francis
>
>
> On 8/09/2016 12:55 PM, zengbaitang wrote:
>
> I have a Hadoop HA cluster and HBase, and I have installed Phoenix.
>
> I tried to use the query server today. I started the query server and then ran
> the following command:
>
> ./sqlline-thin.py http://tnode02:8765 sel.sql
>
> the terminal responds with the following error, and stage-cluster is the
> value of the Hadoop dfs.nameservices setting.
> How can I solve this error?
>
> AvaticaClientRuntimeException: Remote driver error: RuntimeException:
> java.sql.SQLException: ERROR 103 (08004): Unable to establish connection. ->
> SQLException: ERROR 103 (08004): Unable to establish connection. ->
> IOException: java.lang.reflect.InvocationTargetException ->
> InvocationTargetException: (null exception message) ->
> ExceptionInInitializerError: (null exception message) ->
> IllegalArgumentException: java.net.UnknownHostException: stage-cluster ->
> UnknownHostException: stage-cluster. Error -1 (00000) null
>
> java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): Unable
> to establish connection.
>         at
> org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:619)
>         at
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:299)
>         at
> org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1748)
>         at
> org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1728)
>         at
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
>         at
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>         at
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:124)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>         at
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: ERROR 103 (08004): Unable to establish
> connection.
>         at
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:454)
>         at
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>         at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:393)
>         at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:219)
>         at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2321)
>         at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2300)
>         at
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>         at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2300)
>         at
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:231)
>         at
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
>         at
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
>         at java.sql.DriverManager.getConnection(DriverManager.java:664)
>         at java.sql.DriverManager.getConnection(DriverManager.java:208)
>         at
> org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:616)
>         ... 15 more
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>         at
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>         at
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:421)
>         at
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:330)
>         at
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>         at
> org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
>         at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:391)
>         ... 26 more
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>         at
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>         ... 31 more
> Caused by: java.lang.ExceptionInInitializerError
>         at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
>         at
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
>         at
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>         at
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:880)
>         at
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:636)
>         ... 36 more
> Caused by: java.lang.IllegalArgumentException:
> java.net.UnknownHostException: stage-cluster
>         at
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
>         at
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
>         at
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>         at
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>         at
> org.apache.hadoop.hbase.util.DynamicClassLoader.initTempDir(DynamicClassLoader.java:118)
>         at
> org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:98)
>         at
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
>         ... 41 more
> Caused by: java.net.UnknownHostException: stage-cluster
>         ... 56 more
>
>
>         at
> org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2453)
>         at
> org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:61)
>         at
> org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
>         at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:175)
>         at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>         at
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>         at sqlline.Commands.connect(Commands.java:1064)
>         at sqlline.Commands.connect(Commands.java:996)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>         at sqlline.SqlLine.dispatch(SqlLine.java:803)
>         at sqlline.SqlLine.initArgs(SqlLine.java:588)
>         at sqlline.SqlLine.begin(SqlLine.java:656)
>         at sqlline.SqlLine.start(SqlLine.java:398)
>         at sqlline.SqlLine.main(SqlLine.java:292)
>         at
> org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:83)
>
>
>
>
>
>
