phoenix-user mailing list archives

From Mike Friedman <mike.fried...@salesforce.com>
Subject Re: testing problem
Date Fri, 06 Feb 2015 04:54:13 GMT
James,

Thanks! That worked. Some details that might help people:

- Separate the columns in /etc/hosts with tabs.
- Run "file -b /etc/hosts" and make sure the output is "ASCII
English text". If you copy/paste something into /etc/hosts, you might
introduce a character that coerces the file into UTF-8 or some other
encoding, which can be problematic. Use a native editor like vi to clean up
/etc/hosts and get it back to "ASCII English text".
- Run "dscacheutil -flushcache" to make sure the OS will use your new
/etc/hosts immediately.
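The checks above, as a small script. This is a sketch: the grep pattern is my own portable approximation of the "ASCII English text" test, and dscacheutil applies only to OS X (Linux doesn't have or need it).

```shell
# 1. Check the file's reported encoding. Copy/pasting from rich-text
#    sources can introduce non-ASCII bytes that break hostname resolution.
file -b /etc/hosts

# 2. Approximate the ASCII check directly: in the C locale, any non-ASCII
#    byte is non-printable, so this prints offending lines (no output = clean).
LC_ALL=C grep -n '[^[:print:][:space:]]' /etc/hosts \
    || echo "/etc/hosts is clean ASCII"

# 3. Flush the DNS cache so the OS picks up the edited file immediately.
#    (dscacheutil exists only on OS X; the guard makes this a no-op elsewhere.)
command -v dscacheutil >/dev/null && dscacheutil -flushcache || true
```

The grep check is handy because "file" can report UTF-8 without telling you which line is at fault; grep -n points at the exact line to fix.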



Mike


On Thu, Feb 5, 2015 at 7:03 PM, James Taylor <jamestaylor@apache.org> wrote:

> Mike,
> Nothing is required that I'm aware of to just run the unit tests. I do
> remember having to add a line to /etc/hosts on my Mac laptop that
> wasn't required on my Linux box. Something like this, where the second
> entry after localhost is your machine's fully qualified hostname:
>
> 127.0.0.1       localhost your-machine-name.your.domain.com
>
> HTH,
> James
>
> On Thu, Feb 5, 2015 at 6:42 PM, Mike Friedman
> <mike.friedman@salesforce.com> wrote:
> > In my environment there is no Hadoop cluster other than the one I have
> > been led to believe is built into Phoenix itself specifically for the
> > purpose of running the automated tests.
> >
> >
> >
> > Mike
> >
> > On Thu, Feb 5, 2015 at 5:32 PM, sunfl@certusnet.com.cn
> > <sunfl@certusnet.com.cn> wrote:
> >>
> >> Hi, Mike
> >> You are connecting to a remote hbase cluster, aren't you? Can you ping
> >> the master node from localhost?
> >> This looks like a network connection problem, so I'd check that first.
> >>
> >> Thanks,
> >> Sun.
> >>
> >> ________________________________
> >>
> >> CertusNet
> >>
> >>
> >> From: Mike Friedman
> >> Date: 2015-02-06 03:53
> >> To: user
> >> Subject: testing problem
> >> Hi,
> >>
> >> In Eclipse, when I right-click on *IT.java files and choose Run As
> >> JUnit, I get exceptions like the ones below. Is there some step I am
> >> missing? Thanks for any suggestions.  - Mike
> >>
> >> . . .
> >>
> >> 2015-02-05 11:32:38,889 DEBUG
> >> [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@30e868be]
> >> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1358): BLOCK*
> >> neededReplications = 0 pendingReplications = 0
> >>
> >> 2015-02-05 11:32:39,942 INFO  [M:0;192.168.0.169:50970]
> >> org.apache.hadoop.hbase.master.ServerManager(868): Waiting for region
> >> servers count to settle; currently checked in 0, slept for 16880 ms,
> >> expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
> >>
> >> 2015-02-05 11:32:41,460 INFO  [M:0;192.168.0.169:50970]
> >> org.apache.hadoop.hbase.master.ServerManager(868): Waiting for region
> >> servers count to settle; currently checked in 0, slept for 18398 ms,
> >> expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
> >>
> >> 2015-02-05 11:32:41,891 DEBUG
> >> [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@30e868be]
> >> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1358): BLOCK*
> >> neededReplications = 0 pendingReplications = 0
> >>
> >> 2015-02-05 11:32:43,012 INFO  [M:0;192.168.0.169:50970]
> >> org.apache.hadoop.hbase.master.ServerManager(868): Waiting for region
> >> servers count to settle; currently checked in 0, slept for 19950 ms,
> >> expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
> >>
> >> 2015-02-05 11:32:43,085 WARN  [RS:0;192.168.0.169:50972]
> >> org.apache.hadoop.hbase.regionserver.HRegionServer(2130): error telling
> >> master we are up
> >>
> >> com.google.protobuf.ServiceException:
> >> org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while
> >> waiting for channel to be ready for connect. ch :
> >> java.nio.channels.SocketChannel[connection-pending
> >> local=/192.168.0.169:50983 remote=/192.168.0.169:50970]
> >> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1678)
> >> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> >> at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8277)
> >> at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2120)
> >> at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:885)
> >> at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:155)
> >> at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:107)
> >> at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:139)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:360)
> >> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1471)
> >> at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:310)
> >> at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:137)
> >> at java.lang.Thread.run(Thread.java:745)
> >> Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis
> >> timeout while waiting for channel to be ready for connect. ch :
> >> java.nio.channels.SocketChannel[connection-pending
> >> local=/192.168.0.169:50983 remote=/192.168.0.169:50970]
> >> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
> >> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> >> at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:578)
> >> at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:868)
> >> at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
> >> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
> >> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> >> ... 13 more
> >>
> >> 2015-02-05 11:32:43,087 WARN  [RS:0;192.168.0.169:50972]
> >> org.apache.hadoop.hbase.regionserver.HRegionServer(887): reportForDuty
> >> failed; sleeping and then retrying.
> >>
> >> 2015-02-05 11:32:44,555 INFO  [M:0;192.168.0.169:50970]
> >> org.apache.hadoop.hbase.master.ServerManager(868): Waiting for region
> >> servers count to settle; currently checked in 0, slept for 21493 ms,
> >> expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
> >>
> >> 2015-02-05 11:32:44,896 DEBUG
> >> [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@30e868be]
> >> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1358): BLOCK*
> >> neededReplications = 0 pendingReplications = 0
> >>
> >> 2015-02-05 11:32:46,092 INFO  [RS:0;192.168.0.169:50972]
> >> org.apache.hadoop.hbase.regionserver.HRegionServer(2112): reportForDuty to
> >> master=192.168.0.169,50970,1423164741559 with port=50972,
> >> startcode=1423164741796
> >>
> >> 2015-02-05 11:32:46,099 INFO  [M:0;192.168.0.169:50970]
> >> org.apache.hadoop.hbase.master.ServerManager(868): Waiting for region
> >> servers count to settle; currently checked in 0, slept for 23037 ms,
> >> expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms.
> >>
> >> . . .
> >
> >
>
