phoenix-user mailing list archives

From: 倪项菲 <nixiangfei_...@chinamobile.com>
Subject: Re: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6
Date: Tue, 14 Aug 2018 01:06:31 GMT


Hi Ankit,

   I have put phoenix-4.14.0-HBase-1.2-server.jar into /opt/hbase-1.2.6/lib. Is
this correct?
From: Ankit Singhal
Date: 2018/08/14 (Tue) 02:25
To: user
Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Skipping sanity checks may destabilize functionality that Phoenix relies on.
The SplitPolicy should have been loaded to prevent the SYSTEM.CATALOG table
from splitting, so to actually fix the issue, please check that you have the
right phoenix-server.jar in the HBase classpath.

"Unable to load configured region split policy
'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG'
Set hbase.table.sanity.checks to false at conf or table descriptor if you
want to bypass sanity checks"
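The classpath check suggested above can be sketched as a small shell helper. `check_phoenix_jar` is a hypothetical name, not from this thread; the jar naming pattern is assumed from the 4.14.0 release discussed here.

```shell
# check_phoenix_jar: list any Phoenix server jar in an HBase lib directory.
# Empty output means the jar is missing and the MetaDataSplitPolicy class
# cannot be loaded by the HMaster.
check_phoenix_jar() {
  local hbase_lib="$1"
  ls "$hbase_lib" | grep -i 'phoenix.*server.*\.jar$'
}

# Example, using the path mentioned in this thread:
# check_phoenix_jar /opt/hbase-1.2.6/lib
```

Run it on every HMaster and HRegionServer host, since each node loads the jar independently.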





Regards,

Ankit Singhal




On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 <nixiangfei_iov@chinamobile.com> wrote:




Thanks all.

In the end I set hbase.table.sanity.checks to false in hbase-site.xml
and restarted the HBase cluster; it works now.
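For reference, the workaround described above corresponds to the following hbase-site.xml fragment. Note Ankit's caution earlier in the thread: this bypasses HBase's sanity checks cluster-wide rather than fixing the missing server jar.

```xml
<!-- Workaround only: disables HBase table descriptor sanity checks. -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
```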
From: Josh Elser
Date: 2018/08/07 (Tue) 20:58
To: user
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6



"Phoenix-server" refers to the phoenix-$VERSION-server.jar that is
either included in the binary tarball or is generated by the official
source-release.

"Deploying" it means copying the jar to $HBASE_HOME/lib.
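That deployment step can be sketched as below. `deploy_phoenix_server` is a hypothetical helper, not part of Phoenix; in a real cluster you would repeat the copy (e.g. via scp) on every HMaster and HRegionServer host, then restart HBase so the jar is picked up.

```shell
# deploy_phoenix_server: copy the Phoenix server jar into an HBase lib dir.
deploy_phoenix_server() {
  local jar="$1" hbase_lib="$2"
  cp "$jar" "$hbase_lib/"
}

# Example with the version and path discussed in this thread:
# deploy_phoenix_server phoenix-4.14.0-HBase-1.2-server.jar /opt/hbase-1.2.6/lib
```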

On 8/6/18 9:56 PM, 倪项菲 wrote:
> 
> Hi Zhang Yun,
>      the link you mentioned tells us to add the phoenix jar to the hbase
> lib directory, but it doesn't tell us how to deploy the phoenix server.
> 
>     From: Jaanai Zhang <mailto:cloud.poster@gmail.com>
>     Date: 2018/08/07 (Tue) 09:36
>     To: user <mailto:user@phoenix.apache.org>;
>     Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
>     with hbase 1.2.6
> 
> reference link: http://phoenix.apache.org/installation.html
> 
> 
>     ----------------------------------------
>     Yun Zhang
>     Best regards!
> 
>

> 2018-08-07 9:30 GMT+08:00 倪项菲 <nixiangfei_iov@chinamobile.com 
> <mailto:nixiangfei_iov@chinamobile.com>>:
>

>     Hi Zhang Yun,
>          how do we deploy the Phoenix server? I just have the information
>     from the phoenix website, and it doesn't mention the phoenix server.
> 
>         From: Jaanai Zhang <mailto:cloud.poster@gmail.com>
>         Date: 2018/08/07 (Tue) 09:16
>         To: user <mailto:user@phoenix.apache.org>;
>         Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
>         with hbase 1.2.6
> 
>     Please ensure your Phoenix server was deployed and has been restarted.
> 
> 
>     ----------------------------------------
>         Yun Zhang
>         Best regards!
> 
> 
>     2018-08-07 9:10 GMT+08:00 倪项菲 <nixiangfei_iov@chinamobile.com
>     <mailto:nixiangfei_iov@chinamobile.com>>:
>

> 
>         Hi Experts,
>              I am using HBase 1.2.6; the cluster is working well with
>         HMaster HA, but when we integrated Phoenix with HBase it
>         failed. Below are the steps:
>              1. download apache-phoenix-4.14.0-HBase-1.2-bin from
>         http://phoenix.apache.org, then copy the tar file to the HMaster
>         and unzip the file
>              2. copy phoenix-core-4.14.0-HBase-1.2.jar and
>         phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including
>         HMaster and HRegionServer, putting them into hbasehome/lib; my
>         path is /opt/hbase-1.2.6/lib
>              3. restart hbase cluster
>              4. then start to use phoenix, but it returns the error below:
>         [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
>         plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
>         Setting property: [incremental, false]
>         Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>         issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
>         none org.apache.phoenix.jdbc.PhoenixDriver
>         Connecting to
>         jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
>         SLF4J: Class path contains multiple SLF4J bindings.
>         SLF4J: Found binding in
>         [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>         SLF4J: Found binding in
>         [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>         SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
>         for an explanation.
>         18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
>         native-hadoop library for your platform... using builtin-java
>         classes where applicable
>         Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to
>         load configured region split policy
>         'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
>         'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
>         or table descriptor if you want to bypass sanity checks
>                  at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
>                  at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
>                  at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
>                  at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
>                  at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>                  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>                  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>                  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>                  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>                  at java.lang.Thread.run(Thread.java:745)
>         (state=08000,code=101)
>         org.apache.phoenix.exception.PhoenixIOException:
>         org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load
>         configured region split policy
>         'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
>         'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
>         or table descriptor if you want to bypass sanity checks
>                  at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
>                  at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
>                  at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
>                  at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
>                  at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>                  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>                  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>                  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>                  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>                  at java.lang.Thread.run(Thread.java:745)
> 
>                  at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
>                  at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
>                  at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
>                  at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717)
>                  at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
>                  at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>                  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>                  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>                  at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>                  at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
>                  at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>                  at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
>                  at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2528)
>                  at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2491)
>                  at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>                  at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2491)
>                  at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>                  at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>                  at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>                  at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>                  at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>                  at sqlline.Commands.connect(Commands.java:1064)
>                  at sqlline.Commands.connect(Commands.java:996)
>                  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>                  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>                  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>                  at java.lang.reflect.Method.invoke(Method.java:498)
>                  at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>                  at sqlline.SqlLine.dispatch(SqlLine.java:809)
>                  at sqlline.SqlLine.initArgs(SqlLine.java:588)
>                  at sqlline.SqlLine.begin(SqlLine.java:661)
>                  at sqlline.SqlLine.start(SqlLine.java:398)
>                  at sqlline.SqlLine.main(SqlLine.java:291)
>         Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
>         org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load
>         configured region split policy
>         'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
>         'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
>         or table descriptor if you want to bypass sanity checks
> 
>                I searched the internet, but found no help.
>                Any help would be highly appreciated!
> 
> 
> 
 