Phoenix 5.1 doesn't actually exist yet, at least not at the Apache level; we haven't released it. It's possible that a vendor or user has cut an unofficial release from one of our development branches, but that's not something we can support. You should contact your vendor.

Also, since I see you're upgrading from Phoenix 4.14 to 5.1: the 4.x branch of Phoenix is for HBase 1.x systems, and the 5.x branch is for HBase 2.x systems. If you're upgrading from 4.x to 5.x, make sure that you also upgrade your HBase. If you're still on HBase 1.x, we recently released Phoenix 4.15, which does have a supported upgrade path from 4.14 (and a very similar set of features to what 5.1 will eventually get).
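
If it helps, here is a minimal sketch of a pre-upgrade sanity check, assuming the Phoenix JDBC driver and HBase client jars are on the classpath (the ZooKeeper quorum "zk1,zk2,zk3" is a placeholder for your cluster). It prints the HBase client version next to the Phoenix version reported through standard JDBC metadata, which should make a 4.x/HBase 1.x versus 5.x/HBase 2.x mismatch obvious before you start:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    import org.apache.hadoop.hbase.util.VersionInfo;

    public class VersionCheck {
        public static void main(String[] args) throws Exception {
            // HBase client jar version on the classpath
            // (should be 1.x for Phoenix 4.x and 2.x for Phoenix 5.x).
            System.out.println("HBase client version: " + VersionInfo.getVersion());

            // Phoenix reports its own version through standard JDBC metadata.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3")) {
                DatabaseMetaData md = conn.getMetaData();
                System.out.println("Phoenix server: " + md.getDatabaseProductVersion());
                System.out.println("Phoenix driver: " + md.getDriverVersion());
            }
        }
    }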

Geoffrey

On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran <prathapmdu@gmail.com> wrote:
Hello All,

We are trying to upgrade Phoenix from "apache-phoenix-4.14.0-cdh5.14.2" to "APACHE_PHOENIX-5.1.0-cdh6.1.0".

I couldn't find any upgrade steps for this. Could you please point me to any available documentation?
 
Note:
I have downloaded the Phoenix parcel below and am trying to run some DML operations, but I am getting the following error:

https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel

Error:
20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row \x00\x00WEB_STAT
java.util.concurrent.ExecutionException: org.apache.hadoop.hbase.TableNotFoundException: org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
        at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
        at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
        at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
        at org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
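
The TableNotFoundException above refers to SYSTEM.CHILD_LINK, one of the Phoenix system tables this build apparently expects to find in HBase. A minimal sketch, assuming an HBase 2.x client on the classpath and a placeholder ZooKeeper quorum ("zk1,zk2,zk3"), to confirm whether that table actually exists:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ChildLinkCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3"); // placeholder quorum

            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // With namespace mapping enabled, the table would be SYSTEM:CHILD_LINK instead.
                TableName childLink = TableName.valueOf("SYSTEM.CHILD_LINK");
                System.out.println("SYSTEM.CHILD_LINK exists: " + admin.tableExists(childLink));
            }
        }
    }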

Thanks,
Prathap