Please see my earlier message that has more detail on the changes you'll need to make to your DDL statements.

Thanks


On Fri, Feb 7, 2014 at 12:30 PM, Sean Huo <sean@crunchyroll.com> wrote:
Hi James,

So all I need to do is drop the system table and rerun the DDL? Do I need to modify the DDL at all? I do have the TIMESTAMP type in it.

Thanks
Sean


On Thu, Feb 6, 2014 at 7:51 PM, James Taylor <jamestaylor@apache.org> wrote:
Sorry, but there are no plans for a migration path from 3.0.0-SNAPSHOT to 3.0 (other than dropping the system table and re-issuing your DDL). There are just too many different states the system table could have been in to make it tenable.

Thanks,
James


On Thu, Feb 6, 2014 at 7:35 PM, Sean Huo <sean@crunchyroll.com> wrote:
Thanks for getting back to me, James.
I suppose I can wait for the 3.0 release. Is there a migration path from the pre-Apache Phoenix 3.0.0-SNAPSHOT to 3.0?

Thanks
Sean


On Thu, Feb 6, 2014 at 5:06 PM, James Taylor <jamestaylor@apache.org> wrote:
Hi Sean,
Your data isn't affected, only your metadata. Not sure what your time frame is, but we plan to release 3.0 by the end of the month. If you need something before that, another possible, riskier migration path would be:
- Migrate from GitHub Phoenix 2.2.2 -> Apache Phoenix 2.2.3. This will update your coprocessors to point to the ones with the new package names. We're aiming for a 2.2.3 release next week if all goes well.
- After this migration, you could use the 3.0.0-SNAPSHOT if you:
  - rerun all your DDL statements and replace any DATE, TIME, and TIMESTAMP declarations with UNSIGNED_DATE, UNSIGNED_TIME, and UNSIGNED_TIMESTAMP
  - add a DEFAULT_COLUMN_FAMILY='_0' property at the end of each DDL statement (see the sketch below)
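
To illustrate, a re-issued DDL statement with both changes might look something like the sketch below. The table and column names are just hypothetical placeholders; the UNSIGNED_* types and the DEFAULT_COLUMN_FAMILY property are the only required changes:

CREATE TABLE IF NOT EXISTS event_log (
    event_id BIGINT NOT NULL,
    created_at UNSIGNED_TIMESTAMP NOT NULL,  -- was TIMESTAMP
    event_date UNSIGNED_DATE,                -- was DATE
    payload VARCHAR,
    CONSTRAINT pk PRIMARY KEY (event_id, created_at)
) DEFAULT_COLUMN_FAMILY='_0';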

But keep in mind that the metadata will change a bit more before we release, so you may have to go through this again.
Thanks,
James


On Thu, Feb 6, 2014 at 3:31 PM, Mujtaba Chohan <mujtaba@apache.org> wrote:
James can comment on the exact timeframe for the master branch, but he is working today to add a metadata update process to the 2.2.3 branch that will update all com.salesforce.* coprocessors to org.apache.*.

Thanks,
Mujtaba


On Thu, Feb 6, 2014 at 3:13 PM, Sean Huo <sean@crunchyroll.com> wrote:
So it looks like I have some tables that were created with the previous version of Phoenix, before the migration to the Apache project.
The metadata on these tables has the coprocessors defined like this:

coprocessor$5 => '|com.salesforce.hbase.index.Indexer|1073741823|com.salesforce.hbase.index.codec.class=com.salesforce.phoenix.index.PhoenixIndexCodec,index.builder=com.salesforce.phoenix.index.PhoenixIndexBuilder',
coprocessor$4 => '|com.salesforce.phoenix.coprocessor.ServerCachingEndpointImpl|1|',
coprocessor$3 => '|com.salesforce.phoenix.coprocessor.GroupedAggregateRegionObserver|1|',
coprocessor$2 => '|com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|',
coprocessor$1 => '|com.salesforce.phoenix.coprocessor.ScanRegionObserver|1|'


Clearly these still reference the old package names and won't work with the latest Phoenix.

What do I need to do to be able to run the latest Phoenix without losing data?

Thanks

Sean


On Thu, Feb 6, 2014 at 11:50 AM, Sean Huo <sean@crunchyroll.com> wrote:
I pushed the latest Phoenix jar to the regionservers and restarted them.
There are tons of exceptions pertaining to the coprocessors, like:

2014-02-06 11:39:00,570 DEBUG org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Loading coprocessor class com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver with path null and priority 1

2014-02-06 11:39:00,571 WARN org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: attribute 'coprocessor$2' has invalid coprocessor specification '|com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver|1|'

2014-02-06 11:39:00,571 WARN org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: java.io.IOException: No jar path specified for com.salesforce.phoenix.coprocessor.UngroupedAggregateRegionObserver

at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:183)


I understand that the new code is now under Apache and the package name has been changed to org.apache.phoenix, hence the errors are understandable.

Are there any migrations that have to be undertaken to get rid of the errors?


Thanks

Sean