phoenix-user mailing list archives

From James Taylor <>
Subject Re: Issue with maintaining the index with not null in table definition
Date Sun, 21 Sep 2014 06:11:47 GMT
Hi Mohamed,
Thanks for the detail on this issue. I filed and fixed the issue with
declaring a NOT NULL constraint on a non-PK column (PHOENIX-1266).
We've never enforced a NOT NULL constraint on a non-PK column, so we
shouldn't allow it to be declared. In most cases we already caught
this and disallowed it, but this was a slightly different case that
wasn't being flagged as an error.
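For illustration, a DDL of roughly this shape (table and column names are hypothetical, not from the thread) is the kind of declaration PHOENIX-1266 now rejects instead of silently accepting:

```sql
-- Hypothetical example: NOT NULL on a non-PK column.
-- After PHOENIX-1266, Phoenix rejects this declaration at
-- create time rather than accepting (and never enforcing) it.
CREATE TABLE example_table (
    id VARCHAR NOT NULL,   -- PK column: NOT NULL is allowed
    val BIGINT NOT NULL,   -- non-PK column: now flagged as an error
    CONSTRAINT pk PRIMARY KEY (id)
);
```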

I also filed HBASE-12039 for the warning you're seeing in sqlline. You
can safely ignore it.

I verified that without the declaration of the NOT NULL constraint,
everything works fine with the index in place (and I made sure that
existing tables that may have been declared this way will not cause
this same indexing issue you're seeing).


On Sat, Sep 20, 2014 at 5:16 PM, Mohamed Ibrahim <> wrote:
> Hi All,
> I ran into an issue where wrong values started appearing in my queries:
> values that I did not insert into the table. Those values appeared after
> upserting (updating) an existing row.
> By debugging, I found that if I create a table with a BIGINT NOT NULL
> column and then create an index on that column, updating an existing row
> results in what appear to be corrupted values.
> I'm using Phoenix 3.0, and saved the sqlline output in a gist here (
> ).
> Steps to reproduce:
> 1. Create a table with a BIGINT NOT NULL field
> 2. Create an index on that field
> 3. Upsert a row
> 4. Upsert a row with an existing primary key (i.e., update it)
> Corrupted values then appear in the table.
> I removed the NOT NULL from my application, so it's not blocking me,
> but I believe it is a bug.
> Thanks,
> Mohamed
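The reproduction steps quoted above can be sketched in sqlline roughly as follows; the table, column, and index names are hypothetical (the original gist link was elided from the archive):

```sql
-- Hypothetical sketch of the reproduction steps on Phoenix 3.0.
-- The NOT NULL on the indexed non-PK column is what triggered the bug.
CREATE TABLE t (
    id VARCHAR NOT NULL,
    v BIGINT NOT NULL,            -- NOT NULL on a non-PK column
    CONSTRAINT pk PRIMARY KEY (id)
);
CREATE INDEX t_v_idx ON t (v);    -- index on the NOT NULL column

UPSERT INTO t (id, v) VALUES ('row1', 100);  -- insert a row
UPSERT INTO t (id, v) VALUES ('row1', 200);  -- update the same PK

SELECT * FROM t;  -- corrupted values were reported to appear here
```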
