phoenix-user mailing list archives

From Gabriel Reid <>
Subject Re: internal table client cache
Date Fri, 27 Jun 2014 14:38:19 GMT
Hi Jody,

I can't provide much insight into the exact workings of the metadata
cache updates; however, this should work. I just ran some small tests
on my machine and it seemed to work as intended, so I'm guessing that
I'm doing something differently than you are.

Could you log an issue in Jira with details on how to
replicate this?



On Wed, Jun 25, 2014 at 8:43 PM, Jody Landreneau
<> wrote:
> I have a use case where I have multiple instances of the Phoenix client
> running on multiple machines. Essentially, they are taking data and
> performing upserts. When I update a table schema, e.g. by adding a column,
> the clients start failing. The update is performed via a client outside
> these running instances. The code that builds the upsert statement
> understands that an additional column was added and creates the proper
> upsert statement.
> Thinking this was a connection cache issue, I tried setting a maximum time
> before connections are closed (they are in a pool). This did not work. I
> ended up tracing the issue and finding that there is a MetaDataImpl cache
> that gets populated on startup. Table schemas are stored in this cache, and
> when something like an upsert is performed, there is code in FromCompiler
> that checks columns, but the columns are not updated in this internal cache.
> I can file an issue on this with more details, but I wanted to get some
> insight from others as to whether there is a property I can pass in that
> tells the cache to refresh at some interval. Possibly this should have been
> stored on the connections, using the connection timeout, or there could be
> a property to pass to the driver itself to cause a refresh.
> Thanks --
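[Editor's note: the interval-based refresh asked about above could work roughly as sketched below. This is a generic TTL-cache illustration, not a Phoenix API; `RefreshingSchemaCache` and the `ttl_seconds` parameter are hypothetical.]

```python
import time

class RefreshingSchemaCache:
    """Schema cache that re-reads from the server once a TTL expires."""
    def __init__(self, server, ttl_seconds):
        self.server = server
        self.ttl = ttl_seconds
        self._load()

    def _load(self):
        # Snapshot the current server-side schema and record when we did.
        self.tables = {n: list(c) for n, c in self.server.items()}
        self.loaded_at = time.monotonic()

    def columns(self, table):
        if time.monotonic() - self.loaded_at > self.ttl:
            self._load()                  # TTL expired: refresh from server
        return self.tables[table]

server_schema = {"T": ["PK", "A"]}
cache = RefreshingSchemaCache(server_schema, ttl_seconds=0.05)
server_schema["T"].append("B")            # schema changes after caching
stale = cache.columns("T")                # within TTL: still the old view
time.sleep(0.1)
fresh = cache.columns("T")                # TTL expired: sees the new column
```

The trade-off with a fixed refresh interval is a window of staleness up to the TTL; the alternative Jody mentions, tying the refresh to connection lifetime, bounds staleness by how long pooled connections live instead.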
