phoenix-user mailing list archives

From: Kiru Pakkirisamy <kirupakkiris...@yahoo.com>
Subject: Re: select w/ limit hanging on large tables
Date: Thu, 14 May 2015 20:37:07 GMT
Gabriel,

I think the load did not run through fully, so yesterday we tried to load the table again, but we ran out of disk space (we had 30 TB free). Are there any blogs or SlideShare decks that explain how this CsvBulkLoadTool works? We need to get the load to run through to success before we can access/query the table, I guess.

Regards,
- kiru
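
(For reference, a typical CsvBulkLoadTool invocation looks roughly like the sketch below; the jar name, table name, input path, and ZooKeeper quorum are placeholders, not values from this thread:

    hadoop jar phoenix-4.3.1-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table TABLE1 \
        --input /data/table1.csv \
        --zookeeper zk1,zk2,zk3

The tool runs a MapReduce job that writes HFiles to a temporary directory in HDFS and then hands them off to the region servers, so the job needs free space for the intermediate HFiles on top of the final table size, which may be relevant to the disk-space exhaustion described above.)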
From: Gabriel Reid <gabriel.reid@gmail.com>
To: Kiru Pakkirisamy <kirupakkirisamy@yahoo.com>; user@phoenix.apache.org
Sent: Tuesday, May 12, 2015 10:27 PM
Subject: Re: select w/ limit hanging on large tables
Hi Kiru,

How many regions are there on this table?

Could you also share some information on the schema of the table (e.g. how many columns are
defined)?

Does a "limit 10" query also hang on this table?

Could you also elaborate a bit on the issues you were running into when loading data into
the table? Were there performance issues, or were things not working at all?

- Gabriel
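
(The checks asked for above can be run as follows; TABLE1 is a placeholder for the actual table name. The per-table region count is visible in the HBase Master web UI, which listens on port 60010 in 0.98, and the smaller query can be tried from sqlline.py:

    SELECT * FROM TABLE1 LIMIT 10;

If the small limit returns quickly while the large one hangs, that points at scan volume rather than the table being unreadable.)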


On Tue, May 12, 2015 at 23:56 Kiru Pakkirisamy <kirupakkirisamy@yahoo.com> wrote:

We are trying to benchmark/test Phoenix with large tables. A 'select * from table1 limit 100000'
hangs on a 1.4 billion row table (in sqlline.py or SQuirreL). The same select of 1 million rows
works on a smaller table (300 million rows). Mainly, we wanted to create a smaller version of the
1.4 billion row table and ran into this issue. Any ideas why this is happening? We had quite a few
problems crossing the 1 billion mark even when loading the table (using CsvBulkLoadTool). We
are also wondering whether our HBase is configured correctly.
Any tips on HBase configuration for loading/running Phoenix would be highly appreciated as well.
(We are on HBase 0.98.12 and Phoenix 4.3.1.)

Regards,
- kiru
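
(One commonly suggested starting point for tables at this scale is to pre-split the table via salting and enable compression directly in the Phoenix DDL, so both the bulk load and subsequent scans are spread across region servers. The schema, bucket count, and codec below are illustrative only, a sketch rather than a tested recommendation:

    CREATE TABLE table1 (
        id BIGINT NOT NULL PRIMARY KEY,
        val VARCHAR
    ) SALT_BUCKETS = 32, COMPRESSION = 'SNAPPY';

SALT_BUCKETS pre-creates that many regions and prefixes each row key with a salt byte, while COMPRESSION sets the HBase column-family codec.)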




  