How many regions are there on this table?
Could you also share some information on the schema of the table (e.g. how many columns are defined)?
Does a "limit 10" query also hang in this table?
Could you also elaborate a bit on the issues you were running into when loading data into the table? Were there performance issues, or were things not working at all?
We are trying to benchmark/test Phoenix with large tables. A 'select * from table1 limit 100000' hangs on a 1.4 billion row table (in sqlline.py or SQuirreL). The same kind of select, with a limit of 1 million rows, works on a smaller table (300 million rows). Mainly we wanted to create a smaller version of the 1.4 billion row table and ran into this issue. Any ideas why this is happening? We also had quite a few problems crossing the 1 billion row mark even when loading the table (using CsvBulkLoadTool). We are also wondering whether our HBase is configured correctly.
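One way to start diagnosing the hang might be to ask Phoenix for the query plan before running the full query. This is a sketch, not a confirmed diagnosis; `table1` is the table from the query above, and the interpretation in the comments is an assumption about what the plan may show:

```sql
-- Show how Phoenix intends to execute the query without running it.
-- If the plan reports a FULL SCAN over all regions (with the limit
-- applied client-side), the query must touch the whole 1.4 billion
-- row table before returning, which can look like a hang.
EXPLAIN SELECT * FROM table1 LIMIT 100000;
```

Comparing the plan output on the 1.4 billion row table against the 300 million row table where the query works could show whether the two are being executed differently.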
Any tips on configuring HBase for loading/running Phoenix are highly appreciated as well. (We are on HBase 0.98.12 and Phoenix 4.3.1.)

Regards,