Hi All,
Thanks for your response.
I have not used Apache Drill yet for comparison.
Below is a snippet of the code used to measure the read latency.
// conn is the open Phoenix JDBC connection; columns holds the 14 field names
List<HashMap<String, String>> output = new ArrayList<HashMap<String, String>>();
long before_read = System.currentTimeMillis();
String query = "SELECT FIELD_1, FIELD_2, FIELD_3, FIELD_4, FIELD_5, "
    + "FIELD_6, FIELD_7, FIELD_8, FIELD_9, FIELD_10, FIELD_11, FIELD_12, "
    + "FIELD_13, FIELD_14 FROM MY_DETAILS";
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
    HashMap<String, String> row = new HashMap<String, String>();
    for (String column : columns) {
        row.put(column, rs.getString(column));
    }
    output.add(row);
}
long after_read = System.currentTimeMillis();
System.out.println("Time to read (ms) = " + (after_read - before_read));
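One note on methodology (this echoes Mujtaba's point below about the one-time HConnection cost): timing a single run from a fresh JVM folds connection setup into the measurement. A minimal, generic warm-up-then-average harness could look like the sketch below; the Runnable here is a stand-in workload, not the actual Phoenix query.

```java
import java.util.ArrayList;
import java.util.List;

public class ReadBench {
    // Runs the workload once untimed (absorbing any one-time setup cost),
    // then times `runs` executions and returns the average per run in ms.
    static double averageMillis(Runnable workload, int runs) {
        workload.run(); // warm-up run, not measured
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            workload.run();
        }
        return (System.nanoTime() - start) / 1_000_000.0 / runs;
    }

    public static void main(String[] args) {
        // Stand-in workload; in real use this would be
        // stmt.executeQuery(...) plus draining the ResultSet.
        double avg = averageMillis(() -> {
            List<Integer> rows = new ArrayList<Integer>();
            for (int i = 0; i < 1000; i++) rows.add(i);
        }, 10);
        System.out.println("avg ms per run = " + avg);
    }
}
```

With the Phoenix query plugged in as the workload, the first (untimed) run pays the connection setup; the averaged runs then reflect steady-state scan latency.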
Below is the Phoenix table DDL
CREATE TABLE MY_DETAILS
(
FIELD_1 VARCHAR(20) NOT NULL,
FIELD_2 CHAR(6) NOT NULL,
FIELD_3 VARCHAR(20) NOT NULL,
FIELD_4 CHAR(1) NOT NULL,
FIELD_5 DATE NOT NULL,
FIELD_6 VARCHAR(20) NOT NULL,
FIELD_7 VARCHAR(10) NOT NULL,
FIELD_8 VARCHAR(10),
FIELD_9 VARCHAR(10),
FIELD_10 VARCHAR(10),
FIELD_11 VARCHAR(10),
FIELD_12 VARCHAR(10),
FIELD_13 VARCHAR(10),
FIELD_14 CHAR(1),
CONSTRAINT PK PRIMARY KEY (FIELD_1, FIELD_2, FIELD_3, FIELD_4, FIELD_5,
FIELD_6, FIELD_7)
)
SALT_BUCKETS=2,
DEFAULT_COLUMN_FAMILY='DETAILS',
DATA_BLOCK_ENCODING='FAST_DIFF'
;
The equivalent HBase table has the composite rowkey
Hash(FIELD_1)|FIELD_1|FIELD_2|FIELD_3|FIELD_4|FIELD_5|FIELD_6|FIELD_7. The
rest of the fields are column qualifiers.
On Thu, Jan 7, 2016 at 11:36 PM, Mujtaba Chohan <mujtaba@apache.org> wrote:
> Just a pointer: if you are measuring time via a newly created JVM, then
> you might also be measuring the one-time cost of initializing the
> HConnection when Phoenix first establishes a connection to the cluster.
>
> On Thu, Jan 7, 2016 at 9:28 AM, James Taylor <jamestaylor@apache.org>
> wrote:
>
>> Would be good to see a code snippet too. Your create table statement,
>> query, and how you're measuring time, plus the same on the native HBase
>> side.
>> Thanks,
>> James
>>
>> On Thu, Jan 7, 2016 at 9:20 AM, Thomas Decaux <ebuildy@gmail.com> wrote:
>>
>>> Can you update Phoenix to the latest version?
>>>
>>> 1s is really slow; could it be a network or client issue?
>>>
>>> Did you try Apache Drill to compare?
>>> On Jan 7, 2016 at 2:50 PM, "Sreeram" <sreeram.v@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am new to Phoenix, and I am trying to perform a basic full-table
>>>> SELECT.
>>>>
>>>> I am connecting using JDBC, and I am seeing that a full table scan of
>>>> 1000 records (14 columns, approx. 150 bytes per record) always takes
>>>> more than a second. A scan of the equivalent HBase table takes close
>>>> to 170 ms on average. The HBase table has a composite row key, and the same
>>>> columns are provided as part of the PRIMARY KEY constraint in the Phoenix table.
>>>>
>>>> I use a two-node cluster and have specified SALT_BUCKETS=2 as part of
>>>> table creation.
>>>>
>>>> I am using Phoenix version 4.3 and HBase version 1.0.0.
>>>>
>>>> I think I am missing something basic here - I will appreciate any input
>>>> on how I can reduce the Phoenix read latency.
>>>>
>>>> Regards,
>>>> Sreeram
>>>>
>>>
>>
>