From: "Riesland, Zack" <Zack.Riesland@sensus.com>
To: user@phoenix.apache.org
CC: "Haisty, Geoffrey"
Subject: Exception from RowCounter
Date: Sat, 25 Jul 2015 11:04:39 +0000

I decided to start from scratch with my table schema in an attempt to get a better distribution across my regions/region servers.
So, I created a table like this:

CREATE TABLE fma.er_keyed_gz_hashed_indexed_meterkey_immutable (
    hashed_key varchar not null,
    meter_key varchar,
    ...
    endpoint_id integer,
    sample_point integer not null,
    ...
    CONSTRAINT pk_fma_er_keyed_gz_hashed_indexed_meterkey_immutable PRIMARY KEY (hashed_key, sample_point)
)
COMPRESSION='GZ'
SPLIT ON (
    '0-', '1-', '2-', '3-', '4-', '5-', '6-', '7-', '8-', '9-', '10-', '11-', '12-', '13-', '14-', '15-',
    '16-', '17-', '18-', '19-', '20-', '21-', '22-', '23-', '24-', '25-', '26-', '27-', '28-', '29-', '30-', '31-',
    '32-', '33-', '34-', '35-', '36-', '37-', '38-', '39-', '40-', '41-', '42-', '43-', '44-', '45-', '46-', '47-',
    '48-', '49-', '50-', '51-', '52-', '53-', '54-', '55-', '56-', '57-', '58-', '59-', '60-', '61-', '62-', '63-',
    '64-', '65-', '66-', '67-', '68-', '69-', '70-', '71-', '72-', '73-', '74-', '75-', '76-', '77-', '78-', '79-',
    '80-', '81-', '82-', '83-', '84-', '85-', '86-', '87-', '88-', '89-', '90-', '91-', '92-', '93-', '94-', '95-',
    '96-', '97-', '98-', '99-', '100-', '101-', '102-', '103-', '104-', '105-', '106-', '107-', '108-', '109-', '110-', '111-',
    '112-', '113-', '114-', '115-', '116-', '117-', '118-', '119-', '120-', '121-', '122-', '123-', '124-', '125-', '126-', '127-',
    '128-', '129-', '130-', '131-', '132-', '133-', '134-', '135-', '136-', '137-', '138-', '139-', '140-', '141-', '142-', '143-',
    '144-', '145-', '146-', '147-', '148-', '149-', '150-', '151-', '152-', '153-', '154-', '155-', '156-', '157-', '158-', '159-',
    '160-', '161-', '162-', '163-', '164-', '165-', '166-', '167-', '168-', '169-', '170-', '171-', '172-', '173-', '174-', '175-',
    '176-', '177-', '178-', '179-', '180-', '181-', '182-', '183-', '184-', '185-', '186-', '187-', '188-', '189-', '190-', '191-',
    '192-', '193-', '194-', '195-', '196-', '197-', '198-', '199-', '200-', '201-', '202-', '203-', '204-', '205-', '206-', '207-',
    '208-', '209-', '210-', '211-', '212-', '213-', '214-', '215-', '216-', '217-', '218-', '219-', '220-', '221-', '222-', '223-',
    '224-', '225-', '226-', '227-', '228-', '229-', '230-', '231-', '232-', '233-', '234-', '235-', '236-', '237-', '238-', '239-',
    '240-', '241-', '242-', '243-', '244-', '245-', '246-', '247-', '248-', '249-', '250-', '251-', '252-', '253-', '254-', '255-',
    '256-', '257-', '258-', '259-', '260-', '261-', '262-', '263-', '264-', '265-', '266-', '267-', '268-', '269-', '270-', '271-',
    '272-', '273-', '274-', '275-', '276-', '277-', '278-', '279-', '280-', '281-', '282-', '283-', '284-', '285-', '286-', '287-',
    '288-', '289-', '290-', '291-', '292-', '293-', '294-', '295-', '296-', '297-', '298-', '299-', '300-', '301-', '302-', '303-',
    '304-', '305-', '306-', '307-', '308-', '309-', '310-', '311-', '312-', '313-', '314-', '315-', '316-', '317-', '318-', '319-',
    '320-', '321-', '322-', '323-', '324-', '325-', '326-', '327-', '328-', '329-', '330-', '331-', '332-', '333-', '334-', '335-',
    '336-', '337-', '338-', '339-', '340-', '341-', '342-', '343-', '344-', '345-', '346-', '347-', '348-', '349-', '350-', '351-',
    '352-', '353-', '354-', '355-', '356-', '357-', '358-', '359-', '360-', '361-', '362-', '363-', '364-', '365-', '366-', '367-',
    '368-', '369-', '370-', '371-', '372-', '373-', '374-', '375-', '376-', '377-', '378-', '379-', '380-', '381-', '382-', '383-',
    '384-', '385-', '386-', '387-', '388-', '389-', '390-', '391-', '392-', '393-', '394-', '395-', '396-', '397-', '398-', '399-',
    '400-', '401-', '402-', '403-', '404-', '405-', '406-', '407-', '408-', '409-', '410-', '411-', '412-', '413-', '414-', '415-',
    '416-', '417-', '418-', '419-', '420-', '421-', '422-', '423-', '424-', '425-', '426-', '427-', '428-', '429-', '430-', '431-',
    '432-', '433-', '434-', '435-', '436-', '437-', '438-', '439-', '440-', '441-', '442-', '443-', '444-', '445-', '446-', '447-',
    '448-', '449-', '450-', '451-', '452-', '453-', '454-', '455-', '456-', '457-', '458-', '459-', '460-', '461-', '462-', '463-',
    '464-', '465-', '466-', '467-', '468-', '469-', '470-', '471-', '472-', '473-', '474-', '475-', '476-', '477-', '478-', '479-',
    '480-', '481-', '482-', '483-', '484-', '485-', '486-', '487-', '488-', '489-', '490-', '491-', '492-', '493-', '494-', '495-',
    '496-', '497-', '498-', '499-', '500-', '501-', '502-', '503-', '504-', '505-', '506-', '507-', '508-', '509-', '510-', '511-')

The hashed key is endpoint_id % 511, which is why the splitting is done this way.

I also added two secondary indexes, because I need to be able to query by meter_key or by endpoint_id + sample_point:

CREATE INDEX fma_er_keyed_gz_hashed_indexed_endpoint_include_sample_point
    ON fma.er_keyed_gz_hashed_indexed_meterkey_immutable (endpoint_id)
    INCLUDE (sample_point) SALT_BUCKETS = 256; -- 256 is the max

CREATE INDEX fma_er_keyed_gz_hashed_indexed_meterkey
    ON fma.er_keyed_gz_hashed_indexed_meterkey_immutable (meter_key)
    SALT_BUCKETS = 256; -- 256 is the max

This all seemed to work well.

Then I used the CSV bulk import tool and imported about 1 billion rows. This also seemed to work.

I can query by hashed_key and get immediate results. If I query by endpoint_id or meter_key, it is MUCH slower (clue 1 that there's a problem), but it gives me results eventually.

However, when I try to get a count, it returns 0:

> select count(*) from fma.er_keyed_gz_hashed_indexed_meterkey_immutable;
+------------+
|  COUNT(1)  |
+------------+
| 0          |
+------------+
1 row selected (0.075 seconds)

And when I try to run a RowCounter job, it fails completely (see below).

Can anyone help me diagnose what is going on here?

This is a cluster with 6 large region servers, each holding about 600 regions (from this and other tables). The table currently has about 12 columns and 1 billion rows.

Thanks!
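[A side note on the split points, added in editing, not from the original post: HBase compares row keys as raw byte strings, so string split points like '0-' .. '511-' order lexicographically, not numerically. The sketch below just demonstrates that ordering with standard shell tools and C collation.]

```shell
# Generate the 512 split-point strings and sort them the way HBase would
# compare them (byte order). Note '10-' sorts before '2-'.
seq 0 511 | sed 's/$/-/' | LC_ALL=C sort | head -n 5
# prints: 0-  1-  10-  100-  101-  (one per line)
```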
# hbase org.apache.hadoop.hbase.mapreduce.RowCounter fma.er_keyed_gz_hashed_indexed_meterkey_immutable

2015-07-25 06:53:36,425 DEBUG [main] util.RegionSizeCalculator: Region sizes calculated
2015-07-25 06:53:36,427 WARN  [main] hbase.HBaseConfiguration: Config option "hbase.regionserver.lease.period" is deprecated. Instead, use "hbase.client.scanner.timeout.period"
2015-07-25 06:53:36,468 WARN  [main] client.ConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in hbase:meta for table: fma.er_keyed_gz_hashed_indexed_meterkey_immutable, row=fma.er_keyed_gz_hashed_indexed_meterkey_immutable,,99999999999999
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:164)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(ConnectionManager.java:1222)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1286)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1135)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1118)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1075)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:909)
        at org.apache.hadoop.hbase.client.HTable.getRegionLocation(HTable.java:528)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:165)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
        at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:191)
2015-07-25 06:53:36,471 INFO  [main] mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1437395072897_1775
Exception in thread "main" org.apache.hadoop.hbase.TableNotFoundException: fma.er_keyed_gz_hashed_indexed_meterkey_immutable
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1319)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1135)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1118)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1075)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:909)
        at org.apache.hadoop.hbase.client.HTable.getRegionLocation(HTable.java:528)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:165)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
        at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:191)
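[Editor's note, an assumption not confirmed by the log above: Phoenix folds unquoted identifiers to upper case, so the physical HBase table behind this Phoenix table would normally be named FMA.ER_KEYED_GZ_HASHED_INDEXED_METERKEY_IMMUTABLE. RowCounter talks to HBase directly and needs that exact physical name, which would explain the TableNotFoundException for the lower-case spelling. A hedged sketch of the check:]

```shell
# Assumption: the CREATE TABLE used an unquoted name, so Phoenix stored the
# physical HBase table under the upper-cased form of "schema.table".
TABLE=$(echo 'fma.er_keyed_gz_hashed_indexed_meterkey_immutable' | tr '[:lower:]' '[:upper:]')
echo "$TABLE"   # FMA.ER_KEYED_GZ_HASHED_INDEXED_METERKEY_IMMUTABLE

# Needs a live cluster, so shown commented out here:
# hbase org.apache.hadoop.hbase.mapreduce.RowCounter "$TABLE"
```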