From: Oleksiy MapR
To: user@flume.apache.org
Date: Wed, 15 Jul 2015 15:11:34 +0300
Subject: Flume-1.6.0 + HiveSink on Kerberos security cluster

Hi team!

I have configured Flume-1.6.0 with Kerberos and Hive-1.0 with Kerberos, and I want Flume to put data into a Hive table using HiveSink, but the agent throws an exception while running (it cannot connect to the metastore). See details below.
So my question is: does the combination of Flume-1.6.0 with Kerberos, HiveSink, and Hive-1.0 with Kerberos work in the Flume-1.6.0 release?

PS: Flume-1.6.0 + HiveSink works fine on a non-secure cluster. Also, when I start beeline on the kerberized cluster, it works fine and can connect to the Hive metastore:

hive --service beeline
beeline> !connect jdbc:hive2://127.0.0.1:10000/default;principal=<user>/<cluster.name>@MYCOMPANY.COM.UA
scan complete in 6ms
Connecting to jdbc:hive2://127.0.0.1:10000/default;principal=<user>/<cluster.name>@MYCOMPANY.COM.UA
Enter username for jdbc:hive2://127.0.0.1:10000/default;principal=<user>/<cluster.name>@MYCOMPANY.COM.UA:
Enter password for jdbc:hive2://127.0.0.1:10000/default;principal=<user>/<cluster.name>@MYCOMPANY.COM.UA:
Connected to: Apache Hive (***)
Driver: Hive JDBC (***)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://127.0.0.1:10000/default>

Oleksiy

Configuration details:

flume-hivesink.conf
--------------------------

agent1.sources = source1
agent1.channels = channel1
agent1.sinks = sink1

agent1.sources.source1.type = exec
agent1.sources.source1.command = cat /path/to/flume_test.data

agent1.sinks.sink1.type = hive
agent1.sinks.sink1.channel = channel1
agent1.sinks.sink1.hive.metastore = thrift://127.0.0.1:9083
agent1.sinks.sink1.hive.database = default
agent1.sinks.sink1.hive.table = flume_test
agent1.sinks.sink1.hive.txnsPerBatchAsk = 2
agent1.sinks.sink1.batchSize = 4
agent1.sinks.sink1.useLocalTimeStamp = false
agent1.sinks.sink1.round = true
agent1.sinks.sink1.roundValue = 10
agent1.sinks.sink1.roundUnit = minute
agent1.sinks.sink1.serializer = DELIMITED
agent1.sinks.sink1.serializer.delimiter = ","
agent1.sinks.sink1.serializer.serdeSeparator = ','
agent1.sinks.sink1.serializer.fieldnames = id,message

agent1.channels.channel1.type = FILE
agent1.channels.channel1.transactionCapacity = 1000000
agent1.channels.channel1.checkpointInterval = 30000
agent1.channels.channel1.maxFileSize = 2146435071
agent1.channels.channel1.capacity = 10000000

agent1.sources.source1.channels = channel1

hbase-agent.sinks.sink1.hdfs.kerberosPrincipal = <user>/<cluster.name>@MYCOMPANY.COM.UA
hbase-agent.sinks.sink1.hdfs.kerberosKeytab = /path/to/file.keytab
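For completeness, a sketch of what the input file and the agent launch look like with this configuration (the sample rows and the conf directory are illustrative placeholders, not copied from the actual setup):

# /path/to/flume_test.data -- comma-delimited rows matching
# serializer.fieldnames = id,message
1,first test message
2,second test message

# standard flume-ng invocation for the agent named agent1
flume-ng agent --conf /path/to/conf --conf-file flume-hivesink.conf --name agent1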
hive-site.xml
-----------------

<configuration>

<!-- MYSQL -->

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore</value>
  <description>the URL of the MySQL database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>******</value>
</property>

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>

<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://127.0.0.1:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>

<!-- Compactor configuration -->

<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>

<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>

<property>
  <name>hive.compactor.worker.threads</name>
  <value>5</value>
</property>

<property>
  <name>hive.compactor.check.interval</name>
  <value>10</value>
</property>

<property>
  <name>hive.compactor.delta.num.threshold</name>
  <value>2</value>
</property>

<!-- KERBEROS -->

<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
  <description>if true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.</description>
</property>

<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/path/to/file.keytab</value>
  <description>The path to the Kerberos Keytab file containing the metastore thrift server's service principal.</description>
</property>

<property>
  <name>hive.metastore.kerberos.principal</name>
  <value><user>/<cluster.name>@MYCOMPANY.COM.UA</value>
  <description>The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct hostname.</description>
</property>

<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
  <description>authentication type</description>
</property>

<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value><user>/<cluster.name>@MYCOMPANY.COM.UA</value>
  <description>HiveServer2 principal. If _HOST is used as the FQDN portion, it will be replaced with the actual hostname of the running instance.</description>
</property>

<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/path/to/file.keytab</value>
  <description>Keytab file for HiveServer2 principal</description>
</property>

<property>
  <name>hive.server2.thrift.sasl.qop</name>
  <value>auth-conf</value>
  <description>Sasl QOP value; one of 'auth', 'auth-int' and 'auth-conf'</description>
</property>

</configuration>


Hive table flume_test:
----------------------------

hive> show create table flume_test;
OK
CREATE TABLE `flume_test`(
  `id` string,
  `message` string)
CLUSTERED BY (
  message)
INTO 5 BUCKETS
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs:/user/hive/warehouse/flume_test'
TBLPROPERTIES (
  'orc.compress'='NONE',
  'transient_lastDdlTime'='1436874059')
Time taken: 0.125 seconds, Fetched: 17 row(s)
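For context, a valid Kerberos ticket can be obtained and checked before starting the agent roughly like this (a sketch; the principal and keytab path mirror the placeholders above):

kinit -kt /path/to/file.keytab <user>/<cluster.name>@MYCOMPANY.COM.UA
klist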
Exception
-------------

15/07/15 14:57:33 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
15/07/15 14:57:33 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: source1 started
15/07/15 14:57:33 INFO hive.HiveSink: sink1: Creating Writer to Hive end point : {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
15/07/15 14:57:33 INFO source.ExecSource: Command [cat /path/to/flume_test.data] exited with 0
15/07/15 14:57:34 INFO hive.metastore: Trying to connect to metastore with URI thrift://127.0.0.1:9083
15/07/15 14:57:34 INFO thrift.HadoopThriftAuthBridge25Sasl: Sasl client AuthenticationMethod: KERBEROS
15/07/15 14:57:34 WARN hive.metastore: Failed to connect to the MetaStore Server...
15/07/15 14:57:34 INFO hive.metastore: Waiting 1 seconds before next connection attempt.
15/07/15 14:57:35 INFO hive.metastore: Trying to connect to metastore with URI thrift://127.0.0.1:9083
15/07/15 14:57:35 INFO thrift.HadoopThriftAuthBridge25Sasl: Sasl client AuthenticationMethod: KERBEROS
15/07/15 14:57:35 WARN hive.metastore: Failed to connect to the MetaStore Server...
15/07/15 14:57:35 INFO hive.metastore: Waiting 1 seconds before next connection attempt.
15/07/15 14:57:36 INFO hive.metastore: Trying to connect to metastore with URI thrift://127.0.0.1:9083
15/07/15 14:57:36 INFO thrift.HadoopThriftAuthBridge25Sasl: Sasl client AuthenticationMethod: KERBEROS
15/07/15 14:57:36 WARN hive.metastore: Failed to connect to the MetaStore Server...
15/07/15 14:57:36 INFO hive.metastore: Waiting 1 seconds before next connection attempt.
15/07/15 14:57:37 WARN hive.HiveSink: sink1 : Failed connecting to EndPoint {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
org.apache.flume.sink.hive.HiveWriter$ConnectException: Failed connecting to EndPoint {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
    at org.apache.flume.sink.hive.HiveWriter.<init>(HiveWriter.java:98)
    at org.apache.flume.sink.hive.HiveSink.getOrCreateWriter(HiveSink.java:343)
    at org.apache.flume.sink.hive.HiveSink.drainOneBatch(HiveSink.java:296)
    at org.apache.flume.sink.hive.HiveSink.process(HiveSink.java:254)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.sink.hive.HiveWriter$ConnectException: Failed connecting to EndPoint {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
    at org.apache.flume.sink.hive.HiveWriter.newConnection(HiveWriter.java:320)
    at org.apache.flume.sink.hive.HiveWriter.<init>(HiveWriter.java:86)
    ... 6 more
Caused by: org.apache.hive.hcatalog.streaming.ConnectionError: Error connecting to Hive Metastore URI: thrift://127.0.0.1:9083
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:450)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:274)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:243)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:180)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:157)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:110)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:316)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:313)
    at org.apache.flume.sink.hive.HiveWriter$9.call(HiveWriter.java:366)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type GSSAPI
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:190)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:258)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:373)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:167)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:448)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:274)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:243)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:180)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:157)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:110)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:316)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:313)
    at org.apache.flume.sink.hive.HiveWriter$9.call(HiveWriter.java:366)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:419)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:167)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:448)
    ... 12 more
15/07/15 14:57:37 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: org.apache.flume.sink.hive.HiveWriter$ConnectException: Failed connecting to EndPoint {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
    at org.apache.flume.sink.hive.HiveSink.process(HiveSink.java:268)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.sink.hive.HiveWriter$ConnectException: Failed connecting to EndPoint {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
    at org.apache.flume.sink.hive.HiveWriter.<init>(HiveWriter.java:98)
    at org.apache.flume.sink.hive.HiveSink.getOrCreateWriter(HiveSink.java:343)
    at org.apache.flume.sink.hive.HiveSink.drainOneBatch(HiveSink.java:296)
    at org.apache.flume.sink.hive.HiveSink.process(HiveSink.java:254)
    ... 3 more
Caused by: org.apache.flume.sink.hive.HiveWriter$ConnectException: Failed connecting to EndPoint {metaStoreUri='thrift://127.0.0.1:9083', database='default', table='flume_test', partitionVals=[] }
    at org.apache.flume.sink.hive.HiveWriter.newConnection(HiveWriter.java:320)
    at org.apache.flume.sink.hive.HiveWriter.<init>(HiveWriter.java:86)
    ... 6 more
Caused by: org.apache.hive.hcatalog.streaming.ConnectionError: Error connecting to Hive Metastore URI: thrift://127.0.0.1:9083
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:450)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:274)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:243)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:180)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:157)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:110)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:316)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:313)
    at org.apache.flume.sink.hive.HiveWriter$9.call(HiveWriter.java:366)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type GSSAPI
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:190)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:258)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:373)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:167)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:448)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:274)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:243)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:180)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:157)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:110)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:316)
    at org.apache.flume.sink.hive.HiveWriter$6.call(HiveWriter.java:313)
    at org.apache.flume.sink.hive.HiveWriter$9.call(HiveWriter.java:366)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:419)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:167)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:448)
    ... 12 more
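In case it is relevant: the Hive/HCatalog client jars and the directory containing hive-site.xml are made visible to the agent through FLUME_CLASSPATH in conf/flume-env.sh, roughly like this (a sketch; the paths are placeholders):

export FLUME_CLASSPATH="/path/to/hive/conf:/path/to/hive/lib/*:/path/to/hive/hcatalog/share/hcatalog/*"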