kafka-commits mailing list archives

From guozh...@apache.org
Subject [kafka] branch trunk updated: KAFKA-5604: Remove the redundant TODO marker on the Streams side (#8313)
Date Wed, 18 Mar 2020 21:23:28 GMT
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git

The following commit(s) were added to refs/heads/trunk by this push:
     new b1999ba  KAFKA-5604: Remove the redundant TODO marker on the Streams side (#8313)
b1999ba is described below

commit b1999ba22dbf83b662591baf64e3d68e0f69e818
Author: Guozhang Wang <wangguoz@gmail.com>
AuthorDate: Wed Mar 18 14:22:53 2020 -0700

    KAFKA-5604: Remove the redundant TODO marker on the Streams side (#8313)
    The issue itself was fixed a while ago on the producer side, so we can just remove this TODO marker now (we've removed the isZombie flag already anyways).
    Reviewers: John Roesler <vvcephei@apache.org>
 .../streams/processor/internals/StreamsProducer.java  | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
index 0324bf2..0aa4de6 100644
--- a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
+++ b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
@@ -193,15 +193,16 @@ public class StreamsProducer {
         if (transactionInFlight) {
             try {
-            } catch (final ProducerFencedException ignore) {
-                /* TODO
-                 * this should actually never happen atm as we guard the call to #abortTransaction
-                 * -> the reason for the guard is a "bug" in the Producer -- it throws
-                 * instead of ProducerFencedException atm. We can remove the isZombie flag after KAFKA-5604 got
-                 * fixed and fall-back to this catch-and-swallow code
-                 */
-                // can be ignored: transaction got already aborted by brokers/transactional-coordinator if this happens
+            } catch (final ProducerFencedException error) {
+                // The producer is aborting the txn when there's still an ongoing one,
+                // which means that we did not commit the task while closing it, which
+                // means that it is a dirty close. Therefore it is possible that the dirty
+                // close is due to a fenced exception already thrown previously, and hence
+                // when calling abortTxn here the same exception would be thrown again.
+                // Even if the dirty close was not due to an observed fencing exception but
+                // something else (e.g. task corrupted) we can still ignore the exception
+                // since transaction already got aborted by brokers/transactional-coordinator if this happens
+                log.debug("Encountered {} while aborting the transaction; this is expected and hence swallowed", error.getMessage());
             } catch (final KafkaException error) {
                 throw new StreamsException(
                     formatException("Producer encounter unexpected error trying to abort a transaction"),
