kafka-commits mailing list archives

From lind...@apache.org
Subject [kafka] branch 2.0 updated: KAFKA-7180; Fixing the flaky test testHWCheckpointWithFailuresSingleLogSegment
Date Mon, 30 Jul 2018 04:07:06 GMT
This is an automated email from the ASF dual-hosted git repository.

lindong pushed a commit to branch 2.0
in repository https://gitbox.apache.org/repos/asf/kafka.git

The following commit(s) were added to refs/heads/2.0 by this push:
     new 084f4d2  KAFKA-7180; Fixing the flaky test testHWCheckpointWithFailuresSingleLogSegment
084f4d2 is described below

commit 084f4d2674085157131a2bcce161e906a127fd1f
Author: Lucas Wang <luwang@linkedin.com>
AuthorDate: Sun Jul 29 21:06:18 2018 -0700

    KAFKA-7180; Fixing the flaky test testHWCheckpointWithFailuresSingleLogSegment
    By waiting until server1 has joined the ISR before shutting down server2.
    Reran the test method many times after the code change, and there is no flakiness anymore.
    Author: Lucas Wang <luwang@linkedin.com>
    Reviewers: Mayuresh Gharat <gharatmayuresh15@gmail.com>, Dong Lin <lindong28@gmail.com>
    Closes #5387 from gitlw/fixing_flacky_logrecevorytest
    (cherry picked from commit 96bc0b882d0c51d9b58c9f87654e6d133fd9ef34)
    Signed-off-by: Dong Lin <lindong28@gmail.com>
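
    The fix rests on the usual poll-until-condition pattern from Kafka's test
    utilities: block until a predicate holds (here, ISR size == 2) or fail after a
    timeout. A minimal sketch of that pattern follows; the object name, parameter
    names, and default timeouts here are illustrative, not the actual
    TestUtils.waitUntilTrue signature:

    ```scala
    // Hypothetical sketch of a waitUntilTrue-style helper: poll a condition
    // until it becomes true, or throw with the given message after a timeout.
    object WaitUtil {
      def waitUntilTrue(condition: () => Boolean,
                        msg: String,
                        timeoutMs: Long = 15000L,
                        pauseMs: Long = 100L): Unit = {
        val deadline = System.currentTimeMillis() + timeoutMs
        while (!condition()) {
          if (System.currentTimeMillis() > deadline)
            throw new AssertionError(msg)   // condition never held within timeout
          Thread.sleep(pauseMs)             // back off before polling again
        }
      }
    }
    ```

    Polling like this (instead of a fixed sleep) is what removes the race: the
    test proceeds as soon as server1 actually appears in the ISR, and only fails
    if that never happens within the timeout.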
 core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala b/core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala
index 880950a..1bd15f7 100755
--- a/core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala
+++ b/core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala
@@ -143,6 +143,15 @@ class LogRecoveryTest extends ZooKeeperTestHarness {
       leader == 0 || leader == 1)
     assertEquals(hw, hwFile1.read.getOrElse(topicPartition, 0L))
+    /** We plan to shutdown server2 and transfer the leadership to server1.
+      * With unclean leader election turned off, a prerequisite for the successful leadership
+      * is that server1 has caught up on the topicPartition, and has joined the ISR.
+      * In the line below, we wait until the condition is met before shutting down server2
+      */
+    waitUntilTrue(() => server2.replicaManager.getPartition(topicPartition).get.inSyncReplicas.size == 2,
+      "Server 1 is not able to join the ISR after restart")
     // since server 2 was never shut down, the hw value of 30 is probably not checkpointed to disk yet
     assertEquals(hw, hwFile2.read.getOrElse(topicPartition, 0L))
