kafka-commits mailing list archives

From ij...@apache.org
Subject [kafka] branch trunk updated: KAFKA-9853: Improve performance of Log.fetchOffsetByTimestamp (#8474)
Date Tue, 14 Apr 2020 00:09:42 GMT
This is an automated email from the ASF dual-hosted git repository.

ijuma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new 6216c88  KAFKA-9853: Improve performance of Log.fetchOffsetByTimestamp (#8474)
6216c88 is described below

commit 6216c886de1fcfdc404620d13baae3863fb9d2d0
Author: Eric Bolinger <boli@pobox.com>
AuthorDate: Mon Apr 13 18:09:12 2020 -0600

    KAFKA-9853: Improve performance of Log.fetchOffsetByTimestamp (#8474)
    
    The previous code did not use the collection produced by `takeWhile()`.
    It only used the length of that collection to select the next element.
    
    Reviewers: Ismael Juma <ismael@juma.me.uk>
---
 core/src/main/scala/kafka/log/Log.scala | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/core/src/main/scala/kafka/log/Log.scala b/core/src/main/scala/kafka/log/Log.scala
index 0f756bb..b1a0f05 100644
--- a/core/src/main/scala/kafka/log/Log.scala
+++ b/core/src/main/scala/kafka/log/Log.scala
@@ -1615,16 +1615,8 @@ class Log(@volatile private var _dir: File,
         return Some(new TimestampAndOffset(RecordBatch.NO_TIMESTAMP, logEndOffset, epochOptional))
       }
 
-      val targetSeg = {
-        // Get all the segments whose largest timestamp is smaller than target timestamp
-        val earlierSegs = segmentsCopy.takeWhile(_.largestTimestamp < targetTimestamp)
-        // We need to search the first segment whose largest timestamp is greater than the target timestamp if there is one.
-        if (earlierSegs.length < segmentsCopy.length)
-          Some(segmentsCopy(earlierSegs.length))
-        else
-          None
-      }
-
+      // We need to search the first segment whose largest timestamp is >= the target timestamp if there is one.
+      val targetSeg = segmentsCopy.find(_.largestTimestamp >= targetTimestamp)
       targetSeg.flatMap(_.findOffsetByTimestamp(targetTimestamp, logStartOffset))
     }
   }
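The change above can be illustrated in isolation. The sketch below uses a hypothetical `Segment` case class as a stand-in for Kafka's log segment type; it shows that `find` returns the same element the old `takeWhile`-plus-index code selected, without materializing the prefix collection:

```scala
// Hypothetical minimal stand-in for a log segment, for illustration only.
case class Segment(largestTimestamp: Long)

object FindVsTakeWhile {
  def main(args: Array[String]): Unit = {
    val segments = Vector(Segment(10L), Segment(20L), Segment(30L))
    val target = 15L

    // Old approach: builds the prefix collection only to read its length,
    // then indexes back into the original sequence.
    val earlierSegs = segments.takeWhile(_.largestTimestamp < target)
    val oldResult =
      if (earlierSegs.length < segments.length) Some(segments(earlierSegs.length))
      else None

    // New approach: stops at the first matching segment, allocating nothing extra.
    val newResult = segments.find(_.largestTimestamp >= target)

    assert(oldResult == newResult)
    println(newResult) // Some(Segment(20))
  }
}
```

Both expressions select the first segment whose `largestTimestamp` is at or beyond the target; `find` simply short-circuits instead of building and discarding an intermediate collection.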

