kafka-commits mailing list archives

From gwens...@apache.org
Subject kafka git commit: TRIVIAL: Replace "it's" with "its" where appropriate
Date Wed, 02 Sep 2015 17:13:36 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk e582447ad -> ff189fa05


TRIVIAL: Replace "it's" with "its" where appropriate

No Jira ticket created, as the Contributing Code Changes doc says it's not necessary for javadoc
typo fixes.

Author: Magnus Reftel <magnus.reftel@skatteetaten.no>

Reviewers: Gwen Shapira

Closes #186 from magnusr/feature/its


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/ff189fa0
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/ff189fa0
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/ff189fa0

Branch: refs/heads/trunk
Commit: ff189fa05ccdacac100f3d15d167dcbe561f57a6
Parents: e582447
Author: Magnus Reftel <magnus.reftel@skatteetaten.no>
Authored: Wed Sep 2 10:13:23 2015 -0700
Committer: Gwen Shapira <cshapi@gmail.com>
Committed: Wed Sep 2 10:13:23 2015 -0700

----------------------------------------------------------------------
 .../consumer/ConsumerRebalanceListener.java     |  2 +-
 .../kafka/clients/consumer/KafkaConsumer.java   | 20 ++++++++++----------
 .../apache/kafka/common/config/ConfigDef.java   |  6 +++---
 .../common/errors/CorruptRecordException.java   |  4 ++--
 .../kafka/copycat/data/CopycatSchema.java       |  4 ++--
 .../controller/PartitionStateMachine.scala      |  2 +-
 core/src/main/scala/kafka/log/OffsetIndex.scala |  2 +-
 .../scala/kafka/server/ClientQuotaManager.scala |  2 +-
 8 files changed, 21 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.java
----------------------------------------------------------------------
diff --git a/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.java b/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.java
index 2f2591c..68939cb 100644
--- a/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.java
+++ b/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.java
@@ -36,7 +36,7 @@ import org.apache.kafka.common.TopicPartition;
  * consider a case where the consumer is subscribed to a topic containing user page views, and the goal is to count the
  * number of page views per users for each five minute window. Let's say the topic is partitioned by the user id so that
  * all events for a particular user will go to a single consumer instance. The consumer can keep in memory a running
- * tally of actions per user and only flush these out to a remote data store when it's cache gets to big. However if a
+ * tally of actions per user and only flush these out to a remote data store when its cache gets to big. However if a
  * partition is reassigned it may want to automatically trigger a flush of this cache, before the new owner takes over
  * consumption.
  * <p>

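The javadoc above describes the rebalance-flush pattern only in prose. A minimal sketch of what it might look like in code follows; ConsumerRebalanceListener and its two callbacks are the real interface, while the tally map and the flushToRemoteStore helper are hypothetical stand-ins:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    public class FlushingRebalanceListener implements ConsumerRebalanceListener {
        // Hypothetical in-memory tally of actions per user, as in the javadoc example.
        private final Map<String, Long> tally = new HashMap<>();

        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Flush the running tally before the new owner takes over consumption.
            flushToRemoteStore(tally);
            tally.clear();
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Nothing to restore here; the tally is rebuilt as records arrive.
        }

        private void flushToRemoteStore(Map<String, Long> counts) {
            // Hypothetical: write the counts to the remote data store.
        }
    }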
http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
----------------------------------------------------------------------
diff --git a/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java b/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
index 8cd285c..73237e4 100644
--- a/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
+++ b/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
@@ -111,7 +111,7 @@ import java.util.concurrent.atomic.AtomicReference;
  * a queue in a traditional messaging system all processes would be part of a single consumer group and hence record
  * delivery would be balanced over the group like with a queue. Unlike a traditional messaging system, though, you can
  * have multiple such groups. To get semantics similar to pub-sub in a traditional messaging system each process would
- * have it's own consumer group, so each process would subscribe to all the records published to the topic.
+ * have its own consumer group, so each process would subscribe to all the records published to the topic.
  * <p>
  * In addition, when offsets are committed they are always committed for a given consumer group.
  * <p>
@@ -158,7 +158,7 @@ import java.util.concurrent.atomic.AtomicReference;
  * consumer will automatically ping the cluster periodically, which let's the cluster know that it is alive. As long as
  * the consumer is able to do this it is considered alive and retains the right to consume from the partitions assigned
  * to it. If it stops heartbeating for a period of time longer than <code>session.timeout.ms</code> then it will be
- * considered dead and it's partitions will be assigned to another process.
+ * considered dead and its partitions will be assigned to another process.
  * <p>
  * The deserializer settings specify how to turn bytes into objects. For example, by specifying string deserializers, we
  * are saying that our record's key and value will just be simple strings.
@@ -241,7 +241,7 @@ import java.util.concurrent.atomic.AtomicReference;
  *
  * <h4>Managing Your Own Offsets</h4>
  *
- * The consumer application need not use Kafka's built-in offset storage, it can store offsets in a store of it's own
+ * The consumer application need not use Kafka's built-in offset storage, it can store offsets in a store of its own
  * choosing. The primary use case for this is allowing the application to store both the offset and the results of the
  * consumption in the same system in a way that both the results and offsets are stored atomically. This is not always
  * possible, but when it is it will make the consumption fully atomic and give "exactly once" semantics that are
@@ -261,7 +261,7 @@ import java.util.concurrent.atomic.AtomicReference;
  * from what it has ensuring that no updates are lost.
  * </ul>
  *
- * Each record comes with it's own offset, so to manage your own offset you just need to do the following:
+ * Each record comes with its own offset, so to manage your own offset you just need to do the following:
  * <ol>
  * <li>Configure <code>enable.auto.commit=false</code>
  * <li>Use the offset provided with each {@link ConsumerRecord} to save your position.
@@ -283,8 +283,8 @@ import java.util.concurrent.atomic.AtomicReference;
  *
  * <h4>Controlling The Consumer's Position</h4>
  *
- * In most use cases the consumer will simply consume records from beginning to end, periodically committing it's
- * position (either automatically or manually). However Kafka allows the consumer to manually control it's position,
+ * In most use cases the consumer will simply consume records from beginning to end, periodically committing its
+ * position (either automatically or manually). However Kafka allows the consumer to manually control its position,
  * moving forward or backwards in a partition at will. This means a consumer can re-consume older records, or skip to
  * the most recent records without actually consuming the intermediate records.
  * <p>
@@ -294,7 +294,7 @@ import java.util.concurrent.atomic.AtomicReference;
  * attempt to catch up processing all records, but rather just skip to the most recent records.
  * <p>
  * Another use case is for a system that maintains local state as described in the previous section. In such a system
- * the consumer will want to initialize it's position on start-up to whatever is contained in the local store. Likewise
+ * the consumer will want to initialize its position on start-up to whatever is contained in the local store. Likewise
  * if the local state is destroyed (say because the disk is lost) the state may be recreated on a new machine by
  * reconsuming all the data and recreating the state (assuming that Kafka is retaining sufficient history).
  *
@@ -357,7 +357,7 @@ import java.util.concurrent.atomic.AtomicReference;
  *
  * <h4>1. One Consumer Per Thread</h4>
  *
- * A simple option is to give each thread it's own consumer instance. Here are the pros and cons of this approach:
+ * A simple option is to give each thread its own consumer instance. Here are the pros and cons of this approach:
  * <ul>
  * <li><b>PRO</b>: It is the easiest to implement
  * <li><b>PRO</b>: It is often the fastest as no inter-thread co-ordination is needed
@@ -387,7 +387,7 @@ import java.util.concurrent.atomic.AtomicReference;
  * that processing is complete for that partition.
  * </ul>
  *
- * There are many possible variations on this approach. For example each processor thread can have it's own queue, and
+ * There are many possible variations on this approach. For example each processor thread can have its own queue, and
  * the consumer threads can hash into these queues using the TopicPartition to ensure in-order consumption and simplify
  * commit.
  *
@@ -955,7 +955,7 @@ public class KafkaConsumer<K, V> implements Consumer<K, V> {
      * another). This offset will be used as the position for the consumer in the event of a failure.
      * <p>
      * This call may block to do a remote call if the partition in question isn't assigned to this consumer or if the
-     * consumer hasn't yet initialized it's cache of committed offsets.
+     * consumer hasn't yet initialized its cache of committed offsets.
      *
      * @param partition The partition to check
      * @return The last committed offset

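The "Managing Your Own Offsets" passage above lists the steps in prose; a hedged sketch of the same pattern follows. The readOffset and saveResultAndOffset helpers are hypothetical stand-ins for whatever atomic external store the application uses; the consumer calls themselves (assign, seek, poll) are the public API:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ManualOffsetExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "page-view-counter");
            props.put("enable.auto.commit", "false"); // step 1: disable auto commit
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            TopicPartition partition = new TopicPartition("page-views", 0);
            consumer.assign(Arrays.asList(partition));
            // On start-up, restore the position from the application's own store.
            consumer.seek(partition, readOffset(partition));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // Save the processing result and the offset of the next record
                    // to read, ideally atomically in the same external system.
                    saveResultAndOffset(record.value(), partition, record.offset() + 1);
                }
            }
        }

        // Hypothetical helpers standing in for the application's own offset store.
        static long readOffset(TopicPartition tp) { return 0L; }
        static void saveResultAndOffset(String result, TopicPartition tp, long nextOffset) { }
    }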
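Likewise, for the "One Consumer Per Thread" option discussed above, a minimal sketch; the topic name and handle method are invented, and each thread owns its consumer outright since KafkaConsumer is not thread-safe:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Usage: new Thread(new ConsumerLoop(config)).start(), once per thread.
    public class ConsumerLoop implements Runnable {
        private final Properties config;

        public ConsumerLoop(Properties config) { this.config = config; }

        @Override
        public void run() {
            // Each thread creates and uses its own consumer; no inter-thread
            // co-ordination is needed (the PRO the javadoc lists).
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config);
            try {
                consumer.subscribe(Arrays.asList("page-views"));
                while (!Thread.currentThread().isInterrupted()) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    for (ConsumerRecord<String, String> record : records)
                        handle(record); // hypothetical per-record processing
                }
            } finally {
                consumer.close(); // only this thread ever touches the consumer
            }
        }

        private void handle(ConsumerRecord<String, String> record) { }
    }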
http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
----------------------------------------------------------------------
diff --git a/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java b/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
index 4170bcc..168990f 100644
--- a/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
+++ b/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
@@ -170,13 +170,13 @@ public class ConfigDef {
             // props map contains setting - assign ConfigKey value
             if (props.containsKey(key.name))
                 value = parseType(key.name, props.get(key.name), key.type);
-                // props map doesn't contain setting, the key is required and no default value specified - it's an error
+                // props map doesn't contain setting, the key is required and no default value specified - its an error
             else if (key.defaultValue == NO_DEFAULT_VALUE && key.required)
                 throw new ConfigException("Missing required configuration \"" + key.name + "\" which has no default value.");
                 // props map doesn't contain setting, no default value specified and the key is not required - assign it to null
             else if (!key.hasDefault() && !key.required)
                 value = null;
-                // otherwise assign setting it's default value
+                // otherwise assign setting its default value
             else
                 value = key.defaultValue;
             if (key.validator != null)
@@ -444,4 +444,4 @@ public class ConfigDef {
         b.append("</table>");
         return b.toString();
     }
-}
\ No newline at end of file
+}

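For context on the ConfigDef hunk above: value resolution takes the supplied property if present (parsing it to the declared type), otherwise fails for a required key with no default, otherwise assigns null or the default. A small usage sketch with invented key names:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.common.config.ConfigDef.Importance;
    import org.apache.kafka.common.config.ConfigDef.Type;

    public class ConfigDefExample {
        public static void main(String[] args) {
            ConfigDef def = new ConfigDef()
                // Has a default, so it may be omitted from the supplied properties.
                .define("fetch.size", Type.INT, 1024, Importance.HIGH, "The fetch size.")
                // No default, so omitting it is an error for a required key.
                .define("group.id", Type.STRING, Importance.HIGH, "The consumer group id.");

            Map<String, Object> props = new HashMap<>();
            props.put("group.id", "page-view-counter");
            props.put("fetch.size", "2048"); // strings are parsed to the declared type

            Map<String, Object> parsed = def.parse(props);
            // parsed.get("fetch.size") -> 2048 (supplied value wins over the default)
            // dropping "fetch.size" from props -> 1024 (its default value)
            // dropping "group.id" from props  -> ConfigException (required, no default)
            System.out.println(parsed);
        }
    }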
http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/clients/src/main/java/org/apache/kafka/common/errors/CorruptRecordException.java
----------------------------------------------------------------------
diff --git a/clients/src/main/java/org/apache/kafka/common/errors/CorruptRecordException.java b/clients/src/main/java/org/apache/kafka/common/errors/CorruptRecordException.java
index eaccf27..c742580 100644
--- a/clients/src/main/java/org/apache/kafka/common/errors/CorruptRecordException.java
+++ b/clients/src/main/java/org/apache/kafka/common/errors/CorruptRecordException.java
@@ -13,7 +13,7 @@
 package org.apache.kafka.common.errors;
 
 /**
- * This exception indicates a record has failed it's internal CRC check, this generally indicates network or disk
+ * This exception indicates a record has failed its internal CRC check, this generally indicates network or disk
  * corruption.
  */
 public class CorruptRecordException extends RetriableException {
@@ -21,7 +21,7 @@ public class CorruptRecordException extends RetriableException {
     private static final long serialVersionUID = 1L;
 
     public CorruptRecordException() {
-        super("This message has failed it's CRC checksum or is otherwise corrupt.");
+        super("This message has failed its CRC checksum or is otherwise corrupt.");
     }
 
     public CorruptRecordException(String message) {

http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/copycat/api/src/main/java/org/apache/kafka/copycat/data/CopycatSchema.java
----------------------------------------------------------------------
diff --git a/copycat/api/src/main/java/org/apache/kafka/copycat/data/CopycatSchema.java b/copycat/api/src/main/java/org/apache/kafka/copycat/data/CopycatSchema.java
index 809496a..8c9de17 100644
--- a/copycat/api/src/main/java/org/apache/kafka/copycat/data/CopycatSchema.java
+++ b/copycat/api/src/main/java/org/apache/kafka/copycat/data/CopycatSchema.java
@@ -162,7 +162,7 @@ public class CopycatSchema implements Schema {
 
 
     /**
-     * Validate that the value can be used with the schema, i.e. that it's type matches the schema type and nullability
+     * Validate that the value can be used with the schema, i.e. that its type matches the schema type and nullability
      * requirements. Throws a DataException if the value is invalid. Returns
      * @param schema Schema to test
      * @param value value to test
@@ -212,7 +212,7 @@ public class CopycatSchema implements Schema {
     }
 
     /**
-     * Validate that the value can be used for this schema, i.e. that it's type matches the schema type and optional
+     * Validate that the value can be used for this schema, i.e. that its type matches the schema type and optional
      * requirements. Throws a DataException if the value is invalid.
      * @param value the value to validate
      */

http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/core/src/main/scala/kafka/controller/PartitionStateMachine.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/controller/PartitionStateMachine.scala b/core/src/main/scala/kafka/controller/PartitionStateMachine.scala
index b4e7c88..5b616f3 100755
--- a/core/src/main/scala/kafka/controller/PartitionStateMachine.scala
+++ b/core/src/main/scala/kafka/controller/PartitionStateMachine.scala
@@ -266,7 +266,7 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
 
   /**
   * Invoked on the NewPartition->OnlinePartition state change. When a partition is in the New state, it does not have
-   * a leader and isr path in zookeeper. Once the partition moves to the OnlinePartition state, it's leader and isr
+   * a leader and isr path in zookeeper. Once the partition moves to the OnlinePartition state, its leader and isr
   * path gets initialized and it never goes back to the NewPartition state. From here, it can only go to the
    * OfflinePartition state.
    * @param topicAndPartition   The topic/partition whose leader and isr path is to be initialized

http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/core/src/main/scala/kafka/log/OffsetIndex.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/log/OffsetIndex.scala b/core/src/main/scala/kafka/log/OffsetIndex.scala
index 332d5e2..84d18bd 100755
--- a/core/src/main/scala/kafka/log/OffsetIndex.scala
+++ b/core/src/main/scala/kafka/log/OffsetIndex.scala
@@ -114,7 +114,7 @@ class OffsetIndex(@volatile var file: File, val baseOffset: Long, val maxIndexSi
 
   /**
    * Find the largest offset less than or equal to the given targetOffset 
-   * and return a pair holding this offset and it's corresponding physical file position.
+   * and return a pair holding this offset and its corresponding physical file position.
    * 
    * @param targetOffset The offset to look up.
    * 

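The OffsetIndex javadoc above describes a floor lookup: the largest stored offset less than or equal to the target, paired with its file position. Setting aside the memory-mapped file details, the search itself is a standard floor binary search; a generic Java sketch follows, not Kafka's actual Scala implementation:

    public class FloorLookup {
        // Return the index of the largest element <= target in a sorted array,
        // or -1 when every stored offset is greater than the target.
        static int floorIndex(long[] sortedOffsets, long target) {
            int lo = 0, hi = sortedOffsets.length - 1, result = -1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                if (sortedOffsets[mid] <= target) {
                    result = mid;  // candidate: offset <= target
                    lo = mid + 1;  // keep looking for a larger qualifying offset
                } else {
                    hi = mid - 1;
                }
            }
            return result;
        }
    }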
http://git-wip-us.apache.org/repos/asf/kafka/blob/ff189fa0/core/src/main/scala/kafka/server/ClientQuotaManager.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/server/ClientQuotaManager.scala b/core/src/main/scala/kafka/server/ClientQuotaManager.scala
index 016caaf..39dd65a 100644
--- a/core/src/main/scala/kafka/server/ClientQuotaManager.scala
+++ b/core/src/main/scala/kafka/server/ClientQuotaManager.scala
@@ -169,7 +169,7 @@ class ClientQuotaManager(private val config: ClientQuotaManagerConfig,
     * will acquire the write lock and prevent the sensors from being read while they are being created.
     * It should be sufficient to simply check if the sensor is null without acquiring a read lock but the
     * sensor being present doesn't mean that it is fully initialized i.e. all the Metrics may not have been added.
-     * This read lock waits until the writer thread has released it's lock i.e. fully initialized the sensor
+     * This read lock waits until the writer thread has released its lock i.e. fully initialized the sensor
      * at which point it is safe to read
      */
     lock.readLock().lock()

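The ClientQuotaManager comment above describes a read/write-lock variant of double-checked initialization: readers take the read lock so that a sensor created by a concurrent writer is fully initialized before anyone reads it. A hedged Java sketch of that pattern, with a stand-in Sensor type rather than Kafka's metrics classes:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class SensorRegistry {
        private final ConcurrentHashMap<String, Sensor> sensors = new ConcurrentHashMap<>();
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        public Sensor getOrCreate(String name) {
            // Fast path under the read lock: if the sensor exists, the writer that
            // created it has already released the write lock, so it is fully built.
            lock.readLock().lock();
            try {
                Sensor s = sensors.get(name);
                if (s != null)
                    return s;
            } finally {
                lock.readLock().unlock();
            }
            // Slow path: create under the write lock, re-checking for a racing writer.
            lock.writeLock().lock();
            try {
                Sensor s = sensors.get(name);
                if (s == null) {
                    s = new Sensor(name); // fully initialize before releasing the lock
                    sensors.put(name, s);
                }
                return s;
            } finally {
                lock.writeLock().unlock();
            }
        }

        // Stand-in for Kafka's metrics Sensor.
        public static class Sensor {
            final String name;
            Sensor(String name) { this.name = name; }
        }
    }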
