> On July 26, 2017, 2:08 a.m., Benjamin Mahler wrote:
> > 3rdparty/libprocess/src/event_queue.hpp
> > Lines 127-136 (patched)
> > <https://reviews.apache.org/r/61058/diff/2/?file=1781185#file1781185line127>
> >
> > Hm.. if a thread is checking `empty()` after the sequence is incremented and
> > before it's decremented back in the decomissioned case, it will incorrectly
> > think it's not empty?
> >
> > Shouldn't `empty()` check `decomissioned` first, since once set the queue is
> > empty or being actively emptied by `decomission()`?
This falls out of the queue's single-consumer semantics: the consumer SHOULD NOT
check whether the queue is empty after it has called `decomission()`, hence there
shouldn't be a race here. I added quite a few comments to cover this.
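To illustrate that contract with a toy mock (a mutex-based stand-in, not the actual lock-free `EventQueue`; the class and member names here are hypothetical): the single consumer may interleave `empty()` and `try_dequeue()` freely, but once it calls `decomission()` it must never touch the queue again, and producers' late enqueues are dropped.

```cpp
#include <cassert>
#include <deque>
#include <mutex>
#include <optional>

// Toy single-consumer queue modeling only the usage discipline discussed
// above: after decomission(), the consumer must not call empty() or
// try_dequeue() again, and enqueues from producers are silently dropped.
template <typename T>
class ToyEventQueue {
public:
  void enqueue(T t) {
    std::lock_guard<std::mutex> lock(mutex);
    if (!decomissioned) {
      items.push_back(std::move(t));
    } // Events enqueued after decomission are dropped.
  }

  // Single consumer only; must not be called after decomission().
  std::optional<T> try_dequeue() {
    std::lock_guard<std::mutex> lock(mutex);
    if (items.empty()) return std::nullopt;
    T t = std::move(items.front());
    items.pop_front();
    return t;
  }

  // Single consumer only; must not be called after decomission().
  bool empty() {
    std::lock_guard<std::mutex> lock(mutex);
    return items.empty();
  }

  // Single consumer only: shuts the queue and drains whatever remains.
  void decomission() {
    std::lock_guard<std::mutex> lock(mutex);
    decomissioned = true;
    items.clear();
  }

private:
  std::mutex mutex;
  bool decomissioned = false;
  std::deque<T> items;
};
```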
> On July 26, 2017, 2:08 a.m., Benjamin Mahler wrote:
> > 3rdparty/libprocess/src/event_queue.hpp
> > Lines 138-145 (patched)
> > <https://reviews.apache.org/r/61058/diff/2/?file=1781185#file1781185line138>
> >
> > Interesting that you took a different strategy here compared to the other
> > queue, might be worth a comment saying we have to spin due to the nature of
> > `try_dequeue`, and that's ok since the caller *must* check empty before
> > calling this?
Implemented the TODO I had in the locking implementation and also added more
comments to the top of the `EventQueue` class capturing these semantics.
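A sketch of the spinning strategy in question (the queue here is a mutex-based stand-in, not `moodycamel::ConcurrentQueue`, and the names are hypothetical): because the caller is required to have observed `empty() == false` first, an item is known to be present or about to become visible, so `dequeue()` can simply retry `try_dequeue` until it succeeds.

```cpp
#include <mutex>
#include <queue>
#include <thread>

// Toy queue demonstrating "spin on try_dequeue": try_dequeue can fail
// transiently, but the caller's precondition (empty() returned false)
// guarantees the spin in dequeue() terminates.
struct SpinQueue {
  std::mutex mutex;
  std::queue<int> items;

  bool try_dequeue(int& out) {
    std::lock_guard<std::mutex> lock(mutex);
    if (items.empty()) return false;
    out = items.front();
    items.pop();
    return true;
  }

  // Precondition: the caller already checked empty() and it returned
  // false, so an item will appear and the loop below terminates.
  int dequeue() {
    int out;
    while (!try_dequeue(out)) {
      std::this_thread::yield(); // spin until the producer's item is visible
    }
    return out;
  }

  void enqueue(int v) {
    std::lock_guard<std::mutex> lock(mutex);
    items.push(v);
  }
};
```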
> On July 26, 2017, 2:08 a.m., Benjamin Mahler wrote:
> > 3rdparty/libprocess/src/event_queue.hpp
> > Lines 147-150 (patched)
> > <https://reviews.apache.org/r/61058/diff/2/?file=1781185#file1781185line147>
> >
> > There seems to be a requirement that only the single consumer is the one
> > that calls decomission, and that once the consumer calls decomission, no
> > more empty->dequeue looping will occur. Otherwise, the consumer can loop:
> >
> > (1) decomission, but not drained yet
> > (2) consumer checks empty, returns false since not yet drained by decomission
> > (3) consumer then calls dequeue
> > (4) decomission completes the drain
> > (5) consumer loops in dequeue
> >
> > Or:
> >
> > (1) decomission, drained
> > (2) producer calls enqueue, sequence is incremented
> > (3) consumer checks empty, returns false since sequence was incremented
> > (4 optional) producer then finishes call to enqueue by decrementing sequence
> > (5) consumer calls dequeue, loops
You got it!
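The second interleaving can be modeled with a toy two-counter sketch (this is only an illustration of the race, not the actual `EventQueue` internals; the member names are hypothetical): if the producer optimistically bumps `sequence` at the start of `enqueue()` and backs it out when the queue turns out to be decomissioned, there is a window where `empty()` misreports, which is exactly why the consumer may not keep calling `empty()`/`dequeue()` after `decomission()`.

```cpp
#include <atomic>

// Toy model of the race: empty() compares tickets handed to producers
// against events delivered to the consumer. A producer that increments
// `sequence` and then backs out (enqueue against a decomissioned queue)
// makes empty() transiently report "not empty".
struct ToyCounters {
  std::atomic<long> sequence{0}; // tickets handed to producers
  std::atomic<long> consumed{0}; // events handed to the consumer

  bool empty() const { return sequence.load() == consumed.load(); }

  // Producer-side enqueue against an already-decomissioned queue.
  void enqueue_after_decomission() {
    sequence.fetch_add(1); // step (2): sequence is incremented
    // ... between these two lines, empty() returns false: step (3) ...
    sequence.fetch_sub(1); // step (4): enqueue backs the increment out
  }
};
```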
> On July 26, 2017, 2:08 a.m., Benjamin Mahler wrote:
> > 3rdparty/libprocess/src/event_queue.hpp
> > Lines 186-187 (patched)
> > <https://reviews.apache.org/r/61058/diff/2/?file=1781185#file1781185line186>
> >
> > Why 256 instead of unbounded dequeue?
I don't know how to do an unbounded dequeue; `try_dequeue_bulk` requires an explicit maximum count, so I have to pass a value in.
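For what it's worth, a bounded-bulk API can still emulate an unbounded drain by looping with a fixed buffer until a call returns zero. The queue below is a toy stand-in (not moodycamel's), and `drain_all` is a hypothetical helper; only the chunking pattern is the point.

```cpp
#include <cstddef>
#include <deque>

// Toy queue exposing a moodycamel-style bounded bulk dequeue: fills at
// most `max` slots of `buffer` and returns how many it filled.
struct BulkQueue {
  std::deque<int> items;

  size_t try_dequeue_bulk(int* buffer, size_t max) {
    size_t n = 0;
    while (n < max && !items.empty()) {
      buffer[n++] = items.front();
      items.pop_front();
    }
    return n;
  }
};

// Emulate an unbounded drain: keep pulling chunks of 256 until a call
// returns 0. Returns the total number of items drained.
size_t drain_all(BulkQueue& q) {
  int buffer[256];
  size_t total = 0;
  while (size_t n = q.try_dequeue_bulk(buffer, 256)) {
    total += n; // process buffer[0..n) here
  }
  return total;
}
```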
> On July 26, 2017, 2:08 a.m., Benjamin Mahler wrote:
> > 3rdparty/libprocess/src/event_queue.hpp
> > Lines 209-211 (patched)
> > <https://reviews.apache.org/r/61058/diff/2/?file=1781185#file1781185line209>
> >
> > Shouldn't an item smaller than the next entry we expect be something we
> > CHECK against rather than silently drop?
I added a handful of comments to explain what's going on here!
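For readers following along, the shape of the mechanism under discussion can be sketched like this (a hypothetical reconstruction, not the patch's code): items carry producer-assigned sequence numbers and may be observed out of order; the consumer holds back items ahead of the next expected number and releases runs in order, and an item whose number is below `next` was already delivered, so it is dropped (the review suggests a `CHECK` there instead of a silent drop).

```cpp
#include <map>
#include <vector>

// Hypothetical in-order delivery from an out-of-order stream. `pending`
// buffers items that arrived ahead of `next`; deliver() releases the
// longest contiguous run starting at `next`. A sequence below `next`
// indicates an item already delivered, so it is dropped.
struct Reorderer {
  long next = 0;
  std::map<long, int> pending; // sequence -> item, held out of order

  // Returns the items that became deliverable, in sequence order.
  std::vector<int> deliver(long seq, int item) {
    std::vector<int> out;
    if (seq < next) {
      // Already delivered; a CHECK(seq >= next) would catch this in debug
      // builds rather than dropping silently.
      return out;
    }
    pending[seq] = item;
    while (!pending.empty() && pending.begin()->first == next) {
      out.push_back(pending.begin()->second);
      pending.erase(pending.begin());
      ++next;
    }
    return out;
  }
};
```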
- Benjamin
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/61058/#review181297
-----------------------------------------------------------
On July 22, 2017, 1:16 a.m., Benjamin Hindman wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/61058/
> -----------------------------------------------------------
>
> (Updated July 22, 2017, 1:16 a.m.)
>
>
> Review request for mesos and Benjamin Mahler.
>
>
> Bugs: MESOS-7798
> https://issues.apache.org/jira/browse/MESOS-7798
>
>
> Repository: mesos
>
>
> Description
> -------
>
> Added a lock-free event queue.
>
>
> Diffs
> -----
>
> 3rdparty/libprocess/configure.ac cb2cf4f32be5cbdf9df1e32f9aaf2bbba0a5ae03
> 3rdparty/libprocess/src/event_queue.hpp PRE-CREATION
>
>
> Diff: https://reviews.apache.org/r/61058/diff/2/
>
>
> Testing
> -------
>
> make check
>
>
> Thanks,
>
> Benjamin Hindman
>
>