mesos-reviews mailing list archives

From Meng Zhu <>
Subject Re: Review Request 66984: Added the ability to exhaustively allocate declined resources.
Date Tue, 08 May 2018 16:55:25 GMT

This is an automatically generated e-mail. To reply, visit:

(Updated May 8, 2018, 9:55 a.m.)

Review request for mesos, Benjamin Mahler, Kapil Arya, Till Toenshoff, and Vinod Kone.


Slip fix.

Repository: mesos


Schedulers that are below their fair share will continue to get
allocated resources even if they don't need those resources. The
expected behavior here is that schedulers will decline those resources
for a long time (or forever). Not all schedulers do this, however,
which means that some schedulers might get _starved_ of
resources. Technically these schedulers are already at or above their
fair share, but some operators feel that this keeps the cluster
underutilized.

Rather than guarantee that all schedulers will decline with large
timeouts, this patch ensures that all other schedulers will see
resources from the declined agent before the declining scheduler sees
resources from that agent again, even if there are now more resources
available on that agent.
To this end, each agent maintains an exclusion set of frameworks
that declined its allocation earlier and skips them during the
allocation. Note, a framework here is associated with each of
its roles. If a framework declined resources allocated to one
of its roles, it can still get allocations for its other roles.

Note, an agent's exclusion set will be cleared if its attributes change.

Diffs (updated)

  src/master/allocator/mesos/hierarchical.hpp 955ae3e6a9e3c790fb260311ec4b4ef725a826d3 
  src/master/allocator/mesos/hierarchical.cpp 1000968be6a2935a4cac571414d7f06d7df7acf0 




make check
Dedicated test in #66994


Meng Zhu
