mesos-reviews mailing list archives

From "Guangya Liu" <gyliu...@gmail.com>
Subject Re: Review Request 41791: Updated allocation slack when dynamic reserve new resources (1/3).
Date Thu, 21 Jan 2016 07:50:36 GMT


> On Jan 20, 2016, 10:54 p.m., Joseph Wu wrote:
> > src/master/allocator/mesos/hierarchical.cpp, line 662
> > <https://reviews.apache.org/r/41791/diff/8/?file=1196632#file1196632line662>
> >
> >     Have you considered modifying `Resources::apply` to "create" allocation slack
> >     upon a reservation?  (And to "destroy" allocation slack upon unreserve?)

I do not want to update `Resources::apply`, because it is called for both `reserve` and
`unreserve` operations. For `reserve` we could simply add the corresponding `allocation slack`.
For `unreserve` things are more complex: we cannot simply decrease the `allocation slack`,
because doing so would expose new `unreserved` resources that could be sent out as an offer
and used to launch tasks while the `allocation slack` is still running tasks, which would
cause a resource conflict. That is why I handle `unreserve` separately in the two follow-up
patches.
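
To make the asymmetry concrete, here is a rough, self-contained C++ sketch. This is NOT the
actual Mesos `Resources`/allocator API; `SlackAccount` and all names in it are made up purely
for illustration, and cpus are modeled as plain doubles. Reserving grows the slack right away,
while unreserving leaves the slack total alone until the slack is recovered.

    // Hypothetical sketch only -- not the Mesos Resources/allocator API.
    #include <algorithm>

    struct SlackAccount
    {
      double reservedTotal;  // Dynamically reserved cpus on the agent.
      double slackTotal;     // Total allocation slack derived from reservations.
      double slackUsed;      // Allocation slack currently running tasks.

      // Reserving is the easy direction: every newly reserved cpu
      // contributes one cpu of allocation slack immediately.
      void reserve(double cpus)
      {
        reservedTotal += cpus;
        slackTotal += cpus;
      }

      // Unreserving must NOT shrink the slack total while slack is still
      // in use; otherwise the freed unreserved cpus could be offered again
      // and conflict with tasks still running on the slack.
      void unreserve(double cpus)
      {
        reservedTotal -= cpus;
        // slackTotal is intentionally left unchanged here.
      }

      // Only when slack is recovered can the total shrink back toward the
      // (possibly reduced) reservation.
      void recover(double cpus)
      {
        slackUsed -= cpus;
        slackTotal = std::max(reservedTotal, slackUsed);
      }

      // Free unreserved cpus that are safe to offer: exclude the
      // over-commit, i.e. slack still in use beyond what the current
      // reservation backs.
      double offerableUnreserved(double unreservedFree) const
      {
        double overcommit = std::max(0.0, slackUsed - reservedTotal);
        return std::max(0.0, unreservedFree - overcommit);
      }
    };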

Take a case:
1) Agent: cpus(r1):100;cpus(*){ALLOCATION_SLACK}:100
2) A framework uses up all cpus(*){ALLOCATION_SLACK}:100 resources.
3) Unreserve 30 cpus.
4) The resources become:
   Total: cpus(*):30;cpus(r1):70;cpus(*){ALLOCATION_SLACK}:100  NOTE: We do not decrease the
   total allocation slack here.
   Free:  cpus(*):30;cpus(r1):70
   Used:  cpus(*){ALLOCATION_SLACK}:100
5) At this point we cannot send cpus(*):30 out as an offer, because 30 of the allocation slack
   in use is now over-committed.
6) Recover 20 allocation slack; we can then update the total resources as:
   Total: cpus(*):30;cpus(r1):70;cpus(*){ALLOCATION_SLACK}:80  NOTE: The total allocation slack
   is decreased when resources are recovered, not when they are unreserved in such a case.
   Used:  cpus(*){ALLOCATION_SLACK}:80
   Free:  cpus(*):30;cpus(r1):70
   The allocator can send out an offer with cpus(*):20, but not the whole 30.
7) Recover another 10 allocation slack.
   Total: cpus(*):30;cpus(r1):70;cpus(*){ALLOCATION_SLACK}:70  NOTE: From now on, the allocation
   slack matches the reserved resources again.
   Used:  cpus(*):20;cpus(*){ALLOCATION_SLACK}:70
   Free:  cpus(*):10;cpus(r1):70
   The allocator can send out an offer with cpus(*):10.
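
The same walkthrough can be driven through the hypothetical `SlackAccount` sketch above (again,
illustration only, not the actual allocator code); the numbers match steps 1) through 7):

    #include <cassert>

    int main()
    {
      // 1) Agent starts with cpus(r1):100, so allocation slack is also 100.
      SlackAccount account{100, 100, 0};

      // 2) A framework uses up all 100 cpus of allocation slack.
      account.slackUsed = 100;

      // 3)-5) Unreserve 30 cpus: slack total stays at 100, so all 30 freed
      // unreserved cpus are over-committed and none can be offered yet.
      account.unreserve(30);
      assert(account.slackTotal == 100);
      assert(account.offerableUnreserved(30) == 0);

      // 6) Recover 20 cpus of slack: total slack drops to 80, and 20 of the
      // 30 free unreserved cpus become offerable.
      account.recover(20);
      assert(account.slackTotal == 80);
      assert(account.offerableUnreserved(30) == 20);

      // 7) Recover another 10: slack matches the reservation (70) again, and
      // the remaining 10 free unreserved cpus become offerable (20 were
      // already allocated from the previous offer).
      account.recover(10);
      assert(account.slackTotal == 70);
      assert(account.offerableUnreserved(10) == 10);

      return 0;
    }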


- Guangya


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41791/#review115497
-----------------------------------------------------------


On Jan 20, 2016, 6:38 a.m., Guangya Liu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/41791/
> -----------------------------------------------------------
> 
> (Updated Jan 20, 2016, 6:38 a.m.)
> 
> 
> Review request for mesos, Ben Mahler, Artem Harutyunyan, Joris Van Remoortere, Joseph
Wu, Klaus Ma, and Jian Qiu.
> 
> 
> Bugs: MESOS-4145
>     https://issues.apache.org/jira/browse/MESOS-4145
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Update the allocation slack resources when reserving some new
> stateless reserved resources. In that case, the allocation
> slack resources will be increased.
> 
> This patch updates both `updateAvailable` and `updateAllocation`
> when reserving some new resources.
> 
> There are three patches handling dynamic reservation for optimistic offers (oo) phase 1:
> 1) https://reviews.apache.org/r/41791 Reserve new resources via `updateAllocation` and `updateAvailable`. (1/3)
> 2) https://reviews.apache.org/r/42113 Unreserve resources via `updateAllocation`. (2/3)
> 3) https://reviews.apache.org/r/42194 Unreserve resources via `updateAvailable`. (3/3)
> 
> 
> Diffs
> -----
> 
>   src/master/allocator/mesos/hierarchical.cpp d541bfa3f4190865c65d35c9d1ffdb8a3f194056
>   src/tests/hierarchical_allocator_tests.cpp e044f832c2c16e53e663c6ced5452649bb0dcb59
> 
> Diff: https://reviews.apache.org/r/41791/diff/
> 
> 
> Testing
> -------
> 
> make
> make check
> 
> 
> Thanks,
> 
> Guangya Liu
> 
>

