mesos-reviews mailing list archives

From Guangya Liu <gyliu...@gmail.com>
Subject Re: Review Request 49617: Add benchmark for failover of many frameworks.
Date Fri, 09 Sep 2016 06:57:37 GMT


> On Aug. 24, 2016, 9:12 a.m., Guangya Liu wrote:
> > src/tests/hierarchical_allocator_tests.cpp, line 4030
> > <https://reviews.apache.org/r/49617/diff/7/?file=1476268#file1476268line4030>
> >
> >     How about adding `ports` resources to the allocation here:
> >     
> >     ```
> >     // Each agent has a portion of its resources allocated to a single
> >     // framework. We round-robin through the frameworks when allocating.
> >     Resources allocation = Resources::parse("cpus:16;mem:1024;disk:1024").get();
> >     
> >     Try<::mesos::Value::Ranges> ranges = fragment(createRange(31000, 32000), 16);
> >     ASSERT_SOME(ranges);
> >     ASSERT_EQ(16, ranges->range_size());
> >     
> >     allocation += createPorts(ranges.get());
> >     ```
> 
> Jacob Janco wrote:
>     I took this out to simplify the test.
> 
> Guangya Liu wrote:
>     I think we should simulate a full allocation scenario in this benchmark test and
>     include `ports` as well, just like the other benchmark tests do, since the sorter has
>     some special logic for handling non-scalar resources when allocating. Comments?
> 
> Jiang Yan Xu wrote:
>     What's the special logic that you were referring to? I am not against adding ports
>     here, but once this test is committed let's do some refactoring to pull common elements
>     into helpers or member variables so new benchmark tests don't need to copy a lot of code.

+1 to adding some common helpers. I proposed adding `ports` here mainly because all of the
other benchmark tests include `ports` resources, and because the sorter has special logic for
non-scalar resources (e.g. `createStrippedScalarQuantity`, which filters out non-scalar
resources), so adding `ports` would simulate a more realistic use case.


- Guangya


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49617/#review146625
-----------------------------------------------------------


On Aug. 17, 2016, 2:26 a.m., Jacob Janco wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49617/
> -----------------------------------------------------------
> 
> (Updated Aug. 17, 2016, 2:26 a.m.)
> 
> 
> Review request for mesos, Joris Van Remoortere and Jiang Yan Xu.
> 
> 
> Bugs: MESOS-5780
>     https://issues.apache.org/jira/browse/MESOS-5780
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> - This benchmark measures latency to stability of
>   the allocator following disconnection and
>   reconnection of all frameworks.
> - In this scenario, frameworks are offered resources
>   and suppressed in batches.
> 
> 
> Diffs
> -----
> 
>   src/tests/hierarchical_allocator_tests.cpp cbed333f497016fe2811f755028796012b41db77

> 
> Diff: https://reviews.apache.org/r/49617/diff/
> 
> 
> Testing
> -------
> 
> MESOS_BENCHMARK=1 GTEST_FILTER="*BENCHMARK_Test.FrameworkFailover*" make check
> 
> Sample Output:
> [ RUN      ] SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.FrameworkFailover/23
> Using 10000 agents and 6000 frameworks
> Added 6000 frameworks in 113410us
> Added 10000 agents in 6.83980663333333mins
> allocator settled after 3.28683733333333mins
> [       OK ] SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.FrameworkFailover/23 (609255 ms)
> [ RUN      ] SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.FrameworkFailover/24
> Using 20000 agents and 1 frameworks
> Added 1 frameworks in 190us
> Added 20000 agents in 4.752954secs
> allocator settled after 7us
> [       OK ] SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.FrameworkFailover/24 (6332 ms)
> 
> 
> Thanks,
> 
> Jacob Janco
> 
>

