mesos-reviews mailing list archives

From haosdent huang <haosd...@gmail.com>
Subject Re: Review Request 51783: Refactored `UserCgroupsIsolatorTest`.
Date Tue, 13 Sep 2016 05:27:22 GMT


> On Sept. 13, 2016, 4:18 a.m., Jie Yu wrote:
> > src/tests/containerizer/cgroups_isolator_tests.cpp, line 112
> > <https://reviews.apache.org/r/51783/diff/7/?file=1496255#file1496255line112>
> >
> >     'nobody' does not work on my box because it cannot access my home dir, and the 'mesos-containerizer launch' binary needs to be accessible to fork-exec the task.
> >     
> >     I think creating a new user does not work either. It previously worked because we didn't use any executables under the build dir.
> >     
> >     I adjusted the test to use an image to bypass the problem, but ran into another problem: the cgroup owner support only works for custom executors, not command tasks.
> >     
> >     For command tasks, since the executor is running as root, this test will fail.
> 
> haosdent huang wrote:
>     What's the operating system of your box? I tested this on Ubuntu 14.04 before.
> 
> Jie Yu wrote:
>     CentOS 7
> 
> Jie Yu wrote:
>     `su - nobody` does not work on CentOS 7.
> 
> Jie Yu wrote:
>     https://reviews.apache.org/r/51835/
> 
> Jie Yu wrote:
>     My current patch that works (with https://reviews.apache.org/r/51835/)
>     
>     ```
>     // This test starts the agent with cgroups isolation and launches a
>     // task with an unprivileged user. Then verifies that the unprivileged
>     // user has write permission under the corresponding cgroups which are
>     // prepared for the container to run the task.
>     TEST_F(CgroupsIsolatorTest, ROOT_CGROUPS_PERF_NET_CLS_INTERNET_CURL_UserCgroup)
>     {
>       Try<Owned<cluster::Master>> master = StartMaster();
>       ASSERT_SOME(master);
>     
>       slave::Flags flags = CreateSlaveFlags();
>       flags.perf_events = "cpu-cycles"; // Needed for `PerfEventSubsystem`.
>       flags.image_providers = "docker";
>       flags.isolation =
>         "cgroups/cpu,"
>         "cgroups/devices,"
>         "cgroups/mem,"
>         "cgroups/net_cls,"
>         "cgroups/perf_event,"
>         "docker/runtime,"
>         "filesystem/linux";
>     
>       Fetcher fetcher;
>       Try<MesosContainerizer*> _containerizer =
>         MesosContainerizer::create(flags, true, &fetcher);
>     
>       CHECK_SOME(_containerizer);
>       Owned<MesosContainerizer> containerizer(_containerizer.get());
>     
>       Owned<MasterDetector> detector = master.get()->createDetector();
>     
>       Try<Owned<cluster::Slave>> slave =
>         StartSlave(detector.get(), containerizer.get());
>     
>       ASSERT_SOME(slave);
>     
>       MockScheduler sched;
>     
>       MesosSchedulerDriver driver(
>           &sched,
>           DEFAULT_FRAMEWORK_INFO,
>           master.get()->pid,
>           DEFAULT_CREDENTIAL);
>     
>       EXPECT_CALL(sched, registered(&driver, _, _))
>         .Times(1);
>     
>       Future<vector<Offer>> offers;
>       EXPECT_CALL(sched, resourceOffers(&driver, _))
>         .WillOnce(FutureArg<1>(&offers))
>         .WillRepeatedly(Return()); // Ignore subsequent offers.
>     
>       driver.start();
>     
>       AWAIT_READY(offers);
>       ASSERT_NE(0u, offers.get().size());
>     
>       // Launch a task with the command executor.
>       CommandInfo command;
>       command.set_shell(false);
>       command.set_value("/bin/sleep");
>       command.add_arguments("sleep");
>       command.add_arguments("120");
>       command.set_user("nobody");
>     
>       TaskInfo task = createTask(
>           offers.get()[0].slave_id(),
>           offers.get()[0].resources(),
>           command);
>     
>       Image image;
>       image.set_type(Image::DOCKER);
>       image.mutable_docker()->set_name("library/alpine");
>     
>       ContainerInfo* container = task.mutable_container();
>       container->set_type(ContainerInfo::MESOS);
>       container->mutable_mesos()->mutable_image()->CopyFrom(image);
>     
>       Future<TaskStatus> statusRunning;
>       EXPECT_CALL(sched, statusUpdate(&driver, _))
>         .WillOnce(FutureArg<1>(&statusRunning));
>     
>       driver.launchTasks(offers.get()[0].id(), {task});
>     
>       AWAIT_READY_FOR(statusRunning, Seconds(60));
>       EXPECT_EQ(TASK_RUNNING, statusRunning.get().state());
>     
>       vector<string> subsystems = {
>         CGROUP_SUBSYSTEM_CPU_NAME,
>         CGROUP_SUBSYSTEM_CPUACCT_NAME,
>         CGROUP_SUBSYSTEM_DEVICES_NAME,
>         CGROUP_SUBSYSTEM_MEMORY_NAME,
>         CGROUP_SUBSYSTEM_NET_CLS_NAME,
>         CGROUP_SUBSYSTEM_PERF_EVENT_NAME,
>       };
>     
>       Future<hashset<ContainerID>> containers = containerizer->containers();
>       AWAIT_READY(containers);
>       ASSERT_EQ(1u, containers.get().size());
>     
>       ContainerID containerId = *(containers.get().begin());
>     
>       foreach (const string& subsystem, subsystems) {
>         string hierarchy = path::join(flags.cgroups_hierarchy, subsystem);
>         string cgroup = path::join(flags.cgroups_root, containerId.value());
>     
>         // Verify that the user cannot manipulate the container's cgroup
>         // control files as their owner is root.
>         EXPECT_NE(0, os::system(strings::format(
>             "sudo -u nobody -s sh -c 'echo $$ > %s'",
>             path::join(hierarchy, cgroup, "cgroup.procs")).get()));
>     
>         // Verify that the user can create a cgroup under the container's
>         // cgroup as the isolator changes the owner of the cgroup.
>         string userCgroup = path::join(cgroup, "user");
>     
>         EXPECT_EQ(0, os::system(strings::format(
>             "sudo -u nobody mkdir %s",
>             path::join(hierarchy, userCgroup)).get()));
>     
>         // Verify that the user can manipulate control files in the
>         // created cgroup as it's owned by the user.
>         EXPECT_EQ(0, os::system(strings::format(
>             "sudo -u nobody -s sh -c 'echo $$ > %s'",
>             path::join(hierarchy, userCgroup, "cgroup.procs")).get()));
>     
>         // Clean up the cgroup created by the unprivileged user.
>         AWAIT_READY(cgroups::destroy(hierarchy, userCgroup));
>       }
>     
>       driver.stop();
>       driver.join();
>     }
>     ```

Thanks a lot! I verified that my patch doesn't work on CentOS 7; let me update to yours.
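For reference, the `echo $$ > ...` in the patch must stay inside the single-quoted `sh -c` string: single quotes defer expansion of `$$` to the spawned shell, so the PID written into `cgroup.procs` is the child's own. A minimal sketch of the difference (plain `sh`, no sudo or cgroups needed):

```shell
# $$ inside single quotes is expanded by the child shell (its own PID);
# inside double quotes it is expanded by the invoking shell first.
outer=$$
inner=$(sh -c 'echo $$')      # child PID
expanded=$(sh -c "echo $$")   # outer PID, leaked into the child's command
[ "$expanded" = "$outer" ] && echo "double quotes: outer PID leaks through"
[ "$inner" != "$outer" ] && echo "single quotes: child reports its own PID"
```

This is why `sudo -u nobody -s sh -c 'echo $$ > %s'` moves the unprivileged shell itself into the cgroup, rather than the test process.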


- haosdent


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51783/#review148646
-----------------------------------------------------------


On Sept. 12, 2016, 8:19 a.m., haosdent huang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51783/
> -----------------------------------------------------------
> 
> (Updated Sept. 12, 2016, 8:19 a.m.)
> 
> 
> Review request for mesos, Gilbert Song, Jie Yu, and Qian Zhang.
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Refactor `UserCgroupsIsolatorTest.ROOT_CGROUPS_PERF_UserCgroup` and
> rename to `CgroupsIsolatorTest.ROOT_CGROUPS_PERF_NET_CLS_UserCgroup`.
> 
> 
> Diffs
> -----
> 
>   src/tests/containerizer/cgroups_isolator_tests.cpp c4e467c8227f9e4129b05d173812592f39a04e06
> 
> Diff: https://reviews.apache.org/r/51783/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> haosdent huang
> 
>

