lucenenet-dev mailing list archives

From GitBox <>
Subject [GitHub] [lucenenet] NightOwl888 commented on pull request #349: Add separate project to run against code and not nuget packages
Date Fri, 25 Sep 2020 13:14:38 GMT

NightOwl888 commented on pull request #349:

   Thanks for putting together this PR. We definitely need benchmarks in the master branch.
However, what we ought to shoot for are benchmarks that can be run by developers, added to
nightly builds and release builds, and planned for future expansion to benchmark individual
components of individual assemblies. I like the precedent set forth in [NodaTime.Benchmarks].
   A few things to consider:
   1. Since we don't have separate directories for `src` and `tests`, the build scripts rely
on naming conventions to determine which assemblies are test projects. Specifically, they rely
on the convention `Lucene.Net.Tests.<name>`. We shouldn't use that convention here, as
benchmark projects will be handled differently in the build.
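   The naming-convention check might be sketched like this in shell (the function and paths
are illustrative, not the actual build scripts):

```shell
# Hypothetical sketch of the naming-convention check the build scripts rely
# on: any assembly matching `Lucene.Net.Tests.<name>` is treated as a test
# project. A benchmark project reusing that prefix would be swept into the
# test run by mistake, hence the different convention for benchmarks.
is_test_project() {
  case "$(basename "$1" .csproj)" in
    Lucene.Net.Tests.*) return 0 ;;  # matches the test naming convention
    *) return 1 ;;
  esac
}

is_test_project "src/Lucene.Net.Tests.Analysis.Common/Lucene.Net.Tests.Analysis.Common.csproj" \
  && echo "treated as a test project"
is_test_project "benchmarks/Lucene.Net.Benchmarks.Core/Lucene.Net.Benchmarks.Core.csproj" \
  || echo "not a test project"
```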
   2. We should have a top level directory for `benchmarks` (a sibling of `src`) so we can
expand to include benchmarks on each separate assembly (including the test framework). This
organization makes it possible to have `Directory.Build.props` and `Directory.Build.targets`
files that apply specifically to benchmarks.
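   For example, a benchmarks-specific `Directory.Build.props` might look like this sketch
(the imported path and property choices are assumptions, not settings from the repo):

```xml
<!-- benchmarks/Directory.Build.props (hypothetical sketch) -->
<Project>
  <!-- Chain to the repo-wide props file in the parent directory -->
  <Import Project="$([MSBuild]::GetPathOfFileAbove('Directory.Build.props', '$(MSBuildThisFileDirectory)../'))" />
  <PropertyGroup>
    <!-- Benchmark-only defaults: always measure optimized code, never pack -->
    <Configuration Condition="'$(Configuration)' == ''">Release</Configuration>
    <IsPackable>false</IsPackable>
  </PropertyGroup>
</Project>
```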
   3. Within the `benchmarks` folder, we should have a separate `.sln` file that includes
all of the main assemblies and benchmarks, but not the tests.
   4. Some of the "tests" in Lucene are actually benchmarks: they only record how long an
operation takes and have no asserts that can fail. They waste time (about 5% of the total)
during normal test runs with little benefit. We should aim to remove them from the tests and
refactor them as benchmarks that run nightly and during releases. Some examples:
      - `TestWordDelimiterFilter::TestPerformance()`
      - `TestTeeSinkTokenFilter::Performance()`
      - `CacheSubSequencePerformanceTest`
      - `PhoneticEnginePerformanceTest`
      - `TestTermdocPerf`
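   A refactor along these lines might look like the following BenchmarkDotNet sketch (the
class name, corpus, and measured work are illustrative placeholders, not the actual Lucene
code):

```csharp
// Hypothetical sketch: a timing "test" refactored into a BenchmarkDotNet
// benchmark, so timing is measured and reported by the framework instead of
// hand-rolled stopwatch code with no asserts.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class WordDelimiterFilterBenchmarks
{
    private string[] _terms;

    [GlobalSetup]
    public void Setup()
    {
        // Build the input corpus once, outside the measured region.
        _terms = new string[10000];
        for (int i = 0; i < _terms.Length; i++)
            _terms[i] = "PowerShot-" + i;
    }

    [Benchmark]
    public int ProcessAllTerms()
    {
        // Measured work (placeholder for running the analysis chain);
        // BenchmarkDotNet records the statistics the old test printed by hand.
        int count = 0;
        foreach (var term in _terms)
            count += term.Length;
        return count;
    }
}

public static class Program
{
    public static void Main(string[] args) =>
        BenchmarkRunner.Run<WordDelimiterFilterBenchmarks>();
}
```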
   Of course, a key part of getting this into the build is to set up the instrumentation so
the benchmark results are easily available for viewing (ideally without having to download
the artifacts). This was dirt simple in TeamCity by including an HTML file as a tab in the
portal, but we haven't explored the options yet in Azure DevOps.
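   As a baseline while an in-portal view is explored, the BenchmarkDotNet result files could
at least be published per build; a minimal Azure Pipelines sketch (task inputs and paths here
are assumptions about the eventual layout):

```yaml
# Hypothetical pipeline step: publish BenchmarkDotNet's HTML/markdown reports
# as a named pipeline artifact so each build's results are easy to locate.
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: 'BenchmarkDotNet.Artifacts/results'
    artifact: 'benchmark-results'
```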
   We are running into a practical limit on the number of projects Visual Studio can load in
a reasonable timeframe, so ideally we would keep the benchmarks in a separate `.sln`. This is
the approach Microsoft takes to manage large projects.
   You don't need to include all of these changes in this PR; this is just to outline the
plan. But could you please set a precedent by putting the benchmarks in a top-level `benchmarks`
directory with a `Lucene.Net.Benchmarks.sln` file to manage them?
   > If someone with experience with benchmarks in CI sees a flaw in this layout, please
do provide your valuable input.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
