lucenenet-dev mailing list archives

From NightOwl888 <...@git.apache.org>
Subject [GitHub] lucenenet issue #188: Fixed 64 Failing Facet Tests and Finished Facet Implem...
Date Sun, 02 Oct 2016 15:46:17 GMT
Github user NightOwl888 commented on the issue:

    https://github.com/apache/lucenenet/pull/188
  
    @synhershko 
    
    > I say let's first release, then see who uses this. Once we see a lot of usage, we
may reconsider the implementation. WDYT?
    
    In that case this is ready to go. But if we can spend 2-3 hours copying over an existing
implementation that is almost a drop-in replacement for what we need, why not do one better?
    
    Personally, I have a stake in faceted search - heck I have a whole open source project
[BoboBrowse.Net](https://github.com/NightOwl888/BoboBrowse.Net), and I would like to have
alternatives.
    
    Also, Facet and Suggest are both brand new to Lucene.Net. If you want to draw people's
attention to this project, give them features they don't already have in Lucene.Net 3.0.3
(and make them as good as they can be). I have seen a few questions on the user mailing list
from people who were trying to get this to work. Why not give them something useful?
    
    > Let's have this discussion in the dev@ mailing list please. The list Shad provided
may be correct, but our priorities are different - the spatial module needs work, and there
are still failing tests at core. We can also skip Analysis.Kuromoji and Analysis.SmartCN
completely now - they aren't worth our efforts now. So, let's have that discussion in the
right place so other people could chime in as well.
    
    Actually, I am nearly done with Analysis.Stempel now. I know it is not that critical,
but it looked to be less than a day of work to finish and I started it before I knew whether
these pull requests were going to be merged.
    
    Also, nearly all of the "failures" in the core are due to the test runners screwing up.
Many tests live in abstract classes that are meant to be run once per concrete test subclass.
Those subclasses provide the setup that is necessary for the tests to run successfully. But,
depending on which test runner you use, the tests are either run without that setup (when
they should be skipped) or run *one additional time* directly in the abstract class without
any setup, causing them to fail in that out-of-context run.
    
    I know @conniey is currently working on setting up xUnit and I am hoping that its test
runner(s) and/or specialized attributes will also fix this issue. If not, we need to
find another solution that works regardless of the test runner used. The only thing I have
come up with so far is to make those tests virtual, override them in the subclasses, and only
put the `[Test]` attribute on the stub methods in the subclasses, not in the abstract class.
In fact, I did that in the Suggest tests before I realized how widespread this issue is (and
that it also affects the QueryParser tests). It works, but it creates a lot of duplicate stub
methods that do nothing but call their base class.
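    The virtual-override workaround above can be sketched roughly like this with NUnit
(class and member names here are illustrative only, not taken from the actual Lucene.Net
test suite):

    ```csharp
    using NUnit.Framework;

    // No [Test] attributes here, so a runner that instantiates the abstract
    // fixture out of context finds nothing to execute on its own.
    public abstract class BaseStoredFieldsTestCase
    {
        // Stand-in for whatever state each concrete subclass must set up.
        protected object Codec;

        public virtual void TestRandomStoredFields()
        {
            Assert.IsNotNull(Codec, "subclass must provide setup");
            // ... shared test logic lives here, in the base class ...
        }
    }

    // Each concrete fixture supplies its setup and exposes the shared tests
    // to the runner via [Test] stubs that simply delegate to the base class.
    [TestFixture]
    public class TestDefaultStoredFields : BaseStoredFieldsTestCase
    {
        [SetUp]
        public void Init() => Codec = new object(); // hypothetical setup

        [Test]
        public override void TestRandomStoredFields()
            => base.TestRandomStoredFields();
    }
    ```

    The duplication is the stub methods themselves: every shared test needs one
per concrete fixture, but the tests only ever run with the proper setup.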
    
    And yes, I will continue this on the mailing list...
    
    > On that note, @NightOwl888 I have no idea where you are located, but if I happen
to be in your neighborhood during my travels do let me know and I'll buy you beers.
    
    I am in Jomtien Beach, Thailand. And I'll take them draft, thank you :).
    



