lucenenet-dev mailing list archives

From Wyatt Barnett <wyatt.barn...@gmail.com>
Subject Re: Reproducing random test failures
Date Sat, 16 May 2015 20:34:34 GMT
Sorry about the blank one -- getting used to google inbox here and I
misclicked.

Anyhow, I have a repro, or at least a rhyme and reason: TeamCity is
running in Release mode, and I think we have differing behavior there. If
you switch your copy of Visual Studio to Release mode you will get the same
failures we are seeing in TeamCity. Does that help narrow it down a bit?
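For what it's worth, the kind of Debug/Release divergence described above can be sketched in a few lines. This Python analogue (illustrative only, not Lucene.NET code) mirrors how C#'s Debug.Assert calls are compiled out of Release builds: under `python -O`, assert statements are stripped entirely, so any work done inside them silently stops happening.

```python
counter = {"n": 0}

def bump():
    # A side effect hiding inside an assertion -- the classic trap.
    counter["n"] += 1
    return True

# In a normal ("Debug") run this executes bump(); under "python -O"
# (analogous to a Release build) the whole assert is removed, bump()
# never runs, and counter stays at 0 -- so downstream behavior differs.
assert bump()
print(counter["n"])  # 1 normally; 0 under python -O
```

If any of the failing tests (or the code under test) do real work inside a Debug-only path, that would explain failures that only show up in Release builds on TeamCity.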

On Sat, May 16, 2015 at 4:26 PM Wyatt Barnett <wyatt.barnett@gmail.com>
wrote:

>
>
> On Sat, May 16, 2015 at 3:22 PM Wyatt Barnett <wyatt.barnett@gmail.com>
> wrote:
>
>> I agree with Itamar -- it feels environmental. I'll do some digging into
>> the teamcity output but I think the plan of setting up some extra verbose
>> logging here would make sense. I can set you up with a separate build
>> pointed at your fork if that helps -- it will keep the feedback cycle
>> tighter. The other thing we could do is categorize the tests and focus that
>> build at running only that category so you don't need to wait on the whole
>> suite to get responses. Let me know if you want me to proceed there.
>>
>>
>>
>> On Sat, May 16, 2015 at 3:10 PM, Itamar Syn-Hershko <itamar@code972.com>
>> wrote:
>>
>>> Yes, that would be the best way to do this. On Java Lucene, the
>>> randomized
>>> tests framework allows you to re-use the random seed associated with the
>>> failure, but we are not there yet. Either way, I suspect this to be an
>>> environment issue rather than a code path one.
>>>
>>> --
>>>
>>> Itamar Syn-Hershko
>>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>>> Freelance Developer & Consultant
>>> Lucene.NET committer and PMC member
>>>
>>> On Sat, May 16, 2015 at 10:06 PM, Laimonas Simutis <laimis@gmail.com>
>>> wrote:
>>>
>>> > There are three tests that consistently fail on TC but no matter how
>>> many
>>> > times I try, I can't reproduce it locally. These tests are:
>>> >
>>> > TestFuzzyQuery.TestTieBreaker
>>> >
>>> >
>>> http://teamcity.codebetter.com/viewLog.html?buildId=191298&tab=buildResultsDiv&buildTypeId=LuceneNet_Core#testNameId-6371662534320583798
>>> >
>>> > TestSimpleExplanations.TestDMQ8
>>> >
>>> >
>>> http://teamcity.codebetter.com/viewLog.html?buildId=191298&tab=buildResultsDiv&buildTypeId=LuceneNet_Core#testNameId5725706748293106127
>>> >
>>> > TestTopDocsMerge.TestSort_2
>>> >
>>> >
>>> http://teamcity.codebetter.com/viewLog.html?buildId=191298&tab=buildResultsDiv&buildTypeId=LuceneNet_Core#testNameId-8365680837810961892
>>> >
>>> > I would fix them if I could reproduce it -- and I am running out of
>>> ideas
>>> > how to do it. Even if I put them in a loop running hundreds of times, I
>>> > can't trigger the failure.
>>> >
>>> > Anyone have any ideas how to go about reproducing it? I am thinking to
>>> push
>>> > very verbose code in a separate branch that logs the input values /
>>> random
>>> > values that are used and see what happens. Checking if anyone has any
>>> other
>>> > suggestions.
>>> >
>>> >
>>> > Thanks,
>>> >
>>> > Laimis
>>> >
>>>
>>
>>
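The seed-replay idea Itamar mentions boils down to deriving every random value in a test from one master seed and logging that seed, so a failing run can be replayed exactly. A minimal sketch (illustrative; the Java framework is randomizedtesting, whose actual API differs):

```python
import random

def random_inputs(seed, n=5):
    # One master seed drives all randomness used by the test.
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

seed = 12345  # in a real runner, generated per run and printed on failure
print("tests.seed =", seed)

# Re-supplying the logged seed regenerates the exact same test inputs,
# which is what makes a randomized failure reproducible.
assert random_inputs(seed) == random_inputs(seed)
```

Until Lucene.NET's tests carry their seed through like this, logging the generated inputs in a verbose branch, as Laimis suggests, is the practical substitute.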
