lucenenet-dev mailing list archives

From "Andrei Iliev (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENENET-257) TestCheckIndex.TestDeletedDocs
Date Wed, 18 Nov 2009 12:33:39 GMT

    [ https://issues.apache.org/jira/browse/LUCENENET-257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12779428#action_12779428
] 

Andrei Iliev commented on LUCENENET-257:
----------------------------------------

Thanks Michael,
Something is still bothering me.
LogByteSizeMergePolicy has a public method SetMaxMergeMB:

public virtual void SetMaxMergeMB(double mb)
{
    maxMergeSize = (long) (mb * 1024 * 1024); // <-- there is a chance to get a negative maxMergeSize
}

If we are going to start fixing this bug from Lucene (Java), would it be better to declare maxMergeSize as a double?
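For context on the cast in question: in Java, a narrowing cast from an out-of-range double to long saturates (a too-large positive value becomes Long.MAX_VALUE), whereas in C# the result of an unchecked out-of-range double-to-long cast is unspecified and can come out negative, which is the risk the comment above points to. Below is a hypothetical defensive rewrite sketched in Java (the upstream language); the name toMaxMergeSize is mine and appears in neither codebase, and clamping is just one possible fix, an alternative to declaring maxMergeSize as a double.

```java
public class MergeSizeDemo {
    // Hypothetical defensive conversion: clamp explicitly instead of relying
    // on platform-specific behavior of an out-of-range double-to-long cast.
    static long toMaxMergeSize(double mb) {
        double bytes = mb * 1024 * 1024;
        if (bytes >= (double) Long.MAX_VALUE) {
            return Long.MAX_VALUE;      // too large: saturate, never wrap negative
        }
        if (bytes <= 0) {
            return 0;                   // negative or zero input: no sensible size
        }
        return (long) bytes;            // in range: the cast is safe here
    }

    public static void main(String[] args) {
        System.out.println(toMaxMergeSize(16.0));   // ordinary value: 16 MB in bytes
        System.out.println(toMaxMergeSize(1e300));  // huge value: clamps to Long.MAX_VALUE
    }
}
```

Either way (clamping or switching the field to double), the point is that the result can never silently become negative.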

> TestCheckIndex.TestDeletedDocs 
> -------------------------------
>
>                 Key: LUCENENET-257
>                 URL: https://issues.apache.org/jira/browse/LUCENENET-257
>             Project: Lucene.Net
>          Issue Type: Bug
>            Reporter: Andrei Iliev
>            Assignee: Digy
>         Attachments: LUCENENET-257.patch
>
>
> Setting writer.SetMaxBufferedDocs(2) causes the buffers to be flushed after every 2 added docs.
> That results in a total of 10 segments in the index and a failing TestDeletedDocs (the test case
> assumes that there is only 1 segment file). So the question arises:
> 1) does performing a flush have to start a new segment file?
> 2) if so, in order to run TestDeletedDocs smoothly, either change writer.SetMaxBufferedDocs(2)
> to, say, writer.SetMaxBufferedDocs(20), or call writer.Optimize(1) before closing the writer.
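The segment arithmetic behind the quoted failure can be sketched as follows, assuming the test adds 20 documents and no merges happen between flushes (the helper name segmentsAfter is mine, for illustration only):

```java
public class SegmentCountDemo {
    // Each flush writes one new segment, so with no intervening merges the
    // segment count is the ceiling of docCount / maxBufferedDocs.
    static int segmentsAfter(int docCount, int maxBufferedDocs) {
        return (docCount + maxBufferedDocs - 1) / maxBufferedDocs;
    }

    public static void main(String[] args) {
        System.out.println(segmentsAfter(20, 2));   // flush every 2 docs: 10 segments
        System.out.println(segmentsAfter(20, 20));  // flush once at 20 docs: 1 segment
    }
}
```

This is why raising maxBufferedDocs to 20 (or optimizing down to one segment before closing) satisfies the test's single-segment assumption.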

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

