lucenenet-dev mailing list archives

From "TJ Kolev (JIRA)" <j...@apache.org>
Subject [jira] Updated: (LUCENENET-106) Lucene.NET (Revision: 603121) is leaking memory
Date Wed, 14 Jan 2009 20:38:59 GMT

     [ https://issues.apache.org/jira/browse/LUCENENET-106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

TJ Kolev updated LUCENENET-106:
-------------------------------

    Attachment: WeakHashTable_tj.zip

Here's another WeakHashTable implementation for consideration. The whole NUnit test solution
runs green. The zip also has a console application with the code that Digy proposed here on
26/Jun/08 (with tiny mods), and the NUnit test code Digy proposed on the dev list (with formatting
changes only). The code is a drop-in replacement, not a patch (yet).

Now my general comments on the issue. I agree with Eyal Post's comments on the mailing list.
Two objects must have the same hash code if Equals() returns true for them, but objects that
are not Equals() may still share a hash code; the table just won't be as efficient. In any
case, hash collisions are handled, and that's core hash table functionality.
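To illustrate the contract described above, here is a small Java sketch (illustrative only, not from the attached code): a key type whose hashCode() deliberately collides for every instance. The map stays correct, because it falls back on Equals()-style comparison within the bucket; it is merely less efficient.

```java
import java.util.HashMap;
import java.util.Map;

// A key type whose hashCode deliberately collides. Equal objects must
// return the same hash code; unequal objects MAY share one, and the
// table resolves the collision by calling equals().
final class CollidingKey {
    private final String name;
    CollidingKey(String name) { this.name = name; }

    @Override public boolean equals(Object o) {
        return o instanceof CollidingKey && ((CollidingKey) o).name.equals(name);
    }

    // Every key hashes to the same bucket -- legal, just inefficient.
    @Override public int hashCode() { return 42; }
}

public class HashContractDemo {
    public static void main(String[] args) {
        Map<CollidingKey, String> map = new HashMap<>();
        map.put(new CollidingKey("a"), "first");
        map.put(new CollidingKey("b"), "second");
        System.out.println(map.size());                      // 2: unequal keys stay distinct
        System.out.println(map.get(new CollidingKey("a"))); // first
    }
}
```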

In Java's WeakHashMap the keys are the weak references, and this is what I did in my code.
The trick that made it work was keeping the hash code of the caller's key object. That way
the internal hash table (and its cleanup) is able to work correctly, even after the Target
of the weak reference has been collected by the GC.
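The trick above can be sketched in Java (a minimal sketch, not the attached implementation): the entry wraps the key in a WeakReference but caches the key's hash code at insertion time, so the table can still locate the entry's bucket during lookup and cleanup even after the referent is gone.

```java
import java.lang.ref.WeakReference;

// Sketch of a weak key that stays hashable after its referent is
// collected: the hash code is captured while the key is still reachable.
final class WeakKey extends WeakReference<Object> {
    private final int hash; // cached at construction time

    WeakKey(Object key) {
        super(key);
        this.hash = key.hashCode();
    }

    // Stable even after the referent has been GC'd, so the internal
    // table can still find this entry's bucket and clean it up.
    @Override public int hashCode() { return hash; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof WeakKey)) return false;
        Object a = get(), b = ((WeakKey) o).get();
        // A cleared key matches nothing except itself.
        return a != null && a.equals(b);
    }
}
```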

I have only provided the minimum functionality needed by Lucene. I don't believe it is reasonable
to assume someone will pull out the code and try to use it as a general-purpose weak hash
table. I can certainly make the implementation complete. But from Lucene's usage, the weak
hash table is just transient storage for looking up resources, which may or may not be there.
If they are still there, it is a nice benefit; if not, the caller creates them again. (Iterating
and counting elements in such storage brings additional problems, like handling cases where
the count does not match the number of iterated elements, objects no longer being there when at
some point they were, etc.)
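The usage pattern described above, where the cache is a convenience rather than a source of truth, can be sketched in Java like this. The names (TransientCache, lookup, buildFor) are illustrative, not Lucene's actual API.

```java
import java.util.Map;
import java.util.WeakHashMap;

// Sketch of the "transient storage" pattern: if the entry survived,
// reuse it; if the GC dropped it, the caller simply rebuilds it.
public class TransientCache {
    private final Map<Object, String> cache = new WeakHashMap<>();

    public String lookup(Object reader) {
        String cached = cache.get(reader);
        if (cached != null) {
            return cached;              // still there: nice benefit
        }
        String rebuilt = buildFor(reader); // gone: recreate it
        cache.put(reader, rebuilt);
        return rebuilt;
    }

    // Stand-in for the expensive work (e.g. populating sort terms).
    private String buildFor(Object reader) {
        return "resource-for-" + reader;
    }
}
```

Once the key (here, the reader) is no longer strongly referenced anywhere else, the weak map lets its entry be collected, which is exactly the behavior the readerCache in FieldCacheImpl needs.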

Now, I may certainly have missed something that the tests I ran did not catch. I would
much appreciate any feedback and comments, especially test code to go along with them.

tjk :)

> Lucene.NET (Revision: 603121) is leaking memory
> -----------------------------------------------
>
>                 Key: LUCENENET-106
>                 URL: https://issues.apache.org/jira/browse/LUCENENET-106
>             Project: Lucene.Net
>          Issue Type: Bug
>         Environment: .NET 2.0
>            Reporter: Anton K.
>            Assignee: Digy
>            Priority: Critical
>         Attachments: DIGY-FieldCacheImpl.patch, Digy.rar, luceneSrc_memUsage.patch, Paches
for v2.3.1.rar, WeakHashTable v2.patch, WeakHashTable v2.patch, WeakHashTable+FieldCacheImpl.rar,
WeakHashTable_tj.zip, WeakReferences.rar
>
>
> The readerCache Hashtable field (see FieldCacheImpl.cs) never releases hash items whose
key is a closed IndexReader object, so a lot of Term instances are never released.
> Java version of Lucene uses WeakHashMap and therefore doesn't have this problem.
> This bug can be reproduced only when the Sort functionality is used during search.
> See following link for additional information.
> http://www.gossamer-threads.com/lists/lucene/java-user/55681
> Steps to reproduce:
> 1) Create index
> 2) Modify index with IndexWriter; close IndexWriter
> 3) Use IndexSearcher for searching with Sort; close IndexSearcher
> 4) Go to step 2
> You'll get an OutOfMemoryException after running this loop for some time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

