For a while I've been experiencing memory problems when marshalling large files with JAXB. I've monitored the usage with some crude profiling:
25,000 locations: 72MB
50,000 locations: 140MB
So I've looked for ways to reduce the footprint. One approach is to marshal the document in fragments instead of all at once, writing the enclosing elements by hand with an XMLStreamWriter.
Example:
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

// "type" is the JAXB-annotated class being marshalled (e.g. Placemark.class)
JAXBContext context = JAXBContext.newInstance(type);
Marshaller m = context.createMarshaller();
// marshal fragments, so each one is emitted without its own XML declaration
m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE);

StringWriter sw = new StringWriter();
XMLStreamWriter xmlOut = XMLOutputFactory.newFactory().createXMLStreamWriter(sw);

// write the document and root elements by hand
xmlOut.writeStartDocument("UTF-8", "1.0");
xmlOut.writeStartElement("kml");
xmlOut.writeDefaultNamespace("http://www.opengis.net/kml/2.2");
xmlOut.writeNamespace("atom", "http://www.w3.org/2005/Atom");
xmlOut.writeNamespace("kml", "http://www.opengis.net/kml/2.2");
xmlOut.writeNamespace("gx", "http://www.google.com/kml/ext/2.2");
xmlOut.writeNamespace("xal", "urn:oasis:names:tc:ciq:xsdschema:xAL:2.0");
xmlOut.writeStartElement("Document");

// iterate through your placemarks here
Placemark placemark = new Placemark();
// ...
m.marshal(placemark, xmlOut);

xmlOut.writeEndElement(); // Document
xmlOut.writeEndElement(); // kml
xmlOut.close();
This is an intermediate solution that sacrifices some elegance, but it has reduced memory usage by at least 60%:
25,000 locations: 20MB
50,000 locations: 40MB
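One thing worth noting: in the snippet above the whole document still ends up in a StringWriter, so part of the remaining footprint is simply the buffered output. As a variation (this is only a sketch, not what I measured, and the file name placemarks.kml is made up), the XMLStreamWriter can be pointed straight at a file so the finished document never has to sit in memory:

import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

// stream fragments straight to disk instead of buffering them in a StringWriter
OutputStream out = new FileOutputStream("placemarks.kml"); // made-up file name
XMLStreamWriter xmlOut = XMLOutputFactory.newFactory().createXMLStreamWriter(out, "UTF-8");

// ... write the kml/Document wrapper and marshal each placemark exactly as above ...

xmlOut.close();
out.close();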
I believe a similar approach can be used when parsing large documents.
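I haven't tried the parsing side yet, but a sketch of it could look something like the following, assuming the same JAXB-annotated Placemark class and a made-up input file name (large.kml):

import java.io.FileInputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

JAXBContext context = JAXBContext.newInstance(Placemark.class);
Unmarshaller u = context.createUnmarshaller();

XMLStreamReader xmlIn = XMLInputFactory.newFactory()
        .createXMLStreamReader(new FileInputStream("large.kml")); // made-up file name

// walk the stream and unmarshal one Placemark at a time, so only the
// current element is ever materialized as an object
while (xmlIn.hasNext()) {
    if (xmlIn.getEventType() == XMLStreamConstants.START_ELEMENT
            && "Placemark".equals(xmlIn.getLocalName())) {
        // unmarshal(reader, class) consumes exactly one element and leaves
        // the cursor just after its end tag
        Placemark placemark = u.unmarshal(xmlIn, Placemark.class).getValue();
        // process the placemark here, then let it become garbage-collectable
    } else {
        xmlIn.next();
    }
}
xmlIn.close();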
I hope someone finds this useful.