Linking API and Sources to your IDE’s JARs (Part 2)

I tend to commit my complete project to version control. This includes the sources, tests, JARs and also the nbproject directory where NetBeans stores the project configuration. By doing so, I can check out the project on a different machine and get started quickly without having to configure the project.

Sources and API docs of external libraries are not committed, as they are not required for compiling. I usually keep sources and docs in a separate place outside my project (let’s say <userdir>javaLibs...).

When I check out the project on a different machine, I can code, but I have neither the sources nor the API docs. Even worse: as I’ve committed the whole project including the configuration, I have also committed the nbproject/project.properties file which stores the paths to the sources and docs. This is not a problem if the paths are the same on all machines. But when a new contributor wants to join in, (s)he either has to use the same directory structure (and possibly the same OS) or has to overwrite the settings. Neither is very desirable.
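For illustration, such entries in nbproject/project.properties look roughly like this (the JAR name and paths are made up; if I remember correctly, NetBeans uses file.reference/javadoc.reference/source.reference keys for this):

# nbproject/project.properties (illustrative excerpt)
file.reference.commons-lang.jar=lib/commons-lang.jar
javadoc.reference.commons-lang.jar=/home/me/javaLibs/commons-lang-javadoc.zip
source.reference.commons-lang.jar=/home/me/javaLibs/commons-lang-src.zip

As soon as the javaLibs directory lives somewhere else on the second machine, these absolute paths break.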


Finished my Posters for ICIP and MICCAI

Finally finished the posters for my publications:

F. Graf, H.-P. Kriegel, M. Schubert, S. Poelsterl, A. Cavallaro
2D Image Registration in CT Images using Radial Image Descriptors
In Medical Image Computing and Computer-Assisted Intervention (MICCAI), Toronto, Canada, 2011.

and

F. Graf, H.-P. Kriegel, M. Weiler
Robust Segmentation of Relevant Regions in Low Depth of Field Images
In Proceedings of the IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, 2011.

The Java7 bug … does it really affect you?

I was really happy about Java 7 finally being delivered by Oracle – and really disappointed that there are three severe bugs that can either crash the JVM (bad, but tolerable) or silently produce wrong results (ohoh!). In Oracle’s defense: the bugs were found shortly before the release, so there was no time to fix them – really bad luck. Well, I don’t have Java 7 in production yet, so I’m fine.

But as there’s so much fuss about it, I dug a bit into the topic: the three evil bugs are those with IDs 7070134, 7044738 and 7068051.
All three are in the states Fix Available and Fix Delivered, so we just need to wait for the next Java update, right?

Wait: all three bugs “only” concern the server VM. Of course, this is bad for people who want to use Java 7 on their servers right now. But if you just work on the client side and don’t use the server VM – then you simply don’t have to care.
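If you do have to run the server VM right now, the workaround that reportedly circulated at the time (for example among Apache Lucene users, who were hit by these bugs) was to disable the affected loop optimizations via a JVM flag (myApp.jar is just a placeholder):

java -XX:-UseLoopPredicate -jar myApp.jar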


Linking API and Sources to your IDE’s JARs

For productive programming, I think it is absolutely crucial to have both the API documentation and the source code of the corresponding libraries available and integrated in the IDE. Integrating the API and sources is pretty easy in NetBeans (as well as in other IDEs):

  1. Right-click the project > Properties > Libraries, select the JAR for which you want to link sources and API, and hit the Edit button on the right.
  2. Select a folder, Zip file or JAR file for the API and the sources, hit OK, and you’re done.

Whenever you’re using a class from this library, you can now step into this class (by Ctrl-clicking, for example) or quickly jump to the API by pressing Alt+F1 when the cursor is at the corresponding class/method.

If you are annoyed by switching between IDE and browser, or if you just keep forgetting the Alt+F1 key combo that opens the browser with the correct API page, just enable the NetBeans inline JavaDoc viewer by selecting:
Window > Other > JavaDoc
This brings up a new panel which shows the JavaDoc comment of the class/method currently selected by the cursor. You don’t even need to press any key, as the view is updated automatically.

If it doesn’t work, it is usually one of the following two errors:

  1. JavaDoc doesn’t work: when I press Alt+F1, the browser doesn’t open and the status bar at the bottom of the NetBeans window shows “Cannot perform Show Javadoc here”. Check the path then. It should end in a directory that also contains index.html, package-list, allclasses-frame.html etc.
  2. The source is not displayed – even though the path to the Jar/Zip is correct! In that case, the Zip/Jar often contains all the source code in src/mypackage/foo.java. NetBeans expects only packages in the Zip, so the content list should look like mypackage/foo.java. Simply build another src.zip from the contents of “src/” (in this case) and you’re done – see the command sketch after this list.
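Rebuilding the archive for the second case is a one-liner with the jar tool (a sketch, assuming the sources really live in a top-level src/ directory):

jar cf src.zip -C src .

The -C option makes jar change into src/ before collecting the files, so the package directories end up at the top level of the archive, which is what NetBeans expects.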

How to create Memory Leaks by using Inner Classes

The most recent Java Specialists Newsletter finally convinced me to start this post that I had had in mind for quite some time.

One of the really huge advantages of Java is that you almost never have to care about cleaning up your memory, as the Garbage Collector usually does this for you as soon as objects are no longer referenced. Usually this works so well that you really don’t have to care at all! But once in a while you may observe something like a memory leak. Some people then call the Garbage Collector explicitly – which is usually a bad idea and often doesn’t help either, so that the “leak” remains. The better solution in this case is profiling, so that you can see why some objects are not cleaned up.

A nice source of memory leaks can be the use of anonymous inner classes. Assume the following class where you want to compute something and return a Result object that implements an interface:

interface Result {}

class Outer {
    int[] data;
    public Outer(int s) { data = new int[s]; }
    // anonymous inner class: every instance keeps an implicit
    // reference to the enclosing Outer instance
    Result getResult() { return new Result(){}; }
}

So if you call new Outer(1).getResult(), you will still have an instance of Outer in memory, even though you did not keep an explicit reference to it. As explained in the Java Specialists Newsletter, each instance of an anonymous inner class always keeps a reference to its outer class! (You can even make this reference visible with a bit of reflection – see the snippet after the list below.) This is not a big deal as long as

  • you don’t keep a lot of data in the Outer instance, or
  • the lifetime of the Result object is short, or
  • you don’t create a lot of Results anyway.
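The implicit reference shows up as a synthetic field of the generated anonymous class. A minimal sketch to make it visible via reflection (the field name this$0 is a javac implementation detail, so the exact output may vary):

Result result = new Outer(1).getResult();
for (java.lang.reflect.Field f : result.getClass().getDeclaredFields()) {
    System.out.println(f); // prints something like: final Outer Outer$1.this$0
}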

Let’s look at an example. If you execute

List<Result> results = new ArrayList<Result>(); // keeps every Result (and thus every Outer) reachable
int i = 0;
while (true) {
    results.add(new Outer(0).getResult());
    System.out.println(i++);
}

with the above classes and default heap settings (no restrictive -Xmx), this will run for quite some time, because per iteration you are only holding two object instances (Outer and Result), one field (the empty data array) and the implicit reference from Result to Outer – a total of 48 bytes on my Win7 64-bit machine (according to this measurement).

Now change the parameter in the constructor of Outer from 0 to 100000 and execute the code again. In my case I get an OutOfMemoryError after a bit more than 2000 created instances, as each iteration now suddenly consumes 400,048 bytes (48 bytes as before + 100,000 * 4 bytes for the int array) – even though we only keep explicit references to the Result objects!

So the next time you create an inner class, have a brief look at the outer class as well and think about memory consumption and lifetime.
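If the inner class does not actually need access to the enclosing instance, one common fix (a sketch, not from the newsletter) is a static nested class, which carries no implicit reference, so each Outer can be collected as soon as getResult() returns:

class Outer {
    int[] data;
    public Outer(int s) { data = new int[s]; }

    // static nested class: no hidden this$0 field, no leak
    static class StaticResult implements Result {}

    Result getResult() { return new StaticResult(); }
}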

Maximum Gain Round Trips with Cost Constraints

The idea is the following: finding the shortest/fastest path from A to B is a rather well-explored problem. But suppose you start a hike knowing that you want to spend 4 hours and then come back to the starting point. Then the problem suddenly becomes quite complex (NP-hard, to be honest, if you do not add any constraints).

We propose a solution to perform this kind of search a bit more efficiently – but don’t expect linear search time 😉 And – in contrast to quite a bit of other research – we are operating on REAL data obtained from OpenStreetMap.

Abstract:

Searching for optimal ways in a network is an important task in multiple application areas such as social networks, co-citation graphs or road networks. In the majority of applications, each edge in a network is associated with a certain cost and an optimal way minimizes the cost while fulfilling a certain property, e.g. connecting a start and a destination node. In this paper, we want to extend pure cost networks to so-called cost-gain networks. In this type of network, each edge is additionally associated with a certain gain. Thus, a way having a certain cost additionally provides a certain gain. In the following, we will discuss the problem of finding ways providing maximal gain while costing less than a certain budget. An application for this type of problem is the round trip problem of a traveler: Given a certain amount of time, which is the best round trip traversing the most scenic landscape or visiting the most important sights? In the following, we distinguish two cases of the problem. The first does not control any redundant edges and the second allows a more sophisticated handling of edges occurring more than once. To answer the maximum round trip queries on a given graph data set, we propose unidirectional and bidirectional search algorithms. Both types of algorithms are tested for the use case named above on real world spatial networks.

Documents

The documents are available at our project site.

Bibtex

@TECHREPORT{GraKriSchu11,
  AUTHOR      = {F. Graf and H.-P. Kriegel and M. Schubert},
  TITLE       = {Maximum Gain Round Trips with Cost Constraints},
  INSTITUTION = {Institute for Informatics, Ludwig-Maximilians-University, Munich, Germany},
  YEAR        = {2011},
  LINK        = {http://arxiv.org/abs/1105.0830v1}
}

MARiO: Multi Attribute Routing in Open Street Map

Yeah, I got a new publication accepted at the Symposium on Spatial and Temporal Databases (SSTD) 2011. It deals with OpenStreetMap data (using the JXMapKit and JXMapViewer).

MARiO: Multi Attribute Routing in Open Street Map

Franz Graf, Hans-Peter Kriegel, Matthias Schubert, Matthias Renz

Published at Symposium on Spatial and Temporal Databases (SSTD) 2011
Conference Date: August 24th – 26th, 2011
Conference Location: Minneapolis, MN, USA.

Abstract:

In recent years, the Open Street Map (OSM) project collected a large repository of spatial network data containing a rich variety of information about traffic lights, road types, points of interest etc. Formally, this network can be described as a multi-attribute graph, i.e. a graph considering multiple attributes when describing the traversal of an edge. In this demo, we present our framework for Multi Attribute Routing in Open Street Map (MARiO). MARiO includes methods for preprocessing OSM data by deriving attribute information and integrating additional data from external sources. There are several routing algorithms already available and additional methods can be easily added by using a plugin mechanism. Since routing in a multi-attribute environment often results in large sets of potentially interesting routes, our graphical frontend allows various views to interactively explore query results.


Bibtex

@INPROCEEDINGS{GraKriRenSch11,
  AUTHOR      = {F. Graf and H.-P. Kriegel and M. Renz and M. Schubert},
  TITLE       = {{MARiO}: Multi Attribute Routing in Open Street Map},
  BOOKTITLE   = {Proceedings of the 12th International Symposium on Spatial and Temporal Databases (SSTD), Minneapolis, MN, USA},
  YEAR        = {2011}
}

Robust Segmentation of Relevant Regions in Low Depth of Field Images

Great, we got accepted (as a poster) at ICIP 2011 with the paper “Robust Segmentation of Relevant Regions in Low Depth of Field Images”:

Low depth of field (DOF) is an important technique to emphasize the object of interest (OOI) within an image. When viewing a low depth of field image, the viewer implicitly segments the image into regions of interest and non-regions of interest, which has a major impact on the perception of the image. Thus, robust algorithms for the detection of the OOI in low DOF images provide valuable information for subsequent image processing and image retrieval. In this paper we propose a robust and parameterless algorithm for the fully automatic segmentation of low depth of field images. We compare our method with three similar methods and show the superior robustness even though our algorithm does not require any parameters to be set by hand. The experiments are conducted on a real world data set with high and low depth of field images. (Abstract from the paper)

The work is the result of a collaboration with Michael Weiler: we extended his diploma thesis and produced an improved segmentation algorithm for low depth of field images. Compared to the three competing algorithms, ours is a bit slower, but at least it works. The other algorithms turned out to be extremely unstable and/or sensitive to parameters.

On the project site you can find

  • an online demo,
  • the test images,
  • the masks, and
  • the NetBeans project including the full Java source code of our algorithm and the reimplementations of the comparison partners (of course we had to re-implement, as we didn’t even get binaries – as usual).

So if you plan to do some image segmentation, just go there, download the stuff and cite our work 😉

Fully automatic detection of the vertebrae in 2D CT images – the Talk

Yay – I finally gave the talk for my publication “Fully automatic detection of the vertebrae in 2D CT images”, paper 7962-11 at SPIE Medical Imaging 2011, Conference 7962 Image Processing (see index), in front of about 200 people.

Everything went fine. Just some nice questions right after the talk and some hints afterwards. Hey – some guys even remembered the talk 2 days later! 🙂

Thanks, SPIE Medical Imaging.

Bibliography Extension for MediaWiki

MediaWiki > Skins > Extension … a use case story

In mid 2009 we were asked to redesign our homepage to fit the corporate design of the LMU. During this time, I introduced and established MediaWiki as the content management system for our website, as it provided the needed flexibility, freedom and usability for our group.

One major issue was the list of publications, which is very important for every researcher. In the old system, everyone maintained his own publication list manually. In parallel, all publications were maintained in a central BibTeX file stored in CVS. A nice example of ‘redundancy’ – hey, we’re a database group, there shouldn’t be redundancies. So in my spare time I wrote an extension which parses a bibliography file into a more convenient format. The result of such an automatically generated list can be seen, for example, on my publications list. Well, finally I just put it online. Maybe you are a researcher, or are maintaining the website of some researcher(s) who already have a bib file of their own publications, and want to put your publication list online.

What this Extension can do / Features

(Screenshot: the extension in action)

This extension allows processing a central bibliography file in BibTeX format in order to create personalized publication pages for authors, projects or keywords. The BibTeX data can be stored in a file in the filesystem or in a special wiki article.

Implemented Features

  • The bibliography can be stored in a filesystem file or in a separate article of the wiki.
  • Multiple authors can share a single BibTeX source and have individual publication lists on their personal pages (= articles).
  • Filtering can be done on all attributes of the BibTeX file/article.
  • Filters can be combined: for example, all papers with year=2010 AND author=xy AND keyword=xyz.
  • Supports optional BibTeX fields: “pages = 100–110” may be present in one entry but not in another. If the field is present, it should be formatted as “p. 100-110”; if it is absent, the “p. ” fragment should not appear either (see the example after this list).
  • Different bibliography types (article, book, inproceedings, …) use different styles according to their mandatory and optional fields.
  • @unpublished entries are ignored.
  • Supports @String replacement as it can be done in BibTeX.
  • Automatically adds a separator between entries of different years.
  • Provides additional links for each BibTeX entry (like PDFs or links to articles with further information).
  • Author names can be linked automatically to predefined wiki articles.
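To illustrate the optional-field handling, consider these two hypothetical entries (made up, not from our bibliography): the first would be rendered with “p. 100-110”, the second without any “p. ” fragment.

@ARTICLE{Foo10,
  AUTHOR = {A. Author},
  TITLE  = {Some Title},
  YEAR   = {2010},
  PAGES  = {100--110}
}

@ARTICLE{Bar11,
  AUTHOR = {B. Author},
  TITLE  = {Another Title},
  YEAR   = {2011}
}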

Download

Just visit my How-To page to see what the extension can do.
You can download the extension via the link in the download section.

And maybe drop me a line if you find the extension useful.