Monday, February 8, 2010

Troubleshooting OutOfMemory

Most java.lang.OutOfMemoryErrors are the result of a program simply creating and using more objects than can fit in the maximum allowable heap space. The most common resolution to this type of error is:

1. Increasing the maximum heap size using the appropriate JVM command line option (e.g. -Xmx512m; older JVMs also accept the short form -mx512m).
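Once you have raised the limit, it is worth confirming at runtime that the flag actually took effect. A minimal sketch (the class name MaxHeapCheck is just illustrative) using the standard Runtime.maxMemory() call:

```java
// MaxHeapCheck.java -- prints the maximum heap this JVM will grow to.
// Run with, e.g.:  java -Xmx512m MaxHeapCheck
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        // Report the ceiling in megabytes for easy comparison with -Xmx
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

If the printed value does not roughly match your -Xmx setting, the option is not reaching the JVM (a common problem when it is set in the wrong startup script).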

Beyond this, you will need to take steps to learn more about your JVM's heap usage. The easiest and best way to do this is with the verbose GC option (usually specified as -verbosegc, but sometimes as -verbose:gc or -Xverbose:gc). This setting will usually output a single line for each major and minor garbage collection that takes place. The format is specific to each JVM, but generally each line shows the heap in use, the amount freed, and the time spent in garbage collection.

Some Sun JVM users have received the OutOfMemoryError as the result of permanent generation limitations. The Java heap is composed of several segments, the permanent generation being one of them. Therefore, if the java.lang.OutOfMemoryError is issued when the Java heap has not been completely used (as shown by the verbose GC output), the most common resolution will be:

2. Increasing the size of the permanent generation space using the appropriate JVM command option.

e.g. BEA Solaris platform recommendation:
If you have problems with OutOfMemory errors and the JVM crashing with
JDK 1.3, try setting: -XX:MaxPermSize=128m.

There is currently an open bug on Sun's bug parade that describes this problem. See,
http://developer.java.sun.com/developer/bugParade/bugs/4390238.html
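On JVMs from Java 5 onward, you can also inspect the non-heap pools programmatically via the standard java.lang.management API to see how close the permanent generation is to its limit (the pool is named "Perm Gen" on older Sun JVMs and "Metaspace" on modern ones; the class name NonHeapPools is illustrative). This API is newer than the JDK 1.3 era discussed above, so it is only an option on more recent runtimes:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

// Sketch: list the JVM's non-heap memory pools and their current usage,
// so you can see whether the permanent generation (or Metaspace) is
// approaching its configured maximum.
public class NonHeapPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.println(pool.getName() + ": used "
                        + pool.getUsage().getUsed() + " of max "
                        + pool.getUsage().getMax() + " bytes");
            }
        }
    }
}
```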

When the above conditions and remedies do not help, the problem is often thought to be a memory leak. Real leaks are actually rare, because Java applications are not responsible for freeing memory; the JVM is. Still, when an application allocates Java objects but never releases (de-references) them, the condition closely resembles a traditional memory leak as commonly seen in C and C++ applications. This type of memory issue generally appears in the verbose GC output as a slow and steady loss of free heap space. The JVM's full (major) garbage collection task runs more and more frequently trying to reclaim heap space. Eventually it cannot keep up with demand, and the java.lang.OutOfMemoryError message is output. At that point the JVM is unable to execute Java code, and any subsequent results are unpredictable.
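The classic form of this retention problem is a collection that is only ever added to. A minimal sketch (the names RequestCache and handle are illustrative, not from any real application):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an unintentional object-retention "leak": every object added
// to this static collection stays reachable forever, so the garbage
// collector can never reclaim it and free heap shrinks on every request.
public class RequestCache {
    private static final List<byte[]> cache = new ArrayList<byte[]>();

    public static void handle(int requestSize) {
        cache.add(new byte[requestSize]); // retained, never removed
    }

    public static int retainedCount() {
        return cache.size();
    }

    // The fix is to release references once they are no longer needed:
    public static void clear() {
        cache.clear(); // entries become unreachable and collectible
    }
}
```

A heap profiler (see the tools below) is how you find which class is accumulating; the remedy is always the same: drop the reference when the object's useful life is over.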

To diagnose and resolve this type of problem, you will likely need to obtain a Java Heap Profiling tool. The following procedure should help:

3a. First, make sure that you have conducted your tests without JVM JIT optimization. This can be done by adding "-Djava.compiler=none" to your JVM startup command. This test should be done to avoid any JVM bugs that may exist in the optimization of your Java code. This step is also required for use of the available JVM debugging tools, and it will help establish that the problem you are trying to locate is not a JVM bug but is instead created by Java code.
3b. Use the JProbe (http://www.jprobe.com or http://www.sitraka.com/software/jprobe) utility to inspect your JVM heap in order to determine which object class instances are being accumulated.

3c. Another similar product is OptimizeIt (http://www.optimizeit.com or http://www.borland.com/optimizeit)

3d. The JVM itself has the ability to dump its heap contents upon process termination. Therefore, you may find it helpful to supply the following JVM option:

-Xrunhprof:heap=dump,format=a (Use java -Xrunhprof:help for details)

and invoke the following code within your JVM when you wish to inspect the current heap contents:

try {
    System.gc();        // Request a full garbage collection
    Thread.sleep(5000); // Wait for the collection to complete
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
System.exit(0);         // Terminate the JVM process, triggering the heap dump

The resulting java.hprof.txt file can be inspected to determine whether any application changes can or should be made to reduce the number of active objects. More information on this Java API can be found at:

http://java.sun.com/j2se/1.3/docs/guide/jvmpi/jvmpi.html#hprof-heap

There is still one more possibility or concept to explore. The amount of heap being used is often driven by the multi-threaded nature of a server JVM (such as WebLogic). If you allocate 100 threads for handling server requests, then it is possible for all 100 to be running in parallel. Under load, this configuration can use approximately 10 times the amount of Java Heap Space as one with only 10 threads. Therefore, if your verbose gc output shows a rapid or sudden climb in heap usage until none is available (free), you may simply have too many simultaneous activities for the amount of available heap. In this case, the resolution will be to:

4. Make your Java heap as large as possible (for the physical machine configuration) and then reduce server thread counts until your server application can stay within its limits.
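The arithmetic behind this trade-off is simple enough to sketch. Assuming, for illustration, roughly 5 MB of live objects per in-flight request (a made-up figure; measure your own application), peak heap demand scales linearly with the thread count:

```java
// Back-of-the-envelope sizing: peak heap demand grows with the number of
// threads that can run in parallel. The bytes-per-request figure below is
// an illustrative assumption, not a measured value.
public class ThreadHeapEstimate {
    static long peakHeapBytes(int threads, long bytesPerRequest) {
        return (long) threads * bytesPerRequest;
    }

    public static void main(String[] args) {
        long perRequest = 5L * 1024 * 1024; // assume ~5 MB per in-flight request
        System.out.println("10 threads:  "
                + peakHeapBytes(10, perRequest) / (1024 * 1024) + " MB");
        System.out.println("100 threads: "
                + peakHeapBytes(100, perRequest) / (1024 * 1024) + " MB");
    }
}
```

With these assumptions, 100 worker threads need ten times the headroom of 10 threads, which is exactly the effect described above.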

The above suggestions should adequately address most Java Heap related memory problems. However, it is still possible to encounter system limitations with memory and/or JVM memory leaks.

Most UNIX operating systems allow limits to be placed on various process resources. Such limits may prevent the creation of Java threads and other objects that need to allocate native process components, such as stacks, which are part of the total process size.

5. Carefully inspect the reported error message to make sure that an operating system process limitation is not at play.

JVM memory leaks are very rare but still possible. Therefore:

6. If your process size continues to grow until system resources are exhausted or limits are exceeded, you may wish to use native O/S tools to determine which process segments are responsible. Continuous growth in the JVM's native components should be reported to the JVM vendor. Remember, once the JVM heap reaches its maximum size (-Xmx), the process size will not grow as a result of Java heap allocation. Therefore, escalating process size is generally a result of native code (e.g. the JVM itself or JNI libraries).
