Monday, August 25, 2008

Benchmarking javac vs. ecj on Array Access

One of the projects I'm currently working on is a fast linear algebra matrix library for Java. I know that some alternatives already exist, but I usually found that they were either no longer actively maintained or much slower than what, for example, Matlab would give you.

So in principle, all I wanted was a matrix library which directly interfaces with a fast BLAS or LAPACK implementation, for example ATLAS, but hides all of the Fortran greatness behind a nice object-oriented wrapper. It turns out that something like that does not seem to exist, so we started to work on our own little library called jBLAS, which is going to be released soon.

Anyway, while working on that library, I got a bit worried about the performance of the stuff I'm doing in Java. These are mainly element-wise operations like multiplying all elements of a matrix with the corresponding elements of another matrix.
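As an illustration, such an element-wise operation boils down to a loop like the following (a minimal sketch; the class and method names here are made up and not the actual jBLAS API):

```java
// Illustrative sketch of an element-wise product; names are made up,
// not the actual jBLAS API.
public class ElementwiseDemo {
    // Multiply each element of a with the corresponding element of b.
    public static double[] mul(double[] a, double[] b) {
        if (a.length != b.length)
            throw new IllegalArgumentException("lengths differ");
        double[] result = new double[a.length];
        for (int i = 0; i < a.length; i++)
            result[i] = a[i] * b[i];
        return result;
    }

    public static void main(String[] args) {
        double[] r = mul(new double[] {1, 2, 3}, new double[] {4, 5, 6});
        System.out.println(java.util.Arrays.toString(r)); // prints [4.0, 10.0, 18.0]
    }
}
```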

Since we're interfacing to C (actually Fortran) code a lot, all of the matrix data is stored in direct buffers, in particular java.nio.DoubleBuffer, and I wondered how their put and get access methods compared to array access and, of course, to C. So I wrote a little program in Eclipse which did just that and compared the outcome with what I got in C, and to my surprise, Java was very much on par, and the performance difference between accessing arrays and direct buffers was very small.
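For readers unfamiliar with direct buffers: such a buffer is created by allocating a direct ByteBuffer and viewing it as doubles, roughly like this (a minimal sketch; the class name is made up):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;

public class DirectBufferDemo {
    // Allocate a direct buffer holding n doubles in native byte order, so
    // the underlying memory can be passed to C/Fortran code without copying.
    public static DoubleBuffer createDirect(int n) {
        return ByteBuffer.allocateDirect(n * Double.SIZE / Byte.SIZE)
                .order(ByteOrder.nativeOrder())
                .asDoubleBuffer();
    }

    public static void main(String[] args) {
        DoubleBuffer buf = createDirect(3);
        buf.put(0, 1.5);                // absolute put: no position change
        System.out.println(buf.get(0)); // prints 1.5
    }
}
```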

But when I tried to repeat the results at home, I got very different numbers: direct buffers suddenly took five times longer than arrays. At first I suspected that the different processor maker was the issue, but after some more tweaking I found out, again to my great surprise, that the issue was the compiler.

It turns out that the Eclipse compiler ecj is much better at compiling my benchmark than Sun's own javac. To see this for yourself, here is my benchmark:
import java.nio.*;

class TicToc {
    private long savedTime;

    public void tic(String msg) {
        System.out.print(msg);
        System.out.flush();

        saveTime();
    }

    public void tic(String fmt, Object... args) {
        System.out.printf(fmt, args);

        saveTime();
    }

    private void saveTime() {
        savedTime = System.currentTimeMillis();
    }

    public void toc() {
        long elapsedTime = System.currentTimeMillis() - savedTime;

        System.out.printf(" (%.3fs)\n", elapsedTime / 1000.0);
    }
}


public class Main {
    private static void copyArray(int size, int iters) {
        double[] source = new double[size];
        double[] target = new double[size];
        for (int i = 0; i < iters; i++)
            for (int j = 0; j < size; j++)
                target[j] = source[j];
    }

    private static DoubleBuffer createBuffer(int size, ByteOrder bo) {
        return ByteBuffer.allocateDirect(Double.SIZE / Byte.SIZE * size)
                .order(bo).asDoubleBuffer();
    }

    private static void copyDoubleBuffer(int size, int iters, ByteOrder bo) {
        DoubleBuffer source = createBuffer(size, bo);
        DoubleBuffer target = createBuffer(size, bo);
        for (int i = 0; i < iters; i++)
            for (int j = 0; j < size; j++)
                target.put(j, source.get(j));
    }

    public static void main(String[] args) {
        TicToc t = new TicToc();

        final int SIZE = 1000;
        final int ITERS = 1000000;

        t.tic("copying array of size %d %d times...", SIZE, ITERS);
        copyArray(SIZE, ITERS);
        t.toc();

        t.tic("copying DoubleBuffer of size %d %d times... (LITTLE_ENDIAN)", SIZE, ITERS);
        copyDoubleBuffer(SIZE, ITERS, ByteOrder.LITTLE_ENDIAN);
        t.toc();
    }
}

So let's compare the run-times of this program compiled with the latest JDK 6 update 7 and with the stand-alone Eclipse compiler 3.4.

First, I compile the source with
javac Main.java
and run it with java -cp . Main:
copying array of size 1000 1000000 times... (2.978s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (4.291s)
Now the same with the Eclipse compiler: I compile it with
java -jar ..path_to_jar../ecj-3.4.jar -1.6 Main.java
and get the following results:
copying array of size 1000 1000000 times... (0.547s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (0.633s)
Note that the two class files differ only in their bytecode, yet the Eclipse compiler manages to produce code which is about six times faster, and in which the direct buffer is only slightly slower than the array.

I'll have to look into this quite some more, but if this is true, this is pretty amazing!

6 comments:

dleskov said...

The supposedly more optimized way to copy an array is via System.arraycopy:

private static void sysCopyArray(int size, int iters) {
    double[] source = new double[size];
    double[] target = new double[size];
    for (int i = 0; i < iters; i++) {
        System.arraycopy(source, 0, target, 0, size);
    }
}

On my system sysCopyArray() is about 2 times faster than copyArray() on HotSpot 1.6.0_03:

copying array of size 1000 1000000 times... (3.031s)
syscopying array of size 1000 1000000 times... (1.515s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (17.688s)

However, on the Excelsior JET JVM there is almost no difference:

copying array of size 1000 1000000 times... (1.219s)
syscopying array of size 1000 1000000 times... (1.109s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (15.422s)

I am most interested in finding out what difference the Eclipse compiler makes here. I do not have Eclipse, though, and do not want to install and figure it out just for this small experiment. Can you email me the class files please?

Mikio Braun said...

Hi dleskov,

actually, you don't have to install Eclipse; you can just download the stand-alone compiler using the link in the post, which is simply a jar file.

I could email the class file, but actually, I don't know your email address. I've put the class file here

Anyway, with sysCopyArray I get the following numbers:

javac
copying array of size 1000 1000000 times... (3.072s)
syscopying array of size 1000 1000000 times using... (0.575s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (4.311s)
ecj: running
copying array of size 1000 1000000 times... (0.551s)
syscopying array of size 1000 1000000 times using... (0.586s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (0.657s)

So basically, ecj is achieving the performance of System.arraycopy().

Actually, I should also add that I'm doing these tests on Linux (32-bit) with an Intel Core 2 T7200 at 2 GHz. I redid some of these experiments on AMD processors and under Windows and again got quite different numbers...

dleskov said...

Ah, I thought the email address of a comment author is exposed to the blog owner. Sorry.

I ran your Main.class and the result on HotSpot Client VM is the same as with the one produced by javac.

However, on HotSpot Server VM the result is:

copying array of size 1000 1000000 times... (1.860s)
syscopying array of size 1000 1000000 times using... (1.313s)
copying DoubleBuffer of size 1000 1000000 times... (LITTLE_ENDIAN) (2.312s)

I have compared the decompiled classes and I see no major difference. Are you sure you run both classes on the same JVM?

Mikio Braun said...

Yes, I'm pretty sure that I'm using the same VM. I'm running the program just with "java -cp . Main" both times.

You're right about Client VM vs. Server VM. It seems that the default VM is the Server VM on my installation. When I run the benchmarks with "java -client ..." they both have the same runtimes.

I know that the differences between the two bytecodes are pretty marginal. So somehow, the ecj compiler seems to produce bytecode which leads to better optimization when compiled to native code than the bytecode produced by javac.

I'm not quite sure what to make of this either. It might be that we're missing something, or that the ecj compiler does not produce equivalent code, but currently, I don't see why.

foobarz said...

I just independently noticed exactly the same thing in one of my own benchmarks. I was comparing Java to natively compiled code at accessing arrays of different primitive types. The test involved summing up the values contained in the arrays.
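A stripped-down sketch of that kind of summation test looks like this (the sizes and iteration counts here are illustrative, not the exact ones I used):

```java
// Simplified sketch of a primitive-array summation benchmark;
// sizes and iteration counts are arbitrary illustration values.
public class SumBench {
    // Sum up all values contained in the array.
    public static double sum(double[] a) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++)
            s += a[i];
        return s;
    }

    public static void main(String[] args) {
        double[] data = new double[1000];
        java.util.Arrays.fill(data, 1.0);

        long start = System.currentTimeMillis();
        double total = 0.0;
        for (int iter = 0; iter < 100000; iter++)
            total += sum(data);
        long elapsed = System.currentTimeMillis() - start;

        // Print the total so the JIT cannot eliminate the loop entirely.
        System.out.printf("total=%.1f (%.3fs)%n", total, elapsed / 1000.0);
    }
}
```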

The ecj-compiled code ran much faster than Sun's 1.6 javac output, and I did use the exact same virtual machine to run them. In fact, the ecj code was comparable to the native g++ -O2 code. Have you made any progress on finding out why this is?

I'm going to try compiling some less artificial code and see if I see the same performance boost.

Also, any progress on jBLAS? I have had the same problems with high performance java matrix libraries. I'm currently using Matrix Toolkits for Java (MTJ) as a lesser evil...

Mikio Braun said...

Hello foobarz,

no, I haven't been able to find out why there is this difference in performance. I guess there is some mismatch between the bytecode that javac produces and what the HotSpot guys optimize for...

I also tried compiling jblas with ecj, but then the differences vanished, so I think that my benchmark from the post was rather artificial.

Concerning jblas, version 0.1 has been released; see http://jblas.org. I also posted some benchmarking numbers at http://mikiobraun.blogspot.com/2009/04/some-benchmark-numbers-for-jblas.html which show that MTJ is more or less equal to jblas except for matrix multiplication.

Currently, I'm working on compiling the native libraries for different kinds of platforms and building a "fat jar" so that you can use jblas without going through the admittedly complex compilation process.