When presenting
Ateji PX to an audience with an HPC or simulation background, I often hear definitive opinions of the kind "We'll never consider Java, it's too slow".
This appears to be more a cultural bias than an objective statement referring to actual benchmarks. To better understand the state of Java for high-performance computing, let us browse through three recent publications:
Java is used happily for huge HPC projects
At the European Space Agency, Java has been chosen for large CPU and data handling needs on the order of 10^21 flops and 10^15 bytes. That's a lot of zeroes; there is no doubt we're talking about high-performance computing.
"We are happy with the decision made and haven’t (yet) faced any major drawback due to the choice of language" [1].
"HPC developers and users usually want to use Java in their projects" [2]. Indeed, Java has many advantages over traditional HPC languages:
- faster development
- higher code reliability
- portability
- adaptive run-time optimization
There is also a lesser known but very interesting side-effect. Since the language is cleaner, it is easier for developers to concentrate on performance optimization (rather than, say, chasing memory-allocation bugs):
"In Mare Nostrum the Java version runs about four times faster than the C version [...]. Obviously, an optimisation of the C code should make it much more efficient, to at least the level of the Java code. However, this shows how the same developer did a quicker and better job in Java (a language that, unlike C, he was unfamiliar with)" [1].
Java code is no longer slow
Recent benchmarks show that Java is not any slower than C++ [2]. That was not the case ten years ago, and the disappointing results of the JavaGrande initiative for promoting Java for HPC gave it a lasting bad reputation. But recent JVMs (Java Virtual Machines) do an impressive job of aggressive runtime optimization, adapting to the specific hardware they run on and dynamically optimizing critical code fragments.
However, performance varies greatly depending on the JVM used (version and vendor) and the kind of computation performed [1]. One lesson to remember is that you should always test and benchmark your application with several recent JVMs from different vendors, as well as with different command-line arguments (compare -client and -server).
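When benchmarking a JVM, remember that the JIT compiler only optimizes code after it becomes hot, so a naive one-shot timing mostly measures interpretation and compilation overhead. Here is a minimal sketch of a warmed-up micro-benchmark (the harmonic-sum kernel is just an illustrative stand-in workload):

```java
import java.util.Locale;

// Minimal sketch of JVM benchmarking hygiene: warm the JIT up before timing.
public class WarmupBench {
    static double kernel(int n) {
        double s = 0;
        for (int i = 1; i <= n; i++) s += 1.0 / i;  // harmonic sum as a stand-in workload
        return s;
    }

    public static void main(String[] args) {
        // Warm-up: run the kernel a few times so the JIT compiles the hot loop.
        for (int i = 0; i < 5; i++) kernel(1_000_000);

        // Timed run, after warm-up.
        long t0 = System.nanoTime();
        double result = kernel(1_000_000);
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("sum=" + String.format(Locale.ROOT, "%.4f", result)
                + " in " + elapsedMs + " ms");
    }
}
```

Running the same class under several JVMs (and with -client versus -server where the launcher supports them) is then a fair comparison, since each JVM gets the same chance to optimize.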
Obviously, there are some caveats
There are still performance penalties in Java communications: pure Java libraries are not yet well suited for HPC ("the speed and scalability of the Java ProActive implementation are, as of today, still lower than MPI" [3]), and wrapping native libraries with JNI is slow.
The situation has been improving recently with projects such as
Java Fast Sockets,
Fast-MPJ and
MPJ Express, which aim at providing fast message passing without JNI overhead.
Java HPC users also lack libraries with the quality and sophistication of those available in C or Fortran (but this seems to be improving). Obviously, this is a matter of having a large enough community, and the critical mass seems to have been reached.
The decision process
I cannot resist quoting in full the account of the decision process that took place at ESA:
FORTRAN was somewhat favoured by the scientific community but was quickly discarded; the type of system to develop would have been unmaintainable, and even not feasible in some cases. For this purpose the choice of an object-oriented approach was deemed advisable. The choice was narrowed to C++ and Java.
The C++ versus Java debate lasted longer. “Orthodox” thinking stated that C++ should be used for High Performance Computing for performance reasons. “Heterodox” thinking suggested that the disadvantage of Java in performance was outweighed by faster development and higher code reliability.
However, when JIT Java VMs were released we did some benchmarks to compare C++ vs Java performance (linear algebra, FFTs, etc.). The results showed that the Java performance had become quite reasonable, even comparable to C++ code (and likely to improve!). Additionally, Java offered 100% portability and I/O was likely to be the main limiting factor rather than raw computation performance.
Java was finally chosen as the development language for DPAC. Since then hundreds of thousands of code lines have been written for the reduction system. We are happy with the decision made and haven’t (yet) faced any major drawback due to the choice of language.
And now comes Ateji PX
So there's a growing consensus about using Java for HPC applications, in industries such as space, particle physics, bioinformatics and finance.
However, Java does not make parallel programming easy. It relies on libraries providing explicit threads (a concept very ill-suited as a programming abstraction, see
"The Problem with Threads") or on Runnable-like interfaces for distributed programming, leaving all the splitting and interfacing work to the programmer.
What we've tried to provide with Ateji PX is a simple and easy way to do parallel high-performance computing in Java. Basically, the only thing you need to learn is the parallel bar '||'. Here is the parallel 'Hello World' in Ateji PX:
[
|| System.out.println("Hello");
|| System.out.println("World");
]
This code contains two parallel branches, introduced by the '||' symbol. It will print either
Hello
World
or
World
Hello
depending on which branch gets scheduled first.
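For comparison, here is roughly what the same two branches look like in plain Java with explicit threads. This is only an illustrative sketch of the kind of boilerplate the '||' notation spares you, not the code the Ateji PX compiler actually generates:

```java
// Plain-Java equivalent of two parallel branches, spelled out with explicit threads.
public class ParallelHello {
    public static void main(String[] args) throws InterruptedException {
        Thread branch1 = new Thread(() -> System.out.println("Hello"));
        Thread branch2 = new Thread(() -> System.out.println("World"));

        // Start both branches; their output order depends on the scheduler.
        branch1.start();
        branch2.start();

        // Wait for both branches to terminate, like the closing ']' does.
        branch1.join();
        branch2.join();
    }
}
```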
Data parallelism is obtained by quantifying parallel branches. For instance, the code below increments all n elements of an array in parallel:
[
|| (int i: n) array[i]++;
]
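In standard Java, a comparable effect can be obtained with a parallel stream over the index range. Again, this is just a plain-Java sketch for comparison with the quantified branch above, not what the Ateji PX compiler emits:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

// Plain-Java sketch of data parallelism: one logical branch per array index.
public class ParallelIncrement {
    public static void main(String[] args) {
        int[] array = new int[8];

        // Increment every element in parallel.
        IntStream.range(0, array.length).parallel().forEach(i -> array[i]++);

        System.out.println(Arrays.toString(array));
    }
}
```

Each index touches a distinct element, so the branches are independent and no synchronization is needed.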
Communication is also part of the language, and is mapped by the compiler to any available communication library, such as sockets or MPI, or to shared memory when possible. Here are two communicating branches: the first one sends a value on a channel, the second one reads the value and prints it:
Chan c = new Chan();
[
|| c ! 123;
|| c ? int value; System.out.println(value);
]
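To see what this rendezvous-style exchange corresponds to in plain Java, here is a sketch using java.util.concurrent.SynchronousQueue, whose blocking put/take handoff is a reasonable shared-memory analogue of the '!' and '?' operations (this is an illustrative comparison, not the compiler's actual mapping):

```java
import java.util.concurrent.SynchronousQueue;

// Plain-Java sketch of two branches communicating over a synchronous channel.
public class ChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Integer> c = new SynchronousQueue<>();

        // First branch: send the value 123 on the channel (blocks until received).
        Thread sender = new Thread(() -> {
            try {
                c.put(123);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Second branch: receive the value and print it.
        Thread receiver = new Thread(() -> {
            try {
                System.out.println(c.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        sender.start();
        receiver.start();
        sender.join();
        receiver.join();
    }
}
```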
There is much more; have a look at the
Ateji PX whitepaper for a detailed survey.
Ateji PX has proven to be efficient, demonstrating a
12.5x speedup on a 16-core server with a single '||'.
It has also proven easy to learn and use. We ran a 12-month beta-testing program which showed that most Java developers, given the Ateji PX documentation, can write, compile and run their first parallel program within a couple of hours.
An interesting aspect is that the source code does not depend on the actual hardware architecture being used, be it shared-memory, cluster, grid, cloud or GPU accelerator. It is the compiler that performs the boring task of mapping syntactic constructs to specific architectures and libraries.
A free evaluation version is available for download
here. Have fun!