Tuesday, November 24, 2009

100 cores by next year

Once again, hardware is far ahead of software: Tilera has announced a 100-core processor for 2010.

Unlike standard multi-core processors, Tilera's TILE-Gx is architected around a 2D grid network rather than a single shared bus. This is a way to jump over the "memory wall" and feed enough data to keep all cores busy.

This design provides not only a lot of raw computing power, but also better efficiency (more computing power per watt).

But this beast is supposed to be programmed in standard C/C++. It already requires black magic to write a two-thread program that behaves as expected, so what about hundreds of threads?

The TILE-Gx is a perfect match for Ateji Parallel Extensions: data parallelism handles large scientific computations, task parallelism handles server-like applications, and message-passing leverages the hardware's packet-network interconnect. High-performance code can be arranged in a data-flow or streaming architecture, reducing accesses to shared memory, as sketched below.
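
To give an idea of the streaming style, here is a minimal sketch in plain Java (standard java.util.concurrent, not Ateji syntax; the class name and constants are made up for illustration): two stages run in their own threads and communicate only through a bounded queue acting as a message-passing channel, so each stage works on purely local state and shared-memory traffic is limited to the channel itself.

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;

  public class StreamingSketch {
      public static void main(String[] args) throws InterruptedException {
          // The queue plays the role of a message-passing channel between two stages.
          final BlockingQueue<Integer> channel = new ArrayBlockingQueue<Integer>(16);

          // Producer stage: generates data and sends it downstream over the channel.
          Thread producer = new Thread() {
              public void run() {
                  try {
                      for (int i = 0; i < 100; i++) channel.put(i);
                  } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
              }
          };

          // Consumer stage: receives data and processes it with purely local state.
          Thread consumer = new Thread() {
              public void run() {
                  try {
                      for (int i = 0; i < 100; i++) System.out.println(channel.take());
                  } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
              }
          };

          producer.start(); consumer.start();
          producer.join(); consumer.join();
      }
  }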

Wednesday, November 18, 2009

Session Evaluation

I just received my session evaluation from the TSS Java Symposium Europe: an impressive 4.58/5.0, with the comment "Great Session!".

I am less proud of the speaker evaluation, at 4.21/5.0. If you attended the session, I'd be happy to hear from you about what could be improved.

Sunday, November 1, 2009

Parallelism at the language level - Part 1: Hello World

The major contribution of Ateji Parallel Extensions is to add parallelism at the language level.

What does this change? Today's mainstream programming languages were designed with sequential processing in mind; they simply have no notion of parallelism. Consider how you'd run two tasks in parallel in Java:

  Thread otherThread = new Thread() {
      public void run() {
          System.out.println("Hello"); // print Hello in the other thread
      }
  };
  otherThread.start();
  System.out.println("World"); // print World in this thread
  otherThread.join(); // wait for the other thread to terminate (join() can throw InterruptedException)

Quite apart from how unreadable and unmaintainable this code is, notice the fair amount of black magic involved: just because you called a method whose name happens to be start(), the whole behaviour of your program has changed. But the compiler is not aware of this change; as far as it knows, it is just calling an ordinary library method.

With Ateji Parallel Extensions, two tasks are run in parallel by composing them using the || operator:

  println("Hello"); || println("World");

How could it be simpler?

Not only is this much more concise and understandable, it also makes it easier for the developer to "think" parallel and to catch potential errors early.

And since the very idea of parallelism is present in the language, the compiler is able to understand the actual meaning of the code and to perform tricks such as high-level code optimization or better verification.
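
To make the verification point concrete, here is a small plain-Java sketch (a hypothetical example, not Ateji output): two threads update a shared counter without any synchronization. javac compiles it without a single warning, because to the compiler these are just ordinary method calls and field accesses; a language-level notion of parallel branches is what would let a compiler flag this kind of race.

  public class RaceExample {
      static int counter = 0; // shared, unsynchronized state

      public static void main(String[] args) throws InterruptedException {
          Thread other = new Thread() {
              public void run() {
                  for (int i = 0; i < 1000000; i++) counter++; // racy read-modify-write
              }
          };
          other.start();
          for (int i = 0; i < 1000000; i++) counter++;         // racy update in the main thread
          other.join();
          // Typically prints less than 2000000: updates were silently lost,
          // yet the compiler saw nothing suspicious in this code.
          System.out.println(counter);
      }
  }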

Read more on parallelism at the language level.