It is hard to find programmers who are really good at writing efficient code,
and harder still to find programmers who can write efficient parallel code.
I believe that compilers can be better than humans at producing efficient parallel code
from sequential code.
However, there is still much room for improvement in this area,
because it involves problems that are genuinely hard to solve.
Thus, automatic parallelization is a field of research that fascinates me.
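To make the idea concrete, here is a minimal sketch (in Python, purely illustrative; the function names and the use of a thread pool are my own choices) of the transformation an automatic parallelizer performs: when loop iterations are independent, a sequential loop can be rewritten as a parallel map.

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # A side-effect-free computation: each iteration depends only on
    # its own input, so the loop carries no cross-iteration dependence.
    return x * x + 1

def sequential(xs):
    # The code the programmer writes: a plain sequential loop.
    return [f(x) for x in xs]

def parallel(xs):
    # The rewrite a parallelizing compiler could derive automatically:
    # since iterations are independent, distribute them across workers.
    # (A thread pool stands in for whatever runtime the compiler targets.)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(f, xs))
```

Both versions compute the same list; proving that independence holds for arbitrary code is precisely the hard part that makes automatic parallelization so interesting.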
Modern programming languages provide developers with a plethora of high-level abstractions that increase
their productivity. However, these abstractions may come at a cost in efficiency, for they
put the language a bit too far away from the hardware that executes it.
In such cases, the compiler can reduce this overhead whenever possible by generating
more efficient low-level code.
This kind of optimization is something that always catches my attention.
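As a small illustration (my own, not taken from any particular compiler), consider a high-level expression and the lower-level loop an optimizer could reduce it to; both compute the same value, but the lowered form sheds the abstraction's run-time machinery:

```python
def total_highlevel(xs):
    # High-level and declarative: close to the programmer's intent,
    # but it allocates a generator and pays per-element iterator overhead.
    return sum(x * x for x in xs)

def total_lowered(xs):
    # What an optimizing compiler could lower it to: one explicit loop
    # over the data, with no intermediate abstraction left at run time.
    acc = 0
    for x in xs:
        acc += x * x
    return acc
```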
A compiler usually lowers a program gradually, from higher- to
lower-level representations. In this process we usually
lose some of the information originally available in the source code.
This loss becomes more evident once we start talking about
Domain-Specific Languages, which are very good at staying close to
the developer's needs, and distant from the hardware. Recovering
this "lost" information from lower-level languages such as the
LLVM IR is an important and challenging task that interests me deeply.
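A loose analogy of this information loss, using Python's own lower-level representation (bytecode) rather than LLVM IR: once a structured loop is compiled down to jump instructions, the loop is no longer explicit and would have to be rediscovered from the instruction stream.

```python
import dis

def squares(xs):
    # The source makes the loop structure explicit.
    out = []
    for x in xs:
        out.append(x * x)
    return out

# In the lowered form, the structured `for` disappears: only iterator
# and jump instructions remain, from which an analysis would have to
# reconstruct the original loop.
ops = [ins.opname for ins in dis.get_instructions(squares)]
```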