TransCoder: An Artificial Intelligence Tool for Translating Code from One Programming Language to Another

Facebook AI Research has announced TransCoder, a system that uses unsupervised deep learning to convert code from one programming language to another.

TransCoder was trained on more than 2.8 million open source projects and outperforms existing code translation systems that use rule-based methods.

The team described the system in a paper posted on arXiv. TransCoder is inspired by neural machine translation (NMT) systems, which use deep learning to translate text from one natural language to another and are trained only on monolingual data.

To compare the performance of the model, the Facebook team compiled a validation set of 852 functions with associated unit tests in each of the system's target languages: Java, Python, and C++.

TransCoder performed better on this validation suite than existing commercial solutions: by up to 33 percentage points compared with j2py, a Java-to-Python translator.

Although the team restricted their work to those three languages, they affirm that the approach "can be easily extended to most programming languages."

Automated tools that translate source code from one language to another, also known as source-to-source compilers, transcompilers, or transpilers, have existed since the 1970s.

Most of these tools work similarly to a standard compiler: they parse the source code into an abstract syntax tree (AST).

The AST is then converted back into source code in a different language, usually by applying rewriting rules.
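
As a toy illustration of that parse-rewrite-emit pipeline (a sketch built on Python's standard `ast` module, with an invented rewriting rule; real transpilers apply many such rules and emit a different language):

```python
import ast

# Toy rewriting rule: turn print(...) calls into sys.stdout.write(...) calls.
class PrintRewriter(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "print":
            node.func = ast.Attribute(
                value=ast.Attribute(value=ast.Name(id="sys", ctx=ast.Load()),
                                    attr="stdout", ctx=ast.Load()),
                attr="write", ctx=ast.Load())
        return node

tree = ast.parse('print("hello")')       # 1. parse source into an AST
tree = PrintRewriter().visit(tree)       # 2. apply rewriting rules to the AST
ast.fix_missing_locations(tree)
print(ast.unparse(tree))                 # 3. emit source: sys.stdout.write('hello')
```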

Transpilers are useful in various settings. For example, some languages, such as CoffeeScript and TypeScript, are intentionally designed to be transpiled, converting a more developer-friendly language into a more widely supported one.

Sometimes it is useful to transpile entire codebases from source languages that are obsolete or deprecated; for example, the 2to3 transpiler tool is used to port Python code from the obsolete version 2 to version 3.
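
For instance, 2to3 applies rewrites like the two below (the Python 2 originals are shown as comments above their Python 3 replacements):

```python
# Python 2:  print "2 + 2 =", 2 + 2
print("2 + 2 =", 2 + 2)          # 2to3 turns the print statement into a call

# Python 2:  for k in d.iterkeys(): print k
d = {"a": 1, "b": 2}
for k in d.keys():               # 2to3 rewrites iterkeys() to keys()
    print(k)
```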

Nevertheless, transpilers are far from perfect, and creating one requires significant development effort (and often customization).


TransCoder builds on advances in natural language processing (NLP), in particular unsupervised NMT. The model uses a transformer-based sequence-to-sequence architecture consisting of an attention-based encoder and decoder.
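
Facebook has not yet released the implementation (see below), but a minimal PyTorch sketch of such an encoder-decoder architecture might look like the following; vocabulary size, depth, and width are illustrative assumptions, not TransCoder's actual hyperparameters, and positional encodings are omitted for brevity:

```python
import torch
import torch.nn as nn

VOCAB_SIZE, D_MODEL = 32000, 512

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)   # shared token embeddings
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True)
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)    # per-position token logits

    def forward(self, src_ids, tgt_ids):
        # Causal mask: each target position may attend only to earlier positions.
        mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        out = self.transformer(self.embed(src_ids), self.embed(tgt_ids),
                               tgt_mask=mask)
        return self.lm_head(out)

model = Seq2Seq()
src = torch.randint(0, VOCAB_SIZE, (2, 16))   # a batch of source-token IDs
tgt = torch.randint(0, VOCAB_SIZE, (2, 16))
logits = model(src, tgt)                      # shape: (2, 16, VOCAB_SIZE)
```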

Since obtaining a dataset for supervised learning would be difficult (it would require many pairs of equivalent code samples in both source and target languages), the team chose to use monolingual datasets for unsupervised learning, applying three strategies.

First, the model is trained on input sequences that have random tokens masked; the model must learn to predict the correct value for the masked tokens.
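
A toy sketch of that first strategy (the 15% masking rate and the `[MASK]` symbol are assumptions borrowed from common practice):

```python
import random

# Masked-token pretraining, sketched: random tokens are replaced by a [MASK]
# symbol, and the model is trained to predict the original token at each
# masked position.
def mask_tokens(tokens, mask_prob=0.15):
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append("[MASK]")
            targets.append(tok)      # the model must recover this token
        else:
            masked.append(tok)
            targets.append(None)     # no prediction loss at unmasked positions
    return masked, targets

tokens = "def add ( a , b ) : return a + b".split()
masked, targets = mask_tokens(tokens)
print(masked)   # e.g. ['def', 'add', '(', '[MASK]', ',', 'b', ...]
```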

Next, the model is trained on sequences that have been corrupted by randomly masking, shuffling, or removing tokens; the model must learn to reconstruct the original sequence.
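
A toy sketch of this corruption step, with noise rates chosen as illustrative assumptions:

```python
import random

# Denoising pretraining, sketched: the input is corrupted by randomly dropping,
# masking, and locally shuffling tokens; the clean sequence is the training
# target the model must reconstruct.
def corrupt(tokens, drop_prob=0.1, mask_prob=0.1, shuffle_window=3):
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < drop_prob:
            continue                                   # remove the token
        noisy.append("[MASK]" if r < drop_prob + mask_prob else tok)
    # Local shuffle: each surviving token moves at most shuffle_window positions.
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(noisy))]
    return [tok for _, tok in sorted(zip(keys, noisy))]

clean = "def add ( a , b ) : return a + b".split()
print(corrupt(clean))   # corrupted input; `clean` is the reconstruction target
```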

Finally, two versions of the model are trained in parallel to perform back-translation: one model learns to translate from the source language to the target, and the other learns to translate back to the source.
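
A hedged sketch of this loop, where the `Stub` class below merely fakes a model so the data flow runs (the real system uses transformer models like the one sketched above):

```python
# Back-translation, sketched: two translation models are trained in parallel,
# each on the other's outputs.
class Stub:
    def __init__(self, name):
        self.name = name
    def translate(self, code):
        return f"<{self.name} translation of: {code}>"
    def train_step(self, src, tgt):
        print(f"train {self.name} on {src!r} -> {tgt!r}")

py_to_java, java_to_py = Stub("py->java"), Stub("java->py")

for py_fn in ["def add(a, b): return a + b"]:
    pseudo_java = py_to_java.translate(py_fn)            # forward translation
    java_to_py.train_step(src=pseudo_java, tgt=py_fn)    # learn the round trip
    # A symmetric pass starting from Java functions trains py_to_java.
```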

TransCoder pre-training

Image source: https://arxiv.org/abs/2006.03511

To train the models, the team drew samples from more than 2.8 million open source GitHub repositories.

From these, they selected files in their languages of choice (Java, C++, and Python) and extracted individual functions. They chose to work at the function level for two reasons: function definitions are small enough to fit in a single training batch, and translating functions makes it possible to evaluate the model using unit tests.
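
Sketched for Python only (the paper's pipeline also covers Java and C++, and its exact preprocessing is an assumption here), function-level extraction can be as simple as:

```python
import ast

# Extract each top-level function definition from a source file so that every
# function becomes one standalone training example.
def extract_functions(source: str):
    tree = ast.parse(source)
    return [ast.get_source_segment(source, node)
            for node in tree.body
            if isinstance(node, ast.FunctionDef)]

source = """
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
"""
for fn in extract_functions(source):
    print(fn, "\n")
```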

Although many NLP systems use the BLEU metric to evaluate their translation results, the Facebook researchers point out that this metric can be a poor choice for evaluating transpilers: results that are syntactically similar may earn a high BLEU score yet "could lead to very different compilation and calculation results," whereas programs with different implementations that produce the same results may earn a low BLEU score.
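
The concern is easy to demonstrate with NLTK's BLEU implementation (whitespace tokenization and the toy function are assumptions made for this example): flipping a single operator keeps BLEU high while changing what the program computes.

```python
from nltk.translate.bleu_score import sentence_bleu

# One flipped operator: the token sequences are nearly identical, so BLEU is
# high, but the programs compute (a + b)**2 versus (a - b)**2.
reference = "def f ( a , b ) : c = a + b ; return c * c".split()
candidate = [tok if tok != "+" else "-" for tok in reference]

print(sentence_bleu([reference], candidate))   # roughly 0.84 despite the bug
```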

Therefore, the team chose to evaluate their transpiler's output using a set of unit tests. The tests were obtained from the GeeksforGeeks site by collecting problems with solutions written in all three target languages; this resulted in a set of 852 functions.
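
A minimal sketch of that evaluation idea for one function (the function bodies and test inputs are invented for illustration): a translation is accepted only when it matches the reference solution on every test input. Note that a syntactically different but semantically equivalent translation still passes, unlike with BLEU.

```python
# Unit-test evaluation, sketched: run the translated function against the
# reference solution on shared inputs.
def reference(a, b):                 # ground-truth solution from the test set
    return a if a <= b else b

translated_src = "def minimum(a, b):\n    return min(a, b)"   # model output

namespace = {}
exec(translated_src, namespace)      # load the translated function
translated = namespace["minimum"]

tests = [(1, 2), (2, 1), (3, 3)]
ok = all(translated(a, b) == reference(a, b) for a, b in tests)
print("passes all unit tests" if ok else "fails")
```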

The team compared TransCoder's performance on this test set with two existing transpiler solutions: the j2py Java-to-Python converter and Tangible Software Solutions' C++-to-Java converter. TransCoder "significantly" outperformed both, scoring 74.8% and 68.7% on C++-to-Java and Java-to-Python respectively, compared with 61% and 38.3% for the commercial solutions.

In a discussion on Reddit, one commenter compared this idea to GraalVM's strategy of providing a single runtime that supports multiple languages. Another commenter opined:

[TransCoder] is a fun idea, but I think translating the syntax is the easy part. What about memory management, runtime differences, library differences, etc.?

In the TransCoder paper, Facebook's AI research team notes that they intend to release the "code and trained models," but they have not done so at this time.

Source: arXiv
