I don't know whether they have a specific French <--> Chinese model. They might, they might not.
It's hard to train models for all n^2 language pairs, so MT systems usually back off to English as a pivot language, i.e., they'll translate French --> English --> Chinese.
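A pivot setup is essentially function composition. Here's a minimal sketch in Python; the `translate` backend and its set of supported pairs are hypothetical stand-ins, not any real MT API:

```python
# Minimal sketch of pivot translation. `translate` and DIRECT_PAIRS are
# hypothetical placeholders for a real MT backend that only ships models
# for language pairs involving English.

DIRECT_PAIRS = {("fr", "en"), ("en", "fr"), ("zh", "en"), ("en", "zh")}

def translate(text: str, src: str, tgt: str) -> str:
    """Stand-in for a direct MT model; only English-paired directions exist."""
    if (src, tgt) not in DIRECT_PAIRS:
        raise ValueError(f"no direct model for {src}->{tgt}")
    return f"[{src}->{tgt}] {text}"  # placeholder for real model output

def pivot_translate(text: str, src: str, tgt: str, pivot: str = "en") -> str:
    """Translate src->tgt, routing through the pivot when no direct model exists."""
    if (src, tgt) in DIRECT_PAIRS:
        return translate(text, src, tgt)
    # Two hops: src -> pivot, then pivot -> tgt. Whatever the first hop
    # loses or disambiguates wrongly is baked in before the second hop runs.
    intermediate = translate(text, src, pivot)
    return translate(intermediate, pivot, tgt)

print(pivot_translate("Bonjour le monde", "fr", "zh"))
```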
New neural machine translation architectures are experimenting with pairs of neural encoders/decoders, one pair per language, together with a shared, language-independent vector space for the meaning of all words.
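The appeal is in the count: with a shared meaning space you train n encoders plus n decoders (2n components) instead of n^2 pairwise models, and any encoder can be paired with any decoder. A structural sketch follows, with random matrices standing in for trained networks (every name and function here is a hypothetical placeholder, not a real system):

```python
import numpy as np

DIM = 16  # size of the shared, language-independent meaning space
rng = np.random.default_rng(0)

# One encoder and one decoder per language -- 2n components instead of
# n^2 pairwise models. Random matrices are placeholders for trained
# neural encoders/decoders.
LANGS = ["fr", "en", "zh"]
encoders = {lang: rng.normal(size=(DIM, DIM)) for lang in LANGS}
decoders = {lang: rng.normal(size=(DIM, DIM)) for lang in LANGS}

def embed(text: str) -> np.ndarray:
    """Hypothetical language-specific input representation (hash-based stub)."""
    vec = np.zeros(DIM)
    for i, ch in enumerate(text.encode("utf-8")):
        vec[i % DIM] += ch
    return vec / (np.linalg.norm(vec) or 1.0)

def translate(text: str, src: str, tgt: str) -> np.ndarray:
    """Encode with src's encoder into the shared space, decode with tgt's decoder.

    Any source/target combination works without a pair-specific model;
    the output here is a stub vector, not text."""
    meaning = encoders[src] @ embed(text)  # language-independent representation
    return decoders[tgt] @ meaning         # target-language decoding (stub)

out = translate("Bonjour le monde", "fr", "zh")  # no fr->zh model needed
```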
- the vocabulary and topics covered in the Bible are quite different from today's written and spoken text, especially phone conversations or social-network messages.
- other aligned corpora, such as Europarl (http://www.statmt.org/europarl/), are much larger than the Bible (several million tokens for most pairs vs. less than 1 million for the Bible)
> so MT systems usually back off to English as a pivot language
That's an interesting choice, because English lacks features that some other languages have, so you end up distorting the translation by passing it through English. I remember considerable work from different sources a while back on constructing artificial languages for this purpose, precisely to mitigate the ambiguity introduced by using an existing natural language as the pivot. I'm surprised that a natural language as the pivot is the state of the art (though, given that it is, I'm not surprised the pivot language is English).