Aw man! That 1994 paper by Sakakibara is a piece of history! It concludes by
saying that it's "part of the work in the major R&D of the Fifth Generation
Computer Project conducted under the program set up by MITI" [1]. Plus, Sakakibara
is one of my grammar induction heroes :0
However, his algorithm learns CFGs from structural data, which is to say,
derivation trees (think parse trees). So it's completely irrelevant to the
example in the article, which attempts to learn a^nb^n from examples of its
derivations; that remains impossible.
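To make the distinction concrete, here's a minimal sketch (the function names are mine, not from Sakakibara's paper) contrasting the two kinds of input for a^nb^n, whose grammar is S -> aSb | ε:

```python
def derivation(n):
    """Derivation tree for a^n b^n as nested tuples: ('S', children...)."""
    if n == 0:
        return ('S',)                              # S -> ε
    return ('S', 'a', derivation(n - 1), 'b')      # S -> a S b

def yield_of(tree):
    """Flatten a derivation tree back to the string it derives."""
    parts = []
    for child in tree[1:]:
        parts.append(yield_of(child) if isinstance(child, tuple) else child)
    return ''.join(parts)

# Sakakibara's learner is given trees like derivation(2);
# the article's setting only sees flat strings like yield_of(derivation(2)),
# i.e. 'aabb', with all the structure thrown away.
```

The point is that the tree already encodes which rule fired where, which is exactly the information a string-only learner has to reconstruct.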
As to the other paper, by Chen, Tseng and Chen, that's about learning a CFG
that reproduces the strings in a corpus, i.e. learning a CFG of a corpus as
opposed to the grammar of a context-free language (therefore, a context-free
grammar), which, again, remains impossible.
_____________
[1] https://en.wikipedia.org/wiki/Fifth_generation_computer