I agree with much of this, especially API simplicity. I usually reach for opencsv for the same reason.
Definitely applaud the effort, and it would be good to extend the test corpus in terms of record length and escape complexity. I do think 3M records is on the low side. It would be good to see scale tests at 10M, 100M, and 1B records too.
They're mostly operating on streams, so at some point the per-row speed should be roughly constant regardless of record count (depending mostly on how the GC copes, I imagine).
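To illustrate the streaming point, here's a minimal stdlib sketch (plain java.io rather than opencsv, and a naive split that ignores quoting/escaping, which is exactly what a real CSV parser handles for you): each record is read and discarded in turn, so the work per row doesn't depend on the total record count.

```java
import java.io.BufferedReader;
import java.io.StringReader;

public class StreamingCsv {
    public static void main(String[] args) throws Exception {
        // Build a small in-memory CSV; a real benchmark would stream from disk.
        StringBuilder sb = new StringBuilder("id,name\n");
        for (int i = 0; i < 1_000; i++) {
            sb.append(i).append(",row").append(i).append('\n');
        }

        long rows = 0;
        try (BufferedReader r = new BufferedReader(new StringReader(sb.toString()))) {
            r.readLine(); // skip header
            String line;
            while ((line = r.readLine()) != null) {
                // Only one record is held at a time, so per-row cost is
                // independent of total record count (GC pressure aside).
                String[] fields = line.split(",", -1);
                rows++;
            }
        }
        System.out.println(rows); // prints 1000
    }
}
```

Scaling the loop bound from 1K to 1B changes total runtime but not the shape of the per-row work, which is why throughput-per-row is the more interesting benchmark metric at scale.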