Eww. This is basically screen-scraping command output to enable structure-oriented commands. That means you need a scraper for every command's output, each scraper needs to be aware of all of that command's options, and they all have to handle filenames containing whitespace correctly (I'm not sure that's even possible in general).
And once you've done that, you have this new batch of commands that are structure-oriented, so you are using bash (for example) as a boot loader for your new shell. Why not just use the new shell?
I have a project in this space: https://marceltheshell.org. It is based on Python, so Python values (strings, numbers, tuples, lists, etc.) are piped between commands. No sublanguages (as in awk); instead you write Python functions.
Marcel has an ls command, but instead of producing a list of strings, it generates a stream of File objects which can be piped downstream to other commands that operate on Files.
Example: sum of file sizes under directory foobar:
ls -f foobar | (f: f.size) | red +
-f means files only. In parens is a Python function (you can omit "lambda") that maps a file to its size. red + means reduce using addition, i.e., add up all the file sizes.
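For comparison, here is a plain-Python sketch of the same computation (not marcel's implementation, and assuming the listing is recursive):

```python
from pathlib import Path

def total_size(root):
    # Plain-Python analogue of: ls -f root | (f: f.size) | red +
    # Walk the tree under root and add up the sizes of regular files.
    return sum(p.stat().st_size for p in Path(root).rglob("*") if p.is_file())
```

The marcel version reads as a pipeline, while the Python version buries the same map-and-reduce inside a generator expression.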
Or to compute size by file extension:
ls -f foobar | (f: (f.suffix, f.size)) | red . +
f.suffix is the extension. "red . +" means to group by the first field of the tuple (the extension) and add up the sizes within each group.
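A plain-Python sketch of that grouped reduction (again, just an analogue, not marcel's code) could be:

```python
from collections import defaultdict
from pathlib import Path

def size_by_extension(root):
    # Analogue of: ls -f root | (f: (f.suffix, f.size)) | red . +
    # Group file sizes by extension and sum within each group.
    totals = defaultdict(int)
    for p in Path(root).rglob("*"):
        if p.is_file():
            totals[p.suffix] += p.stat().st_size
    return dict(totals)
```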
That's an interesting project to attempt. An issue that should be considered is that OS pipes can only pass plain text (bytes). So every command would have to serialize and parse structured text, which may hurt performance and limit capabilities compared to a shell that can pipe objects directly.
The structure can be done with ASCII: CSV and TSV manage to pass structured data as plain text (the latter while remaining readable).
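As a minimal illustration of round-tripping structured rows through plain text, using Python's standard csv module (a generic sketch, not tied to any particular shell):

```python
import csv
import io

rows = [["name", "size"], ["a file.txt", "5"], ["b.txt", "3"]]

# Serialize rows to plain text; quoting handles whitespace in fields.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
text = buf.getvalue()

# A downstream "command" can parse the text back into structured rows.
parsed = list(csv.reader(io.StringIO(text)))
```

Note that the field containing a space survives the round trip intact, which is exactly what naive whitespace-delimited output cannot guarantee.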
Relational data would have an extra advantage over JSON-like property-based data: it is both easier to present on screen (as tables) and easier to interact with (SQL-like where + order + limit would cover most use cases).
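The where + order + limit idea can be sketched in a few lines over rows of tuples (a hypothetical helper, just to show how little machinery the relational subset needs):

```python
def query(rows, where=None, order=None, limit=None):
    # Minimal relational pipeline: filter, then sort, then truncate.
    out = [r for r in rows if where(r)] if where else list(rows)
    if order:
        out.sort(key=order)
    return out[:limit] if limit is not None else out

files = [("a.txt", 5), ("b.py", 3), ("c.txt", 9)]
biggest_txt = query(files,
                    where=lambda r: r[0].endswith(".txt"),
                    order=lambda r: -r[1],
                    limit=1)
```

Each stage maps directly onto a SQL clause, which is what makes tabular data pleasant to pipe between commands.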