The offset approach is what most people do. I've used it too. It makes the storage simpler for rectangle-shaped maps but the algorithms more complicated (and slower). I wrote this page to present the alternative coordinate systems. The symmetry of the cube system greatly simplifies many of the algorithms, so it's often easier to convert offset or axial to cube, run the algorithm, and then convert the answer back to offset or axial. The main downside of axial is that map storage in an array is a little more complicated; see http://www.redblobgames.com/grids/hexagons/#map-storage
Why are we storing in an array at all? Why keep that last vestige of rectangular representation?
How about storing the map directly in cubic coordinates? The storage structure would be a dictionary with a three-value tuple as the key. Iterating along any of the three axes is perfectly simple, and all directions work the same way. Algorithms like A* work directly on a list of neighboring hexes, with no coordinate conversion required. Serialization to a savegame format or for network communication could be JSON, with key-value pairs mapping each 3-tuple (key) to its tile data (value).
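For concreteness, here's a minimal sketch of what that could look like (Python; the tile data and function names are just illustrative, and the directions and hexagon-shaped map loop follow the conventions from the article):

```python
# Sketch: cube-coordinate map storage as a dict keyed by (q, r, s) tuples,
# where q + r + s == 0 for every hex. Tile contents are placeholders.
CUBE_DIRECTIONS = [
    (+1, -1, 0), (+1, 0, -1), (0, +1, -1),
    (-1, +1, 0), (-1, 0, +1), (0, -1, +1),
]

def make_hexagonal_map(radius):
    """Build a hexagon-shaped map of the given radius around the origin."""
    grid = {}
    for q in range(-radius, radius + 1):
        for r in range(max(-radius, -q - radius), min(radius, -q + radius) + 1):
            grid[(q, r, -q - r)] = "grass"  # placeholder tile data
    return grid

def neighbors(grid, hex):
    """Neighbors come straight from the six cube directions; no conversion."""
    q, r, s = hex
    for dq, dr, ds in CUBE_DIRECTIONS:
        candidate = (q + dq, r + dr, s + ds)
        if candidate in grid:  # off-map hexes simply aren't in the dict
            yield candidate
```

An A* implementation would call neighbors() directly on these tuples, and bounds checking falls out of the dictionary membership test.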
The only advantage of the 2D array structure is a bit of performance: the coordinates of each tile are implicitly defined by the array indices. I'd venture to say that our modern languages and platforms don't need that optimization any more. We insert another layer of abstraction, decoupling the tile's coordinates from its storage location in memory.
Put another way, the only conversion happening is between the 3-tuple key and the physical location in memory, and that's abstracted away from us by whatever mechanism (a hash table or similar) the dictionary uses internally. Everything at the application level works directly with cubic coordinates. By getting rid of the 2D array storage, we get rid of the last remnants of anything rectangular, with no need at all to convert to the rectangular vestiges of axial or offset coordinates.
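One wrinkle on the serialization side (my assumption, since standard JSON is involved): object keys have to be strings, so the 3-tuple needs encoding at the boundary, e.g. as "q,r,s". A rough sketch, reusing the hypothetical grid dict from above:

```python
import json

def save_map(grid):
    # JSON object keys must be strings, so encode the 3-tuple as "q,r,s".
    return json.dumps({f"{q},{r},{s}": tile for (q, r, s), tile in grid.items()})

def load_map(text):
    # Decode "q,r,s" keys back into integer tuples.
    return {tuple(int(n) for n in key.split(",")): tile
            for key, tile in json.loads(text).items()}
```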
Have you actually tried implementing something like A*? The inner loop is very tight, and putting several easily avoided dictionary lookups in there is just silly.
Given how simple it is to write a solid abstraction that implements coordinate-based lookup on top of array storage, and given how much better such an implementation behaves in terms of instruction counts and memory layout (think cache locality), you'd be silly not to use an array.
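Something like the following is the kind of abstraction I mean: a rough sketch assuming a rectangle-shaped map addressed in axial coordinates, with each row packed the way the article's map-storage section describes (the class and method names are just illustrative):

```python
class HexMap:
    """Array-backed hex map with axial (q, r) lookups over one flat list."""

    def __init__(self, width, height, fill=None):
        self.width = width
        self.height = height
        self.tiles = [fill] * (width * height)  # contiguous, cache-friendly storage

    def _index(self, q, r):
        # Shift each row by r // 2 so the axial coordinates of a
        # rectangle-shaped map pack into rectangular storage.
        col = q + r // 2
        if 0 <= col < self.width and 0 <= r < self.height:
            return r * self.width + col
        return None

    def get(self, q, r):
        i = self._index(q, r)
        return self.tiles[i] if i is not None else None

    def set(self, q, r, tile):
        i = self._index(q, r)
        if i is not None:
            self.tiles[i] = tile
```

The A* inner loop then does plain index arithmetic on one flat list instead of hashing a tuple for every neighbor.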
I understand the concerns, but the same "have you actually tried implementing it" question applies there too. This is premature optimization until you do it both ways, profile it, and see whether the runtime difference matters. Cache locality is a thing, but so is prematurely guessing about cache locality.
Not everything using A* runs millions of pathfinding queries in tight loops. Maybe it's an offline turn-based game that runs A* once per move, where saving 10 μs will never matter compared to the complexity cost of implementing and maintaining the extra abstraction.
While I appreciate your concern about premature optimization, using a 2D array as the fundamental data structure for this kind of thing is so common that it seems bizarre to even call it an optimization. The resulting code is just as simple and natural, after all.
Perhaps we need a countering meme about gratuitous de-optimization...
A regular 2D array, but with every other row logically offset by 50% of the cell's width. Thus you end up with a super simple data structure (a normal 2D array), and pathfinding isn't difficult (use the hex algorithms; see the sketch below). I wonder why nobody ever writes about this... Edit: ah, the article kind of describes this in the "Offset coordinates" section, just with no explicit mention that you can use an array for storage.
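For what it's worth, a rough sketch of that layout, assuming pointy-top hexes with the odd rows shifted right (what the article calls "odd-r" offset coordinates; names are illustrative):

```python
# Neighbor directions as (col, row) deltas; which set applies depends on
# whether the current row is even or odd, because odd rows are shifted right.
ODDR_DIRECTIONS = [
    [(+1, 0), (0, -1), (-1, -1), (-1, 0), (-1, +1), (0, +1)],  # even rows
    [(+1, 0), (+1, -1), (0, -1), (-1, 0), (0, +1), (+1, +1)],  # odd rows
]

def oddr_neighbors(grid, col, row):
    """Yield the in-bounds neighbors of (col, row) in a plain list-of-lists grid."""
    for dcol, drow in ODDR_DIRECTIONS[row & 1]:
        ncol, nrow = col + dcol, row + drow
        if 0 <= nrow < len(grid) and 0 <= ncol < len(grid[nrow]):
            yield ncol, nrow
```

The storage really is just grid[row][col]; the only hex-specific part is the parity-dependent neighbor table.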