Completely agreed - and it's something we've had to work very hard at.
A rule of thumb that's worked for us is that the more information you show, the more tolerant your users are. However, that cuts directly against another rule: the more information you show, the less user-friendly your product becomes.
Our approach has been to pyramid up the information we have. When we integrate a new data source, we initially run customer engagement sessions and internal sanity-checking algorithms to build decent confidence in the data we present. Even at this stage, though, we err on the side of giving the user more information rather than less - a full cost breakdown of a port call, for example, instead of an indicative total estimate.
User engagement and testing then help us understand which pieces are used the most, and how we can condense the data in a way that best helps the user. This also buys us time to refine our backend, so that we have a higher degree of confidence in our methodology and results.
This often results in big changes - a twenty-paragraph statement of restrictions may be condensed into a traffic light showing whether a certain nationality is accepted into a port, or a fuel table into a single cost estimate based on the ship's estimated weight. This approach has worked well for us, and it might help in similar fields.
We've been using PostGIS' internal storage mechanism for routes and tiles so far. We've attempted to build our own indexing (quadtrees, indexed similarly to OSM's), but it's never beaten the performance we get by default from PostGIS.
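For anyone curious, the quadtree idea can be sketched roughly like this (a simplified Python illustration of the general technique, not our actual implementation - in practice PostGIS's default GiST index did better for us):

```python
# Minimal quadtree sketch for indexing 2D points (e.g. route waypoints).
# Each node holds up to `capacity` points, then splits into four quadrants.

class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None  # four sub-quadrants once split

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return False  # point outside this node's bounds
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        # any() short-circuits, so each point lands in exactly one child
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [
            QuadTree(x0, y0, mx, my, self.capacity),
            QuadTree(mx, y0, x1, my, self.capacity),
            QuadTree(x0, my, mx, y1, self.capacity),
            QuadTree(mx, my, x1, y1, self.capacity),
        ]
        for px, py in self.points:
            any(c.insert(px, py) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        """Return all points inside the query rectangle."""
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []  # query box doesn't overlap this node
        found = [(px, py) for px, py in self.points
                 if qx0 <= px <= qx1 and qy0 <= py <= qy1]
        if self.children:
            for c in self.children:
                found.extend(c.query(qx0, qy0, qx1, qy1))
        return found
```

The hard part isn't the tree itself, it's everything around it - handling geometries rather than points, persistence, and query planning - which is where a mature GiST implementation pulls ahead.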
It's not a bottleneck yet, but I'm sure we'll be revisiting this soon. Thanks for the link! I hadn't found this one yet - super helpful.