The proposed improvements would eliminate a whole class of problems, such as the broken polygons [1] that happen regularly. They would also make processing OSM more accessible by removing the need to randomly seek over GBs of node locations just to assemble geometries, which accounts for a significant share of osm2pgsql's runtime.
For me Steve Coast lost his credibility when he joined the closed and proprietary what3words.
There are around 5.1e14 square meters on the surface of the Earth. It takes about 49 bits (log2(5.1e14) ≈ 48.9) to address each one uniquely. If we use one of EFF's diceware-style short word lists (6^4 = 1296 words, about 10.3 bits per word), we need 5 words to describe any point on Earth with 1-meter precision.
If we use a projection like, say, S2 (though plenty of other options exist), these 5-word locators will show strong hierarchical locality. In any specific area, for example, there are likely only 3 distinct top-level words. Likewise, the last word is useful but probably unnecessary precision for "find the building" day-to-day use. So the middle 3 words will be sufficient to be unambiguous in most cases, and if people used this system they'd naturally become familiar with the phrases typical to their locale.
All of this can be done with an algorithm a freshman CS student can understand, with a trivial amount of reference data. It can run on any mobile device made in the last 15 years without an internet connection.
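A minimal sketch of the idea (my own, purely for illustration: an equirectangular grid with Morton/Z-order interleaving standing in for S2, and a placeholder word list, so the locality is weaker than the real thing):

```python
# Hypothetical sketch: 2^50 cells is roughly 1 m resolution at the equator,
# and 1296^5 ≈ 3.7e15 > 2^50, so 5 words from a 1296-entry list always suffice.
WORDS = [f"w{i:04d}" for i in range(1296)]   # stand-in for an EFF-style short list

BITS = 50                                    # 25 bits per axis
GRID = 1 << (BITS // 2)                      # grid cells per axis

def interleave(x: int, y: int) -> int:
    """Interleave the bits of x and y into a single Z-order (Morton) index."""
    z = 0
    for i in range(BITS // 2):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def encode(lat: float, lon: float) -> list[str]:
    """Map a lat/lon to 5 words; nearby points share their leading words."""
    x = int((lon + 180.0) / 360.0 * (GRID - 1))
    y = int((lat + 90.0) / 180.0 * (GRID - 1))
    z = interleave(x, y)
    words = []
    for _ in range(5):                       # write z as 5 base-1296 digits
        words.append(WORDS[z % 1296])
        z //= 1296
    return words[::-1]                       # coarsest (most significant) word first

print(encode(51.5074, -0.1278))              # five words for central London
```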
I designed a scheme like this for fun years ago, just because it was a natural outgrowth of some stuff I was doing with diceware for default credentials in a consulting context, and I just find spatial subdivision structures neat.
It's hard to interpret what3words' scheme as anything but craven rent-seeking. They want to keep the mapping obscure, and fundamentally sacrifice usability in the interest of this. That what3words markets this specifically as a solution for low-income nations, and dupes NGOs that are not tech-savvy in the service of this, is utterly #$%@$#ing revolting.
Imagine trying to rent-seek by selling poor people their own street addresses, if you'll let me be slightly hyperbolic.
There is no reason a scheme like this can't simply be a standard from some appropriate body, and a few open source reference implementations.
This comment thread is the first time I've heard about w3w. It hurts my brain trying to come up with an explanation for how such a concept is not some kind of parody one-off project intended to be posted on HN or Reddit for the lolz. Instead, it's actually being used by emergency services?
> There is no reason a scheme like this can't simply be a standard from some appropriate body, and a few open source reference implementations.
Yet no one did, and I think that's the point here.
The world is full of rent-seeking in the form of things that are dead simple to do but that no one does without a financial incentive.
With w3w, the hard part is not the system itself but getting people to use it, which matters because the value of the system comes from the network effect.
But you don't need to complicate the storage format to fix a problem like that. You can build validation tools that check whether the stored data conforms to the specified geometry rules, and emit only valid polygons to later tools in the pipeline.
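A minimal sketch of that kind of filter, assuming Shapely is available and a hypothetical input stream of rings given as coordinate lists:

```python
# Hedged sketch of the "validate downstream" idea: keep the storage format
# permissive and filter geometry in the pipeline instead.
from shapely.geometry import Polygon

def valid_polygons(rings):
    """Yield only the rings that form valid (closed, non-self-intersecting) polygons."""
    for ring in rings:
        poly = Polygon(ring)
        if poly.is_valid:
            yield poly
        # invalid geometries could be logged or routed to a repair step instead

# The second ring is a self-intersecting "bowtie" and gets filtered out.
rings = [
    [(0, 0), (4, 0), (4, 4), (0, 4)],
    [(0, 0), (4, 4), (4, 0), (0, 4)],
]
print(len(list(valid_polygons(rings))))  # -> 1
```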
"Be liberal in what you accept and strict in what you send" is still a good principle. The problem with rejecting invalid structures at the data storage format instead of a later validation step is that it hurts flexibility and extensibility. If later on you need a different type of polygon that would be rejected by the specification, you'll need to create a new version of the file format and update all tools reading it even if they won't handle the new type, instead of just having old tools silently ignoring the new format that they don't understand.
> "Be liberal in what you accept and strict in what you send" is still a good principle.
No, it is a terrible principle which produces brittle software and impossible-to-implement standards. The problem is that no one actually follows the “be strict in what you send” part, and just goes with whatever cobbled-together mess the other existing software seems to accept. Before long, a spec-compliant implementation can’t actually understand any of the messages that are being sent.
> just having old tools silently ignoring the new format that they don't understand.
This sounds like another headache. I don’t want my tools silently breaking.
> This sounds like another headache. I don’t want my tools silently breaking.
Yet here you are, posting your comment through a web browser on a web page. And the standard that was intended to make web pages fail catastrophically and stop processing on every error (XHTML) was never widely adopted. Makes you wonder why. Maybe there's something inherent in the nature of an open data platform for human consumption that makes it better to accept a certain degree of inaccuracy and inconsistency in its stored data?
> Maybe there's something inherent in the nature of an open data platform for human consumption that makes it better to accept a certain degree of inaccuracy and inconsistency in its stored data?
That's complete nonsense. The only reason web browsers accept malformed webpages is that, by the time XHTML was introduced, there were already orders of magnitude too many webpages that violated the relevant specs. If web browsers had enforced XHTML from the start, then everyone would have damn well followed it.
> You can build validation tools that check whether the stored data conforms to the specified geometry rules, and emit only valid polygons to later tools in the pipeline.
That doesn't help at all when the problem is that important areas have disappeared.
It also doesn't help other mappers or a confused newbie.
Why do you need to change the data format to make it faster (at the cost of making it harder for end users to work with)? The data is the same as it was at the beginning; that doesn't justify a technical redesign. Why not just create accelerators based on an intermediate format?
Properly normalized data isn't just faster, it's also easier to work with for the end user. There are far fewer exceptions, edge cases, and snafus to work around and test for. If you're talking about the transition period between formats, well yeah, you're gonna see things breaking. But these were already broken, just not in apparent ways. In the end, everybody wins.
> Properly normalized data isn't just faster, it's also easier to work with for the end user.
Extracting and reusing data, yes. Getting it into the tool in the first place, no way. Tools that won't even allow you to save your data and make it persistent until you conform to every single integrity requirement are a nightmare for end users.
People doing mapping tasks will use an editor and not really see the change.
People consuming the data will also mostly use tools, tools that likely run much faster.
I've written some code to chop up overlapping GIS areas into ways and relations (to match the current data model of references to shared nodes). The input to that code is pretty close to the proposed data model, so it's not going to be any more difficult to do that processing (as an example of a task that doesn't just use third-party tools).
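For the curious, a stripped-down illustration of that kind of conversion (my own sketch, not the actual code referenced above): deduplicate coordinates into shared node ids and emit closed ways that reference them.

```python
# Hypothetical sketch: turn polygon coordinate lists into OSM-style nodes
# and ways, sharing node references where coordinates coincide.
def polygons_to_nodes_and_ways(polygons, precision=7):
    nodes = {}                                    # rounded (lat, lon) -> node id
    ways = []
    for poly in polygons:
        refs = []
        for lat, lon in poly:
            key = (round(lat, precision), round(lon, precision))
            if key not in nodes:
                nodes[key] = len(nodes) + 1       # allocate a new node id
            refs.append(nodes[key])
        refs.append(refs[0])                      # close the ring, OSM-style
        ways.append(refs)
    return nodes, ways

# Two areas sharing an edge reference the same node ids along that edge.
a = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
b = [(1.0, 0.0), (1.0, 1.0), (2.0, 1.0), (2.0, 0.0)]
nodes, ways = polygons_to_nodes_and_ways([a, b])
print(ways)   # shared boundary nodes appear with identical ids in both ways
```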
The best thing is not to allow invalid geometries to begin with. Any validation would need to be done in an offline fashion for a number of reasons (such as needing to retrieve any referenced OSM elements), and by that time you can't automatically revert offending changes, as any revert carries a chance of an object-version conflict.
> The best thing is not to allow invalid geometries to begin with.
The best thing for whom? The developer? Certainly not for the end user, who needs to have invalid geometries while the drawing is being made and the data is still incomplete. Having a file format that won't admit that temporary state means that either the user can't save incomplete draft work, or that an entirely different format will be needed to represent such in-process work.
The article is rightfully criticizing that such an incomplete way of thinking, one that doesn't take into account the full picture or the systemic effects of a change, is pushed forward only because it seems "the right thing" from an incomplete understanding of all the concerns and needs of all stakeholders.
The right technical decision *must* include them to be correct, and the best design might involve a solution other than "update the file format so that it doesn't accept inconsistent geometry (according to the set of rules that we understand as of today)". But to assess what the right decision is, you need to know how people are using the system in real use cases beyond classic comp-sci concerns of data storage and model consistency; and to learn that, you need to talk to end users and perform field research to inform your decisions and designs.
> Having a file format that won't admit that temporary state means that either the user can't save incomplete draft work, or that an entirely different format will be needed to represent such in-process work.
Saving such temporary state is very rarely needed in OSM and should never be uploaded to the OSM database.
In addition, in almost all cases it can simply be saved as an area whose shape doesn't yet match the intended one.
> Saving such temporary state is very rarely needed in OSM and should never be uploaded to the OSM database.
Maybe, but you're missing the other use case: that in the future you'll need an extension requiring geometries that are considered invalid by the current set of rules, forcing you to update all tools processing the file format to accommodate the new extension.
Keeping storage and validation as two separate steps is a more flexible design, preferable on platforms where data is entered by a large number of users in a complex domain that is not easy to model unambiguously.
Think of Wikipedia and what would have happened if its text format had only supported grammatically correct expressions without spelling mistakes, and without letting you save templates with any errors. The project would never have attracted the volume of editors it took to create the initial version with millions of articles, and the product would never have taken off. In an open project with data provided by the general public, keeping user data validation in the same layer as the automatic processing model is a design mistake.
I think that no one seriously proposes to include rules like
> You could have rules that say you can’t link Finland to Barbados.
in the data model. That is a red herring.
But rules like "area must be a valid area" are a good idea, in the same way that Wikipedia requires article markup to be text and doesn't allow saving binary data there.
> Maybe, but you're missing the other use case: that in the future you'll need an extension requiring geometries that are considered invalid by the current set of rules, forcing you to update all tools processing the file format to accommodate the new extension.
I think the way to go is to define several layers of correctness. A data set might then be partially valid. In such cases a tool might, for example, support transitions from a completely valid state A to a completely valid state C via an intermediate, partially valid state B. (As databases with referential integrity may allow intermediate states in a transaction where referential integrity is broken.)
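A hedged sketch of what such layers might look like (the names and the exact rules are mine; the geometric check assumes Shapely):

```python
from enum import IntEnum
from shapely.geometry import Polygon

class Validity(IntEnum):
    MALFORMED = 0        # not even enough points for a ring
    STRUCTURAL = 1       # a ring, but not necessarily a clean polygon
    GEOMETRIC = 2        # also a valid, non-self-intersecting polygon

def classify(ring):
    if len(ring) < 3:
        return Validity.MALFORMED
    return Validity.GEOMETRIC if Polygon(ring).is_valid else Validity.STRUCTURAL

# An editor could let you save STRUCTURAL drafts locally but require
# GEOMETRIC before upload, much like deferred constraints in a database.
print(classify([(0, 0), (4, 4), (4, 0), (0, 4)]))   # Validity.STRUCTURAL (bowtie)
```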
> I think the way to go is to define several layers of correctness. A data set might then be partially valid
Thanks, that summarizes what I was aiming for. An open platform will be more flexible and allow for different use cases the fewer assumptions it includes about how it should be used.
> Maybe, but you're missing the other use case: that in the future you'll need an extension requiring geometries that are considered invalid by the current set of rules, forcing you to update all tools processing the file format to accommodate the new extension.
As someone else in this subtree mentioned, apparently this flexibility wasn't needed for the last 20 years.
I don't have horribly strong opinions here, but the argument feels circular to me:
- The format should be kept simple to encourage more people to build tools on top of it, and users will be more likely to work with it.
- We should deal with the emergent complexity of bad validation by making tools more complicated and having them detect errors on their end.
If users are going to use a validation tool to work with data, then they can also use a helper tool to generate data. And if the goal is to make it easier to build on top of data, import it, etc... allowing developers to do less work validating everything makes it easier for them to build things.
I'm going over the various threads on this page, and half of the critics here are saying that user data should be user-facing, and the other half are saying that separate tools/validators should be used when submitting data. I don't know how to reconcile those two ideas, particularly given a few comments I'm seeing that validation should be primarily client-side, embedded in tools.
Again, no strong opinions, and I'll freely admit I'm not familiar enough with OSM's data model to really have an opinion on whether simplification is necessary. But one of the good things about user facing data should be that you can confidently manipulate it without requiring a validator. If you need a validator, then why not also just use a tool to generate/translate the data?
To me, "just use a tool" doesn't seem like a convincing argument for making a data structure more error prone, at least not if the idea is that people should be able to work directly with that data structure.
----
> you'll need to create a new version of the file format and update all tools reading it even if they won't handle the new type, instead of just having old tools silently ignoring the new format that they don't understand.
Again, not sure that I understand the full scope of the problem here, and I'm not trying to make a strong claim, but extensible/backwards-compatible file formats exist. And again, I don't really see how validation solves this problem; you're just as likely to end up with a validator in your pipeline that rejects extensions as invalid, or a renderer that doesn't know how to handle a data extension that used to be invalid or impossible.
Wouldn't it be nicer to have a clear definition of what's possible that everyone is aware of and can reason about without inspecting the entire validation stack? Wouldn't it be nice to not finish a big mapping project and then only find out that it has errors when you submit it? Or to know that if your viewer supports vWhatever of the spec, it is guaranteed to actually work, and not fall over when it encounters a novel extension to the data format that it doesn't understand or didn't think was possible? Personally, I'd rather be able to know right off the bat what a program supports rather than have to intuit it by seeing how it behaves and looking around for missing data.
Part of what's nice about trying to do extensions explicitly, rather than implicitly through assumptions about data shape, is that it's easier to explicitly identify what is and isn't an extension.
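As a hedged sketch of the explicit-extension point (the version and extensions fields here are hypothetical, not anything OSM actually defines):

```python
import json

FEATURE = json.loads("""
{
  "version": 2,
  "type": "area",
  "rings": [[[0, 0], [4, 0], [4, 4], [0, 4], [0, 0]]],
  "extensions": {"height": 12.5}
}
""")

SUPPORTED_VERSION = 1

def read_feature(feature):
    # Explicit versioning: an old tool can state exactly what it supports
    # and fail loudly, rather than silently dropping data it doesn't understand.
    if feature["version"] > SUPPORTED_VERSION:
        raise ValueError(f"feature requires version {feature['version']}, "
                         f"this tool only supports {SUPPORTED_VERSION}")
    return feature["rings"]

try:
    read_feature(FEATURE)
except ValueError as err:
    print(err)
```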
> If users are going to use a validation tool to work with data, then they can also use a helper tool to generate data. And if the goal is to make it easier to build on top of data, import it, etc... allowing developers to do less work validating everything makes it easier for them to build things.
That's good thinking for cases where you have a single toolset, in which tools can be kept in sync to collaborate with one another.
But in an open distributed data platform, where several possibly incompatible toolsets will be used, forcing a type of validation on the data itself based on the expectations of one group of tools can make some other applications impossible. In these cases, making the data format simple will make it easier for developers to build new tools, and the difficulties of synchronizing different tools can be dealt with in a different layer.
> That's good thinking for cases where you have a single toolset, in which tools can be kept in sync to collaborate with one another.
This is interesting. I would actually kind of argue the exact opposite, that more rigorously defined formats are more important the more diverse your toolsets get, and less important the less diverse they are.
The whole point of having a rigorously defined data format that blocks certain validation errors at the data level is that it's easier for diverse toolsets to work with that data, because they don't need to all implement their own validators, and they don't need to worry as much about other tools accidentally sending them malformed/broken data.
> making the data format simple
I think where we might be disagreeing is that I argue more specific data formats that inherently block validation errors are simpler than vague formats where there are restrictions and errors you can make, but those restrictions aren't clearly documented and aren't obvious until after you try to import the data.
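A tiny sketch of what blocking the error at the data level can look like in code (the Ring type and its rules are just an illustration, not anything from OSM):

```python
# Hypothetical sketch: a ring type whose constructor refuses degenerate input,
# so downstream tools never have to re-validate it.
class Ring:
    def __init__(self, points):
        if len(points) < 3:
            raise ValueError("a ring needs at least 3 points")
        if points[0] != points[-1]:
            points = points + [points[0]]     # normalize: always store closed rings
        self.points = points

ok = Ring([(0, 0), (4, 0), (4, 4)])
print(len(ok.points))                         # -> 4, closed automatically

try:
    Ring([(0, 0), (4, 0)])                    # rejected before any tool sees it
except ValueError as err:
    print(err)
```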
I would point to something like the Matrix specification -- they have put comparatively more work into making sure that the Matrix specification (while flexible) is consistent, they don't want clients randomly making a bunch of changes or assumptions about the data format. That's partially inspired by looking back at standards like Jabber and seeing that having a lack of consensus about data formats caused tools to become extremely fragmented and hard to coordinate with each other. See https://news.ycombinator.com/item?id=17064616 for more information on that.
My feeling is that when you introduce validation layers, you have not actually gotten rid of restrictions between user applications, and you have not actually made coordination simpler, because different tools are going to break when they see pieces of data that they consider invalid or that they didn't realize they needed to be able to handle. All that's really happened is that complexity has been moved into the individual applications and that logic has been duplicated across a bunch of different apps.
In contrast, when every single tool is speaking the same language and agrees on what is and isn't valid data, it's very fast to build tools that you know will be compatible with everything else in the ecosystem.
I'm thinking of Markdown as an example of a format with loose validation rules and a low entry barrier.
Sure, having several slightly incompatible versions with different degrees of completeness is a pain in the ass for rendering. But insisting on a single format (such as headings can only be made with '#' and not '-----', tables can only be '|--', list items can only be '-' and not '*', etc.) and rejecting any other user input as invalid would be way worse in terms of its purpose as an easy-to-learn, easy-to-read text-only format.
:) This is a really interesting conversation, because we keep aligning on some things and then reaching opposite conclusions.
I agree that Markdown has loose validation rules and a low entry barrier for writing, and having a low entry barrier for writing is nice, and I do think it's a good example, but just in the opposite direction. I think that Markdown's inconsistent implementations are one of the format's greatest weaknesses and have made the ecosystem harder to work with than necessary.
I generally feel like when I'm working with Markdown I can only rely on the lowest common denominator syntax being supported, and everything else I need to look up documentation for the specific platform/tool I'm using. It's cool that Markdown can be extended, but in practice I've found that Markdown extensions might as well be program-specific syntaxes, since I can't rely on the extension working anywhere else.
Markdown is saved a little bit by virtue of not actually needing to be rendered at all in order to be readable, so in some cases I've taken to treating Markdown as a format that should never be parsed/formatted in the first place and just treated like any other text file. But I'm not sure that philosophy works with mapping software; I think those formats need to be parsed sometimes.
This might get back a little bit to a disagreement over what simplicity means. Markdown is simple to write, but not simple to write in a way where you know it'll be compatible with every tool. It's simple to parse if you don't worry about compatibility with the rest of the ecosystem, but if you're trying to be robust about handling different variants/implementations, then it becomes a lot more complicated.
> I agree that Markdown has loose validation rules and a low entry barrier for writing, and having a low entry barrier for writing is nice, and I do think it's a good example, but just in the opposite direction. I think that Markdown's inconsistent implementations are one of the format's greatest weaknesses and have made the ecosystem harder to work with than necessary.
Maybe, but they're also what makes it worthwhile and what made its widespread adoption possible to begin with.
> I generally feel like when I'm working with Markdown I can only rely on the lowest common denominator syntax being supported, and everything else I need to look up documentation for the specific platform/tool I'm using. It's cool that Markdown can be extended, but in practice I've found that Markdown extensions might as well be program-specific syntaxes, since I can't rely on the extension working anywhere else.
I do not see that as an essential problem limiting its value. It would be if you wanted to use Markdown as a universal content-representation platform, but if you wanted that, you would be using another, more complex format, like AsciiDoc. Creating your own local ecosystem is to be expected with a tool of this nature, and it's only possible because there wasn't a designer putting unwanted features in there that you don't need but that prevent you from achieving what you want with the format.
> This might get back a little bit to a disagreement over what simplicity means. Markdown is simple to write, but not simple to write in a way where you know it'll be compatible with every tool.
This may be the origin of the disagreement. You're thinking of information that should be compatible with every tool, but that's not the kind of information system I'm talking about. Open data systems may have a common core, but it's to be expected that different people will use them in different ways, for different purposes and different needs. This means that not everyone will use the same tools with them. OSM data has that same nature, as an open data platform that could be reused in widely different contexts and tools.
Think of programs written in C. It's nice that you can compile simple C programs with any C compiler, but you wouldn't expect this to be possible for every program on every platform; the possibilities of programming software are just too wide and diverse, so you need to adapt your particular C program to the quirks of your specific compiler and development platform. Insisting that everybody uses exactly the same restrictive version of the language would only impede or hinder some of the uses that people have for it.
I think it's worthwhile to have efforts to converge implementations toward an agreed simplified standard, but they should work in an organic, evolutionary way, rather than by imposing a new design that replaces the old. Following the C example, you can build the C99, C11, C17 standards, but you wouldn't declare previous programs obsolete when the standard is published; instead, you would make sure that old programs are still compatible with the new standard, and only deprecate unwanted features slowly and with a long response time, "herding" the community into the new way of working. This way, if the design decisions turn out to be based on wrong or incomplete assumptions, there's ample opportunity to rethink them and reorient the design.
> You're thinking of information that should be compatible with every tool, but that's not the kind of information system I'm talking about.
You're right, I am thinking of that. However, that's what OSM is, isn't it? It's more than a common format that stays localized to each device/program and varies between each one; it's a common database that everyone pulls from. We do want all of the data in the OSM database to be compatible with every tool that reads from it. And we want all of the data submitted to the OSM database to work with every single compliant program that might pull from it.
Outside of the OSM database, we want a common definition of map features where we know that generating data in this format will allow it to be read by any program that conforms to the standard. It's the same way as how when we save a JPEG image, ideally we want it to open and display the same image in every single viewer that correctly supports the JPEG standard. We don't want different viewers to have arbitrarily different standards or variations on what is and isn't a valid JPEG file, we want common consensus on how to make a valid image.
I agree that what you are saying would be true for information that doesn't need to be compatible with every tool. I don't understand why you're putting OSM into that category, as far as I can tell OSM is entirely about sharing data in a universally consumable way.
> Insisting that everybody uses exactly the same restrictive version of the language would only impede or hinder some of the uses that people have for it.
Isn't this part of the reason why the Web has started devouring native platforms? Write once, run anywhere on any device or OS. And even on the Web, incompatibilities between different web platforms and the need for progressive enhancement is something that we live with because we don't have an alternative. We still pretty rigorously define how browsers are supposed to act and interpret JS. A big part of the success of JS is that within reason, you can write your code once and it will work in every modern browser, and browser deviations from the JS spec are (or rather, should be) treated as bugs in the browser.
Even taking it a step further, isn't a huge part of the buzz about WASM the ability to have a universal VM that can be targeted by any language and then run on both the Web and in native interpreters in a predictable way? A lot of excitement I see around WASM is that it is more rigorously defined than JS is, and that it is trying to be something close to a universal runtime.
> Following the C example, you can build the C99, C11, C17 standards, but you wouldn't declare previous programs obsolete when the standard is published; instead, you would make sure that old programs are still compatible with the new standard, and only deprecate unwanted features slowly and with a long response time
I sort of see what you're saying at the start of this sentence, but the second part throws me off. Most specs that iterate or develop over time break compatibility with old standards; Python 2 code won't run under a Python 3 interpreter. It's pretty common for programs to need to be altered and recompiled as newer versions of a language come out and as they're hooked into newer APIs.
Situations like the Web (where we try to maintain universal backwards compatibility even as the API grows) are really the exception to the rule, and while I do think specifically in the case of the Web it's good that we force backwards compatibility, holding to that standard comes with significant additional difficulties and downsides that we have to constantly mitigate.
And I still don't understand what this has to do with standardizing the format for data that is explicitly designed to be shared and generated among a lot of different programs. This isn't a situation where we want each program to have a slightly different view of what valid OSM data is, because we want them all to be compatible with a central database of information, and we want them to submit data to that database that is compatible with every other program that pulls from it.
Of course, in situations where that isn't required, where software isn't working with map data for the purpose of submitting it back up to the OSM project, people are welcome to keep using the old format; nobody can force them to use the new one. Those programs won't be as compatible with as many things, but if I'm understanding correctly, you're saying it's OK for the ecosystem to be a little fractured in that way and for some programs to be incompatible with each other? And if that's the case, I still don't see what the problem is.
For programs that you don't think need to be universally compatible with other programs, use the old format. When submitting to a database that is designed to be a universal repository of map data that anyone can pull from, use the new format to maximize compatibility. Unless I'm missing something else, that seems like it solves both problems?
[1] https://wiki.openstreetmap.org/wiki/OSM_Inspector/Views/Mult...