I'm trying to imagine someone reliably distinguishing minimums.unsettled.depends (Idaho), minimums.unsettled.depend (Alaska), minimums.unsettles.depend (Spain), and minimum.unsettles.depend (Russia) while typing them in on a T9-style keypad with a seven-character display in turbulence.
The word list is 40,000 words long, so without plurals there probably aren't enough words that people can spell or even pronounce. A better fix would be making it "what four words" - I wonder if they'd already committed too much to the "three" concept before discovering the flaw? Either way, using phony statistics to make unwarranted claims of accuracy is a poor workaround.
Since the app gives you the words to say, and translates those back to coordinates on the receiving end, in theory they could alter the word list, at the cost of making any written-down version obsolete.
Maybe they should release a new service called What4ActuallyVettedWordsAndWordCombinations ;)
Something like what3words might be useful, but what3words itself doesn't have enough "auditory distance" between words (i.e., many of the words it uses sound similar enough to be indistinguishable over a noisy audio channel).
Something like FixPhrase seems better for use over radio.
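To make "auditory distance" concrete, here's a crude screening sketch in Python with a toy Soundex implementation. The sample words are hypothetical near-homophones, not pulled from the actual w3w list:

    # Toy Soundex: collapse a word to its first letter plus up to three
    # digit codes, so similar-sounding words map to the same key.
    from itertools import combinations

    def soundex(word):
        codes = {c: d for cs, d in [("bfpv", "1"), ("cgjkqsxz", "2"),
                 ("dt", "3"), ("l", "4"), ("mn", "5"), ("r", "6")] for c in cs}
        word = word.lower()
        out, prev = word[0].upper(), codes.get(word[0], "")
        for c in word[1:]:
            d = codes.get(c, "")
            if d and d != prev:
                out += d
            if c not in "hw":       # h and w don't reset the previous code
                prev = d
        return (out + "000")[:4]

    # Hypothetical near-homophones of the kind a vetted list should reject
    words = ["depend", "depends", "deepened", "sale", "sail"]
    for a, b in combinations(words, 2):
        if soundex(a) == soundex(b):
            print(f"possible audio collision: {a} / {b}")

Soundex is a blunt instrument (everything keys off the first letter), but even this catches depend/depends/deepened-style collisions that a 40,000-word list is bound to contain.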
There are a number of word lists whose words were picked for their beneficial properties in exactly this use case: needing to be understood verbally over unclear connections. The NATO phonetic alphabet and the PGP word list come to mind: https://en.wikipedia.org/wiki/PGP_word_list
I'm particularly a fan of the PGP word list (it would definitely require more than 3 words for this purpose, though) because it has built-in detection of transpositions, insertions, and deletions: separate word lists are used for even and odd byte positions. This makes it, IMHO, fairly ideal for use over verbal channels. From the wiki: "The words were carefully chosen for their phonetic distinctiveness, using genetic algorithms to select lists of words that had optimum separations in phoneme space"
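A minimal sketch of the scheme, with tiny four-word stand-ins for the real lists (the actual even list is 256 two-syllable words, the odd list 256 three-syllable words, and a real encoder maps the full byte value rather than byte mod 4):

    # Even byte positions draw from one list, odd positions from another.
    # A dropped, added, or swapped word puts later words in the wrong
    # list, which the decoder can detect immediately.
    EVEN = ["adult", "aztec", "baboon", "belfast"]              # 2-syllable
    ODD = ["adroitness", "adviser", "aftermath", "aggregate"]   # 3-syllable

    def encode(data: bytes) -> list[str]:
        return [(EVEN if i % 2 == 0 else ODD)[b % 4] for i, b in enumerate(data)]

    def decode(words: list[str]) -> bytes:
        out = []
        for i, w in enumerate(words):
            lst = EVEN if i % 2 == 0 else ODD
            if w not in lst:
                raise ValueError(f"word {i} ({w!r}) is from the wrong list: "
                                 "likely an insertion, deletion, or swap")
            out.append(lst.index(w))
        return bytes(out)

    phrase = encode(b"\x00\x01\x02")   # ['adult', 'adviser', 'baboon']
    try:
        decode(phrase[1:])             # a word was dropped in transmission...
    except ValueError as e:
        print(e)                       # ...and the parity check catches it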
It sounds like the w3w folks did not do any such thing.
EDIT: According to my napkin math, 6 PGP words should be enough to cover the 64 trillion squares that what3words covers, but with far better properties such as error detection and phonetic distinctiveness (and the space is just over 4 times larger, so it could achieve a resolution of about 5 feet instead of 10).
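The arithmetic, for anyone who wants to check it (assumes w3w's published 40,000-word list and the two 256-word PGP lists):

    w3w_combos = 40_000 ** 3   # 6.4e13: the "64 trillion" figure
    pgp_combos = 256 ** 6      # six words from two alternating 256-word lists
    print(f"{pgp_combos / w3w_combos:.1f}x")   # ~4.4x more addresses
    # Over the same area, 4.4x the cells shrinks each cell's side by
    # sqrt(4.4) ~ 2.1: from ~3 m (~10 ft) to ~1.4 m (~4.7 ft), i.e. ~5 ft.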
As a New Zealander, I find the PGP list unfriendly: plenty of its words are hard to spell or are too US-centric.
dogsled (contains a silent d, and "sleigh" might be the British spelling)
Galveston (I've never heard of the place)
Geiger (easy to type i before e - the ei order isn't obvious)
Wichita (I would have guessed the spelling began with which or witch)
And why did the designers not give the words some connection to the numbers? E.g., there are 12 even and 12 odd words beginning with E; add 16 more E words and you could use E words for E0 to EF (see the sketch below). Redundant encoding like that helps humans, and would help when scanning for errors or matches too.
I imagine it is even harder for ESOL speakers from other countries! I am sure the UI has completion to help, but I wouldn't recommend using that list for anything except a purely US audience.
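To illustrate the redundant-encoding idea, here's a hypothetical sketch where a word's first letter names the high hex nibble. The word lists are invented and cover only two nibbles; a real scheme needs 16 letters x 16 words:

    # Hypothetical redundant encoding: "E" words cover E0-EF, "D" words
    # cover D0-DF, etc. Hearing an E-word confirms the E nibble for free.
    WORDS = {
        "D": ["dakota", "dance", "debris", "decade"],   # D0, D1, D2, D3
        "E": ["eagle", "easel", "echo", "eclipse"],     # E0, E1, E2, E3
    }

    def word_for(byte: int) -> str:
        hi, lo = byte >> 4, byte & 0xF
        letter = "0123456789ABCDEF"[hi]
        return WORDS[letter][lo]    # toy lists only cover lo < 4 here

    print(word_for(0xE2))   # -> 'echo'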
I have been to Galveston and I can assure you that you have not missed anything. There is no good reason to visit or know anything about it.
Making a word list that works well for speakers of different English dialects and for speakers of English as a second language sounds really hard. Has such a list ever been made?
Probably it is too hard, so we will continue to ignore the problem.
It should be discussed like this! It's clear that the w3w people didn't even do the bare minimum here!
The thing is, once you agree that some words are subpar or need translations, you can do a 1-to-1 mapping.
The problem with What3Words is that supporting the original word set will always be a pain, even if they release a v2 word set with a 1-to-1 mapping. (I believe they've already released versions for other languages?)
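The mapping itself is cheap, though. Something like this (hypothetical replacement words, not real w3w data) keeps every old phrase decodable while new phrases only ever use vetted words:

    # Normalize any v1 word to its v2 replacement before lookup, so old
    # written-down addresses keep working forever.
    V1_TO_V2 = {"depends": "lantern", "unsettles": "granite"}   # subpar -> vetted

    def normalize(words: list[str]) -> list[str]:
        return [V1_TO_V2.get(w, w) for w in words]

    print(normalize(["minimums", "unsettles", "depends"]))
    # -> ['minimums', 'granite', 'lantern']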
Re: Geiger - the parser could trivially accept misspellings of words.
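For example, a sketch using Python's difflib (the 0.8 cutoff is a guess, and the word list is just an excerpt):

    # Snap user input to the closest word on the fixed, known list
    # before decoding, so "Gieger" still resolves to "geiger".
    import difflib

    WORD_LIST = ["galveston", "geiger", "wichita", "dogsled"]   # excerpt

    def correct(word: str) -> str | None:
        hits = difflib.get_close_matches(word.lower(), WORD_LIST, n=1, cutoff=0.8)
        return hits[0] if hits else None

    print(correct("Gieger"))     # -> 'geiger'
    print(correct("Witchita"))   # -> 'wichita'

This only works because the word list is fixed and known in advance, which is exactly the property that makes spelling quirks a UI problem rather than a protocol problem.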
I mean, there is already the ICAO phonetic alphabet, known and used by every single licensed pilot the world over, regardless of their native language.
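Spelling any identifier that way is trivial; a sketch (the call sign is just an example, and digits are omitted for brevity):

    # Spell a string in the ICAO/NATO alphabet every pilot already knows.
    ICAO = dict(zip("ABCDEFGHIJKLMNOPQRSTUVWXYZ",
                    ["Alfa", "Bravo", "Charlie", "Delta", "Echo", "Foxtrot",
                     "Golf", "Hotel", "India", "Juliett", "Kilo", "Lima",
                     "Mike", "November", "Oscar", "Papa", "Quebec", "Romeo",
                     "Sierra", "Tango", "Uniform", "Victor", "Whiskey",
                     "X-ray", "Yankee", "Zulu"]))

    def spell(text: str) -> str:
        return " ".join(ICAO.get(c, c) for c in text.upper())

    print(spell("KJFK"))   # -> 'Kilo Juliett Foxtrot Kilo'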
or, or... hang with me here for a minute...
We could instead use one of these cool new hash algorithms that require a computer and use about fifteen thousand English words! I understand they're all the rage in third-world countries that lack a postal system.
These are all lovely technical solutions. The problem, I imagine, isn't coming up with unique words; it's organizing a switchover for dozens if not hundreds of systems and agencies around the world. The chaos of change probably outweighs the benefits.
what3words is not useful at all.
1) The FAA (and thus the world) has a hard 8-character limit, to support old mainframes running ancient Unix dispatch software (Delta, I'm looking at you).
2) Cockpit computers have limited characters on screen. An FMC (Flight Management Computer) can display 28 characters x 16 rows at best; most are 8 rows, and some military aircraft have 2. The FMC is really just an old embedded chip.
3) The entire airline, flight, tourism, booking, and ticketing system of the world would need to change: all legacy systems, all paper charts, all maps, all BMSs, all AirBosses, all ATC software, all radio beacons.
There is no chance that any of this will change simply because someone came up with a way to associate words with landmarks you can't see from the air.
You're saying what-3-words (W3W) is unsuitable for safety-critical applications? /s