In the near term, not much. I think Spanner uses GPS-synchronized rubidium clocks, which form an excellent combination. If you want to do much, much better (these new optical clocks are sensitive to the change in gravitational potential over a height difference of mere centimeters), you'd need one of these.
The primary utility of these clocks in the near term is higher-precision tests of fundamental physics. In the long run, they may form better standards to replace cesium.
Another way to see this is that few, if any, industrial users have their own cesium or cesium-competitive optical clocks, which are already a factor of ~1000 worse than these new clocks.
A neat thing about those seems to be that you shouldn't even need persistent GPS to keep them synced. For most applications, a once-in-a-lifetime sync should be enough.
Google's Spanner goes really far down this line of reasoning. Read the paper; it's great.
Basically, highly precise clocks in a machine-local context would substantially reduce drift, but you still can never really synchronize two clocks reliably across a network (unknown, potentially asymmetric send and receive delays make it basically impossible).
Maybe there's some literature on this I haven't read, but I don't think it would eliminate the kind of work you currently have to do to solve this problem (i.e. what Spanner has done), though it would certainly narrow the error bars on their TrueTime intervals quite a bit.
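To make that concrete, here's a minimal sketch (not Spanner's actual API, and the epsilon numbers are purely illustrative) of why narrower uncertainty matters: TrueTime reports the current time as an interval, and a transaction's commit wait scales with the width of that interval.

    # Minimal sketch, NOT Spanner's real API; epsilon values below are made up.
    # TrueTime reports time as an interval [earliest, latest]; a commit must wait
    # out roughly that interval's width so its timestamp is safely in the past
    # for every observer.
    import time
    from dataclasses import dataclass

    @dataclass
    class TTInterval:
        earliest: float  # seconds since the epoch
        latest: float

    def tt_now(epsilon_s: float) -> TTInterval:
        """True time is unknown, but guaranteed to lie within +/- epsilon_s."""
        t = time.time()
        return TTInterval(t - epsilon_s, t + epsilon_s)

    def commit_wait_s(epsilon_s: float) -> float:
        """Approximate commit wait: the interval width, i.e. ~2 * epsilon."""
        now = tt_now(epsilon_s)
        return now.latest - now.earliest

    print(commit_wait_s(0.004))  # GPS/rubidium-class uncertainty: ~8 ms of waiting
    print(commit_wait_s(1e-6))   # a much better local clock: ~2 us of waiting

Shrinking epsilon doesn't change the design at all; it just makes the mandatory wait (and the externally visible latency) that much smaller.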
I think a very basic reason it shouldn't be possible is that, for really accurate sync, each device has an unknown bias (inaccuracy) in addition to a high standard deviation (imprecision) in its network/processing behavior. If you average a lot of samples you can get rid of all the imprecision (albeit very slowly), but you essentially need a very accurate clock to find the bias, which makes the whole network approach useless. I would guess you couldn't even solve for the bias with such a clock once and then keep using the network, because the bias should be time-varying.
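A toy illustration of that point (the bias and jitter numbers are invented): averaging many offset samples shrinks the random scatter like 1/sqrt(n), but a fixed path asymmetry survives the averaging untouched.

    # Toy example: averaging removes imprecision (jitter) but not bias.
    import random
    import statistics

    BIAS_US = 120.0    # unknown systematic offset (e.g. asymmetric route), microseconds
    JITTER_US = 500.0  # random per-sample noise, microseconds

    def one_offset_sample() -> float:
        """One NTP-style offset estimate: true offset is 0, plus bias, plus noise."""
        return BIAS_US + random.gauss(0.0, JITTER_US)

    for n in (1, 100, 10_000):
        estimate = statistics.fmean(one_offset_sample() for _ in range(n))
        print(f"n={n:>6}  estimated offset ~ {estimate:8.1f} us (true offset is 0)")
    # The estimate converges to ~120 us, not 0: no amount of averaging reveals the bias.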
I was really thinking that the lack of drift in these clocks means we don't need to periodically sync them. If they are synced initially, without a network, we can rely on them indefinitely.
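Rough back-of-envelope, assuming the ~10^-18 fractional accuracy quoted elsewhere in the thread and a roughly constant frequency offset: the clock gains or loses about 10^-18 s per elapsed second, which works out to about 1 ns of drift per 10^9 s, i.e. roughly 30 years. So an initial sync really could last the hardware's lifetime.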
I'm confused by the use of "accuracy" and "precision" in this article.
From the IEEE article: "[ultranarrow lasers] will make it practical for us to achieve an accuracy below 10^-18–more than 100 times the precision of cesium clocks."
In general, when talking about a class of measuring devices, the expected accuracy of the measurement cannot exceed its expected precision (the converse is not true, however).
That is, the expected difference between two randomly selected devices in the class attempting to measure the same true value is, up to a factor of two (by the triangle inequality), a lower bound on the expected difference between the measurement of one device in the class and the true value.
Or, looked at a different way, if you can't shoot a tight grouping (independent of where on the target it clusters), you can't shoot a tight grouping around the bullseye.
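A toy Monte Carlo of that bound (the device bias and noise figures are made up): by the triangle inequality, |x1 - x2| <= |x1 - t| + |x2 - t|, so half the average pairwise spread can never exceed the average error against the true value.

    # Toy check: pairwise spread ("precision") bounds error vs. truth ("accuracy").
    import random
    import statistics

    TRUE_VALUE = 0.0

    def measure() -> float:
        # Hypothetical device class: biased by 3.0, noisy with sigma 1.0.
        return TRUE_VALUE + 3.0 + random.gauss(0.0, 1.0)

    pairs = [(measure(), measure()) for _ in range(100_000)]
    pairwise_spread = statistics.fmean(abs(a - b) for a, b in pairs)          # precision
    error_vs_truth = statistics.fmean(abs(a - TRUE_VALUE) for a, _ in pairs)  # accuracy

    print(f"mean |x1 - x2|    = {pairwise_spread:.3f}")
    print(f"mean |x - true|   = {error_vs_truth:.3f}")
    print(f"bound: spread / 2 = {pairwise_spread / 2:.3f}  <=  {error_vs_truth:.3f}")

The biased-but-tight shooter shows up clearly: the grouping is tight (small pairwise spread), yet every shot is far from the bullseye, and the spread/2 bound still holds.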