
I think he means it the other way around: you have a single-page application with routing based on #someRoute, then someone tries to share a link with #targetText=..., which will lead to an invalid route. Or am I misunderstanding something?
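
A minimal sketch of the kind of hash router being described (the route names and render helper are made up for illustration): a shared link ending in #targetText=... matches none of the known routes and falls through to the not-found branch.

    // Hypothetical hash router, for illustration only.
    const routes = {
      '#/home':    () => render('home'),
      '#/profile': () => render('profile'),
    };

    function render(view) {
      document.body.textContent = 'view: ' + view;
    }

    function onHashChange() {
      const handler = routes[window.location.hash];
      if (handler) {
        handler();
      } else {
        render('not found'); // "#targetText=some%20text" ends up here
      }
    }

    window.addEventListener('hashchange', onHashChange);
    onHashChange(); // handle the URL the page was opened with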



You’re thinking of ?targetText= or &targetText=

Hash isn’t used for query parameters.


Not for query parameters sent to the remote server, but there are definitely pages/applications that store state in the hash.

For example, look at the URLs used by mega.nz, or any encrypted pastebin (they store the decryption key in the hash so it's not sent to the server).
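
An illustrative sketch (not mega.nz's or any particular pastebin's actual code) of why that works: the browser never sends the fragment in the HTTP request, so the server only sees the opaque paste id while the key stays client-side.

    // Illustrative only; the URL and key are placeholders.
    const url = new URL('https://paste.example/p/abc123#base64-key-material');
    console.log(url.pathname); // "/p/abc123"            -> sent to the server
    console.log(url.hash);     // "#base64-key-material" -> never leaves the browser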


Backbone.js uses the hash for routing; case in point: http://backbonejs.org/#Router. There should be a lot of sites from the early SPA era that still use hash params as routes.
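
For reference, a minimal router in the style of the linked Backbone docs (route names are illustrative, and this assumes Backbone and its dependencies are already loaded):

    // Backbone maps recognized fragments to handlers; an unrecognized hash
    // such as "#targetText=..." simply matches no route.
    var AppRouter = Backbone.Router.extend({
      routes: {
        'help':      'help',     // matches #help
        'items/:id': 'showItem'  // matches #items/3
      },
      help:     function ()   { console.log('help page'); },
      showItem: function (id) { console.log('item ' + id); }
    });

    new AppRouter();
    Backbone.history.start(); // start listening for hash changes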


It was used in SPAs. It was a common technique in the mid-2000s up to 2010 or so.

But not for query parameters per se; rather, for app state stored in the hash (and bookmarkable).

https://stackoverflow.com/questions/15238391/hash-params-vs-...
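
A sketch of that pattern (the state keys are illustrative, and modern helpers like URLSearchParams are used here only for brevity): serialize view state into the fragment so the URL is bookmarkable without a server round-trip.

    function saveState(state) {
      // e.g. https://app.example/#tab=inbox&page=2
      window.location.hash = new URLSearchParams(state).toString();
    }

    function loadState() {
      return Object.fromEntries(
        new URLSearchParams(window.location.hash.slice(1))
      );
    }

    saveState({ tab: 'inbox', page: '2' });
    console.log(loadState()); // { tab: "inbox", page: "2" }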



Doesn't seem to break on the addition of a targetText "hash-param".


It is in client-side JS single-page-apps...


Lots of older or poorly written SPAs use the hash exactly in this way. I guess it's their fault for being poorly written, like the people who allowed GET requests to /deleteaccount on the theory that you had to be logged in with a browser for it to ever happen, not considering that Google would ship an extension that issued every GET request on a page on entry.


They didn't use it because they were "poorly written"; it was a "state of the art" technique back in the day.

Gmail used it, others did too.

https://stackoverflow.com/questions/15238391/hash-params-vs-...


Sure, it was state of the art back in the day, but state of the art over time depreciates into technical debt, and the ones that are left using what was once state of the art are now poorly written.


>and the ones that are left using what was once state of the art are now poorly written.

They could be perfectly written (for their time), just legacy and not updated; that's my distinction.


At some point, the accretion of legacy code and the failure to fix issues as understanding improves turn an application that was perfectly written for its time into one that is poorly written for the present.


That doesn't give Google the right to break all those sites though.


Right, I guess I should have indicated sarcasm on the "I guess it's their fault" part.




