My honours project was on the problems of tracking users across different websites using cookies.
The problems are numerous. I think I went through nine revisions of the "naive protocol" (including, at one point, ditching the Referer header in favour of HTTPS). Subsequently I realised that my tracking protocol was broken anyhow.
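To make the failure mode concrete, here's a minimal sketch of the sort of naive cookie-plus-Referer tracking pixel that approach boils down to. Everything here (the `uid` cookie, the endpoint, the logging) is my own illustration, not the project's actual protocol:

```python
# Naive third-party tracking pixel: set a user ID cookie and log the
# Referer header of whichever publisher page embedded the pixel.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# 1x1 transparent GIF, the classic tracking-pixel payload
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reuse the visitor's cookie if present, otherwise mint a new ID.
        cookies = self.headers.get("Cookie", "")
        uid = next((c.split("=", 1)[1] for c in cookies.split("; ")
                    if c.startswith("uid=")), None) or uuid.uuid4().hex
        # The Referer header says which page embedded the pixel -- but both
        # it and the cookie are trivially forged, stripped, or replayed by
        # a malicious user or publisher.
        print(f"user={uid} visited={self.headers.get('Referer', '<none>')}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Set-Cookie", f"uid={uid}; Path=/")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackerHandler).serve_forever()
```

The root of the problem is visible right in the sketch: every input the tracker relies on is supplied by the client, so nothing stops a hostile user or publisher from lying.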
My conclusion is that there's no reliable way to track users visiting multiple websites using the standard features of HTML/JS/HTTP in the face of malicious users or publishers.
You have to fall back on traffic analysis.
I developed a successor technology which works better in many respects (but not all). As it's the subject of a current patent application, I can't really go into much detail.