It’s a pretty poor implementation that is basically matching on the lowest common denominator, by platform rather than by library or framework. An ASP.NET website is fully independent of a WCF vulnerability. They can coexist but definitely don’t have to.
Additional suggestion: many times the home page links to many different technologies. Crawl all first-level directory indices to see the different techs. E.g. we have a XenForo-powered forum at /forums, a WordPress blog at /blog, a custom ASP.NET CMS at /store, a .NET Core web app at /foo, etc.
For most companies past a certain age/size that aren't dedicated solely to a single app, the domain index effectively turns into a static HTML page.
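The per-path fingerprinting idea can be sketched in a few lines. The signature table and paths below are purely illustrative (not Wappalyzer's actual pattern set), and the HTTP fetcher is injected so the sketch stays client-agnostic:

```python
import re

# Illustrative only; real fingerprinters such as Wappalyzer ship
# thousands of patterns per technology.
GENERATOR_RE = re.compile(
    r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)', re.I)

def tech_hints(headers, html):
    """Extract coarse technology hints from one HTTP response."""
    hints = set()
    if "Varnish" in headers.get("Via", "") or "Varnish" in headers.get("Server", ""):
        hints.add("Varnish")
    if "ASP.NET" in headers.get("X-Powered-By", ""):
        hints.add("ASP.NET")
    m = GENERATOR_RE.search(html)
    if m:
        hints.add(m.group(1).split()[0])  # "WordPress 6.4" -> "WordPress"
    return hints

def crawl_paths(fetch, paths):
    """Fingerprint each first-level path. `fetch(path)` returns
    (headers, html); inject urllib/requests/whatever here."""
    return {p: tech_hints(*fetch(p)) for p in paths}
```

With a stub fetcher, `crawl_paths(fetch, ["/blog", "/store"])` would report a different tech per directory, which is exactly the multi-app layout described above.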
Creator here. We built this using Wappalyzer to detect the software given a URL and match it against our database of CVEs and thought it might be a fun little tool.
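A toy version of that matching step might look like the following. The CVE IDs are example entries from public advisories, not the tool's actual database, and a real service would query a feed like NVD and compare affected version ranges:

```python
# Hard-coded example database, for illustration only. A real service
# would query a CVE feed (e.g. NVD) and also check version ranges.
CVE_DB = {
    "WordPress": ["CVE-2022-21661"],  # example entry only
    "Varnish": ["CVE-2022-45060"],    # example entry only
}

def cves_for(detected):
    """Map detected technology names to known CVE identifiers."""
    return {tech: CVE_DB.get(tech, []) for tech in sorted(detected)}
```

Note that without version numbers (see the point raised below), every CVE ever filed against a technology is a potential match, which is why the results are alerts rather than findings.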
This is a nice addendum to the "Let Us Identify Your Stack" style of web service, though I guess some of them might already provide this.
It does have the somewhat negative effect of making potentially vulnerable websites more visible to lower-order hackers (I'm assuming more proficient ones have automated discovery tools like this anyway).
It's always a possibility, but this tool doesn't look for version numbers, so it's more time-consuming to narrow down whether a site is actually vulnerable to anything.
Finally, a place that can gather IP addresses and associate them to specific security products to have them hacked later. Just what I've been waiting for.
I'm always surprised by the mindset of the person you're replying to.
I learned a few years ago from some DEFCON video[0] that someone had figured out a way to do a (basic) port scan of the whole internet in ~1 day (or something like that).
Thing is... it really shouldn't have been that surprising. Although network latency isn't getting that much better year by year (c = c), the amount of data you can process in bulk, correlate, etc. is ever-increasing.
To be fair, anything sending traffic like this is probably going to get you blocked by networks:
"This program spews out packets very fast. On Windows, or from VMs, it can do 300,000 packets/second. On Linux (no virtualization) it'll do 1.6 million packets-per-second. That's fast enough to melt most networks."
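Taking that quoted rate at face value, a back-of-the-envelope check shows why the "~1 day" figure above is, if anything, conservative:

```python
# Sanity-check the quoted masscan figure: one SYN probe per IPv4
# address at the quoted 1.6 million packets per second.
ADDRESSES = 2 ** 32   # whole IPv4 space, ignoring reserved ranges
RATE = 1_600_000      # packets per second, per the quote above
seconds = ADDRESSES / RATE
print(f"~{seconds / 3600:.1f} hours for a single port")
```

That's well under an hour per port, ignoring retransmits and response handling, so a one-day whole-internet scan needs nothing exotic.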
Port scanning is a real and established thing that anyone who even thinks about security has known about for decades, but port scans don't tell everyone what your whole stack consists of. Maybe you'd like to share why broadcasting your stack to everyone is a good idea? I'd really like to know. Thank you.
Do you think that fingerprinting is qualitatively different from port scanning? My main point was just that a port scan can immediately identify the 0.1% (or whatever) of the Internet that you're interested in, which you can then try more "invasive" probing on.
(I should add that IPv6 does make the whole "port-scan the Internet" business a tad more complicated, so that's an argument against me.)
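To put numbers on that: reusing the (assumed) 1.6 million packets/second rate quoted earlier in the thread, the 128-bit address space is out of brute-force reach by dozens of orders of magnitude:

```python
# Why brute-force scanning doesn't carry over to IPv6: compare the time
# to cover each address space once at an assumed 1.6M packets/second.
RATE = 1_600_000                          # packets per second (assumption)
ipv4_hours = 2 ** 32 / RATE / 3600        # well under a working day
ipv6_years = 2 ** 128 / RATE / (3600 * 24 * 365)
print(f"IPv4: {ipv4_hours:.1f} hours, IPv6: {ipv6_years:.1e} years")
```

Which is why IPv6 host discovery in practice leans on hints like DNS, certificate transparency logs, and predictable address patterns rather than exhaustive sweeps.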
It's not you, but the implicit idea of trust. How are three people I know nothing about going to give me better results than a known name? Best of luck establishing yourselves though.
Those self-managing their machines and sites may worry that a breaking change or update could cause downtime; LXD/Docker could simplify that and confine the risk to individual containers.
I don't need to provide my (potentially vulnerable) production URL to whoever-you-might-be in order to identify the last 6 months of vulnerabilities - I can just google for that.
Submitting your site to this is just asking for trouble.
I’d suggest that running a vulnerable production service is asking for trouble!
My web logs are full of automated scanners. Once when I ran a vulnerable version of WordPress it got discovered and pwned very quickly. No need to enter the URL in any website ;)
This just seems like a mailing list for CVE alerts for popular software. If you put in HN, it'll say that it failed to detect the stack, and then ask you to choose your software and then enter your email to receive alerts.
It's kind of clever marketing, giving people a sense that they're going to get a security audit in exchange for an email address.
The first URL I entered (coop.co.uk) was actually pretty awesome, it detected Varnish and showed a critical CVE from last week. That’s cool.
I hope that if you subscribe, the site regularly rescans your stack and notices when it's changed. Otherwise it's just a mailing-list subscription that goes out of date and therefore stops being useful.
Anyone can scan (and people do scan) the internet for hosts on port 80/443, and unless your site is only reachable via virtual hosts with no HTTPS certificates issued to any of your domains, it's going to be discovered and probed exactly as this site does anyway. The difference is that real adversaries do it without you knowing.