As a seasoned (although retired) pen-tester, I wanted to say you'll have some serious problems with ratings when it comes to results.
When a company hires pen-testers and the pen-testers do or don't find things, the company has no idea about the coverage. The pen-test team might have missed a great deal or identified everything. I'm sure you've seen in the real world that even the same members of the same pen-test team can find different issues on the same test.
Therefore one of the biggest problems is actually knowing whether they are good at what they do. It's easy to rate communication skills, responsiveness, attitude, report quality, etc., but it's very hard to rate the quality of the results (which is the real reason for carrying out a pen-test).
When they don't find something, maybe there really is nothing there.
When they find something, maybe there is more there. The customer has no idea at that point. Only a fair amount of time later will they figure out the coverage / vulnerability-finding quality.
I'm sure in the long run the market will stabilize (assuming you can change your rating for a pen-tester even after a year), but this is something to consider.
Update: BTW, personally I don't like the idea of logging in via LinkedIn (for finding security talent); it feels too intrusive. Personal preference aside, my experience has shown me that the security industry especially doesn't like SSO-style things.
Some of this can be covered by the methodology section, where the pen testers show the approach taken and give an overview of what they did.
There's a happy medium between one-line 'nothing found' reports (which encourage questions like 'did you even try?') and the voluminous crap produced by old vulnerability scanners. Providing the report template in advance may help set expectations.
The industry is moving towards standards-based pen testing. That has some pros but many cons as well. For the moment, setting expectations and having a thorough debriefing with the customer may have to do.
Your question is the scientific one: how do I know what you did was good enough? Just like a patient evaluating a medical professional's care, the customer isn't an expert and in some cases goes with their gut. That's why the industry also wants you to use different pen testers. I've seen it many a time: one team finds nothing, and another rips the infrastructure apart. Competency is variable over time, and so is trust.
TLDR: The scientific question does not have a simple fix by any means.
I'm not sure I follow here. Is your firm getting access to pentest reports from the consultancies it matches to clients? That seems untenable; clients tend to be very possessive about those reports.
Is your firm instead relying on self-assessment by clients? Then, like Ferruh Mavitunah said, I'm not clear how this system can work: most clients aren't qualified to evaluate the effectiveness of a pentest, and will instead evaluate based on soft skills.
I have another business question. The most lucrative clients across the board are "house accounts" that source repeated tests from a single firm. When one of these companies sources a consultancy through your market, what prevents them from bypassing you for all future engagements?
This is a race-to-the-bottom problem that plagues freelancer programming markets.
This may be slightly off-topic, but how do you get started in pen-testing? I read a book about it, and the book, with its focus on white-hat hacking, was very clear about making sure you have a green-lit target to test on; but even the company I work for isn't going to let me randomly hack away and test security without an agreement in place (they would want to know I was doing it so as to filter out false-positive attacks, for example).
So without doing anything either illegal or unethical, things that could lose me my current job, how does one build up the skills and experience?
There are quite a few ways to get started with the skills you need for pen testing, and the good news is that a lot of them are free or low cost.
You should probably start by picking an area that you're interested in (e.g. web, infrastructure, mobile), as whilst there are some commonalities, each area has its own toolset and specific areas of focus.
For practicing legally, a lot of CTF competitions make use of skills which are useful for penetration testers, and places like VulnHub (https://www.vulnhub.com/) have downloadable challenge machines that you can run in a VM.
In terms of meeting up with people in the industry, look out for local Defcon chapters or B-Sides conferences, both of which tend to be free or low cost and have some good content.
Thanks, that should help. All I need now is time...
edit: the Metasploit site asks for payment in the form of a donation to charity, so I'll take a closer look at that one first. Now that's a link worth sharing.
The best, most successful software pentesting teams barely market at all. NCC is one of the largest in the US, and nobody is finding them through Google ads, or, really, ads anywhere.
The same is true of firms like Bishop Fox, Leviathan, and IOActive.
Which is to say, at least in the app pentesting market, I'm a little skeptical of the premise.
The complaints I have heard are similar to those about the Big 4.
Many go to them; they see awesome resumes. They also see very large costs. Then the customer finds he doesn't get the A-team but the F-team, due to 'unprecedented demand'.
That's a line every consultancy that ever had to compete with another consultancy has told at least one client. But the truth is, for the overwhelming majority of application software, and particularly for any software that would procure a pentest through a market like this, the Bishop Fox "F-Team" is perfectly up to the task and far more reliable than a talented rando.
The software pentesters with gold-plated resumes do high-value targets (because there are more high-value targets than there are pentesters to service them). Google is not going to source Google Mail pentesters on DICE. Adobe doesn't source pentesters for Reader on DICE. Microsoft doesn't source pentesters for SCHANNEL.DLL on DICE. Apple doesn't source pentesters for the iPhone bootloader on DICE. That's where the A-Team ends up.
What transparency are you adding here?
If the argument behind this was, "we're going to drive down the price of pentesting", that would be a coherent pitch, although I'd still want to hear how you expect this service will do that; again, the market is supply-constrained.
The transparency of who actually is good. The talent in companies ebbs and flows, and the scores will reflect the work they are doing now, not what they did on their best or worst days.
There are many boutique companies that are excellent, but don't have a fair share of the market.
Companies that are 'all things to all men' tend to have quality issues over time... like the big security giants of the last decade. Eventually people get tired of it and look for specialists. That's where this will help.
Help me understand how what you're doing allows me to spot which Bishop Fox testers are "actually good"?
I'm not talking about "big security giants" like IBM and Deloitte. I'm talking about boutique application security firms that do little other than test software. They're already specialized.
Many big-ticket items are not transacted through a sales funnel beginning in google ads, even for individuals (businesses tend to have more formal processes). Houses, cars, etc. are largely purchased by going to a specific place (digital or otherwise) where such things are bought and sold. Not just googling "3 bedroom 2 bath house close to Redmond". I do not think this process involves a lot of unmet demand.
It's very much a supply-constrained market, which is why firms tend not to advertise much, and why it's hard to understand the premise behind this market.
I like the idea of a marketplace, but I don't think background checks and references are the way to build a credible list of the world's best pentesters.
I think what patio11 is doing with Starfighters.io is orders of magnitude better. Run developers through a gauntlet of supremely difficult tests via a fun CTF-type game and pair the best hackers with the highest enterprise bidder. It works not just for pentesters, but for all devs really.
Also, I know where to get the best pentesters because they're listed on all the top companies' bug bounty pages. It's proof of skill I'm after, not some Gartner-esque gatekeeper telling me who's best because they've "background checked" them.
Give me a system more like StackOverflow or Starfighters where I can see the work. Not something subjective like eBay or Yelp, which can be easily gamed.
I hear you and I get it. What you are describing are security-focused pen testers and mission-focused red teams.
They are absolutely welcome, and they are a subset of the pen-tester universe. They are not a good fit for someone needing to get a PCI pen test, but they do incredible work in other areas.
It highlights the point in the blog that finding the 'right' pen-test team is not easy. Some are brilliant at one thing, others at many, but even an elite group may not be the right fit for the task at hand.
We are taking the feedback system seriously and are slowly testing it out. An easily gamed system is useless for everyone.
But why must demonstration of skill be limited to elite red-team-style pentesting? You could devise challenges geared toward demonstrating all sorts of knowledge (HIPAA, PCI, websec), basic or advanced.
If you've seen the sad state of PCI audits in particular these days, you'll get my drift. I think there's a huge opportunity here to raise the quality bar with your marketplace.
We're hoping to be less "hoops" and more "a fun experience which competes with someone's Starcraft/Instagram/Game of Thrones/etc time" that also happens to be really useful the next time you're in the market for a job.
Take a look at the leaderboard for Microcorruption some time. It's public. (SF's are not, as a considered design decision for the moment.) If you do and cannot understand the claim I am making, that's cool, but I feel no particular need to elaborate.
More important in the long term than the names you will recognize are the names you will not.
Hey @kenbaylor! I think that this is an awesome approach to addressing the issue of the infosec employee shortage. I've actually been kicking around the idea of building something similar for a while now, so it's exciting to see someone making progress in the area!
I saw the StealthWorker table at Shmoocon and wanted to swing by and ask some questions, but I got distracted by some of the other goings-on. Anyways, I finally got around to signing up a few days ago.
One issue that I have from the pentester's point of view is the lack of transparency after sign-up. I haven't seen any confirmation that my application was received and is under review. That said, StealthWorker is still in its infancy, so this is understandable.
Excited to see what the future of StealthWorker holds!
How does the requirements NDA thingy work? Does every single tester/bidder need to sign an NDA? At what point in the process does the NDA need to be signed?
I feel a little uncomfortable signing an NDA unless the work has been outlined and is about to begin.
Also, I think you're really cutting out a significant portion of the market with the LinkedIn requirement. I understand it's probably needed to filter the plebs, but you should probably allow for an alternative sign-up approach (i.e. a combination of phone verification, requiring a business email, etc.).
Altogether very cool! I'm trying to get into InfoSec myself. Good to see innovation in the industry!
The NDA is mutual. The client might say 'company X failed our PCI audit, we need help fast'. It guarantees we keep that secret but use that info to find the right match.
Also, on the pen-test side, we won't say 'vendor C put in a $10k bid, you should put one in at $9k'. It just means we value trust and privacy, as everyone in the security field should. Testers would also sign an NDA when they take a job, so that they won't leak things they learned in confidence.
> The NDA is mutual. The client might say 'company X failed our PCI audit, we need help fast'. It guarantees we keep that secret but use that info to find the right match.
Sorry, I'm not sure I exactly understand that. ELI5?
1-Way NDA: Let's have an open, honest conversation, but as you signed MY NDA, written in MY favor: I CAN tell everyone about YOUR secrets, but you CANNOT tell anyone about MY secrets.
2-Way NDA (also called a Mutual NDA): Let's have an open, honest conversation, and as you signed an NDA written in neither party's favor: I CANNOT tell anyone about YOUR secrets, and you CANNOT tell anyone about MY secrets. It enables honest dialogue.
The LinkedIn approach is to validate that people are who they say they are. We have a number of people who have very limited LinkedIn profiles, and some that have 'skeleton profiles'. Once you authenticate, the profile you fill out is only seen by us, for the purpose of finding you work.
We take the 'stealth' very seriously. We also didn't want to store people's creds.
Aside from an announcement that you've begun, is there anything specific you're looking for from the HN community? For example, are you looking for beta testers or pre-signups? I'm certainly interested in following your progress, but there is no call to action on this blog post.
Great feedback. I want to build a service that the HN community finds useful. The approach I've taken has been based on my experience and that of approximately 15 others. That said, it's highly susceptible to groupthink. I want to solicit HN to see if it resonates, needs a tweak, or needs a complete redo.
Am I the only one unable to click on the hyperlink in the text? It seems like some sort of weird JS intercept is happening. The HTML shows it as a plain anchor; I'm not sure why JS is involved at all.
It seems to be Chrome trying to protect you. When I clicked the link (also in Chrome), I noticed a little silver shield show up at the right of the address bar (next to the "favorite" star). Clicking that shield shows "this page is trying to load scripts from unauthenticated sources", with an option to "load unsafe scripts" or "learn more".
After I clicked "load unsafe scripts", it reloads the page and I can click the link. I'm assuming my system will explode, shortly.
Stealth Worker developer here. I coded a JavaScript redirect from HTTP to HTTPS like this: http://stackoverflow.com/a/4723302. What you are seeing may be the result of a slow transition from HTTP to HTTPS. I apologize for any inconvenience (but please only use the https version of our website!).
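For context, that kind of client-side redirect usually looks something like the sketch below; this is a paraphrase of the general approach in the linked Stack Overflow answer, not necessarily StealthWorker's exact code:

    // Minimal client-side HTTP -> HTTPS redirect (sketch of the linked approach).
    // Note: the page, and any scripts on it, still begin loading over plain HTTP
    // before this runs.
    if (window.location.protocol !== "https:") {
        window.location.href = "https:" +
            window.location.href.substring(window.location.protocol.length);
    }

The shield described above is Chrome's mixed-content warning, which shows up when a page served over HTTPS still references scripts via http:// URLs, so it is consistent with the "slow transition" explanation.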
Interesting idea, a couple of thoughts from a quick read through.
- I'd suggest that customer feedback may not necessarily be the best way to gauge security tester* competence. Many testers report by exception, so if the customer gets a relatively clean report they may be happy with that, but if the report doesn't detail the testing completed, how do they know the tester didn't just miss things in the review? You could enforce a consistent reporting style with tests completed to address that, but I'd guess that some testing companies wouldn't appreciate being asked to re-tool their reporting process.
- The model seems to imply the customer scopes the review. In my experience, for organisations with less experience of security testing, that's one of the hardest parts to get right. More experienced/larger companies would, I'd expect, be less likely to use this kind of service as they already have a panel process/procurement in place. If Stealth Worker are going to participate in the scoping process, it would need the right set of people to complete that task (not a massively common skillset in my experience, as it needs a good combination of technical experience and business understanding).
- Will the marketplace validate vendor claims of competence/skillset, and if so how will they do that? This could be a good value add, but is expensive to do well (e.g. designing and running assessments for candidate companies to provide a level of assurance of skill in particular areas).
- It'll be challenging to create an international model for this, as the regulatory requirements are different per country, and whilst testing companies might currently have indemnity insurance in their local market, that may well not cover international situations.
- The site could use some fleshing out on the team side. It currently says "Stealth Worker is a team of CISOs, developers and lawyer"... To me there's a large omission there, in that it implies you don't have any testers on staff?! I'm sure that's not the case, so it'd be worth making that clear on the site.
*Pet peeve: I prefer the term security testing to pen testing. The term pen testing rarely describes what most organisations actually need, or what is delivered. Pen testing implies a black-box adversarial review that "emulates a malicious attacker". This is only really desirable for mature organisations who have a strong handle on all the basics (which is not, in my experience, the majority). Also, truly emulating attackers is very difficult, as they tend not to worry about breaking the law, unlike testing companies (you'd hope!).
Great feedback. Will flesh out the team section.
You're right on the customer feedback section that there may be a huge disconnect between customer and vendor perception. We will likely have to get in the middle of that in certain situations to ensure feedback is fair and accurate.
There is some movement on standardization of reporting, e.g. there is now a 43-page document from the PCI council on how to do the pen test.
On your personal peeve: I hear you. That's why I tried to define pen-testing for what it means to me. There are tons of other definitions out there, and many are prejudicial to scope and quality.
I think the arguments against penetration testing are interesting.
As a software test engineer, we do external audits like this, but I wonder: documented or "known" vulnerabilities are not worth testing for until there are systems in place that are expected to cover them.
It seems that all vulnerabilities that are not intentionally addressed should be considered dangerous. If they are penetration tested but the vulnerability is unknown, the best you could hope for is "not vulnerable, for unknown reasons", which from my perspective is just as bad as being vulnerable.
With some admitted trepidation, I assert that all software should be tested this way, with expected behavior being a primary dependency.
Edit: hiring outside penetration testers is still totally valid and desirable, since development and testing are two totally different domains. I'm only pondering methodology.
I believe a LOT of security testing will move into QA and software testing. That's where a LOT of issues will be found. Instead of the micro-focus of the past, the CD approach can test the whole system, which is a huge leap forward.
Pen-tests should be good reality checks to ensure the system is working, and that it is sufficient to withstand current attacks.
Sometimes companies are very smug and need the reality check. Boards are starting to request them to ensure the confidence is warranted.
I've also seen pen tests used as a tool to GET funding. Fail one big time due to known vulnerabilities just to show how messed up things really are...then get a budget to fix them.
The smugness thing really hits home. Sometimes clueless, overconfident or downright negligent people are so embedded in a culture that an outside expert opinion is exactly what's needed.
The owner of www.stealthworker.com has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.
The crowdsourced pay-per-bug model is reactive; penetration tests are preventive. So you're talking about different services for different stages. A penetration test is performed prior to release to production, and after release a crowdsourced pay-per-bug program takes place to motivate white hats rather than black hats.
Well described. Also, pen testing is a litmus test that can be done anytime. The pay-per-bug vuln models are for the few companies who have made great progress in squashing bugs and are taking a very proactive approach. Unfortunately, that is still a small percentage of companies.