
Where I'm working now, we've got security engineers assigned to seating in each development team.

They're not managed by, or working for, our teams. They have their own manager and security work that they're getting on with.

Having them sitting amongst the team, however, is resulting in a much different narrative than any I've been around before. There's a much higher quality, and less antagonistic kind of engagement going on. They've become someone you chat with at the watercooler, or at their desks, instead of having to file tickets, or wait for scheduled reviews to raise things.

People can quickly consult with them and deal with a whole heap of small potential risks way early on in the development process, and it's paying serious dividends down the road.




That approach works well with QA too.


You're talking about Squads, basically: bringing different specialists together in the same group. And yeah, QA is similar to security in some respects. Strictly speaking, QA should include security; it's weird to say software has quality without security included. The truth, though, is that security is a specialty that regular QA usually can't handle.


You've capitalized Squad, but it's hard to Google. Where did you get that term, and where is it defined outside your head?


As xxr said, Squad is how Spotify names their (previously Scrum) teams. Other interesting concepts they use are "Tribes" and "Guilds". Take a look at the Spotify engineering practices, they are really inspiring.


Not the commenter you're replying to, but at least at my organization we borrow the term from Spotify.


Security engineers are seen as experts you consult about something you don't know. QA are not seen this way. Some QA engineers actually are experts that can give good advice on structuring an application in a more testable way, but that's not the norm.


Most QA guys only check that something meets the spec/story requirements, not that the code is sane or testable... many don't even go beyond UI testing. That said, I think GP was referring to having a QA embedded as part of a team.


You know... with source control systems like Bitbucket enterprise, etc., I keep thinking: why don't more mid-to-large sized orgs require a security sign-off on every pull request, with PRs to master/release branches as the trigger point?

I do a lot of PR reviews, and while I may not catch everything, I will catch a few things here and there... someone with that mindset would be in a better position to handle that from the start...

Having a few security guys that do PR reviews for about half their workload would go a long way to improving things.
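One concrete way to wire that in, assuming GitHub-style code owners (Bitbucket has a comparable default-reviewers feature): a CODEOWNERS file that routes every PR to a security team. The team name here is hypothetical, and making their approval blocking still requires enabling required code-owner reviews on the protected branch.

```text
# .github/CODEOWNERS
# Every change requests a review from the security team; combined with
# "Require review from Code Owners" on master/release branches, their
# approval becomes a hard gate on the merge.
*   @example-org/security-team
```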

We're going through an audit for an internal application now... there's one major flaw (SSLv2/3 are enabled), one minor (the session cookie isn't marked HTTPS-only), and a couple of trivial (really non-issue) findings concerning output caching on API resources and allowing requests with changed referrers (which can be spoofed anyway).
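That "HTTPS-only" cookie finding comes down to flags on the Set-Cookie header: Secure keeps the cookie off plain HTTP, and HttpOnly hides it from client-side scripts. A minimal sketch (the helper name is mine, not from any framework; SameSite added as a common companion):

```javascript
// Hypothetical helper: build a session cookie header with the flags the
// audit would look for. Secure = HTTPS-only transport; HttpOnly = not
// readable from document.cookie; SameSite limits cross-site sends.
function buildSessionCookie(name, value) {
  return `${name}=${encodeURIComponent(value)}; Path=/; Secure; HttpOnly; SameSite=Strict`;
}

const header = buildSessionCookie('session', 'abc123');
console.log(header);
// session=abc123; Path=/; Secure; HttpOnly; SameSite=Strict
```

A reviewer scanning a PR can grep for Set-Cookie calls missing these flags far more cheaply than an end-of-cycle audit can.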

In any case, having auditing earlier on, and as a potential blocker, would make each minor change easier to deal with than the potentially much larger changes later... the app in question was developed for its first 8 months without even a pull request check in place... by then, many code quality issues are already too late to fix completely. :-(


Nobody wants this.

No "security guy" who has a choice wants to spend half their workload waiting for PRs to come in so they can chime in with feedback about default configurations.

No product programmer wants to deal with some "security guy" parroting the results of an automated tool to them over a code review platform.

No product manager wants to see progress stall because the product programmer and "security guy" are arguing over whether or not a call to strncpy should be replaced with a call to strcpy_s.

In the immortal words of my generation, ain't nobody got time for that.


Honestly, someone should have time for that; the fact that nobody does is part of the problem... I go out of my way to comment on as many PRs as I can, because I'll catch things that will become problems later far more often than peers who just click approve.

The same can be said for security guys... they have their own day-to-day work too, but seeing a bunch of smaller things fly by is just as valuable as a big periodic audit. It's much easier to catch things before they become big...

There are plenty of times I'll comment ("Okay, letting this through, but in the future revise to do it this way"); sometimes I'll push back, but not always. That's what the review process is for. I'm just suggesting multiple approvers per PR, where one is security-minded.

It's funny how many issues I'll see in other systems where someone does something per the spec, and it has a flaw precisely because they were completely compliant. Someone crafts an exploit, and I'm interested, because I'd usually have been more pragmatic in the implementation. Last year there was a huff about JWT implementations in some frameworks allowing cert overrides, since they didn't ensure the origin cert matched a whitelist... when I'd implemented JWT, I only checked against our whitelist and ignored the property.

Sometimes security guys will see things, and think about things, in a way others won't... for me, one thing I often catch that others don't is potential DDoS targets. Some of that comes from using node, where you do NOT want to tie up the main event loop thread. Others don't think about putting limits on JSON size, compute-heavy tasks, etc.
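The JSON-size point can be sketched in a few lines: cap the payload before it ever reaches JSON.parse, so a giant body can't monopolize the event loop. The limit and function name here are illustrative, not from any library:

```javascript
// Illustrative guard: reject oversized payloads before parsing. JSON.parse
// is synchronous in node, so a multi-megabyte body blocks the event loop
// for every other request while it parses.
const MAX_JSON_BYTES = 1024 * 1024; // 1 MiB; tune per endpoint

function parseJsonCapped(raw) {
  if (Buffer.byteLength(raw, 'utf8') > MAX_JSON_BYTES) {
    throw new Error('payload too large');
  }
  return JSON.parse(raw);
}

console.log(parseJsonCapped('{"ok":true}')); // { ok: true }
```

Most body-parsing middleware exposes an equivalent limit option; the point is that someone security-minded actually checks it got set.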

And, frankly, I'm tired of fixing bugs related to patterns that were broken from the start... turtles all the way down, but the turtles are eating all the errors.


In the immortal words of every other generation :), "someone is going to find your issues. It's either you or your customers."

You don't seem to have an appreciation for the difference between a secure and an insecure product. Yahoo didn't either.


Personally, I have an appreciation for it. I'm a working security professional.

However, for a decade and a half I've been part of many different security regimes at many different organizations. None of them had an appreciation for the difference between a secure and insecure product, and additionally, none of them were punished by the market for it. Products have success or failure because of other factors. Security is something that organizations invest in, in the best case, because it's something they believe in, and in the worst case, for compliance reasons.

So now Yahoo has a big problem because they had this breach. First of all, is this actually a big problem? Yahoo has many other big problems. Is this going to make or break the company? No. Has any security issue made or broken a company? Microsoft thought they could be broken by security, so they invested billions into it. They were wrong. They were broken because they had crappy products that people were forced to buy. They figured this out and shut down their security organization. What about Target? What badness has befallen them? Surely not to their earnings or stock prices. What about any company that has suffered a breach? The biggest thing that happens is the CSO gets fired. Maybe some vendors get fired. That's it.

This is where the questions end when you start to push for more security involvement in the product. Ultimately you will (personally!) stand in front of the CEO who will ask you "will I lose my job, or suffer some other negative outcome on that scale, if I don't listen to you?" and you will answer, truthfully, "no." And that is the end of the conversation.


> Target’s chairman and chief executive officer, Gregg Steinhafel, a 35-year company veteran, is stepping down, as the massive pre-Christmas data breach suffered by the Minnesota retailer continues to roil the company. The decision is effective immediately, according to a statement posted today on the company’s website. John Mulligan, Target’s chief financial officer, has been appointed as interim president and CEO.

http://www.bloomberg.com/news/articles/2014-05-05/as-data-br...


Well, I am most certainly a working security professional. It sounds as if you've given up and become a bean counter.

If the answer you give your CEO is "no," then you aren't giving the proper answer. You are just being a "yes man," saying comforting words.

>> So now Yahoo has a big problem because they had this breach. First of all, is this actually a big problem?

I mean this in absolutely the best way possible: you shouldn't ever be allowed near a business or security decision that affects people's lives or livelihoods. If you think that disclosing hundreds of millions of records (many of which must contain PII) is without repercussion, then I have a pretty good idea of which end of the security stick you are holding. You are describing a business model where you piss on your customers by transferring 100% of the risk to them.


You don't pay me. The C-suite pays me. Thanks for making this personal when it has no need to be, by the way.

Personal attacks aside, let's you and me go out to a bar and sing songs of how things should be. Tomorrow, we have to go back to how things are. In the land of how things are, to the business, the disclosure doesn't matter. Full stop. Does it matter to the customers? Oh yes. Dearly. It's a really big deal to humanity. The business and humanity are discrete.

Is that a tragedy? Yes. I weep. I go home and drink every night for this reason. Until I don't want to work for people that pay money, though, you have to think about the business first. Humanity second. Anything else is a fairy tale or communism.


Much more valuable to have the security folks be a critical part of reviewing the _frameworks_, and then push adoption of those frameworks. Human reviewers won't catch everything no matter what, but you can make entire classes of problems go away by making them impossible to commit.
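A classic example of the framework-level fix: output encoding that is on by default. A sketch of the idea (names are hypothetical, not from any real templating library): once the security team has reviewed the helper, individual commits can no longer introduce string-concatenation XSS through it.

```javascript
// Escape the handful of characters that are dangerous in HTML context.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, (c) => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[c]));
}

// Tagged template: every interpolated value is escaped automatically,
// with no opt-out, so "forgot to escape" is not a bug a commit can have.
function html(strings, ...values) {
  return strings.reduce((out, str, i) =>
    out + str + (i < values.length ? escapeHtml(values[i]) : ''), '');
}

console.log(html`<p>${'<script>alert(1)</script>'}</p>`);
// <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Review the ~20 lines once, then spend review effort only on code that bypasses the helper.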


Does that mean we can kill angular 1.x because it encourages points of disconnect, undiscoverable code, too much pfm (pure fucking magic) and failure?





