Interview questions for QA managers (testmunk.com)
87 points by jgillman on March 6, 2015 | 31 comments



This doesn't seem right to me:

> 3) How do you determine which devices and OS versions we should test on?

> This should generally be an easy question for the candidate. Good candidates will point to app analytics as the best measure, looking for the most used devices for their particular app. Another good answer would be to check the app reviews where people might have complained about specific issues happening on their devices. Less creative candidates might simply suggest “looking at the top devices” on the market

Looking at top devices is also incredibly important, and it seems foolish to dismiss that strategy.

What if your website doesn't work on iPhones, and so you have no iPhone users? What if, because people can't even sign up on an iPhone, they don't bother complaining about specific features not working?

Surely it makes more sense to test based on your target market, rather than the subset of people who are self-selecting to use your product?

Ideally you should be looking at both usage and market data. If your target market contains lots of iPhone users but everyone actually using your product is on Android, that points to a potential problem.


Plus, at early stages, you may not even have a user base yet. At that point, as you say, look at top devices for your demographic.


Thanks for the feedback! I just did an update and elaborated on this point.


As an Android developer, if hiring for an Android QA position I'd be thrilled to interview a QA Manager or QA Person who was specifically emphatic about testing on a wide array of Samsung devices.

Not because they are among the "top devices" (though they usually are for each subcategory), but because it shows they have enough experience to have seen first-hand that a surprising number of the device- and/or OS-version-specific issues Android apps run into occur on Samsung/TouchWiz devices.


The purpose of QA is to serve the existing user base first, so it makes sense to find out what that user base is first; you're missing the point.


This is actually awful. A QA manager should be asked how testing is integrated into the development process, how to infuse quality into everything, how to hire/train/retain testers, and how to estimate for QA. Writing test plans is an antiquated approach, so I'd rather hear what they've done that works and why.


Hmm. I find the proposed answers to these questions somewhat... unexpected. Lacking, in fact. They focus so much on technical ability and seem to ignore the human side of being a manager.

Starting with the first one: “1) Let’s say you are the first QA manager joining our startup. What are the first three things you would do?” The kind of things I'd expect to hear here would never be of the kind 'write a test plan'. And I hardly believe anyone would be able to contribute a test plan right away, would they? The ability to write a test plan, or to set up a process where one is created and managed, is easy enough to check by simply asking the candidate to write a test plan for an imagined scenario.

The answers I would like much more would be: '1. meet the team', '2. understand the product, its vision and goals', '3. understand the current dev/QA setup and code before I proceed'.

In other words, it seems to me that the first things to do should be to see where we are, connect with people rather than making things up. How can you propose first actions without at least basic understanding of where you are?

I like questions 5 and up much better, but even those seem more suited to interviewing an experienced individual contributor, not a team manager.

How about setting up a team? How about hiring? How would you define and split tasks, assuming we agree on the tasks? What kind of testing would you personally do, which would you delegate to QAs, and which would you require to stay on the dev team? What would you do in situation X? (any difficult multi-way optimization problem where people issues, technical issues, and philosophical issues all have to be considered)

I can imagine a great individual contributor acing every single question on this list, and then failing utterly when he has to manage even two people effectively.


Thanks, good point.

This has been added: "It is important for the QA manager candidate to ask questions of his own regarding the current process and the challenges facing the organization that led to the search for a QA manager."

The questions don't go into further detail on managing and leading a team; there is a whole set of questions more tailored towards hiring a "QA team lead" - that might be a follow-on blog post.


At my company, we'd call the position the author is interviewing for "QA Lead". We subcontract QA testers, so they're not responsible for managing people beyond making sure work gets done.

I think these are great questions for QA leads, not the person who manages them.


The issue of where to draw the line between developer testing and QA testing is an interesting one.

I think it's best to answer it in reverse and start with a bug.

Did the bug make it to production? Is it on a critical path? Was it covered by the test plan? Was it covered by a functional/unit/integration test? Was it clearly defined in the product requirements? Was the behavior clearly defined as part of a business objective?

Most bugs could have been discovered earlier in the chain, and the earlier the better. Many times bugs are the result of systems failure, bad communication, assumptions, etc.

I think the two most important concepts in QA are 1) answering the question of "can we ship this now?" and 2) encouraging systems thinking so that all of the processes related to executing business goals in a software company can be continuously improved upon.


I manage a QA team where there was none until very recently. If you have any reading on point #2 that you'd recommend, it'd be hugely helpful to me.


I'm not aware of any specific reading... but I'll share a few more thoughts:

The key is to not make it a blame game. Every bug that is discovered on staging or production should (in theory) trigger a root cause analysis of some kind... because it means that one of the earlier processes failed.

If you have manual QA people, it's mostly a matter of improving the QA plan and adding edge cases and domain knowledge, but it can also turn out that there are data bugs, or integrations that exist on production but not on staging, or that are slightly different on staging, etc.

With every bug there is an action plan for how to fix it, but that is separate from the knowledge we gain about our system because the bug occurred. Maybe the fix is to get someone to manually clean up some dirty data that wound up on production because someone forgot to validate/clean input data... that's great, but from the QA perspective you might have learned that several of your steps failed.

So I think the main thing is treating bugs as opportunities to learn about the system as a whole. The philosophy behind Chaos Monkey at Netflix is that even a well-tested, solid system needs to be resilient, so any opportunity to make your system stronger (regardless of the cause) is a good thing. In particular, any bug found before it hits production is a win overall.

I'd also add that it's important to let the knowledge flow back out of QA and into the product team, etc. QA people often end up becoming internal domain experts who catch lots of issues, but that is something that quickly exceeds what one person can remember/understand as a system scales, so organizational learning/practices pay off big.


I really appreciate your comment. One of the more exciting parts of expanding has been getting the opportunity to think about systems and processes at a higher level now that I'm not the only one fighting fires. I'll take this advice to heart.


The two books on systems thinking that I'd recommend are Gerald Weinberg's "An introduction to general systems thinking" and Donella Meadows' "Thinking in systems: a primer". Anything by Gerald Weinberg is eye-opening, if you haven't come across him before. He's the grand-daddy of modern thinking on testing.

If you follow the advice in this article, you'll end up with an inefficient old-school, documentation-heavy, dogmatic factory-style quality-police testing department, set up in opposition to your development team.


I pretty much stopped reading when the answer to the first question involved writing test cases. Testing does not have to mean churning out hundreds of test cases; they are inefficient and not that effective at finding bugs. If a QA manager (or tester) had such a strong attachment to them, the interview would be over. I definitely stopped reading when the answer to question 2 was 'detailed test cases'.


True dat. My company has an awesome QA practice and test cases are never written. Testing is automated where appropriate and otherwise based on acceptance criteria of stories.


This is a pretty good list. It's a bit self-serving in the technology questions, as well as in the general line of questioning toward TestMunk (understandably).

If you are really hiring a QA manager, also consider asking about their team structure/building philosophy, having them elaborate on the technical requirements for those roles, and whether they have experience developing a testing process that works with your company's/group's development process, as well as what experience they have with problem hires and how they've dealt with them.

This is really a list of questions for a principal test engineer. A good manager should be able to answer them too, but it's far more important that they can make a group of QA engineers perform well (otherwise you should be hiring a QA lead).


11. There are two number 3s. Or this is an attention to detail trick for QA managers.


Most of this is pretty good. Some of this is a little old-school.

There are a couple of things I disagree with, speaking from my own experience as a test lead. The biggest is the definition of "a good test case."

If you define your test cases at the level of specific UI widgets, they have to be updated whenever any UI change happens, period. Since you usually end up with a number of test scenarios for a given area of the UI, that usually means a single UI update requires a number of test updates.

That carries the same maintenance burden as UI automation tests, but without the benefit of a widget map, reusable functions, or an automated callout when the test is wrong. And as anyone familiar with the test automation pyramid knows, UI automation tests pose a huge scaling issue with maintenance even with those. Echoing that scaling issue into your test documentation is a huge mistake.

Tests should be defined at the level that makes the intent of the test clear to an experienced tester or user--no more, no less--unless for some reason you're shipping them to an inexperienced audience like an outsourcing firm. That usually means being very specific about input data, and -if- it's microtesting UI behavior (tab from here, end up here), being specific about that. Everything else, generalize. You do not need to spell things out 1-2-3-a-b-c for the person who's been running tests for you on a daily basis.

Maybe you don't write it as "check that sign-in works," but more like "check that sign-in works with a valid username/pw, at no more than 2 seconds latency."

But you know, if you don't care about latency, just say "check that valid sign-in works." That's fine. You only have to get specific about what kinds of invalid data should trip it up, and even then "without a digit," "without a symbol," "with fewer than 8 characters"--that's all fine too. The test will get executed correctly if your tester isn't a total loss.

Also, people suck at manual regression testing but are great at shortcutting, and they will wander off a script that specific anyway. So you may as well accept that and write tests at the level they'd actually execute them. What you lose in repeatability, you 100% gain back in maintenance and productivity.

Repeatability is overrated when humans do it: take advantage of the fuzziness. It's more coverage. Just make sure they can write a good enough bug report to tell you what they actually did that caused the issue. If you want that level of repeatability and definition on the tests, do it with automation. At least there you can define a widget map and some flow-encapsulation methods.
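
To make that concrete, here's a rough sketch of what I mean by a widget map plus flow-encapsulation methods, in Python with Selenium. The element IDs, the SignInPage name, the staging URL, and the 2-second budget are all made up for the example:

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Widget map: locators live in one place, so a UI change means
    # one edit here instead of a pass over every test document.
    SIGN_IN_WIDGETS = {
        "username": (By.ID, "username"),          # hypothetical element IDs
        "password": (By.ID, "password"),
        "submit":   (By.ID, "sign-in-button"),
        "greeting": (By.CSS_SELECTOR, ".account-greeting"),
    }

    class SignInPage:
        """Flow encapsulation: tests call sign_in(), not individual widgets."""

        def __init__(self, driver, base_url):
            self.driver = driver
            self.base_url = base_url

        def sign_in(self, username, password):
            self.driver.get(self.base_url + "/signin")
            self.driver.find_element(*SIGN_IN_WIDGETS["username"]).send_keys(username)
            self.driver.find_element(*SIGN_IN_WIDGETS["password"]).send_keys(password)
            start = time.monotonic()
            self.driver.find_element(*SIGN_IN_WIDGETS["submit"]).click()
            # The implicit wait set below makes this poll until the greeting appears.
            self.driver.find_element(*SIGN_IN_WIDGETS["greeting"])
            return time.monotonic() - start

    def test_valid_sign_in():
        driver = webdriver.Chrome()
        driver.implicitly_wait(10)
        try:
            page = SignInPage(driver, "https://staging.example.com")
            latency = page.sign_in("qa_user", "s3cret-pw")
            assert latency < 2.0, "sign-in exceeded the 2-second budget"
        finally:
            driver.quit()

The point is that when the sign-in screen changes, you edit the widget map and maybe sign_in(), and every flow that passes through sign-in picks up the fix.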

Honestly, if I didn't get the job with at least the nicer, less ranty version of that answer, it's probably not the right company for me anyway. If you're not legally required to be IEEE-compliant and you're not an aerospace/medical company, wasting time benefits nobody.

One of the most exciting movements in QA is context-driven testing. Its 7 tenets are listed here:

http://context-driven-testing.com/

...but they all boil down to "do the right thing for what you want done, and quit being so damned dogmatic about it." That includes not generating artifacts if nobody else will read them, and not defining tests past the level of detail needed to make them effective.

QA has a crappy reputation because it's slow, ponderous, and often not very effective. Maybe part of that is because people are updating overly-detailed docs all the time because other people have told them they should, rather than because they really need to.

If you need that level of product documentation, write the product documentation and refer the tests to it. At least that way you have "single point of truth" for flow. Fragmenting and repeating the docs across a bunch of disparate tests is just like copy-paste coding: a maintenance nightmare.

I'm personally in the middle of helping my org move to checklist-based tests. They can be used as loose regression tests or as missions for exploratory testing, and they only have to be updated when the basic layout of the app changes, instead of for every single flow detail. It'll get us out from under the currently crushing maintenance of reviewing and potentially updating 6000+ micro-documented UI tests on every release, and it will be a huge win.

People really should compare this sort of thing to coding maintenance. We've learned so much there that can be applied here as well.


Couldn't agree with this more; especially:

>So echoing that scaling issue into your test documentation is a huge mistake.

and

> QA has a crappy reputation because it's slow, ponderous, and often not very effective. Maybe part of that is because people are updating overly-detailed docs all the time because other people have told them they should, rather than because they really need to.

I started my career in QA and moved over to Dev, then back to QA, and now back to Dev. One thing that frustrates me most as a Dev about the QA folks I've worked with is their insistence that the script is God, rather than the right functioning of the application to serve the customer/user. Often that comes down to these two issues.


Which organization do you work for? How long have you worked there? Take a few minutes to read over what you just wrote and see if it makes sense to anyone besides you.


What part can I explain to you better?


It was clear to me at least.


I haven't worked in a continuous integration environment, but why would that fall on QA?


A typical process might be - and this is at a very high/crude level - 1) a job that checks out the code & runs the unit tests, triggering 2) a job that creates a build, triggering 3) a job that deploys to an integration/staging environment, and possibly even triggering (or, with an air gap, requiring someone to manually run) 4) a job that promotes to a production environment.

Testing can/should be added between various steps there, depending on the type of product you are working with. Automated functional tests can be integrated into a process like that at several points (QA ownership of portions of the unit/integration testing framework, plus automated tests that smoke-test new deployments to a staging environment), and knowing those points, the tools that integrate well with your CI system, and how to implement those integrations is probably something you want a QA manager to be familiar with.
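
To sketch that chain end to end (in Python purely for illustration - a real CI server would model each numbered item as its own job, and every script name here is a placeholder, not a real tool):

    import subprocess
    import sys

    # Placeholder commands standing in for CI jobs; swap in whatever
    # your build system actually runs.
    STAGES = [
        ("unit tests",        ["./run_unit_tests.sh"]),          # 1) checkout & unit tests
        ("build",             ["./create_build.sh"]),            # 2) create a build
        ("deploy to staging", ["./deploy.sh", "staging"]),       # 3) integration/staging deploy
        ("smoke tests",       ["./smoke_tests.sh", "staging"]),  # QA-owned checks of the fresh deploy
    ]

    def run_pipeline(promote_to_production=False):
        for name, cmd in STAGES:
            print(f"=== {name} ===")
            if subprocess.run(cmd).returncode != 0:
                sys.exit(f"{name} failed; stopping the pipeline")
        if promote_to_production:  # 4) often a manual / air-gapped decision
            subprocess.run(["./deploy.sh", "production"], check=True)

    if __name__ == "__main__":
        run_pipeline(promote_to_production="--promote" in sys.argv)

The smoke-test stage is the sort of point where QA ownership fits naturally in a setup like this.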


Because depending on your level of CI integration, many of the automated tests that QA writes can be run as part of the CI process. Developers are really bad at maintaining CI systems, so it usually falls on the QA team to ensure new tests are included in the CI workflow.

Ultimately, it's QA's job to certify a release. With CI, that certification is often done in an automated fashion. Thus, QA should have ultimate responsibility for the configuration and execution of the tests as part of CI.


That doesn't really work, though, because the test cases need to be updated at the same time as any code checkin that would modify their assumptions, or that checkin won't pass. Really, for the most part developers have to at least be skilled at updating the CI tests, if not creating them in the first place.

Plus, good unit or component tests are generally written to validate architectural and interface assumptions, not so much business rules and requirements. That's what most people really run in CI, not so much full-stack systems integration or user acceptance tests.

The types of tests you would use for heavy acceptance verification often don't run well in CI, either because they have ecosystem concerns that can't/shouldn't be mocked, or because they simply run too slowly (most UI test frameworks fall into this category).

At the end of the day, everyone really needs to know how to do some level of testing, at least at to verify their own assumptions about the work they're generating.

Edit: and you don't necessarily use CI to validate a release--you do in Continuous Deployment, by necessity, so that covers a lot of HN's web startup audience for sure.

But my experience is the majority of other kinds of companies need a release acceptance pass to independently verify a final bundle against requirements.

The type of QA described in this document wouldn't mesh with a CI/CD-only organization anyway. In those orgs, you just write the tests and run a code coverage tool. You don't really have a process step that would allow you to do much with this kind of documentation or rigor, since it's build/push/results/deploy.


If you build your workflow so that each user story is developed on its own branch, you don't merge back to trunk until the tests are also written. It's actually pretty easy to manage (at least with Git).

And you're right -- this is for a continuous deployment workflow, which many large companies are moving towards as the next step from Agile. Continuous deployment is ultimately a business capability; rather than have your product guys focus-test features, etc., you can just use a hypothesis-driven approach to set up A/B tests and go with what works. This works even better if you're an old-guard company with millions of users already. It short-circuits a lot of the hand-wringing and political maneuvering around product features when you can say, "Eh, let's just break off 5% of our user base and test both versions of this for a few days."
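
For what it's worth, "break off 5% of our user base" usually comes down to deterministic hash-based bucketing. A toy sketch in Python (the experiment_bucket helper and the experiment name are hypothetical; real feature-flag systems add a lot more machinery around this):

    import hashlib

    def experiment_bucket(user_id, experiment_name="signup_flow_v2", variant_pct=5):
        """Deterministically assign roughly variant_pct% of users to the test variant.

        Hashing (experiment name + user id) keeps each user in the same
        bucket across sessions without storing any assignment state.
        """
        digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
        return "variant" if int(digest, 16) % 100 < variant_pct else "control"

    # Roughly 5 out of every 100 users see the new version; the rest see control.
    print(experiment_bucket("user-12345"))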


In my experience QA wants it (testing from known artifacts) a lot more than dev does.


Any dev who doesn't want CI is a bad dev, end of conversation.


Maybe add a personal question that gets at the candidate's sense of "compromise." If they say "I am deeply rooted in perfection," that has added value.



