Ask HN: Resources for learning manual software testing?
54 points by tasdev on Oct 11, 2015 | 42 comments
My partner is going to be testing software I've written. He handles the business side of things and isn't a programmer.

Can anyone suggest some resources for him to read on how to best test our software?




I agree that it's largely a mindset. From https://twitter.com/sempf/status/514473420277694465 :

"QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv."


Don't forget, he came in on February 29th.


Two years in a row.


In two different time zones.


Via the front and back entrances at the same time.


Walking like a Fremen in the desert.


While other commenters are correct that manual QA is a mindset, there are readings that can help develop that mindset.

I have new QA engineers read the first five or six chapters of "Testing Computer Software":

http://www.amazon.com/Testing-Computer-Software-2nd-Edition/...

to get a feel for the mindset and methodologies and to help them understand what testing can and can't accomplish.

"Lessons Learned in Software Testing", mentioned by another commenter, is another good resource. Lots of good anecdotes:

http://www.amazon.com/Lessons-Learned-Software-Testing-Conte...

Both are a bit dated in some ways ("Testing" has a section on filing paper bug reports), but the lessons and thinking are still highly relevant.


Hmmm. I think you're looking at this the wrong way: it is not he who should be learning more about manual testing, it is you who needs to learn about how to write manual tests.

Manual testing is not at all that different from, say, integration testing: you write a specification of a task that needs to be performed, you write down the expected output, and you compare it with the actual output.

What you end up with is a document containing dozens of pages full of small tables with test specifications, somewhat like [1].

So, to sum it up, it is you who should be doing the hard work of finding out what to test. You make a document full of tests which are as specific as possible, and let your partner walk through it. He doesn't understand what to do? Then you failed at being specific. He cannot find the functionality you ask for? Either a usability issue, or once again, not specific enough.

Hope this helps you somewhat!

[1] http://www.polarion.com/products/screenshots2011/test-specif...
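
To give a feel for the format (the feature, ID and values here are invented, purely to show the shape), each test in such a document is a small table along these lines, and the tester simply works through them top to bottom:

    Test ID:       TC-012  "Add item to cart"
    Preconditions: logged in as a normal user, cart empty
    Steps:         1. open the product page for Widget A
                   2. set the quantity to 3, click "Add to cart"
                   3. open the cart page
    Expected:      one line item, Widget A, quantity 3, correct total
    Actual:        ______________________      Pass / Fail: ____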


What you are describing is exactly the type of test that should be automated.

Manual testing should be exploratory, it shouldn't be following a script. Computers are there to follow scripts.


I do software QA on a physical device that has a computer in it. We set up scenarios that exercise the software in specific ways. It is very much manual, following written tests driven by software requirements. This is specifically software testing, although we use the hardware to exercise the software.

Even exploratory has written tests that basically say "explore," and they are often assigned with a particular focus.


For something like what you do I find that there's often a cost/benefit trade off to be made:

#1 Create a mock system that you can run automated tests against.

#2 Only do the manual tests.

Which one is the 'right' decision depends largely on the expense of creating that mock system, the complexity of the system under test, the nature of the bugs you're getting from customers and the frequency with which your software changes.

Simple, infrequently changing system? Expensive to set up a mock system? #2.

Complex, frequently changing system? #1 will help more than you realize.
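
For what it's worth, a minimal sketch of what #1 can look like, assuming a Python codebase (the device, the interface and the fan logic below are all invented for illustration): the hardware sits behind a small interface, and the automated tests run against a fake implementation of it, while manual testing stays focused on the real device.

    import unittest

    class FakeDevice:
        """Stands in for the real hardware; class and method names are made up."""

        def __init__(self, temperature=20.0):
            self.temperature = temperature

        def read_temperature(self):
            return self.temperature

    def fan_should_run(device, threshold=30.0):
        # Hypothetical logic under test: run the cooling fan above the threshold.
        return device.read_temperature() > threshold

    class FanControlTest(unittest.TestCase):
        def test_fan_stays_off_below_threshold(self):
            self.assertFalse(fan_should_run(FakeDevice(temperature=25.0)))

        def test_fan_turns_on_above_threshold(self):
            self.assertTrue(fan_should_run(FakeDevice(temperature=35.0)))

    if __name__ == "__main__":
        unittest.main()

The expensive part is usually carving that interface out of an existing system, which is exactly where the cost/benefit trade-off above bites.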

>Even exploratory has written tests that basically say "explore," and they are often assigned with a particular focus.

Of course. However, exploratory shouldn't mean following a script and it shouldn't mean doing repetitive work.


That's a very good point, I hadn't looked at it like that!


Ahh, this is the kind of thinking that frustrated me in my time as a software tester.

From a functionality perspective: maybe, but if the developer needs to explain the intended functionality of the application to the business end of the product then something has gone horribly wrong.

From a sheer "finding bugs" perspective: If you knew what would actually expose buggy functionality to the extent that you could write it down, you wouldn't have written the bug in the first place!

I encourage you to teach him the way that your specific language makes things happen on the machine and the way that software in general works (boundary conditions, etc). But I don't think that the above way of doing things, ESPECIALLY for a 2 man outfit, is a good idea.


I think we are talking about different goals. If the goal of the test is for sheer fun "bug hunting", a more pragmatic approach should indeed be taken. If, as I interpreted it, the "business guy" is going to do some kind of acceptance testing, and you want to be able to perform this test multiple times, you want the tests to be specific and well documented.

In other words: OP, start with telling us what you want to achieve with your manual tests!


Totally agree. Though the "[t]hen you failed at being specific" may be a tad harsh for my taste :)

Can iterate on the test spec/script, like anything else... and at the beginning the doc may even prove to be a great tool for contrasting baseline context between different people/roles/backgrounds.


Yeah I know, it was more the kind of point you need to take with this. It is similar in spirit to "the customer is always right" -- of course that's too harsh, but it gets the point across. :)


Truth be told, I totally agree with your sentiment - if the test spec is unclear to the guy executing it, something needs to be changed... just reflective of the direction I've been trying to take my attitudes and my language; I even have a git alias "an", short for annotate, which performs the "blame" command :D ("git-blame - Show what revision and author last modified each line of a file")


This makes no sense to me, as there are loads of things good testers do which shouldn't be in instructions and which most developers don't think of; otherwise they'd have coded against them.

Even silly things like whacking a single button loads of times really quickly to see what happens. Did you just order 100 tickets? Take down the site?

So telling someone to write better instructions seems a bizarre approach.


The purpose of instruction based testing is usually to avoid regression bugs and to make sure requirements are fulfilled. Randomly poking around is also good; this is why you have manual instruction based testing instead of just automating everything: while performing the instructions you usually notice weird things on the side. Neither alternative can fully replace the other.


> The purpose of instruction based testing is usually to avoid regression bugs and to make sure requirements are fulfilled.

You just described my job. :)


"Explore It" by Elisabeth Hendrickson [1] is a short, easy-to-read introduction to exploratory testing ("manual testing") that has many concrete ideas for what and how to test SW.

[1] http://www.amazon.com/Explore-Increase-Confidence-Explorator...


Related is the "Test Heuristics Cheat Sheet" [1] that she and others put together.

[1] http://testobsessed.com/wp-content/uploads/2011/04/testheuri...


A good start would be the ISTQB foundation level syllabus. While the ISTQB seems a little outdated in its views on the software development process (a focus on sequential, waterfall-like models), it is a good resource for learning the vocabulary of software testing. Furthermore, it explains different types and stages of software testing: http://www.istqb.org/downloads/viewdownload/16/15.html


Yes, this is a real problem! I learned testing in a waterfall environment (basically followed IEEE standards, ISQTB processes) and now work at a company that is more Agile. So many of the skills/techniques are fundamentally incompatible.


Predictably, the concept of testing in an agile environment has also been explored, even if not in the core syllabus:

http://www.istqb.org/certification-path-root/agile-tester-ex...


Testing (effective testing I should perhaps say) is linked to the domain that it operates in. Understanding the nature of "what" the software does is often more important than "how" to test.

"How" you test will be impacted by other things as well. Some environments (companies) have a need to formally record testing. Others use 'non IT people' to run the testing. Some have expert users who know the app inside out as 'testers'. etc etc The need for how much detail is in the test scripts, and in fact if you document manual test scripts will depend on nature of your company.

You will find a couple of schools of thought on "how" to test. ISTQB is formal and has a good bag of techniques; the other school of thought has some good ideas (like session-based testing) but IMHO tends to throw the baby out with the bath water. The ISTQB techniques can be applied in an agile environment, although you would not use the documents they describe.

What I have personally found is that a good tester picks up ideas and techniques (BVA, EP, etc. - boundary value analysis, equivalence partitioning) and applies these where they will return the best value.

I see the arguments in the testing world as a bit akin to devs fighting over strongly typed vs. loosely typed languages.

Automation is good, BUT if you don't know what you need/want to test then really it is a means to get in a mess really quickly.


I think something that drove it home for me is an actual written test script at my first part time job (before university). I was testing a tool called Internet Call Manager (if you used dialup and received a call while on the internet, this software would pop up a notification on your screen and allow you to decide whether to ignore the call or to take it).

Basically it was a table with the left hand columns being the instructions to perform, in point form, and the definitions of the expected/correct behavior, and the right hand columns being checkboxes and blank spaces to write in, indicating whether the software performed correctly.

It was super clear and to the point, and it was just a document that could be easily updated (and was, I believe I later made some modifications to the script when new versions of the software came out, but it was so long ago that maybe someone else was the one to do it).

Maybe you could write one of those up and he'd get a better idea for what his job was, and you could run through it with him a few times. After he gets the hang of it, I think it will have some value outside of just testing the code: he may come to understand how changes in one part of the code bring up issues in unexpected places (and get an intuitive grasp for, say, code reuse); he will be a true expert on the product (I've always noticed that QA people are often better versed in software than the assigned Product Manager, come demo time); and perhaps he'll start to grasp at a more physical level what your work actually entails, and it'll help give him context for software development as a process.

--

"The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.... Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. […] The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be." - Fred Brooks

Let him learn some of the magic behind the poetry :) To your whole idea (biz/product guy getting hands dirty with product work), hear hear, bravo, etc.


The Ministry of Testing is a good starting point, http://www.ministryoftesting.com

It doesn't sound like you are providing an API, but if you are, feel free to mail me directly (email is in my profile) for some resources; my company works in that area of testing.


> (email is in my profile)

The "Email" field in your profile is private, others can't see it. You need to put your email in the "About" field for it to be publicly visible.


Some manual testing would have uncovered that.


Got me!


Doh! Many thanks; have updated.


To be completely unhelpful, I've found it's largely instinct. Some people can look at a thing and find a way to break it. Only those people benefit from formal QA processes.


First and foremost, one has to know the domain: what are the things that tend to go wrong? And this is platform-specific knowledge.

I mean, you're testing a web app? Disable JavaScript in the browser.

Testing an Android app? Rotate the phone to change screen orientation, especially when there's a background operation going on - that's a typical spot for bugs, but no amount of general manual-testing know-how will tell you that. And so on.


I am assuming that you have done the unit/system testing as the programmer, so your partner should look at the acceptance testing. I have found creating personas (of your end customers) and testing as though you were one of them a really good starting point.


Give uTest a try; you can participate in crowd-sourced test cycles and gain both experience and money.

And they have really cool resources over here: https://university.utest.com


TMAP Next is a good read. It's a little heavy, but there are some very insightful chapters on the what and the how of testing.


I found The Art of Software Testing to be a good guide on the subject. I recommend reading it.


Regardless of what else you read, try this one technique for manual testing. You're probably interested in getting more serious about testing because of a couple of major defects that you have seen in the software after your last release. Create a test plan document that walks through the procedure of verifying those defects are not present in the software. On each release, add to the test plan to make sure new features you've added work properly. As you work on the product, the test plan will grow. But it won't grow as much as you might expect, because often several new features can be tested just by making a couple of edits to the test plan.

The goal of testing is to prevent defects from surfacing in production. So track every defect that surfaces in production, so that you can watch that go to zero over time.

Whenever a defect comes up in production, edit the test plan such that you would have caught that defect. Now you won't be bitten by that class of defect in production again.

If you keep updating the test plan in this way you will see a dramatic drop in defects released to production. Once you've done this for a while, you will probably discover that your biggest source of defects released into production has to do with how different your test environment is from your production environment. So you will then start attacking that issue by setting up a proper staging environment, where the staging environment mirrors production as closely as practical.

Then you will start to discover that your biggest source of defects released into production becomes other things, such as little problems with your release methodology, which you can then address.

But the key concept here is: document what your test plan is, and continuously improve it. It's important to note that you must actually follow the documented procedure for this to work. If you write a document so big that you won't actually do it, you're doing it wrong; make a smaller document. If you feel like you only need to do 2 minutes' worth of testing, document what you will do during those 2 minutes. You can start with an empty test plan and that will work, as long as you continuously improve your test plan. The same goes for the procedures that you use to deploy. Always follow the same procedure exactly as documented, because you will need to improve that procedure.
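
To make that concrete, here is a sketch of how such a plan tends to grow (the product, entries and defect are invented for illustration):

    Pre-release checklist (run top to bottom before every deploy)
      1. Log in, then log out -> no errors on either page
      2. Create an invoice with one line item -> totals are correct
      3. Create an invoice with a zero-quantity line item -> rejected with a
         validation message, no crash (added after this escaped to production)

Each escaped defect adds a line like #3, so the plan stays short but keeps covering the mistakes you actually make.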

I have followed these procedures at a number of companies and in a variety of environments, and seen them turn chaotic messes around many times.

Once you have this process down solid, you can automate some or all of it. But the important thing is the overall set of processes around testing and deploying software, and the process for improving those processes. How much of it is automatic versus manual matters a lot less.

As for resources, I'd recommend books on continuous improvement. Because as you get better at testing, you'll discover that general process improvement is what you really need in order to cover the range of things that cause defects in production.


Point him to "Lessons Learned in Software Testing" by Bach, Kaner and Pettichord: http://www.amazon.com/dp/0471081124

Also, "manual testing" is a slightly unfortunate monicker for the activity we are discussing. It is bound to generate some degree of incomprehension or even hostility on the part of some people, for no foreseeable benefit. "Testing" will do. It is something you do with your head primarily, your hands being involved to pretty much the same degree that they are in programming (and we don't usually call that "manual programming").


I believe the OP used the term "manual" to differentiate from "automated" testing, e.g. unit tests.



