Tuesday, June 10, 2008
What is a bug ... A new meaning ...
From Shrini's post: "Let me give a try to this one-liner ... (short post): 'A software bug is a reflection of the mind of a confused human user.' Analyse this statement ..." - Shrini
Here was my response to his post:
I liked the challenge here. 'A software bug is a reflection of the mind of a confused human user'
Well, to be honest, it doesn't work for me personally, and I'll try to explain why.
Using the phrase 'Software bug' narrows the range of the types of testing services we provide. The word 'Software' implies that this is 'Dynamic Testing' of something that has already been coded into software. This overlooks 'Static Testing', ranging from inspections (if your organisation is that advanced) to walkthroughs of Business and / or System and / or Functional / Non-Functional requirements.
My next issue would be the phrase 'reflection of the mind of a confused human'. This seems to imply that the human doesn't know what they're doing, which is generally not the case. I think 'confusion' is the wrong word in the context of the SDLC and testing; I'd opt for the word 'Ambiguity'.
Finally, there's the word 'user'. In the context of the SDLC, 'user' carries a particular connotation of 'end user', who to my mind is part of the bug-detection lifecycle but generally has nothing to do with the coding, unless perhaps it's software for the development team?
So where does that leave us? Well, I'd rewrite your definition, and it would look something like this...
‘A bug is a reflection of an ambiguous human mind’
Now looking at this definition, I could be happy with this and would offer one of my own to try on for size…
To err is human, to detect divine.
Wednesday, June 4, 2008
I was reading Steve Rowe's blog and he was making some interesting comments about the whole Manual versus Automated testing argument. It made me think about where I stand, so I thought I'd share what I think.
I actually think a more appropriate title for this would be 'We need a better way to execute testing'.
I've read your post with interest, along with the recent update. When I initially read your original post, I thought you were pro-automation at the expense of all else. Having now read your other posts, I can see that you're playing devil's advocate and putting forward views to challenge and invoke discussion. So here's my five cents' worth. In NZ and Australia two cents is no longer legal tender, so I'm moving with the times :)
I guess my thought in wading into this issue of Manual versus Automated testing is that both are tools that assist in the 'Test Execution' phase. But the test execution phase isn't the thing in testing that gets me up in the morning and lights me up. Test Execution, for me, is just about executing enough scripts and / or scenarios to ensure that you can put your hand on your heart and say that, to the best of your knowledge, system coverage has been achieved. This coverage is not testing, at least in my opinion. Testing (again, in my experience) comes from the Analysis phase, which runs in parallel with the execution phase. It's the Analysis phase that gets me out of bed in the morning and makes me feel alive: understanding the results at the points of failure and analysing why a failure is a defect. Just to be perfectly clear, it's not how it is a defect; that's the role of the development team. The why stems from a defect being logged because a requirement hasn't been met, or because the expected and actual results have deviated, or whatever. We document the why, and the development team say how, as in 'this defect occurred because of 1, 2, 3' or 'this defect is not a defect because of 1, 2, 3'.
I've been part of many projects where the answer to the Execution phase was to get more resources, and typically this just led to more confusion and detracted from the deliverable. In his post 'When to Manually Test and When to Automate', Steve uses the analogy of the canary test, which I think is awesome: the automated or manual suite can assist in the detection of 'problems' that can then be further analysed, prodded and poked to detect further problems / defects.
I've always thought of regression in 3 layers.
Layer 1 Core flows (Typical candidates for automation)
Layer 2 Alternate flows (Good candidates for automation depending on complexity)
Layer 3 Exception flows (Usually too complex to be automated and may be better handled manually)
So in my ideal world, if Layer 1 fails I'd invoke Layer 2, and if that fails then I'd invoke Layer 3. A rough sketch of that escalation is below.
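Just to make the escalation concrete, here's a minimal sketch in Python. It's purely illustrative; the layer runner, the stand-in test names and the idea of tests as callables returning pass / fail are all my own assumptions, not any particular framework.

```python
from typing import Callable, List

# A layer is just a batch of test callables that return True on pass.
TestFn = Callable[[], bool]

def run_layer(name: str, tests: List[TestFn]) -> bool:
    """Run every test in a layer; report results and return overall pass/fail."""
    failures = [t.__name__ for t in tests if not t()]
    print(f"{name}: {len(tests) - len(failures)}/{len(tests)} passed", failures or "")
    return not failures

def layered_regression(core: List[TestFn], alternate: List[TestFn],
                       exception: List[TestFn]) -> None:
    """Only escalate to a deeper layer once the shallower one has failed."""
    if run_layer("Layer 1 (core flows)", core):
        return  # Core flows are healthy; no need to dig deeper.
    if run_layer("Layer 2 (alternate flows)", alternate):
        return
    run_layer("Layer 3 (exception flows)", exception)

# Hypothetical stand-in tests, purely to show the escalation.
def login_ok() -> bool: return True
def payout_calc() -> bool: return False  # a failing core-flow test
def retry_flow() -> bool: return True

layered_regression([login_ok, payout_calc], [retry_flow], [])
```

The design point is that a clean Layer 1 run short-circuits everything else, which is exactly what makes the core-flow layer such a good automation candidate.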
Back to the Blog at hand:
In Test Execution, I've often had to employ an analogy with executive teams, project managers and a myriad of IT and non-IT teams: you can't get 9 women to be pregnant for a month each to expedite the delivery of a baby. It's untenable and downright stupid, at least right now; who knows what medical marvels are around the corner if a man can be pregnant. :) The point here is that Execution may be able to be sped up, but the Analysis can't. Now that I've come back to this blog after a day, I realise that my analogy probably doesn't fit as well as I had hoped, but I'm going to keep it here anyway. I think it's really good, given the right context; now I just have to figure out what that is...
I've set up test teams where we had 'Testers' and 'Test Analysts', and at the risk of causing offence I don't really want to get caught up in titles, but I do want to recognise that I believe a tester is usually a domain expert, or someone specifically used in the execution of scenarios and scripts (automated or manual; as stated before, I believe they're tools), feeding the 'Analysis' phase, which is to analyse the results. Correct me if I'm wrong, but we don't spend a lot of time analysing why a script / scenario passed; my focus is generally on points of failure. I'll go out on a limb here: if I hit a 20% failure rate in any iteration of test execution, then there are bigger problems with the Project than just testing (a toy sketch of that rule of thumb follows). :(
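Purely as an illustration of that rule of thumb, here's how the 20% alarm might look in code. The threshold constant, the function and its name are my own framing, nothing formal:

```python
FAILURE_ALARM = 0.20  # the rule-of-thumb threshold from the paragraph above

def execution_health(passed: int, failed: int) -> str:
    """Classify an iteration's failure rate against the 20% rule of thumb."""
    total = passed + failed
    if total == 0:
        raise ValueError("no tests executed")
    rate = failed / total
    if rate >= FAILURE_ALARM:
        return f"{rate:.0%} failed: bigger problems than testing"
    return f"{rate:.0%} failed: analyse the failures and carry on"

print(execution_health(passed=170, failed=30))  # 15% -> carry on
print(execution_health(passed=140, failed=60))  # 30% -> escalate
```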
For greenfield projects, I typically plan for two iterations of testing and then a final regression (a third iteration) on stable code, assuming a downward trend in defects and that the entry / exit criteria have been met.
For maintenance releases, I'd plan for 3-5 iterations over the changed 20%, tested manually, and regular regression tests over the remaining 80% (usually a good candidate for automation, but run manually if an automated suite doesn't exist). At the end of the project, the scripts / scenarios for the changed / enhanced functionality are passed back to the automators (assuming that's a separate person or people) for integration into the regression suite. Incidentally, anything beyond 20% change should be classed as more of a re-write or refactoring project than a maintenance release. A quick sketch of these heuristics follows.
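And since I've been sketching things anyway, here are those planning heuristics as a toy function. The numbers are just my own rules of thumb restated; nothing here is a formal planning method:

```python
def plan_release(changed_fraction: float, greenfield: bool = False) -> str:
    """Turn the rules of thumb above into a rough test plan."""
    if greenfield:
        return "2 iterations of testing + a final regression on stable code"
    if changed_fraction > 0.20:
        return "treat as a re-write / refactoring project, not a maintenance release"
    return ("3-5 manual iterations over the changed functionality, plus "
            "regression (ideally automated) over the remaining functionality")

print(plan_release(changed_fraction=0.15))            # maintenance release
print(plan_release(changed_fraction=0.40))            # re-write territory
print(plan_release(changed_fraction=0.0, greenfield=True))
```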
So, to summarise: is there a better way to execute tests? Absolutely. Is there a better way to analyse failure? Absolutely, and that's why we're in this game: to seek out the truth and continuously improve the way we do things.
Tuesday, June 3, 2008
An interesting observation I made today, reviewing the blogiverse, was that for every argument in relation to testing there was an equal and opposite counter-argument.
- Certification is good vs Certification is bad
- Manual vs Automation Testing
- Agile vs Waterfall
- Doo doo doo vs Da da da (Borrowed from the Police song - because we need to credit our sources)
- Blah blah blah blah blah
There are smart, if not downright brilliant, people on either side, and I wonder how things got so polarised when, in the 10 years I've been working with testers, we've had nothing but things in common. Is it because we 'fight' a common enemy? By that I'm not referring to the development team or the project manager, but to the fact that we (in projects) make ill-considered decisions, we accept things that shouldn't be accepted, and we assume things that shouldn't be assumed.
Does anyone besides me think that assumptions are one of our biggest enemies when it comes to testing, and to the entire SDLC for that matter? We look at risk assessment plans with associated mitigation strategies, based on impact to organisational, project and product risks, and yet I've seen pages of assumptions which aren't challenged, reviewed or impact-assessed should they prove correct or incorrect. Since it is an assumption, it becomes someone else's problem, because we assumed this was true, or assumed that it was not. Anyway - I digress, but maybe not as far as I had initially thought. You gotta love a medium where you can say something, get sidetracked and get back to the point, or stay sidetracked...
OK - so where was I?
OK, I need to digress a bit more. My wife and I had a huge fight today after I picked up the kids from school and we went to the mall, and I parked way too far from the bank that she needed to go to. She said 'Fine, I'll get out then', hopped out and slammed the door. I waited with the kids for about half an hour, rang both cellphones and got voicemail, and thought, she's not coming back, so I came back home. Fifteen minutes later she rings me and says 'Where are you?' I say, 'I'm pulling up our driveway', and the phone goes dead. So who's right? Did you see what I did? In the clinical detachment of using this forum as a surrogate marriage guidance counsellor, I can see that I made a huge assumption that she was not coming back and was still mad at me. The risk assessment in my head was that it was going to cost $22.00 to catch a taxi home, and I had the kids, so it was all good. Apparently not.
How I've stayed married to this woman for 17 years is probably beyond all reason. Surely I would have been paroled by now had I committed the even more heinous crime of murder. So here I am, rambling to myself about work and life in general.
Is fighting a waste of energy? I guess it depends on what comes out of it. No, it's not make-up sex; I said I've been with this woman whom I love for 17 years, so none of that happens anymore. We still push each other's buttons, we can still annoy the crap out of each other, and still be perfectly committed and relatively happy. So when it comes to testing blogs and testing forums, who's right and who's more right? Does it matter? Should it matter?
So, at the end of what started as a work-related post, I guess it comes full circle in acknowledging that sometimes being right at all costs does actually have a price tag. The real question then becomes: are you willing to pay the price for being right, or the most right?
Wednesday, May 21, 2008
At the risk of offending some people, I want to express the opinion that domain expertise does not a tester make. I think it helps, but in my experience having access to domain expertise, whether within your test team or simply available to it, can be enough. The most blatant use of domain expertise in testing is on SAP projects and Telco networks. The catch-cry there is that testers must have had experience with these systems to even have a shot at being chosen for this type of work.
What I've learned from my experience, however, seems to contradict this. I've seen that testing is an overarching skillset that transcends domain. I agree that domain knowledge is important, but not to the exclusion of what I submit for your consideration is a far more important skill: the ability to analyse. In my previous posting about certification, I outlined a group of testers with different titles, job descriptions and workloads.
In my initial assessment I put my theory to the test and rotated the group. After the initial shock, I sat with them and said, as I am conveying to you, that testing an application is an overarching skillset, and that domain expertise, while useful, was not what they were employed for. The immediate concern / backlash from the development teams was that the testers' domain expertise was going to be lost. I disagreed, since they would retain this knowledge as they were still in the test team.
They would write handover documents and compile a deskfile of handy hints to give to the new 'application tester'. They would also develop a holistic view of the testing undertaken in the organisation, which would help them understand the bigger picture. Also, some members of the team needed to see how lucky they were in terms of the documentation provided and the support from their respective development teams. Domain expertise started to be shared, and when the development teams found the expertise had not dissipated, they became more accepting of the change. The test team themselves found that they really did have analyst skills, and that testing as a discipline was something to be reckoned with. I've always suggested that while testing may not be 'rocket science', it is still a science. :)
I'm not discounting domain expertise as a valuable test resource within the team. I am saying that where this expertise is absent, I'd expect most testers to have the ability to extrapolate test conditions, scenarios and scripts from the available documentation; if there is none, then to second a business / end user into the test team for this expertise, or possibly a combination of all of these.
I've tried to impress on the test and development teams respectively that the active word in their job title is in fact ANALYST, not Test or Programmer. I believe I can teach people to test, in terms of executing scripts / scenarios and capturing the outcomes. The trick, however, is the ability to analyse what the results are showing, or not showing as the case may be. It is the analyst in our job titles that causes us to ask 'What happens if I do this?', 'What happens if I do that?', and to systematically and methodically uncover and detect defects within the time-constrained, resource-intensive, highly stressful, joyful experience that comes from testing, and for that matter development.
I'm new to the blogging thing, but I've spent some time over the past couple of weeks reading blogs of interest and came across one whose content I really liked. I wasn't really intending to make my own blog, but after I unsuccessfully tried to leave a comment on his blog, I decided to create one and to follow it up periodically.
Anyway, the person whose blog I've been reading is Shrini Kulkarni, and his blog is http://shrinik.blogspot.com/
You should have a look at what he writes about; I find it relevant, timely and interesting.
Anyway, I was reading with interest what's been written on Shrini's blog around certification. I seem to subscribe to a different theory, and I thought I would offer my opinion for your consideration.
We in QA / Testing all know that the market is flooded with Test Analysts of various levels, Engineers of various levels, and Managers equally so.
Anyway, I have the following theories about certification:
1. Certification is a necessary evil
2. Certification under the ISTQB umbrella was not as easy as it looked
3. Certification is not about us (as Testers)
A NECESSARY EVIL?
OK, is it really evil? I thought it was. When the ISEB / ISTQB curriculum first came out, I was offended. How could a multiple-choice exam possibly test my 7 years' experience as a professional tester? I'd worked on multimillion-dollar projects and in teams ranging from myself as the entire test team to over 20 test team members.
Is it evil? Probably, especially since my career was founded on a 5-day UAT course I was sent on 6 months after being employed as a Test Analyst, having already delivered Y2K test deliverables back in 1998. Even though I had practical experience, the idea of someone dictating the 'rights and wrongs' of testing made me equally nervous and doubtful of my abilities.
ISTQB - NOT EASY
I managed to avoid the certification trap until last year, when I began Test Management with a wagering organisation. When I arrived, the test team was 3 testers who worked separately under a development lead. Each tester had a different title, workload and job description.
One was a QA Engineer, another a Tester, and the third a Development Support Officer.
The QA Engineer tested a reporting application, worked standard hours, and had excellent documentation and development support for the domain.
The Tester worked with the applications team that created the application managing legalised wagering for the country, including sports, horses, etc. She worked long hours, often weekends, and maintained and ran the test lab equipment, which replicated production. Documentation could be limited to defect log entries, developers' technical documents and the goodwill of the development team.
Finally, the Development Support Officer worked on the applications that supported the point-of-sale equipment for the wagering application. He was quite technical, and the development support role included testing as well as production support. Documentation was extremely limited.
A few other observations about the environment:
1. This was an 'in-house' development shop, which surprisingly is treated very differently from an external vendor. (This may be the topic of a future posting.)
2. It was development-centric in the definition, design, build and implementation aspects of the SDLC.
3. Testing was a poor second cousin in relation to development.
Oh My God - Where to start?
Not to get too tied up here, I decided this was a great point at which to get equalisation across the team. Certification offered me a way to determine their skillsets and to get a 'baseline' of those skills. It was relatively easy to pitch, as with most managers it all comes down to 'What's in it for me?'. Anyway, I thought that since I was asking them to get certified, I should lead by example and get my certification as well. I started by trying to understand where the gaps in my knowledge were, and reverse-engineered my study by sitting the mock exams cold to see where my limitations lay. I scored 26 on both mock exams, then googled everything and anything on ISTQB and ISEB in search of more material. Funnily enough, I couldn't find any. I opted for the 1-day refresher course, which my team and I attended since we were already in the role.
The course was interesting. We were given another mock exam on entry, scored, and then we focused on areas of weakness, or in politically correct terminology, opportunities for learning. :) Anyway, I scored 28 and so was reasonably happy with my ability, knowing I only needed 25 to pass.
Then I sat the ISTQB Foundation exam, and having completed it I was 100% certain that I had failed, and not just by a couple of questions but monumentally. Thoughts of asking people whether 'they would like fries with that' filled me with dread, and I felt like a fraud. It meant that 10 years as a career tester had been wasted, compounded by the fact that I had been insisting my team get certified and I needed to lead by example. What an idiot. Over the next few weeks, I anxiously waited for either my certificate of incompetence or invoices from former employers asking for a refund of contracting services provided.
I was more than pleasantly surprised to learn that I had passed, and comfortably, but there were some lessons to be learned.
1. Certification is about understanding the 'curriculum of the day'. Finding out that ISTQB calls the phases of testing Component, System, Integration and Acceptance, as replacements for Unit, System, Integration and UAT, both broadened my horizons and challenged my 'current thinking' about what testing is and isn't. Isn't that why we do what we do? As testers, we challenge the norm and employ critical thinking to pose the questions 'Why do we do it this way?' and 'What happens when you do it that way?'
2. I would suggest that certification is in actual fact not for us, but for the employers of our services. To many employers, testing, like most other teams within the SDLC, is a big black hole that eats money, time and resources. What certification tells them is that they get a standard skillset of testers, assessed against minimum competency criteria by a duly recognised and accredited body (hopefully). It is a minimum standard, nothing more.
3. It's true that multiple-choice can be limited in exposing knowledge gaps, but I should add that ISTQB questions are posed so that some answers are more correct than others, which made the choices more complicated, at least in my eyes.
4. As for those in the industry, I'm sure you will agree that certification may get you in the door for an interview, but a tester can spot and 'test' a tester at 100 paces with pinpoint accuracy. Isn't that why we do what we do?