Agile Testing Days - Day 2
So, last night happened. In my opinion, the evening programme is what truly sets this conference apart from many others. It’s like being part of a big family or something, although the feeling gets somewhat enhanced by the nice food and drink. I ended up being part of a planned table setting, and we stormed the front when the dinner started. I’d already had a couple of beers on an empty stomach by then, so my body was in desperate need of some food. But with all the talking and fun I ended up not eating that much, while the people around me made sure I constantly had a drink at hand. After dinner there was much more laughing and talking, etc. So I ended up waking up with a brutal headache in the middle of the night. When my alarm went off at 8 am I was feeling worse and only went for breakfast at 9 am. It took me insanely long to eat the tiniest bit of fruit and drink some coffee. But it helped! I started to feel better and some people helped me by supplying aspirin and ibuprofen. Thanks for that. I missed the keynote and apparently I missed something awesome. I’ll make sure to watch it back later (yay for recordings!), but my apologies for the live-blogging fail so far on day 2. I’m going to pick it up now.
Peter Thomas – Testing the Untestable – Beyond Requirement Gathering
Talk starts with two silly examples from testing in production, where users got to see messages that were only meant for testing purposes.
He starts by asking whether testing is really worth it.
– yes because it gives you confidence that your application is working (the basics)
– yes because you can make sure you are building the right product and building it right
– no because testing takes a long time (the example was when you depend on someone manually reviewing a bug fix)
– no because people don’t bother fixing some bugs or broken automated tests
Your team can get caught up in the ‘let’s test more because we found a lot of bugs’ cycle. You find bugs, you do more testing, you find more bugs, etc.…and feedback slows down.
But when is a bug a bug? If bugs are found in production they are often labelled as ‘missed requirements’. But yeah…it happens everywhere.
So what could be a solution here? Try to do Deliberate Discovery. You don’t know what you don’t know, so you’ve got to start with a question.
You frame a story (something your product needs to solve), come together with the Three Amigos (dev, test, ba/user) and try to ask as many questions as possible. When you ask questions you’ll sometimes find out that the user doesn’t know everything. He needs to do research to find out.
Trying to find the missing requirements before you go to production is a journey you need to undertake with the team. It’s impossible for one person to find them all; you need everyone to put in effort.
Another approach is Spike and Stabilise. “Only make production-worthy the things that are worthy to be in production” (nice vague statement?). Seems a bit of a paradox if you ask me. How can you know up front (for sure) whether things are or aren’t worth it?
(Now he said something that makes me furious. His team had a bug fix but couldn’t put it in production because a manager didn’t reply to an email for a day. THEN PHONE THAT PERSON. Geez. I see that happen a lot. People who need someone send an email rather than picking up the phone or making an actual effort to go and see that person. If something is urgent, you make more effort. Period. Ok, breathe and move on.)
Another option is to give users a choice in production. Not classic A/B testing, where users are unaware that it’s happening to them, but really giving them a choice between A and B.
Radical option: don’t test. Yeah….because that’s the best idea ever. The idea behind this is that you are so quick at fixing bugs that it doesn’t matter if they occur. This might work for internal systems but the idea seems disastrous if you are dealing with end-users outside your own organisation. Also, there are enough crappy internal applications already so please for the love of God, don’t do this to people.
More options:
– monitor bugs in production. Not the normal way, but almost in real time: you see that someone is using the application and running into failures, and you help that person immediately (a rough sketch of this idea follows after this list).
– look in production for abnormal patterns. See if your system sometimes goes crazy and try to find the solution. This helps to identify the Black Swan events.
– the best testers are your users (in production)
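As a rough illustration of that near-real-time monitoring idea, here is a minimal sketch in Java of a sliding-window failure check. The class name, the thresholds and the alert() hook are all invented for this example; they are not from the talk.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of "watch production almost in real time": count failures for one
    // user/session in a sliding window and raise a flag when it looks abnormal.
    // Thresholds and the alert() hook are illustrative only.
    public class FailureWatcher {
        private static final int MAX_FAILURES = 3;
        private static final Duration WINDOW = Duration.ofMinutes(5);

        private final Deque<Instant> recentFailures = new ArrayDeque<>();

        // Called whenever the application logs a failure for this user.
        // Assumes events arrive roughly in time order, which is fine for a sketch.
        public void onFailure(Instant when) {
            recentFailures.addLast(when);
            // Drop events that fell out of the sliding window.
            while (!recentFailures.isEmpty()
                    && recentFailures.peekFirst().isBefore(when.minus(WINDOW))) {
                recentFailures.removeFirst();
            }
            if (recentFailures.size() >= MAX_FAILURES) {
                alert("User hit " + recentFailures.size() + " failures within " + WINDOW);
            }
        }

        private void alert(String message) {
            // In reality this would page support or open a chat with the user;
            // here it just prints.
            System.out.println("ALERT: " + message);
        }
    }

The point is only the shape of the idea: watch failures as they stream in and react while the user is still there, instead of reading aggregated bug reports days later.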
Conclusion: think about why you are testing. Only test what is worth testing (how do you know??), and remember that testing can only cover what you know.
Baseline: unknown unknowns exist. Be sure that you can react quickly to them if they occur in production (or rather: when they occur in production).
I strongly agree with this, especially that we have to be aware that we only test what we know. It’s frustrating but true: we have to accept that there are always unknown unknowns. It’s an uncomfortable feeling, but we have to prepare for them. This talk made me more aware of that again and I’m definitely going to do something with it when I get back to work (since we are close to releasing a new product).
Joris Meerts – Moving from Ad Hoc Testing to Continuous Test Data with FitNesse
My colleague Joris is doing his presentation now, so I gotta cheer him on!
We are actually working at the same client; he’s stationed in the database team while I work in the frontend team. He admits he knows next to nothing about the frontend, lol. He is all about testing the logistics, the backend flows: a customer places an order and he tests the complete flow that the data goes through. Being the context-driven guy that he is, he first explains his situation so people understand what he is doing.
When a user places an order, that affects the databases for weeks. That is quite the impact, so testing thoroughly is really important.
Joris asked himself why you would automate. He wanted to keep the test data flowing through the test databases; that was his mission. He had to learn a lot of new stuff: Java, JUnit, FitNesse… A big challenge!
He made sure his test harness was set up in a modular way. He made tiny building blocks that were easy to reuse and put together to form tests.
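To give an idea of what such a building block can look like, here is a minimal sketch of a FitNesse (SLiM) decision-table fixture in Java. The class name, column names and the stubbed logic are my own invention for illustration; they are not Joris’s actual fixtures.

    package fixtures;

    // One small, reusable building block: a SLiM decision-table fixture.
    // SLiM calls the setters for the input columns and the query method
    // for the output column ("order status?").
    public class CreateOrder {
        private String customerId;
        private String article;

        public void setCustomerId(String customerId) { this.customerId = customerId; }
        public void setArticle(String article) { this.article = article; }

        public String orderStatus() {
            // A real fixture would hand the order to the backend under test here;
            // this stub just validates the inputs so the example is self-contained.
            if (customerId == null || customerId.isEmpty()) return "REJECTED";
            if (article == null || article.isEmpty()) return "REJECTED";
            return "ACCEPTED";
        }
    }

On a FitNesse wiki page, a block like this can then be combined with other blocks into a readable test, roughly like:

    |create order                        |
    |customer id|article|order status?  |
    |C-1001     |BOOK-42|ACCEPTED       |
    |           |BOOK-42|REJECTED       |
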
This talk is kind of hard to summarise, but if you have any questions about FitNesse, Joris is your man!
Insights from Happy Change Agents – Fanny Pittack and Alex Schwartz
These guys are trying to keep us awake after lunch, which I applaud!
“How about you?” What great idea do you remember from a conference? Did you try to apply it in your environment? Do you think it was successful?
They noticed a theme in the conferences. Not many pair presentations and too many success stories. Could that all be true, the successes?
The challenge is: you go to a conference, you learn a great idea and you try to apply it and then it fails! How does that happen? Can you prevent the failure?
I love their unorthodox theme here. They aren’t afraid to tell about their failures, which seems more realistic than all the success stories.
Don’t be afraid to fail, but if you fail ask yourself what you’ve learned from it. Analyse what went wrong and try to do it better next time.
— The hangover still got the best of me, so I was resting a bit in my room. I was really sad about this because I wanted to go to the Coding for Testers workshop —
Keynote David Evans – The Pillars of Testing
Last year he was my favourite speaker, so I’m kind of expecting a lot!
Last year was all about visualisation, now we’re talking Greek temples. Sure, cool, I’m down with that. A temple has foundations, pillars and a roof. Ok. We’re getting a little bit of history on Greek temple styles, which brings back memories of my first year of high school, ahhh…
(David is still very good at cracking jokes. Which is a good thing because it’s the last keynote of the day that needs this kind of energy).
So, what is the pillars thing? “A set of things to create or improve within a dev/test process.”
David shows us a slide with all kinds of dependencies testing has to deal with. The idea is that you need to have your base level in order before you can move on to the top. Example: if you want to give rapid feedback to your PO on automated tests, you need to have your base-level test automation in check. Control environments —> create tests —> give feedback. Data control, environment control etc. are the foundation, the tests are the pillars and the roof is the feedback (and in essence: confidence).
This ‘model’ can be applied to any of the testing topics you deem relevant. You can also make the picture look as pretty as you want, judging by all the exotic example pictures.
The word ‘pediment’ distracts me and gives me a Pavlovian reaction to read it as ‘impediment’. I get a weird sensation when I think about the word ‘impediment’; I must have worked with Scrum too much… I get the urge to get up and fix problems.
“The product of testing is confidence”. Still holds true, this was the theme of his presentation last year. Confidence is a balance of safety and courage. This is a nice point I think. Full confidence isn’t possible, you also need courage to take the leap and have ‘faith’ that your product will be successful when it goes into production.
Columns of testing you can focus on: stronger evidence, better test design, greater coverage, faster feedback. I like this model a lot. It is simple and you can’t do anything but agree with the principles. The only thing is that, of course, this is meta; but at least you have four things you can focus on.
Team foundations: Collaboration and Communication (DUH, but it’s not easy to get this right). Business Engagement & Team Structures are below that. If you can’t engage the business and you don’t have the right team structures you can’t successfully collaborate.
Capability Foundations: Test Techniques, made possible by Skills & Experience.
Technical Foundations: Effective Automation, based on application testability and configuration management.
Below all the pillars and foundations is the euthynteria (cue my computer putting a red line below this word??). All of this isn’t possible if you don’t have management & leadership (euthy…weird word) and engineering discipline in place. This is the most frustrating part, because it’s usually hard to change your organisation. That takes time and power.
Yep, I truly like this model and I can see it being very useful to explain testing to non-testers.
Now that we’ve passed the theoretical part, it’s time to ask “How can I use this?”
– use it as a discovery tool
– use it as a discussion starter
– reference it in retrospectives
– apply root cause analysis on where your weak points are
– survey your team or organisation
– rate the perceived success and importance of each element
– look for hotspots and for variances
And maybe you think this model is crap (you’d be silly) but then you can start challenging it. If you think it’s crap, think of something better!
Free book: bit.ly/1v2L2vx woohoo!