So this is very fitting: I actually spoke to Alan last week, after he tweeted about a BS article. I told him I'd always considered writing blog posts as reactions to other material, be it articles, blogs or tweets, yet I'd never got around to doing any. Until now.
Last night Alan shared his thoughts on the future of test automation, and in particular the role of the automation engineer, in an excellent Twitter thread. So I'm going to quote each tweet in the thread and share my views on it.
I probably should blog this next tweet stream, but fuckit - Twitter is a great place to write an outline....— Alan Page (@alanpage) July 29, 2017
100% this. Outlining ideas is one of the things I think Twitter is actually good for. For lengthy debates and elaborating on ideas, though, it's not the best place; blogs, forums like The Club, and voice-to-voice conversations, be they online or face to face, are much better suited.
I read a few people say recently (I'd cite it, but I can't remember where) that test automation is the use of *any* tools to aid in testing— Alan Page (@alanpage) July 29, 2017
Which I don't agree with, so for the purpose of this stream, define test automation as code that executes the system under test...— Alan Page (@alanpage) July 29, 2017
Now, I'm not looking for a debate about words here; I've made my stance on such discussions clear. I call this Automation in Testing (AIT): the use of automation in the context of testing. I'm not trying to be smart or clever by creating my own term for this; I'm using the term to aid my own thoughts on the topic of automation in software testing. As is widely agreed, at least I believe it is, automation is far bigger than automated checks/tests. It's also going to get a lot bigger with more CI, CD, virtualisation, ML and AI in the future, for both automated checks and tools to facilitate testing.

I'm working with Mark Winteringham to create an AIT namespace. Not to tell people how it should be done, or to send them links to say 'you're wrong', but to build up a body of material and courses on how Mark and I see it. I hope you'll hear and read more about that in 2018. I'm excited about it.
I digress, apologies.
I would love for test automation to mean the use of any tools to aid testing, but it doesn't. The term has been bastardised and tarnished over the years, in my opinion. Test automation in the wider software development space means automated checks/tests. I've had great conversations with early authorities in this space, like Dot Graham, and they've indicated they share the same view. But this is OK; it's OK for it to have that meaning. It just means we need to explore alternative words and alternative thoughts to discuss other usages for automation, hence my direction with AIT. Many will disagree with AIT, and I hope they do, because that's going to help us have conversations about the use of automation and, hopefully, move it forward.
I have friends, both personal and virtual who are in a "tester" role and write test automation. It's also what a lot of testers aspire to do— Alan Page (@alanpage) July 29, 2017
I've been occupying such a role for the last 10 years. I consider the ability to code, and my understanding of automation, to be some of the many tools I have on my belt. It certainly is what a lot of testers aspire to do, but I also wonder how many just want to be a 'test automator'?
I feel strongly that the role of "test automator" will disappear in the near future. It's a job better suited for the product developers.— Alan Page (@alanpage) July 29, 2017
Now, in a recent talk I also said the role of a 'test automator' will disappear, and I feel strongly about this too. However, in my talk, I didn't say it was better suited to product developers, although in the majority of cases they would do a much better job. My thoughts are centred around 'toolsmiths': a role specifically designed to build tools to assist with software development, supporting all the roles involved in it. I intend to write more about this soon.
I feel really strongly about this. 95% of the test automation needed on a product can, and should be done by the programmers of that project— Alan Page (@alanpage) July 29, 2017
Definitely. At one of the best places I worked, we would explore the existing code and checks in the pre-planning meeting. Once the room agreed on the understanding of the new feature, we would make notes on the ticket about which checks might need reworking or deleting, and which new ones would need adding. Now sure, it takes some testing knowledge to design such checks and make decisions on risk, but if you're in the room, you can lead such discussions and ensure all agree; then it doesn't really matter who implements them, does it? The other angle of this is that if the developers are writing these checks, checks that have been agreed and discussed with the team, then you will get software that can work, because there will be an agreed check already passing on CI, allowing you to do some further testing if required.
There is, of course, a massive amount of testing work that can done via coding or tools. Stress, analysis, diagnostics, etc.— Alan Page (@alanpage) July 29, 2017
There is indeed, a huge amount. Again, I'm not writing enough, but I've spoken a lot about the distinction between reading and writing code. Reading code is a great source of test ideas and bug discovery, even better if you read or create it with the dev. But yeah, tools. There are a lot of fantastic tools out there; I personally try to find time each week to try a new one. Performance tools, proxies, tools within IDEs, tools within SDKs, the plethora of tools in the virtualisation/DevOps space. It's truly awesome. I'm always looking to add new tools to my belt. I don't claim to be a master of them all, but I know they exist. Sadly though, I hear about and meet a lot of testers who stick to a single language and framework. Go explore others; fill that belt up. You'll be surprised how much you can transfer from one tool to another when you take the time to reflect on the skills you have.
Any testing not done by the implementer(s) of the code requires people that can use, implement, and write these tools.— Alan Page (@alanpage) July 29, 2017
Definitely this. Again though, there's a huge difference between spotting opportunities for tools and implementing them. In recent talks, I've been sharing heuristics for spotting testability opportunities; I'll write something in more depth on this soon. Another related skill is what I call the education section of LegoAutomation: educating the team on the tools we've built, how they work, how to install them, and who to talk to about improvements. The same goes for checks. Hopefully, everyone was in the room, so we know which were added and removed, but if you're not in the room, it's important to share with the team what has been implemented, so there is a wider understanding of what a green build radiator actually means.
Sprinkle in some data analysis, customer empathy, and systems thinking, and that's the future of test.— Alan Page (@alanpage) July 29, 2017
Ooooh, I do love me a bit of these things. Data analysis is big right now, and only going to get bigger. Testers have a huge role to play in designing and implementing such data capture, using our systems thinking skills. Then, when we have a mass of data, we need to be able to find interesting information in it: looking for patterns, anomalies and traits to assist us with our testing decisions. Such data can support risk assessments, test design and test prioritisation. And not just data from the live system, but data from our own development process too, looking for opportunities to improve quality earlier on.
Customer empathy is really important. I was really close to my customers in my first few testing gigs; in the automation-heavy period in the middle of my career I probably couldn't have been further away; these days I always find the time. To clarify, customers to me here are the PO/business, but also users. Social media is huge: there is probably a customer somewhere talking about your product right now. What are they saying? Could it be useful? I remember when Pradeep Soundararajan introduced me to Twitter-driven testing, which I later renamed social-media-driven testing. He was working on a public app and decided to see if there were any tweets about the product, and there were hundreds of them! He'd just discovered a huge resource of test ideas. In my recent role at O2Priority, I did exactly the same.
I've alienated a lot of testers by now, but I've spotted trends before, so you'll have to trust me with this next, important tweet.— Alan Page (@alanpage) July 29, 2017
If you want to be a better tester and be an expert in testing, do not even bother learning to write test automation.— Alan Page (@alanpage) July 29, 2017
Now, having read this tweet, I immediately tweeted to clarify to others that Alan was saying don't learn to write test automation, based on his earlier definition. He didn't say don't learn to code; he actually said to learn code in an earlier tweet. Another important word here is write. As already mentioned, there is a big difference between identifying and designing tools and checks, and actually implementing them. Plus, there is most definitely someone on your team better at writing code. That doesn't mean you shouldn't; if you want to, please do. And if you do, work with those developers, learn from them, pair with them. I'm just making the argument that the other work involved in automation is harder to master, and more important, than the code itself. In my opinion.
I’ve given this a lot of thought, hence why this post wasn’t finished 6 hours ago! I agree with Alan.
As I’ve been trying to teach with LegoAutomation for the last few years, there is so much more to automated checks than writing the code. I’d argue the writing of the code itself is probably the easiest part of the whole thing. So why not utilise the people in the team with the strongest skills to implement them?
There are so many other aspects of testing to spend time on, to deepen our knowledge of, to learn, and to share. Exploring, storytelling, talking testing, test design, check design, coaching, mentoring, pairing, CI, CD, AI, ML, bug reporting, information sharing, agile, empathy, problem-solving, risk assessment, biases, big data, analytics... it's late and I'm tired; perhaps you can all add to this list.
In summary, I'm on board with Alan's views. What's important for me, though, is to distinguish between the design and creation of automated checks and the use of the information they produce. I'm going to be writing a lot more in the coming months; I care deeply about this topic. Also, I'm not saying don't write automated checks. If you currently do, or want to, please continue; I encourage you to go learn. I'm just agreeing with Alan that it's not the only way to go, and that, the way the future is shaping up, a deeper understanding of automation as a whole (not the check-writing part) and of other areas of testing will be more valuable.
I want to thank Alan for putting his thoughts out there. I encourage him to continue to do so, and I eagerly await reading them. I hope to do more posts like this.