Programming Philosophy :: Inverted Pyramid
I have been attempting to write this blog post for a few weeks now and have, obviously, not been successful until now. This is my latest iteration of the little war of thoughts happening inside my head. Hopefully the post is better for it and readers can get something out of it.
We learn in a traditional quality assurance course that quality assurance is a sequence of steps to verify that the code matches the requirements. There is a rigorous, structured approach that is outlined in these courses that is modeled after how some companies in the industry perform quality assurance. Unfortunately, this is modeled after the "throw it over the wall" waterfall development methodology. How can we apply quality assurance techniques in an Agile environment?
Over the Cubicle Wall
This is how the industry does it.
I am lucky to be going to a really good school that has a great group of computer science and software engineering professors. These professors really care about teaching undergraduates (that's all they do; no research) and put a lot of time into giving us the best education possible. The result is that, in my experience, the students are taught "ahead of the curve" for academia. For example, our software engineering undergraduate degree is one of only a handful in the country and teaches a solid series of core software engineering courses. This includes everything from requirements gathering to software quality assurance.
However, we are talking about academia here. Through no fault of the institution or the department, academia lags behind industry. The nature of higher education brings red tape and bureaucracy that cannot keep up with the bursts of change that sweep through industry. ABET accreditation, along with other oversight programs, hamstrings the department from being agile enough to create the modern programmers it strives to create. So, despite the efforts of my school and my great professors, some crufty old ideas stay in place long past their shelf life.
One of these is how to go about software quality assurance. When I took the course, there was a clearly defined line between software quality assurance and software development. We worked on a project that was feature complete but needed to be tested for quality. As an emulation of the traditional approach, it was well done: we were a team of four testers tasked with testing the application for faults. We were given the software documentation and source code, and one of our team members had been on the development team for that project. We had no contact with any of the other developers; I could not even tell you who they were! We were to report bugs to them, which they would promptly fix.
Finally, let me offer a definition of traditional software quality assurance before providing an alternative. In the traditional approach, a separate team of "testers" tests the system. They are not involved in the development of the application, and they are often physically separated from the development team (not always; I was on a project where this was not true). They receive their instructions via a project manager or a note stating that Feature XYZ has been completed and that the quality assurance team should start work right away on verifying that the requirements have been met. The testers toil away until they have tested the application fully (and the "other team" has fixed the bugs) and all of the requirements have been met. Then they release.
Yep, that's how we do it.
To me, the traditional approach sounds terrible just by its definition. However, in case you're not quite drinking the Kool Aid that I am, let me list a few reasons that this may be good and a few reasons that this may be bad.
Testers need to be apart from the development team in order to test effectively. This seems to be the soundest argument for the approach above. The first problem comes from the assumption that a tester is more closely aligned with the customer. I would argue that this is only marginally true; the real problem is that the tester is not the customer. If we want to know how a customer uses the system, let us use the customer herself!
The second problem comes from increased risk. Any project manager will tell you that managing a project is, on some level, about managing risk. Having two technical teams independently evaluate the requirements increases the risk that the customer does not get what she wants! The counterargument may be that re-evaluating a requirement puts more eyes on it and thus increases the likelihood that the collective team gets it right. However, if a developer is unsure about a requirement, think of how solid his resolve becomes if a tester corroborates his (false) assumption.
Developers know too much about the code to test! (That is, they cannot black-box test.) This is another relatively sound argument for traditional quality assurance. Indeed, it is more difficult to black-box test code that you wrote yourself. However, we can mitigate this with another strategy, which I'll get to soon.
Competition between the test team and development team increases quality. A bit of healthy competition is a good thing. We are, by nature, competitive. However, Us v. Them mentalities, especially when sustained for long periods of time, create animosity and not competitiveness. Animosity is definitely not what you want on your project.
Developers do not know how to test! I do not think developers are given the opportunity to prove themselves as testers very often. Fortunately, companies sometimes have to improvise by putting developers on understaffed quality assurance teams, and from what I have heard and seen, these developers do not struggle once they get up to speed.
"Testing is below me," says Mr. Developer. Despite the stigma of quality assurance, being a tester is just as rewarding and enjoyable as being a developer (I love doing both!). As with being a developer, it all matters on what you put into it.
... There are bound to be more; it is hard for me to get into this mindset. Please, let me know if you can think of any arguments for the traditional method of quality assurance!
Creates animosity between the test team and the development team. In each project that I've worked on with the separate team structure, the development team would hate whenever a member of the quality assurance team would contact them. Invariably, it was to point out something that they did wrong. The test team, on the other hand, would get frustrated with the development team at the number of bugs there were (no matter how many bugs there actually were) and how the development team kept making these mistakes!
Developers lose the motivation to test. If there is a quality assurance team whose job it is to catch your bugs, then why would you spend any time catching them yourself? Perhaps you want to avoid the ridicule that comes with creating a bug, but that seems like a poor motivating factor. Eventually, the developer is just not going to care if someone ridicules him for a bug he introduced.
Customers end up fighting both the quality assurance team and the development team. Meetings with clients involve both the quality assurance lead and the development team lead. Each of these teams has its own goals that the customer must push against. We want the customer to be as comfortable and satisfied as possible--not frustrated and unsatisfied.
Time between development and testing is increased. This may not be apparent at first, but completing a feature, tossing it over the wall, creating a set of test cases for the feature, and running them increases the time it takes to test a feature, even when the two activities happen adjacently.
This all sounds pretty bad. But just pointing fingers isn't going to solve anything! What can we do about this?
In Extreme Programming Explained, Kent Beck tells us that the "extreme" part of the practice comes from taking Good Things(tm) to the extremes. Sometimes, these practices are counter-intuitive and costly. One controversial practice that serves as a great example for the purpose of this blog post is test driven development.
Test driven development turns the normal development process on its head and forces developers to write unit tests before they write the production code. Without going into too much detail and analysis of this practice, it has proven to be very effective at both reducing software bugs and increasing developer productivity.
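To make the practice concrete, here is a minimal sketch of one red-green TDD cycle in Python. The `slugify` function and its cases are hypothetical examples, not code from any particular project: the test is written first, fails because the production code does not exist, and then just enough code is written to make it pass.

```python
# Step 1 (red): write the test first. Running it at this point fails,
# because slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile QA  ") == "agile-qa"

# Step 2 (green): write just enough production code to make the test pass.
def slugify(title):
    """Turn a title into a lowercase, hyphen-separated slug."""
    return "-".join(title.strip().lower().split())

# Step 3: run the test again; it passes, and we can now refactor
# with the test as a safety net.
test_slugify()
```

The cycle then repeats: each new behavior starts as a failing test before any production code is written.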
We are going to solve our problem by turning the quality assurance process on its head. In fact, I mean that a little literally. Looking at the figure on the left, we can see that the traditional pyramid has been flipped over and turned about. This figure describes a methodology that solves the problems we outlined above. We'll get to that in a second. However, the most important thing about this approach is that it satisfies our collective values. These values, outlined in Extreme Programming, should be the driving force behind every principle and practice.
We will briefly explore which values are supported by this methodology. There will be a brief discussion about how each of the values are satisfied. Each value could be explored more in detail and I would love to do so at a later date. Please ask questions if you're not sure about my reasoning!
Communication: We, as people, like to communicate and interact with other human beings. In this process, we embrace communication by creating a single coherent team. Instead of fragmenting the quality assurance team, the development team, and the customer, we bring them all together in order to foster communication and collaboration. This gives us the benefit of understanding the needs of the customer without any barriers of communication (such as a Customer v. Dev Team v. QA Team process).
Simplicity: I think that this process embraces simplicity in the most obvious of ways. Instead of having a complex system of teams that interact in a structured manner, we reduce the complexity to a single team that handles the entire process. This does, however, put increased pressure on the developer, tester, and customer, and we will get to that later.
Feedback: In this system, we reduce the need for structured communication channels and increase the communication between all parties by putting them face to face every single day. The developer gets feedback from testing almost immediately, and the customer can interact with the developer as the story is being developed.
Courage: Courage is a bit more abstract in this system but is still present. Courage occurs when a developer takes on the role of a tester and puts her stamp of approval on the code that she has written. Courage also takes the form of the customer becoming more involved in the software development process, as well as the customer engaging with the quality perspective of the team.
Respect: Instead of breeding animosity, this system breeds trust and respect. The customer, involved in the day to day conditions of development and the care that the developers take in ensuring quality, comes to respect the developers. The developers, working closely with the customer, come to appreciate and respect the ideas and concerns that the customer has.
I am a developer, what does this mean to me?
As a developer, you are now responsible for the quality of your code and features. Instead of relying on another team to ensure quality, your team is now going to have to do it. This is an area where you will have to learn. The good news is that developers thrown onto quality assurance teams tend to survive. This isn't an impossible task, and hopefully you can approach it with enthusiasm and respect. Testing your own code is going to make you a better developer and allow you to confidently release your code to customers.
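As a small, hypothetical illustration of what owning quality looks like in practice: the developer who writes a function also writes the tests that probe its boundaries and failure cases, rather than handing it over the wall. The `parse_quantity` function and its cases below are invented for this sketch.

```python
# A developer-owned function and the tests that ship with it.
def parse_quantity(text):
    """Parse a positive integer quantity from user input."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Boundary-value and invalid-input tests, written by the same developer
# before the feature is considered done.
def test_parse_quantity():
    assert parse_quantity("3") == 3
    assert parse_quantity(" 1 ") == 1  # minimum valid value, with whitespace
    for bad in ("0", "-2", "abc"):
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # expected: rejected input
        else:
            raise AssertionError(f"expected ValueError for {bad!r}")

test_parse_quantity()
```

Writing the failure-case tests yourself forces you to think like a tester about your own assumptions, which is exactly the habit this methodology asks of developers.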
I am a tester, what does this mean to me?
As a tester, you are now going to have to become a developer. A "tester" position does not exist in the process we have described. Instead, you are going to become a generalist who has a little more experience in testing than the rest of the team. You, indeed, have the hardest path ahead of you. It is your job to guide (but not "pinch hit" for) your fellow developers in the process of testing their code. You also have to learn how to interact with customers as a developer and be a part of the development team. You've been underappreciated for a long time now, and it's your time to shine.
I am a customer, what does this mean to me?
As a customer, this is all positive for you. If you are working on an Agile project, you are already deeply embedded in the development process. Instead of having to wrestle with two teams, you are now working closely with only one. The team, as a whole, will come to understand what you mean by quality and how to create what you need.
Yes, this is all just theory though! How do we do it?
I have put this article under "programming philosophy" for a reason. There is very little in the way of practice in this article, and I hope to address that in my next one. It will include a screencast showing this philosophy in practice and how it may be better than the traditional approach. Unfortunately, I have no data to support my philosophical claims here, but I think they are in the spirit of Agile and Extreme Programming. Those have been proven to be effective, and piggybacking on that data may serve as a useful indicator of the practical implications of this methodology.