Presentation Transcript

All right. Thanks very much for having me. So … And you set things up nicely. My background is, as a programmer I spent most of the 90s building things. I certainly didn't know how good my life was because I also spent time showing those things to people. I would stand at these conferences and demo my product and hear from customers, and then I might go geek out with my friends and change things that night. And I was pretty heavily influenced by a lot of different Kent Beck writings. So I thought we'd start with this cover here. Obviously I took Kent's book and took some license with it because his book was called Test Driven Development. And I sort of called myself a Kent Beck wannabe. Because I think Kent's one of the people that has influenced me in the last couple decades because he's a strong thinker and he's always looking for root causes and interesting abstractions.
And that's sort of what I want to talk about today. Because sort of my journey in Kent Land started with this book. Everybody calls it the white book. I always think of it sort of like the white album. It was simple, clear, easily approachable, and we could read it over and over and get little things that we could concretely do out of it. So this book actually came a little bit later when he was formalizing how to do this test driven stuff. And one of the best quotes in this book goes something like this. Kent said, "When I'm sort of lost in my design, I just start writing tests." And that really synthesizes why it's a good thing to talk about at a higher level. Because programming is more discrete. Product is more ambiguous. And so if we can apply, extrapolate from some of those discrete things into a larger space, maybe we can use the same learning. So if we sort of just jump for a short time in the wayback machine, this is the conference proceeding from the very first Extreme Programming Conference I went to.
And one of the sessions I was involved in was all about testing. And there was a bunch of us there and we had our ideas about different ways to test with different programming languages. And how to break things down, how to scale up this testing stuff, how to manage distributed systems that were part of the testing. And we were out there, and the language we were using was that we were out there to test infect people. Now I think we sort of meant it like a flu shot and not just a horrible virus. Like, how do we get people excited about this thing that could be really helpful for them? And we were very, very focused on unit tests. And I think that was good because a lot of us had strong skills in smaller spaces in the code, but we didn't wield the larger design aspects very well. So calling it unit testing I think got us closer to the code, and it was tough for some people and exciting for others.
For me, those unit tests were just a replacement of the notes that I had been writing on post-its on my desk. And I just replaced those with code. So my ideas, my designs went more quickly into things that had more permanence and more feedback like automated tests. Some of the mantras we used to use were these. Like, “If something’s good let’s do it all the time,” or “If something hurts, let’s do it until it doesn’t hurt.” Like continuous integration, a lot of those early practices, many of which are just the norm now. Like in 2001 you had to sort of convince people to do continuous integration, but today that’s almost a given everywhere you go. And I was wondering if some of these ideas we’re going to talk about today, might become a given in a handful of years.
So there's a bunch of us out there. We're sitting with people. We're getting them excited. We're teaching them to do this test driven stuff, and it started to stick a little bit. And these names of these mantras popped. And so TDD, Test Driven Development, was the most common one. But it was also called Test First Development because we were trying to say, "Well, let's express what we're trying to accomplish with the test so we have an idea of what we're trying to accomplish, and then we'll write the code." But this idea of refactoring got tied into these metaphors. So you had someone like Ron Jeffries, who's very pithy, say things like, "Make it work, make it right." I think Ron came up with, "Red, Green, Clean." And those things I think made it more sticky. It was like, "Okay, now I don't have to … I'm not a crazy extreme programmer. I'm just doing this testing thing."
And then it exploded. And suddenly there was … Most of it early on was in Smalltalk and Java, and it just started spreading to all these other languages, and we saw more of these xUnit frameworks, JUnit, PyUnit, RUnit, whatever the language, pop up. And that was pretty exciting. It was pretty exciting to be a little small part of people that were starting to say, "Hey, as engineers, let's be more responsible." And I say sometimes it was good because I think it's important to be realistic about the impact we had. One of the things that happened that went unnamed was that the tests started telling the story of the code. And that's a really important part of where we're headed with this idea of test driven product. I used to see someone sit down and instead of saying, "Let me show you my code," they would say, "Let me show you my test." And the test kind of said, "When I set up this situation, and I execute this code, I expect these results."
And then as a second person coming over to do a review, or to give feedback, or to help a junior person, it was more clear what the intent was. Then when you went to go look at the code you didn't have to reverse engineer it. So the test telling the story of the code was really a powerful thing that doesn't really get enough airplay. The other thing we showed is that yeah, it was nice to catch bugs, but that was almost the lesser value, I should say. The higher value was that these tests were allowing people to collaborate and design. They were allowing us to design with automation. But they also gave us that instant feedback, or almost instant feedback. And I'd love to say that everything went great, but sometimes the popularity of the stuff was not necessarily good. Because people didn't understand that you communicate a design with a test. And you don't write the code unless you understand the design in the form of a test.
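That "set up this situation, execute this code, expect these results" shape is what people now often call arrange-act-assert. As a minimal sketch of how a test can tell the story of the code, here's a Python unittest version; the ShoppingCart class is a made-up example for illustration, not anything from the talk.

```python
import unittest


class ShoppingCart:
    """Hypothetical class under test."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # When I set up this situation...
        cart = ShoppingCart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5)
        # ...and I execute this code...
        total = cart.total()
        # ...I expect these results.
        self.assertEqual(total, 12.5)
```

Run with `python -m unittest` and the test name reads like a sentence about the code, which is the point: a reviewer can read the test before reverse engineering the implementation.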
Those people sort of went out and they wrote bad tests, which led to more bad code. And I can't remember if someone said this to me or if I blurted it out. At one point I started thinking, "When the design thinking is lost, so is the process." When you don't understand the intent of the process, like test first, then all you do is write tests first and forget that, boy, the idea of the test was to say, "Hey, I think I understand this enough that I can go explore in the code and have this feedback mechanism that says, 'Boy, it's not behaving the way you thought it would.'" It was a really powerful tool. And one of the things that happened when it was done right, especially if you happened to be working with someone else, is that if you expressed something as a test and the other person didn't understand the test, a lot of times you didn't write the code.
And for over a decade, people always asked me, "Well, how do you get more done? How do I go faster?" And I always tell people, "One way to go faster with the same amount of energy is to do less of the wrong thing." Now that was true in programming. I think that's even more true in product development. So another mistake we made is we called it pair programming and test first development, and we sort of isolated ourselves from a lot of people. There was some early on discussion of, "Well, we don't need testers." And it was overly focused on developers. That was good, but it was also sort of bad. And a bunch of us started kind of going, "Well, most of the stuff is good, but boy, there's these things that aren't working. We're missing this other community." And when we went and sat with these testers, we saw that while we had fixed some of our problems at a micro level, there were still problems for them at a macro level.
And that’s sort of a good place to stop and see are there any online questions for us to field?
Ben Lack: Not yet. But I do have a question for you. If folks have a question so far, please feel free to put those in the chat box. And this is more at a high level, and it is just kind of around testing. There's so many teams that don't put the kind of emphasis that they need to on testing. And I'm just kind of curious how you've seen testing evolve in the last ten plus years, because it is such a critical component to building good software.
David Hussman: Can I actually put the test driven spin on that question? I think what was good was that it was sort of by the geeks, for the geeks, and it asked people to be more responsible and more honest about their design. And that line of, like, the test told the story of the code really changed people's ability to have more essential conversations faster instead of trying to reverse engineer that stuff on the fly. That's where test driven really worked well: when it was used as a design automation tool. I think where we're headed, in this next little section, is like, well, what happened when a bunch of us were out there looking for something new. It became not about those tests, which a lot of us stopped calling unit tests. We started calling them micro tests or developer tests because the word unit test was so noisy. But we sort of zoomed out and went, "Wait a second, there's a set of tests that aren't at a micro level. They are cross cutting. They're more like what a human does."
Where those other tests told the story of the code, these became what people called acceptance tests, story tests; they have a lot of different names. And people started practicing the same thing. "Hey, if it's good to write tests before you write the code to really express what the code is doing, well, what if we did the same thing at this next level? Take a story that has its completion in these acceptance tests." They say, "Let's understand … In fact, let's possibly automate those before we start writing all the code for that story." And both levels are important. This level I think started challenging people to sort of make more connections sooner to understand context. And a lot of people's story cards, now called user stories, became clearer faster, because someone that was a technologist was sitting with someone that was in the product space or someone that was in the testing space, and they were all gathered around this one thing, and we started putting this rule in place to say, "Hey, how about if no one starts working on one of these until we've expressed all the tests?"
And then we went further to say, "What if we express those tests in code?" And it was harder. These tests ran slower. But they were inching towards understanding. Getting closer to the product, and the use, and the customers, and the impact. Now they didn't always get there. And because it was harder, it wasn't as sticky as fast, because there was sort of a disconnect between the code and the tests. But it was neat to see things change. So pair programming got turned into pairing. Suddenly it was maybe two engineers sitting together, or a development engineer and a test engineer, or a product or business person and a tester or developer. And the thing that was connecting them was that the story sort of started the discussion, but the test closed the loop. And it gave us … Like, "Boy, I don't think I really understand that. Can you tell me a little bit more?" And instead of someone expressing things in detail, what I started doing is kind of saying, "Well, let's take the detail and express that in tests where we can."
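As a sketch of what expressing a story's tests in code before writing the feature might look like, here's a hypothetical acceptance test in Python. The story, the UserService name, and the sign-up flow are all invented for illustration; the tiny stand-in implementation exists only so the example runs, where a real suite would drive the actual product.

```python
# Story: "A visitor who registers and confirms their email can log in."
# The acceptance test below would be written and agreed on before the code.


class UserService:
    """Minimal stand-in so the acceptance test below can run."""

    def __init__(self):
        self._users = {}

    def register(self, email, password):
        self._users[email] = {"password": password, "confirmed": False}

    def confirm(self, email):
        self._users[email]["confirmed"] = True

    def login(self, email, password):
        user = self._users.get(email)
        return bool(user and user["confirmed"] and user["password"] == password)


def test_registered_and_confirmed_visitor_can_log_in():
    service = UserService()
    service.register("ada@example.com", "s3cret")
    service.confirm("ada@example.com")
    assert service.login("ada@example.com", "s3cret")
    # An unconfirmed visitor cannot log in yet.
    service.register("bob@example.com", "pw")
    assert not service.login("bob@example.com", "pw")
```

Unlike a micro test, this one speaks the language of the story (register, confirm, log in) rather than the language of a single class, which is what let these tests close the loop between the product person and the programmer.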
In fact, if I go back probably more than a couple decades, I used to do that with people's requirements documents. I would start reading through them, and the deeper you went into the detail, the more you started seeing conditional language like if/then, or the system must, or the system shall. And I think in today's world, and this is the change right here, these pairings, this collaboration started saying, "Let's add clarity to these tests that it turns out we can also automate. And by the way, if we understand that clarity in the form of that automated code before we start, we get less of the wrong thing in place." And the hard part about that is that people don't tend to celebrate as much as they should.
Test driven development sort of became test driven. This is a wonderful book. It's a bit older, but nonetheless it still has nice content. You can see it talks about test driven development and acceptance test driven development. It's for Java, but I like that language of just being test driven. The acceptance tests, these story tests, they added context and, like I said, they clarified stories. But it wasn't like you just wrote a great test and everything was good. The clarification was always augmented by collaboration. When two people sat together, they had this richer discussion about, "Boy, what do you mean by that word?" And a lot of the important things happen where someone that's asking for something and someone that's building something come together around that language. If you put that in a testable space, it tends to be a little bit more specific. So I said in that first section that unit tests, or micro tests, or developer tests told the story of the code. These story tests, or acceptance tests for stories, started to tell the story of the product. Not always, but sometimes they did a good job.
They were … It was a little bit problematic because they didn't always hang together well. But I saw people be less likely to just rush forward and start coding for a story, and instead start by sitting with someone else who was maybe not in their discipline to try and get that shared understanding. So again, to make sure that no one thinks I'm saying everything's wonderful in test driven land, I don't think that's true. And this is a little bit controversial for some people that are really tied to their methodology, but one of the things I see is this: I've been using the words story test and acceptance test. Sometimes I bump into people using the term acceptance criteria, and I'm not sure why it is, but a lot of times those people lose sight of how those tests are helping us explore the product, its use, and the interactions the customers or users are going to have. And it becomes just a simple idea of, "Let's get stuff done. We have those criteria, we click those off. We're getting stuff done."
And now it's a slippery slope backwards, just doing small things faster as opposed to exploring the product and its use the same way those mini tests helped you explore the design of the code. And I can tell you a specific example, because I was working on a team where we had about 15,000 unit tests and we were so proud of ourselves. Breaking our arms patting ourselves on the back. And I wandered over into this … At that time, in this company, this is quite a few years ago, the testing group sat in a different room. And I wandered over to that room and I sat down. You could see the testing folks were sort of grumbling and frustrated. We were all proud of how many of our tests were running all the time. But our tests were these small micro tests. And their tests were not small micro tests; they were more like a user experience. And I started seeing, "Wow, that's not working for them." And that sort of drove this acceptance testing. But when the user experience cut across a handful of stories, then those tests were too discrete.
They were bigger than those micro tests, they were more in the language of the product, but they didn't hang together to show the interactions that would span stories, which would span multiple classes, if you will, in some kind of a code base. And what happened was this whole product driven development thing popped up. Now it didn't have that name. I still don't think that's a common name. Before this talk I Googled it, and there's other people out there talking about, "How do we start understanding that context before we start coding?" So it's sort of emergent. But I'm going to pause here again because I feel like we've talked a little bit about … I wanted to lay the groundwork for test driven at a micro unit level and test driven at a story/acceptance test level before moving into this larger level. Are there any questions?
Ben Lack: So if you have any questions for David, please feel free to post them in the question box. And while we wait for questions, I’ll go ahead and ask one for myself. David it may be helpful for you to just kind of summarize what was better about acceptance test driven over test driven development.
David Hussman: Yeah. I feel like a lot of times I live this stuff and I don't necessarily kind of break it down. And I think probably, put simply, what was better was collaborative discussion of true customer use or product value that added more clarity and sometimes stopped us, so that we paused before we just started jamming out code. I don't have a lot of data to show this because the metrics weren't as good, but my intuition tells me that we wrote less code that had higher value because we had more understanding of what success meant for one or more stories.
Ben Lack: We've got a question from Max who asks, "What's your take on teams that get a false sense that everything is 100% tested when they are practicing either TDD or ATDD?"
David Hussman: Hey Max, I cut my teeth writing testing code for medical devices where you have to have 100% line and path coverage. And that's really important if you're going to stick a device inside someone. But I think for people that are trying to do 100% of everything, it's really suboptimal. Because if you're working in a real exploratory space, I think it's okay if some things aren't 100% tested, because you need to let more of the designs emerge. So I've always kind of cringed when someone says, "Well, we're shooting for 85% test coverage." And I was actually sitting with a developer one time and we hit 85% and we all celebrated, and then I thought, "Well, okay. Let's keep working." He goes, "No no no. We don't have to write any more tests." So something about that percentage seems to sort of game the system.
When tools like Sonar popped up, and things that gave us visualizations into the complexity that existed in the code, I saw more people start having intelligent discussions about how much testing do we want, and where do we invest our test design bandwidth? Because I don't think in a large scale, or even a medium scale, emergent system you're ever going to test everything. Nor do I think that is … That's a little bit too much of a geek driven approach to understanding how to use testing, because you're measuring coverage and you're not thinking about design. Probably a much longer answer than Max wanted.
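The way a coverage percentage can game the system might be illustrated like this; the discount function and its off-by-one bug are a hypothetical Python example, not from the talk. The test suite executes every line and both branches, so a coverage tool would report 100%, yet the behavior the spec cares about at the boundary was never designed into a test.

```python
def discount(price_cents, qty):
    """Intended rule: 10% off for ten or more items (prices in integer cents)."""
    if qty > 10:  # bug: the spec says "ten or more", so this should be >=
        return price_cents * qty * 9 // 10
    return price_cents * qty


def test_discount():
    # These two cases execute every line and both branches: 100% coverage.
    assert discount(100, 12) == 1080  # discounted branch
    assert discount(100, 5) == 500    # full-price branch
    # But qty == 10 is never exercised, so the off-by-one at the
    # boundary hides behind a perfect-looking coverage number.
```

Coverage measures which lines ran, not which behaviors were thought through, which is why measuring coverage without thinking about design gives a false sense of safety.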
Ben Lack: I’ve got another question for you and this comes from Patrick. It’s in a couple of parts so I’ll ask you the first two parts first. “Should we tackle TDD before having ample unit tests at all? And is it a ladder of maturity?”
David Hussman: So there's a really cool company, just south of where I live in Minneapolis, and they do these test stands for these big giant systems like earthquake simulators and building stabilizers. And one of the groups I worked with a long time ago was an aerospace group. And they had a pretty powerful backend C++ infrastructure. And then they had sort of a VB front end, and then they had an OEM front end. And they were having some struggles because the C++ can get really clumsy pretty fast. Especially in that complex space. So to answer your question, what we did there is what I try to do everywhere. I think you've got to have some kind of wall from the outside in, where you know that when you start changing things in the innards and the unit test base, you're not going to wreck the user experience. The core product.
So I would actually probably take your question and say, "I think it's better to zoom out and test at like a service/API level before you kind of start digging into the code." And then one of the things, a common pattern that was out there for a long time, was: don't just run off and start adding tests in some place. Add them with intent. Like, people used to call it test driven bug fix. You write a test to reproduce a bug; then, when you fix it and the test passes, you know absolutely that you showed the failure first, so you can validate the result. And that is nice because … I mean, yeah, the bug's not going to come back, because you're going to catch it next time before it goes out, because you have automated testing.
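The test driven bug fix pattern he describes can be sketched like this; the version-sorting bug is a hypothetical Python example chosen for illustration, not one from the talk.

```python
# Hypothetical bug report: version "2.10" sorts before "2.9" because
# versions were being compared as strings.
# Step 1: write a test that reproduces the bug and watch it fail.
# Step 2: fix the code; the same test now passes and guards the fix forever.


def parse_version(text):
    # The fix: compare version components numerically, not as strings.
    return tuple(int(part) for part in text.split("."))


def test_two_ten_sorts_after_two_nine():
    # This test failed against the old string-based comparison,
    # which proved the bug was reproduced before the fix went in.
    assert parse_version("2.10") > parse_version("2.9")
```

The value is the intent: the test documents exactly which failure was observed, and the automated suite keeps that specific regression from shipping again.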
The other thing I think people could do is say, "Where is the high level complexity in our program? And is that where the problems are coming up?" People that are looking at bug clustering. That's another way to kind of make intentional and discrete investments in testing. Did we answer the second part as well?
Ben Lack: You did. And now I’ve got a quick followup from Patrick that I want to go ahead and ask. And for others if you have questions keep them coming. Patrick also follows up by asking, “If you tried a lower rung and see too little value, could we blaze ahead or should we retreat?”
David Hussman: That's a nice metaphor. I guess in those first few slides, when I was showing those books and being really impacted by extreme programming, we did almost everything from the inside out. When we were working on something, if the code was ambiguous, we started writing tests so we could refactor stuff, so we could start cleaning it up. But my experience was we broke a lot of high level things that caused us a lot of noise and caused real problems for our customers. So I think you've got to sort of get that shell in place first, and then go strategic once you have it. But I'm sure you could find six other people who have different opinions on that.
Ben Lack: And we’ve got a quick clarification question also from Patrick. He’s asking, “For integration and system tests, should you do those before investing too much in unit tests?”
David Hussman: Yeah. So those words are kind of tough, because they mean so many different things in so many ecosystems. If integration test means integrating across subsystems in a larger system, that feels like, yep, let's zoom out and get some of those in place before we start, so we have a stable state before we start refactoring things at the lower level.
Ben Lack: Terrific. All right. We’re good for now. We’re going to have you keep going.
David Hussman: So here's an example of this product driven development stuff. And it probably doesn't look like it, and I picked this picture intentionally, because no one in this picture is a programmer. And I was teaching them, but I went out of my way to not use a bunch of agile words. We just put up this wall here and did this classic queued up, in progress, and done thing. And then they started organizing the work in some fashion, and listing out things they needed to do. And I kind of said, "Wow, that's really great. You guys have a lot of things done. Do you have a lot of things validated?" And that says validated, and below it, it says valuable. And you can't see because this person's sort of in the way, but there was nothing in this column, because they were focused on this definition of done. But their definition of done was internally validated. Not externally validated.
So that's one of the big changes. It's not a giant revolution, it's just a really appropriate evolution to say, "When we got it done, how do we know it's right? How are we going to measure that?" And so if I can erase some of this stuff, what's really cool if you do that is, before you pull things sort of into this in progress space, you start saying, "Well, how are we going to validate it?" And what happens is, just like test driven, you stop pulling things in here that you don't understand. And it allows you to kind of zoom out a little bit. And that's something that I sort of didn't feel like happened before.

Now this is a … Let's see if I can … [inaudible 00:29:36] This is the same thing, but this was an HR group. This is a product development group. A digital product group. And what they're doing, you can't really necessarily immediately see this whole product driven development thing here, but it's sort of happening. Maybe they don't even know it. Because this column right here is them discussing, "Why are we doing what we're doing?" And then this next column over here is them going through and saying, "Well, who are we impacting?" Right here, you can't really see it. I'm kind of struggling with my little device here. But this person is asking, who are they impacting? And then over here further, they're starting to kind of map out, "Well, what are that person's needs?" And so on the right hand side over there, those stories that they're looking at, those stories that are being written right here, those are the beginning of things that hang together a little bit more than just an acceptance test or a story, because they're stories that sort of cut across the user experience.
So here’s a better version of the same thing. And now you can sort of see in this picture here how things are sort of being laid out. You can see these cards kind of in this order right here, and in this dimension is interaction. So I’m going to call it like, this is the interaction model right here. [inaudible 00:31:06] And this dimension is complexity. So they walk through some simple examples, and then they sort of come back and they walk through some more complex examples. And every time they’re going down and something becomes more complex, they’re not expressing the details in more language, they’re trying to, and you can’t see it on these post-its, they’re trying to express that stuff in tests. Now this is a really interesting picture. There’s another that I’m going to show you because most of the people in this picture are people I would call product engineers.
They're product engineers, they're software people, engineers, who are writing products, not unlike a guitar player is someone in a band. Like, they care first and foremost about the music they're playing or the product they're producing, but their skills happen to be [inaudible 00:31:55] in this case engineering. So when they have these discussions, it's easier for them to start digging in more quickly and expressing things in testable language. Expressing product or customer needs in a testable language. This is sort of what they're doing. It's not a really complex process they're following. They're going through sort of a simple process, and I like to think of it as narrowing. Like, over here is the universe of all product ideas. But over here they're starting to narrow that down a little bit. They're getting into, "Why are we doing what we're doing? Who are we doing it for? What are their needs? Where are we going to start?" And then this is the big change: trying to say, long before stories go into sprints or iterations, or a kanban board or whatever, what are these validation measures?
What are we going to measure that has direct impact, something that has impact so we can show, "This is what's happening in production"? Not the product owner that sits by us showing up to a sprint demo and saying, "Yep, everything's good." Like, are we increasing subscriptions? Are we keeping more users? Are we moving into a down market? If we're an internal group, are people behaving the way we sort of think would be helpful if our product was meaningful? And this is sort of the process view. Some steps: why, who, what, where, and again coming back to this validation measure. This is sort of what it looks like. So these are a couple of examples of a company that came over and we did some discovery work with them. And we were walking through, and I picked these pictures because you can see, in sort of all these pictures, there's someone in the picture who's telling a story. And this is that vehicle. Those stories, but the stories are arranged in that format I was talking about, where they're not just work items. The interactions go left to right, and the complexity goes top to bottom.
And what I've been doing with more groups, and this is not as sticky as I would like it to be, is getting people, while they're telling these stories, to start capturing these tests. Now in a high geek space, you can see people get that right away, and there's some tools that you could use that sort of help you capture that. But it's just getting someone to sort of start writing some of that stuff down when they're having these discussions, because there's just a ton of product tests that come up when people are doing this story time that we're not capturing. And then later, we kind of go, "Wow, I wish I understood that better." Well, it was right there. And I told this story to someone the other day and they said, "Well, it sounds like you're talking about more documentation." I thought, "That's not what I said. I said I'm trying to capture details and tests." And I want the measures of those tests not to be, "Does it work?" Does it work, to me, feels like table stakes in the 21st century. Is it valuable is what we want to start testing.
And that’s I think the difference between this acceptance test driven development and some of this new stuff. And so I wanted to put at least one of these pictures in here that had a little bit that was easier to look at. So these are sort of who. They have some people that they’re talking about so they’re not abstract conversations. And they might say, “This is one thing that she’s trying to do. An example … ” And in that example she does this. That’s the obvious thing. And I used to use … When I was talking about complexity, I used to use easy versus hard. But I got rid of that. I started saying, obvious versus complex because I think that’s a much stronger dialogue. If you’re trying to build something that’s meaningful to people and hopefully saves lives or helps you become more lucrative as a company, the obvious path happens to be something that is often missed by engineers because engineers are looking at the most complex path. And then you get the obvious thing wrong.
So when they go through a second example that's a little bit more complex, remember you're going from obvious to complex. When they go through that second example, it might go like this. There might be a slight variation. In the little DevJam world, they called that path through a customer journey. And instead of kind of saying, "Well, in sprint one let's start with this story and this story and this story," we don't really organize things that way. Instead we kind of say, "No no no. Let's start with this journey, because that's going to be significant to this person we're trying to learn about." And in and around all that language is a lot of what people would call design thinking, and all I'm doing is trying to connect that design thinking to this automation model that I found so powerful in test driven and acceptance test driven. And you can see the difference: acceptance tests were for a story, but the product tests, I think, have to cut across stories.
So people ask me what I mean. And I started thinking, "Well, how do I kind of put this into a sentence so it's more easily digestible?" And this is the sentence I came up with, for better or for worse: "Product tests are best expressed as measurable impact in production." Now I don't think you have to learn everything in production, but eventually that is the most real learning. When something's out there and it's live, that's when you can start learning. So this is what I mean by a measurable impact. This is a tool called Mixpanel, and it's showing things like purchases people made and how many times someone did a certain action. It's showing trials created. Those are the things that hopefully change the game for your company. Not, "Did we get the work done," and "If I click on it, does it work? Super. I'm glad it works." That's like being excited about a car that drives and has a speedometer. It's like, you want to start saying, "Well, what's it like to have someone driving that car?"
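As a hedged sketch, and not the Mixpanel API, here's what computing one such impact measure from raw production events might look like; the event names, the tuple format, and the trial_conversion_rate function are all assumptions for illustration.

```python
def trial_conversion_rate(events):
    """Fraction of distinct visitors who started a trial.

    events: iterable of (user_id, action) tuples pulled from production logs,
    where any event counts the user as a visitor and the hypothetical
    "trial_created" action marks a conversion.
    """
    visitors = set()
    trials = set()
    for user_id, action in events:
        visitors.add(user_id)
        if action == "trial_created":
            trials.add(user_id)
    return len(trials) / len(visitors) if visitors else 0.0


events = [
    ("u1", "page_view"),
    ("u2", "page_view"),
    ("u1", "trial_created"),
    ("u3", "page_view"),
]
# One of three distinct visitors started a trial.
```

A product test in this sense is an assertion about a number like this moving after a release, rather than an assertion that a button clicks.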
And again, for better or for worse, these are a couple of people at DevJam, and you can see sort of [inaudible 00:38:06] We’re having these discussions about who’s trying to do what. And what I asked them to do is change their behavior. I asked them to do a simple thing that I think anyone can do, which is: before you start working on a story, instead of just looking at the acceptance tests for the story as a learning vehicle, ask how you are going to measure the impact. When you get that thing done, does one of these things pop up, do you get a new measure that says, “Wow, look at the impact we had,” or “Oh, what happened here?” And you get the developers looking at this stuff instead of, or along with, or before burn up and burn down charts, which are typically progress measures that hope they have value. This is more discrete value.
This has to be able to say, “This is why we’re doing what we’re doing. We got to do more and get it done. We got to make it meaningful. We got to make it obvious. We got to make it more than usable. Enticing, if you will.” So that’s this idea of impact driven development. You can almost picture the layers: in test driven development, unit tests, the micro tests, are on the bottom, and acceptance tests are the next level up. The acceptance test says, well, it sort of has to do this. And impact is the level above that: if it does that middle thing, are we getting the results we want? Because if it does the middle thing and it doesn’t have the impact, you should stop celebrating that. You should start figuring out how to not do more of that, because it does consume some of your bandwidth. So this is, simply put: “Don’t start working on a story until you discuss how you will measure the impact.”
Now one of the questions that often comes up is, “Well, does every story have impact?” And no, that’s not always the case. Sometimes it’s cutting across a set of stories. But if you’re just doing story after story or task after task and you’re not measuring impact, then I think you’re becoming overly confident that what you’re doing is successful. And before we go, I don’t really even know what to call what they’re doing at Netflix right now, but the terms they’re using are chaos engineering and intuition engineering. And I’m enamored with what they’re doing, not because it’s the highest form of geekery. So this poster is from the Twin Cities Chaos Community Day we just held, which was a followup to the original Chaos Community Day. The first one I went to was at Amazon. And there was amazingly cool technology and technological ideas being presented, like Chaos Monkey and the Simian Army stuff. And there’s a new tool out there called Gremlin. There are just people doing all sorts of neat stuff to try and make these systems more resilient.
But the higher order thing I saw was people saying, “Well, how do we make these interesting technology changes so that people are having a better customer experience?” And this is why I think Netflix has started producing sort of the nouveau automation testing tools, the JUnit of today, especially for large complex distributed systems. One of the tools they’ve written is called ChAP. And what ChAP does, as I understand it, is take requests of a certain type in production and bucket them, in statistically significant sizes, into a control group and an experiment group. Then they run experiments and look at the results. So in their production environment, way beyond what we were doing early on in XP, they’re running these statistically significant experiments and injecting faults into their system, or looking for interesting signals, as a way to say, “How can we change the system to make it a better user experience for any one of us on this phone call who uses Netflix?” That’s who they’re thinking about.
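As a rough illustration of the bucketing idea described above (this is a guess at the shape of such a system, not Netflix’s actual ChAP implementation; the percentages and names are assumptions), you can deterministically hash each request into a small control group and an equally sized experiment group, leave the rest of the traffic alone, and inject a fault only for the experiment group:

```python
import hashlib

SAMPLE_PCT = 2  # route only a small, statistically useful slice into the experiment

def bucket(request_id: str) -> str:
    """Deterministically assign a request to 'control', 'experiment', or 'none'."""
    h = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    if h < SAMPLE_PCT:
        return "control"
    if h < 2 * SAMPLE_PCT:
        return "experiment"
    return "none"  # the vast majority of production traffic is untouched

def handle(request_id: str, downstream):
    """Serve a request, injecting a simulated failure for the experiment group only."""
    if bucket(request_id) == "experiment":
        raise TimeoutError("injected fault: simulated downstream failure")
    return downstream(request_id)
```

Comparing error rates or latency between the control and experiment groups then tells you whether the rest of the system degrades gracefully when that dependency fails, which is the signal the experiment is after.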
And so that feels like another evolution, if not a possible revolution, of saying, “Systems should be resilient. We should stop worrying about whether things are going to go down, and we should assume they will go down. The bad things will happen. And we should sort of test drive it by injecting problems into the system, the same way you would give someone a flu shot so they have a small bit of the virus and then they possibly become immune.” And there’s a lot to go into, but I want to put this up because it’s part of their tech blog on Medium, and you can go read about Chaos Monkey and ChAP and FIT and all the other tools they’ve written. It’s some pretty neat stuff. Feels like the evolution of this discussion. So I think I left some time for questions. Do we have any questions?
Ben Lack: We do. We’ve got a question from Patrick. And for those that are still on, still feel free to shoot me some more questions. So Patrick says that his company builds tools, reports, and websites for business partners who use them to do other work that results in other people doing work that has a business impact. So he wants to get some ideas on how we measure impact when it’s so distant and indirect.
David Hussman: Yeah. So it sounds like he’s multiple degrees of separation away. And if you kind of look at … Let’s say an in-house system. Here’s a really common stack, for better or for worse, that I see out there: there’s something like Java or .NET or some other language sitting on top of SAP, and then up here there’s mobile, iOS and Android, and some other stuff. And whether these are three different stacks in the same company or they’re companies one, two, and three, out here are the people that you’re impacting. And they don’t really care about all this other stuff in the picture. But I think you have to understand the producer-consumer relationship at each one of these boundaries of trust. The tough part, to Patrick’s question, is if his company is way down here, and the customer is way out here, how does he get the feedback from this person?
And I don’t know his specific setting, although I will say thanks for all your questions, Patrick. If you can see those results, that picture I was showing would be measuring more out here. But Patrick’s group could be measuring certain interactions down here to see how responsive it is, what the usage rate is. Because if they feel like they’re doing things right, but the usage rate is really low at this level, it might be because there’s a bunch of muck in between what Patrick’s doing and the customer. And Patrick, if you have a strong relationship between these partners, if there are some contracts of trust between them, maybe there’s a way to share that information if these people are doing that kind of testing. And that’s why it goes back to some of those earlier questions. If you want to make a change way down here, man, I think if you don’t have stability all the way through the stack you’re really playing with fire, because you have to understand how those interactions are going to go up through that technology stack.
So, Patrick, I hope that gives you some context.
Ben Lack: Thanks David. I wanted to ask you a question. Your entire presentation is really making the business case, as the topic suggests for applying test driven thinking to the product world. Let’s say somebody buys into the business case. What do they do that’s actionable to start transitioning the culture or thinking within their organization so that they can start making this a reality?
David Hussman: And let’s just narrow the question so it’s not a broad sweeping thing. Let’s say that person’s a tech lead on a team.
Ben Lack: Perfect.
David Hussman: And they say, “Hey, I think this is a great idea.” I think a tech lead on a team can do this. That’s why I was showing this little team that I … I was going to say infected, but probably injected, to try and say: if you are a small product team and you own that full stack, I think you can start applying some of this stuff. If that person you’re talking about is outside of that technology group, let’s say they’re someone in the design space. If you think of how a lot of people think about iterations, there’s, for better or for worse, design that’s happening that feeds this delivery thing. And obviously you’d like to have these two things pushed over the top of each other, but let’s say it’s serialized for some reason. You could practice some of these concepts up here. Like saying, “Well, how strong is my ability to express to this development group down here what the impact is going to be? Do I even know that? If not, maybe I shouldn’t be taking it to that delivery team and saying, ‘Go build this for me.’”
And that alone would change a lot of the dysfunction I see with scrum right now, where people are just trying to get stuff done in the backlog. So I mean, you could also ask your question and say, “If I were the scrum master and I was excited about that stuff, I think this is the right level to start working at.” Because you might be outside of that delivery team. Any other perspectives you’d like that question answered from?
Ben Lack: No. I think that’s probably the right angle to take. We wanted to give some people some time back before the top of the hour, so I wanted to give you David an opportunity to share just some final thoughts before we let everybody go.
David Hussman: It’s really neat being at this chaos time, because some of the stuff they’re doing is just so over the top. And I think it’s easy for someone in Minneapolis, where I live, to say, “Oh well, you know, they get to do that because they’re Netflix,” or “They get to do that because they’re Facebook,” or “They get to do that because they’re Google.” And I would say that’s why they are Google. And I think that this idea, this impact driven development thing, might feel like, “Oh well, we can’t do that. We got to get stuff done.” I think the mistake is assuming that what you’re doing is right, so that you should try and get more of it done faster. And this is going to be the same thing at the product level that happens at the code level. Let’s pause. It’s not like pausing for six months. It’s pausing for a couple hours, maybe, to say, “How are we going to measure the impact of this before we start doing it?”
A lot of people used to tell me early on in continuous integration, “We can’t do that.” And I thought, “Well, that’s not true. You can do that. You’re just choosing not to do it. So please just say, ‘I’m not doing that.’” And then if you want to do it, find some place to start. Look at some of the models that are out there. I’m jokingly still calling it impact driven development, but I think this stuff is already happening. It’s not impossible, it’s just a matter of how much you want to invest in it.
Ben Lack: David, thank you so much for your time, and for everybody joining us, thank you as well. If you have any additional questions for David … You can go ahead and move to that slide, David. Please feel free to reach out to David directly. If you are a big fan of what David’s had to say, certainly feel free to reach out and get DevJam to help you guys implement some of this stuff. If you’d like to learn anything about Cprime, please visit us at Cprime.com. And we have many future webinars about topics like this coming up, so we hope to see you on a future webinar, but in the meantime enjoy the rest of your day and the rest of your week. Talk to you soon. Bye bye.