RSpec w/ Justin Searls S1E4


Matt and Justin talk about RSpec and testing DSLs and try to answer the question "do we need them?"

· 51:19


Matt: You're probably the ideal guest to have on this podcast because you have like some of the best, uh, hot takes. But I think you are also like pretty nuanced and willing to talk about these things.

And I think that they are the good kind of hot takes where you've thought deeply about something and know how to market a message instead of, you know, you're just shooting off, uh, about something that

Justin: They, they sneak up on you like, like a really spicy curry or something, and you're like, Oh wait, there's more, there's more to this than I thought. Yeah, I

Matt: a heat, It's a heat that builds, you know,

Justin: Yeah. Well, I, I think that's true and I, I'm, I'm happy to be here. When I found out that you had a podcast called YAGNI, I immediately loved it and I didn't even know what it was about. I've got probably 30 draft blog posts that start with "you might not need" and then X technology.

Matt: Today I'm joined by Justin Searls. He's the co-founder of Test Double, a consultancy that builds great software and great teams.

Justin is a pattern recognition machine and is especially good at spotting things that will fail five to 10 years down the line in a code base.

But he is perhaps best known for his salty hot takes on Twitter.

On today's episode of YAGNI, we talk about whether or not we really need RSpec, running a test suite for CEOs, and the burdens of DSLs on non-native English speakers. And I goad Justin into creating his own RSpec competitor. Welcome to YAGNI.

Matt: So we had talked a little bit and one of the things you had mentioned, uh, was just kind of the general state of testing, but more specifically RSpec.

And so I went digging through, uh, you, you know, your Twitter account and I found out that this is not a new hot take for you. Uh, you know, as far back as, you know, I could find things in the 20 twelves. But, uh, in 2016 you had a, a popular thread, uh, where you, you wrote, "Rubyists, uh, what's something you like about RSpec?

What, uh, what's an advantage of RSpec over Minitest, and in what context?" And there's a whole host of, uh, good replies. It, it was really interesting for me as someone who sort of came up in that like early 2010s era of Ruby, cuz there's, there's some of the same people and then there's some different people that have sort of, uh, moved on from the community.

But, uh, I thought it was, it was a really interesting thread. The one that stuck out to me was that, uh, uh, Aaron Patterson, tenderlove, had, uh, replied and he basically said like, it has a book. That's, that's, you know, that's what he likes about RSpec.

Justin: And I know, and I know the people who wrote multiple editions of that book, and they're good people and, and, and I like them personally. Um, I will mention that Aaron is the one who deconverted me from RSpec. So when he and I started pairing, he was the first one to be like, Oh, you're using RSpec. Yuck.

Here's a bunch of reasons why I hate RSpec. And he, uh, slowly convinced me to, to shed it.

Matt: Yeah. So in this, in this same thread, I think you really had a good response and you basically said that, uh, good tools should yield good outcomes for people that are less disciplined than Corey Haines. And,

Justin: Where For the listener here, If you don't know who Corey is, he's a dear friend and he is way more disciplined and rigorous than most of us will ever hope to be

Matt: Yes, Yes. So, yeah, I think the, the, the summary is that, uh, you know, a good tool should make a good outcome for, uh, you know, most of, most of the users, or it, you shouldn't need to sit at the, like, extreme, uh, you know, the, the 1% of of people using the tool to have a, a good outcome with it.

Justin: Yeah, totally.

Matt: So, um, I can, I can play like a devil's advocate for some of this, uh, if that's helpful.

The, basically if we wanna do like a, an RSpec versus Minitest thing, or do you wanna talk more about the history of,

Justin: I think we should start by setting the table a little bit.

Cause even DSLs as an idea kind of came and went and are gone

Matt: Right.

Justin: Um, so, so do you wanna set, set me up with a question to kinda frame that and, and I can kind of, uh

Matt: Yeah. So, um, I think when people find out about RSpec, there's a lot in it that is sort of, um, like a, a vestigial like artifact of where it came from, right? Uh, so I don't, I don't think that the common person coming into the Rails and Ruby ecosystem these days, uh, is becoming familiar with RSpec from like its origin as like a, a behavior-driven development tool.

And, uh, I know like kind of the Dan North school of like, this is how you can get non-technical people to write specs that get converted into executable programs, or, uh, this is how you can like, you know, really focus on testing behavior over implementation. So features like, you know, subject and, um, some of the, the DSL in RSpec now probably like, looks completely bizarre to somebody that isn't coming into it.

Um, so maybe you could just help us kind of walk back in time and do a little bit of archeology. Like how did we, how did we get to the place where we have this tool that is, I don't know, I haven't looked at any numbers lately, but it seems like it is the, uh, the, the preferred testing framework, but it has all these sort of idiosyncratic features that people might just write off as like, that's just Ruby people doing Ruby stuff.

Justin: And at one time, maybe this is still true, I believe it was the most popular non-builtin gem. Full, full stop. Cuz there's, you know, so many QA departments around the world that depend on RSpec. Um, yeah, so rewinding the clock a little bit, I think it's important to understand RSpec in the context of what was going on at the time.

So, you know, I started programming professionally in 2007. Um, I'd been doing kinda like oddball projects and internships for several years before then. And my testing experience at that time was limited to, um, JUnit and maybe TestNG in, in Java land, where, you know, a test was a class that was symmetrically opposed to the thing that you were testing.

Each method that started with the word test was, uh, you know, itself a test, uh, method of that. And you'd write some example code in three phases, like arrange, the setup, and then act, the thing that you're calling, and then an assertion, um, or maybe multiple assertions. And, uh, you know, JUnit, the way that it works is it'll instantiate the class, run a single test, run the setup and, and teardown hooks, uh, before and after, and then throw that instance away and create a new instance for the next test method so that you don't have any test pollution inside that class.

Now, like if you're a, uh, diehard, uh, object-oriented programming fan, that's like heresy because like, that's not what classes are for, but it is a tidy and sort of symmetrical way, and it, it could leverage all of the tools of like, if you're using a, an IDE like Eclipse, to um, very quickly navigate between alternate files, uh, jump into the implementation really quickly.
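The lifecycle Justin describes can be sketched in a few lines of plain Ruby. This is a toy stand-in, not JUnit or Minitest itself, and the `TinyRunner` and `CounterTest` names are invented for illustration; the point is that every test method runs against a brand-new instance, so instance state can never leak between tests.

```ruby
# Minimal sketch of the JUnit-style test lifecycle: instantiate the class,
# run setup, run one test method, run teardown, then throw the instance away.
class TinyRunner
  def self.run(test_class)
    test_class.instance_methods(false).map(&:to_s).grep(/\Atest_/).map do |name|
      instance = test_class.new                      # fresh instance per test
      instance.setup if instance.respond_to?(:setup)
      begin
        instance.public_send(name)
        [name, :pass]
      rescue StandardError
        [name, :fail]
      ensure
        instance.teardown if instance.respond_to?(:teardown)
      end
    end
  end
end

class CounterTest
  def setup = nil
  def initialize; @count = 0; end                           # arrange
  def test_increment; @count += 1; raise unless @count == 1; end  # act + assert
  def test_starts_at_zero; raise unless @count.zero?; end   # sees fresh state
end

TinyRunner.run(CounterTest)  # => both tests pass; neither sees the other's state
```

Because `test_increment` mutates `@count` on its own instance, `test_starts_at_zero` still observes a clean zero: the "no test pollution" property Justin credits to this design.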

Like, because they're just classes and methods, the tooling didn't have to know any better. And so that was a huge advantage. Um, when I found Agile, uh, and, and I, I was kind of, you know, tiptoeing in the Ruby community in and out for a few years, uh, extreme programming designed teams in a way that I think we don't think about very much anymore, which is like, you'd have a bullpen, quote unquote, uh, of, you know, some number of developers, like, let's call it five, maybe, uh, five developers and two QA people, and a, a business analyst, uh, and a product owner.

And that, and that composed a, a cross functional team. And the most important thing for any team to do other than like build the thing was to maintain really clear, um, directional alignment on what are we here to do? How will we know we've done? And, you know, where are we at? And the best way to do that when you have like, you know, relatively non-technical people, like there's no, assuming that the QA folks code or that the BA people are gonna be looking at your tests or whatever, uh, would be to write tests in a way that everyone on the team could at least read them.

Like read, like this is what this test says to do. And in, in some ambitious cases, I had a couple, you know, teams I was on where we would have the, um, the test runner would literally be executable in like a browser window. So like the product owner would have a URL they could go to, they could click play, they'd actually watch all of the test cases that they'd agreed to.

And we'd actually, you know, in our demos, um, uh, you know, every week we'd show off what we did. They'd look at the tests and those were literally the acceptance criteria. And if the tests passed, they were like, All right, cool. Ship it. And then the next Monday of planning, we'd, um, go to the, you know, a document projector with a Sharpie or whatever, and like literally write out given-when-then and build that.

And so, we, we would build those cases, the, the, the, the natural language mapping, um, using JBehave and Java annotations. Uh, Cucumber came along after that as a, as like another tool to kind of, you know, express your design, your real intent when you're building it right from scratch. And RSpec, uh, again, you know, as a, as a, Dan North wrote JBehave and I, did he start RSpec or did David Chelimsky?

I don't really know, the, I wasn't there on day one, but I was a pretty early RSpec user. RSpec was sort of a happy medium of like, this is all still Ruby. There's not some like weird indirection to some English script file somewhere,

but

Matt: not programming our test suite with, you know, regular expressions.

Justin: But there's an awful lot of strings, there's an awful lot of like, you know, like it should do this, when that.

And Dan had some great blog posts about like how to structure this well. Um, it was, uh, being completely charitable, in like the, the, the, the best case, like when you're doing it right, uh, it was really helpful in finding that alignment with that cross-functional team. And in the perfect case, it, it was like, okay, cool.

This is a way to build trust inside the team. The product owner not only sees the app working, they see all these sort of like less visible requirements that they might have about, I don't know, password complexity or whatever. Like, like, uh, those are all, or like logging, right? Like, oh, even the logging stuff is passing, so I know I can check that box.

Right. And I think one big problem, uh, and you'd mentioned this previously, referring to a tweet I sent about like, you know, the, the tools having to serve the kind of common case, not the lowest common denominator case, but like the common cases, like most teams don't look like that, right? So like

Matt: Right. And

Justin: you're,

Matt: now.

Justin: Yeah.

Especially now, like how would you compare like the average team you see now?

Matt: Yeah, I mean like someone that is a manual QA tester is like, Uh, that was way more common, I feel, you know, 15 years ago than, than now. And even, even someone that is just a test engineer, I always get sort of confused when I see some of the bigger companies that have like a soft, what is it, like software engineer in test as,

Justin: Microsoft made that one famous.

Matt: And I think like Amazon has that as a, as a title and it's, it was always just kind of confusing to me that um, at the same time that like having programmers write automated tests was sort of becoming table stakes. We were also saying that there's also like another parallel career track that is like people that are just doing the test automation stuff.

And like you said, back in the day, it used to be that there was, you know, people that would not even be able to necessarily write the automated test, but they could follow a like literal script in, in, you know, like you would do in, uh, like the elementary school, uh, days when you would like write out the program to like make a peanut butter sandwich.

Right. That was,

Justin: Right, Exactly.

Matt: we would do of like, click on this button and like, make sure that you see this string and people would manually,

Justin: And lots of companies wrote, basically, like Gherkin, the Cucumber DSL, like cookbooks of like, here's like a thousand different, you know, fancy sentences that we've built with like kind of Mad Libs style, fill in the blank here. And you can automate the website. So you might not be a programmer per se. You might only have like maybe one quality engineer for, like, ten quality assurance, like, testers.
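Under the hood, that cookbook idea is roughly this: a step definition is a regular expression mapped to a Ruby block, and each English-ish line gets matched against the registry, Mad Libs style. This is a toy sketch, not Cucumber's actual API, and the step wording and field names are made up:

```ruby
# Registry of step definitions: regex pattern => block to run on a match.
STEPS = {}

def step(pattern, &block)
  STEPS[pattern] = block
end

# A "cookbook" entry: QA folks fill in the quoted blanks, the regex
# captures them and hands them to the automation code.
step(/\Athe user enters "(.+)" into the (\w+) field\z/) do |value, field|
  { action: :fill_in, field: field, value: value }
end

def run_step(line)
  pattern, block = STEPS.find { |pat, _| pat.match?(line) }
  raise "undefined step: #{line}" unless pattern
  block.call(*pattern.match(line).captures)
end

run_step('the user enters "justin" into the username field')
# => { action: :fill_in, field: "username", value: "justin" }
```

The indirection cost Justin mentions is visible even here: when a line doesn't match any pattern, all you get is "undefined step," and someone has to debug the regexes rather than the application.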

Um, and the, and the engineer would like just maintain the cookbook and all the, uh, QA folks would like write those scripts. The problem is like whenever anything went sideways, or when, uh, you kind of added up all of the cost of, of that indirection from a performance perspective, uh, which, which, you know, obviously the QA folks don't have the experience to do.

Matt: Yeah. So we ended up in a world where these, uh, like as the teams were changing, sort of the tools are changing, um, but maybe not as quickly. Um, and so we end up with, uh, with RSpec coming out, it's not a full, like, write-in-English, uh, DSL variant. But the, the interesting thing to me, I don't know if it was ever the default or not, but there used to be a mode in RSpec where it would like print out like, uh, your test strings in sort of a

Justin: Yeah. Yeah. It's, it was, it was a, it was a doc format. I think you can still say, uh, --format doc at the command line and it'll print it out that way.

Matt: Right. So instead of getting your like screen of green or red dots, you would, you would have it sort of spit out these sort of, I don't know, it was almost like a poetry or something like a

Justin: It, it, it, it demoed, it demoed very well when I was showing it to like, you know, business leaders, right? Because now they're like, Oh, this, I understand. This is like, you know, progress.
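The flag Justin mentions does still exist: `rspec --format documentation` (abbreviated `-f doc`) prints the nested `describe`/`it` strings instead of dots. A sketch, with an invented spec file (the bodies are placeholders):

```ruby
# spec/user_creator_spec.rb -- run with: rspec -f doc spec/user_creator_spec.rb
RSpec.describe "UserCreator" do
  describe "#create" do
    it "creates a user" do
      expect(1 + 1).to eq(2)      # placeholder body
    end

    it "rejects duplicate emails" do
      expect(true).to be(true)    # placeholder body
    end
  end
end

# Documentation-format output reads like the "poetry" Matt describes:
#
#   UserCreator
#     #create
#       creates a user
#       rejects duplicate emails
```

A failing example shows up in red under its parent description, which is exactly what made it demo well to non-programmers.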

Matt: right. And I, I think that that's something that's interesting and not really, um, something to take for granted is this was still kind of the days of like getting buy-in from companies for some of these practices. Right? So having, having a demo that worked well for, you know, uh, an executive was actually, you know, valuable in getting budget and funding for these things, right?

So it may, it may be silly now that no one is gonna actually read these like, uh, pseudo, like paragraphs that were written and be like, Oh yeah, you can see here where this line turns red, that means that that step in the process is, is not there. But if it helped you, you know, get approval to hire, uh, you know, an agile coach, or, uh, to get, you know, the deadline shifted because you had, uh, you know, more, more leniency from, from, uh, the bosses, uh, up above, that is an interesting, like, side benefit that probably doesn't matter anymore, but was probably really important in, you know, 2012.

Justin: And in, in addition to the, um, you know, the dog and pony show of like, you know, the ceremonies, sort of like running of the tests each week or something and seeing what passes and what fails and what's gonna go into production and so forth, there was also quite a lot of just excitement about how it would improve design, right?

So like, you would have clear alignment between product and engineers, and truly, like, the only projects where I ever became friends with the product people truly were, like, teams like this, where every day I'd go and check with them for clarification, and we'd be working really hand in glove to understand, like, not, not only like, what do you want, but like, why do you want that?

And really drill in. And so the, the dream, uh, of, you know, extreme programming done right, uh, and using RSpec as a tool or Cucumber as a tool for that, really can work. I think that the, uh, the main issue, right, is like most teams don't look like that, didn't look like it then, don't look like it now. And if everyone is a developer on a team currently, like we're all engineers and we're all, you know, like we all understand code. Like what are we, who are you writing these kind of flowery English strings for, uh, to, to express our authorial intent, uh, when, you know, example code of three lines, if it's well-factored code, if it's a smart API, should be self-explanatory most of the time, right?

Uh, who are we serving by that? So like that's the typical, I think, argument against RSpec. But there's an additional one, which is, when you think about the design of a tool and the benefits, people, especially early on with RSpec, would talk about the design benefit. Oh, like don't call it test-driven development, call it test-driven design, because this is what's shaking out the why, the value proposition, the value story of this application that we're building.

And by showing in explicit terms, like how this application is going to serve its various stakeholders, then we make the business case, like you were saying to the executives, you know, like, this thing's gonna make or save money. This thing's gonna provide such and such security to avoid liability and so forth.

And that continuity is like not worth nothing. It's like truly pretty valuable. And if you're just developers writing, you know, kind of like normal JUnit-style unit tests or Minitest-style unit tests, it is still, you, you lose that benefit and it becomes your responsibility to like replace it with thoughtfulness.

Like it, the, the tool isn't gonna frame your mindset for you about like, why am I here? And granted, you know, you use RSpec enough and you got, you're on your 3000th spec, I really doubt you're like coming fresh, you know, bright-eyed and bushy-tailed thinking like, oh yeah, like it should do what, you know. After a while everything becomes rote.

Um, but that, that,

Matt: the junior product manager of the, you know, division is not coming in there and being wowed every week by, uh, you

Justin: Yes,

Matt: green text that is showing up on the screen.

Justin: No, yeah, your eyes start to glaze over after a while, but like that really, if you think about like that's what they were touting and that was the benefit and the expressiveness, like expressing yourself was the value. That's a very front loaded benefit when you think of like the life cycle of software.

So if like 60% of the tool's complexity and our time and attention to it are, uh, doing everything right the first time, because software is expensive, like, that's a valuable aim. But like software also lives a long time. And so if you're on year 10 of maintaining a big nasty Rails code base, then you know, you start to wonder like, if I'd used different tools, would I have different benefits?

Like I'd much rather, than have very expressive, uh, tests, because like by year 10, I know what the app does. I know why it does the things that it does. You know, like I've reread like all of these tests over and over again until, again, my eyes glazed over. But like, I'd much rather have like really good reliability of those tests, or a lack of test pollution, or like clarity about scoping, or like, you know, greater refactorability of those tests.

Or, uh, introspectability, where like, you know, if I'm doing Ruby and I have classes and methods, I can slice and dice those with a little bit of metaprogramming much more easily than I can with a bunch of anonymous nested procs. Uh, and that is something that I don't think we talked about very much a decade ago, and, and a lot of teams are hurting with now.

Matt: Well, the interesting thing too about, in a Rails context, um, if I think about a 10-year Rails project, actually in my experience, the thing that's been the most burning pain is like not being able to upgrade Rails, right? It's like getting behind and then you're forever behind. You can never find anybody that wants to do the upgrade.

You can like, it, it hurts you in like recruiting people because they're like, Well, I don't wanna work on a legacy Rails app.

Justin: Sidebar, Test Double has helped lots and lots of companies with Rails upgrades, including great companies like GitHub and Gusto. And if your company is stuck with an, uh, older version of Rails, we're uniquely good at helping companies with it. So, end of, end of free advertisement.

Matt: I will vouch for that. But it's just kind of interesting too of like, if our, if, if part of RSpec was helping, um, helping you with the design of your code, and the primary use case of RSpec is in Rails apps, and Rails itself is somewhat inflexible to being designed in a different way, especially if you wanna just be able to easily upgrade versions.

Um, yeah, it, it does make me think how much of it like, would've been helpful when, when they were, you know, building out rails or like establishing the patterns to have this like, design environment. But then you should really just throw it away and say, Okay, we figured out how we want the basics to work, and now we don't, we, we don't necessarily need this like, free space, uh,

Justin: The structure is necessarily fixed. You know, to be honest, I'm kind of, uh, I'm eliding a lot of my own culpability here. I was a huge RSpec, like, booster. Um, you know, uh, friends of mine, like, uh, Chad Humphries is the reason why RSpec 2 literally worked. He, like, it was, it was kind of a rat's nest, uh, from what I've been told.

Uh, my friend Zach Dennis wrote the, like, uh, uh, uh, second edition, and I think the best edition, of the RSpec book, kind of explaining, you know, like, RSpec and Cucumber as productivity tools. Um, it was really, it was a special moment in a lot of ways, and I really pushed it, but I failed to think about some of the externalities.

Like for example, there are five ways to do everything. You know, like you could have a style where I use subject, some people don't like subject at all, or I use let, or like, I will have every single, uh, RSpec construct, uh, as a one-liner block to some English. Whereas other teams will just use, like, you know, it, it blocks exactly like they'd use Minitest tests, where they'd like maybe do all of the test setup in there and the test action and the test assertion, right?

Um, the right way to, uh,

Matt: matchers or regular, you know,

Justin: Some people were like, you know, they, they'd see symmetrical, like, Oh, you've got a user.rb file and you've got a user_spec.rb. Well, that's bad because that means you're mirroring your implementation. It should be about the benefits you get. And so you'd have some teams that have like, you know, almost like, uh, just a storybook of, like, you know, kind of like elaborate, different kinds of files and stuff, and nothing maps to anything.

And of course, cuz everything's just procs, there's almost no way to like, you know, get reverse-referential, you know, backlinking to figure out like where the code is. And so you'd have these test failures that were just like really hard to, to nail down. Like, all of that was lost on me when I was just excited about, like, having really meaningful conversations about like, what should the system do?
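The "five ways to do everything" point is easiest to see side by side. Both specs below express the same test; `Calculator` is an invented example class:

```ruby
# Style 1: heavy DSL -- subject, let, and a descriptionless one-liner example.
RSpec.describe Calculator do
  subject(:calc) { described_class.new }
  let(:sum) { calc.add(2, 2) }

  it { expect(sum).to eq(4) }
end

# Style 2: Minitest-ish -- arrange, act, and assert inline in a single block.
RSpec.describe Calculator do
  it "adds two numbers" do
    calc = Calculator.new
    expect(calc.add(2, 2)).to eq(4)
  end
end
```

Both pass or fail identically, but a team has to pick a house style (or live with a mix of all of them), and refactoring tools have a much easier time with the second, where the whole test lives in one named block.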

And you said a good, great point about Rails. Like, these same people, including myself, who were pushing RSpec at the time, were also pushing, like, don't put all of your code in Rails subclasses. And I still say that, like, don't, don't implement your entire application as a subclass of an active record model or, uh, an application controller.

And yet, uh, you know, still people put a lot of complexity there, and you experience a lot of design pain. Like, a mocking library or RSpec might tell you, give you a pain signal to say, like, this is a bad design. But like, to your point, like, Rails's design is pretty fixed, for good reason. Uh, and so if your, you know, tests are telling you that Rails has a design problem, you don't own that design, you're just stuck with it.

And so now you're just experiencing useless pain.

Matt: Right, Right. And instead of, instead of letting the test guide you and say, This is painful, we need to think about this, You know, in another way,

Justin: You just text DHH, you say, Hey, what if we just changed the API for controllers so you could instantiate them? And he's like, lol.

Matt: Yeah. Yep.

Justin: There are a few other curveballs, I think, that are issues with RSpec that don't get talked about a lot. I'm kinda curious your opinion on this. So one is, and, and I've mentioned this before and it, it's resonated with a few people, is when you work on, uh, work with teams where not everyone speaks English, uh, having a lot of English language, uh, be the driver, be the definition, uh, a source of truth of, like, what the expectation is, increases the burden for folks for whom English is a second language or not a language at all, versus code, which is kind of a lingua franca, cuz you can, you can assert it, you can test it, you can play with it, you can see it.

I'm kind of curious how that.

Matt: Yeah, I, I was interested to hear the, the fleshed out version of this. Cause I did see you had sort of, uh, mentioned this before, that it is, you know, uh, sort of unfair to non-native English speakers to, uh, to write this, um

Justin: If you go to, like, Redmine, the, the, the, the issue tracker for the Ruby programming language, and you see, um, feature requests or even just kinda like issue definitions, sometimes you'll see, and they're normally, you know, they've got, uh, a Western-looking name or face, they'll, they'll write 10 paragraphs of, like, very high reading level of, like, why this feature is really great or why this thing should get deprecated, or how this, like, very arcane bug is happening.

And the first reply, uh, is very often, one of the, you know, the vast majority of Ruby committers, um, uh, and, and Ruby core members are Japanese. Uh, the, one of the first replies is almost always, could we see some code? Cuz we understand code. And everyone who does Ruby, like, speaks Ruby, right? And then you gotta get into the brass tacks really quickly.

Now, obviously not every team's gonna be international or kind of, you know, East meets West. Uh, but it's important to kind of consider that there is some clarity in, like, you know, if, if the goal of, uh, Cucumber and RSpec was supposedly, like, you know, example-driven development or behavior-driven development, you can express those as English, and it's very useful when, like, you're most concerned with communicating with product people or non-technical people.

But over the course of 10 years, 10 years from now, like, the person I'm most concerned about understanding my test is, like, the developer, three developers after me, who's trying to understand what the hell I was trying to do. And in that case, you, you know, example-driven, uh, testing is great, but the best example you could write is, like, code that works

Matt: Yeah.

Justin: that describes it.

Matt: And I think that is, it is interesting too. I mean, Ruby and Rails in particular does have, um, uh, I, I feel like a more, uh, different language diversity in, in terms of, um, contributors, right? I mean, Ruby, like you said, has a big, uh, cohort of, of Japanese, uh, speaking folks. And, but even, even Rails, I, I don't think, I don't think DHH is, would say that, you know, English is his, uh, first language.

Uh, there's lots of Spanish-language, uh, contributors to the framework and, and things like that. And even, even, even kind of like the old-school, like, JUnit style, where your test was maybe still a sentence, but it was still like code, so you would like put underscores instead of spaces, like, probably makes you write things that are closer to code than, than closer to prose, which I think RSpec for sure leads you to, right?

And even, even trying to do things like, uh, like, do you, do you assume when you're reading it that, like, the it is, like, part of the sentence? So, you know, describe, describe, like, UserCreator, like, "it creates users", uh, but, like, the "it" is not part of the string that you're doing.

So Yeah, I could definitely see how that would be sort of, uh, needless like mental burden for non-native English speakers

Justin: Yeah. So if we could

Matt: Pretty low cost or pretty low benefit, right? If, if we're saying that actually we're not, we're not really doing much with the, like, document mode of, of RSpec these days, or the, you know, the Cucumber-style, uh, English.

Justin: There is, I think, some salience to, uh, an argument that both of us have kind of glossed over here and is unrepresented in this conversation, but would be represented if the 10-years-younger version of me, Justin with slightly more hair, was here in the room. And that is that we should test, like, the, the extrinsic behavior and the intentional, like, API design, as opposed to the implementation.

And so when, when people would advocate for RSpec, they would often be saying, test it for, like, the outcome that you want, not, like, what you want the internals of the thing to do. And I think it's important to remember that mock objects were becoming in vogue in Ruby in a way that was, like, I think, unhelpful, uh, at the time, where people were sort of mocking stuff out willy-nilly.

Like you'd, you'd, you'd, you'd be testing a user and then you'd just mock out all of the methods on the user that you were maybe uninterested in or that were inconvenient for getting some test of some other method done. And of course, you're getting zero real confidence there. And one solution to this, of course is like, design better APIs, that have inputs and outputs and are more functional and less stateful.

Uh, and another solution is to, like, you know, keep your tests and your code at arm's length from one another, uh, as if to, like, prevent yourself from, uh, you know, losing that black-box clarity of, like, does the thing work or not? Right? And I think RSpec and RSpec promoters tacked on, uh, to the latter, where it was like, you know, we gotta have safety scissors to, to prevent you from even thinking about, uh, invoking this, uh, as if, as if you were testing its internals, quote unquote, versus, like, you know, its public API.
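The "mocking willy-nilly" failure mode is worth seeing concretely. Here's a plain-Ruby stand-in for what a mocking library does when you stub methods on the very object under test; `User` and its methods are invented for illustration:

```ruby
class User
  def premium?
    expensive_lookup          # imagine this hits the database
  end

  def discount
    premium? ? 0.2 : 0.0
  end

  def expensive_lookup
    raise "would hit the database"
  end
end

user = User.new
# Stub out the inconvenient method *on the object under test* -- the
# partial-mock move Justin is describing:
user.define_singleton_method(:premium?) { true }

user.discount  # => 0.2
```

The test "passes," but the real `premium?` logic never ran, so it proves nothing about the code path you claimed to cover. Better designs, as Justin says, make the inputs and outputs explicit so there's nothing inconvenient to stub away.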

Matt: Yeah. And so then that kind of led into like the fast test sort of

Justin: Um,

Matt: movement or whatever. I'd be curious, do you think that that was a net positive or negative for, I'll say for Rails. Um, but if you have wider thoughts on like the testing in general.

Justin: I remember a conversation that I had with Aaron in, like, 2013 about this, cuz, like, um, I, I'm good friends with Gary Bernhardt, Corey Haines, and Aaron Patterson, three people I look up to for different reasons, and, uh, all of them are, are way better at this job than I. And I remember being very excited about and bought into the, the quote unquote fast specs meme.

And if you're not familiar with that, what it, what it basically was, especially in the era of, like, early Rails 3, um, it was so expensive to start up a Ruby process that required Rails that you, you'd be talking, like, just to run a single empty test that just did nothing, no assertions, it might be like one and a half to three seconds, depending on your machine and, like, how big the application is and so forth, uh, until you, like, got feedback from your terminal.

And if you're, especially if you're Gary Bernhardt, that is way, way too slow. Like, you should be able to ask, uh, questions of your computer as if you're in live conversation with it. And so anything more than 50-millisecond, uh, response times is, like, uh, a burden on your ability to retain attention and, and energy and momentum as you work with your computer.

And that was, you know, just a limitation of our computers at the time. SSD speeds have improved, Ruby's gotten a lot faster, Rails has gotten a lot leaner and smarter about what it requires, tools like Bootsnap have made it much more cacheable, of course, and things are faster now, so we don't think about it as much.

But it was a real going concern at the time, of like, wow, these suites are taking hours and hours and hours. And if we can at least have a lot of our business logic, which is just pure logic and doesn't really need to go all the way to the database like an integration test, as a separate suite that we could run really quickly, we could have our developers actually running that chunk of tests before they commit and before they push and before it wastes two hours in CI.

And that was, I think, a total net benefit. It was just meeting the moment of where we were. Now, I remember at the time, though, I was talking to Aaron and I was like, See, and this is why we should do this. And he's like, No, this is why people should invest in Ruby and Rails and make them faster.

Like, if you're jumping through all these hoops to test it in this sort of compromised state, where it's not really how it would really be, the solution is, you know, make the thing faster. And instead of making that kind of fundamental investment, we had to wait a decade for Ruby and Rails to catch up to the point where, you know, actually,

whether I'm inheriting from Minitest::Test or from ActiveSupport::TestCase, it's pretty much the same speed,

Matt: Right, right. And we have fleets of, you know, micro VMs that can spawn thousands of processes, and we don't have, you know, Jenkins running on a computer in the closet that can do one job at a

Justin: a decommissioned workstation,

Matt: Yes. Yeah. Yeah. And it does, I think, just highlight the fact that this topic is so complicated and the tradeoffs are really high stakes, because while I think that, you know, isolated testing to get faster feedback is sort of virtuous,

I think it probably came at the cost of sort of demonizing, you know, system level, integration level specs, which I think do have more of a confidence building role. Like, you have more confidence that your application is working if your full system test suite passes than if your unit tests pass.

Right. Especially if you're doing a lot of mocking and things like that, where, you know, Hey, I've used VCR to record this API response and it's never gonna change. But then actually in production it's completely broken.

Justin: It's tricky too. You gotta keep the message simple when you want people to do something that maybe they don't wanna do or that isn't particularly fun. The more you can make it either seem fun or otherwise just be really straightforward and digestible. Such as, everything we do has to have a test for it, right?

That's a real simple message. When you get into the weeds of like, everything you do that doesn't literally need to touch the database to assert that its behavior is proper should go into a fast spec folder over here, and it should only inherit from this thing, but if it has to go to the database, keep it as minimal as you can and narrow it down.

And so I would see teams of developers where I'd pair with one developer, and he was a fast specs proponent, and so he'd find a way to kind of transmogrify whatever he was writing to maximize the plain old Ruby objects under test. And then the next day I'd pair with the other guy, who would look at all that and roll his eyes and be like, Yeah, this could have been a three line method inside of a user model, right?

Um, it was, you know, when testing gets that much attention and is that much of a source of consternation, you start to question the ROI on this as an activity. If you're spending twice as much time either arguing about, or fighting with, or trying to keep straight what is the right way to write tests,

Uh, you're, you're definitely.

Matt: Mm-hmm. Do you feel like there's a sentiment in that time period where it was almost like people went too far into saying that your test suite was the output of the craft that you were doing, and that is what you should focus on, versus, you know, either the production code or the user value from the actual app?

Justin: There was a virtue to it. You know, another irony of this era is that, even though we were being paid by businesses to build websites, for the most part,

Matt: App. Mm-hmm.

Justin: I think that there was a tidiness to your job as an engineer, thinking of, you know, my responsibility: my inputs are requirements, and my outputs are code and tests that prove the code works, and then I hand it off to the next person.

And the irony is, at that same time we had this cross current of also make it very English-y, to kind of buttress its value as that social proof, even though that makes it somewhat less technically rigorous or even fit to purpose. And so at the time, this is maybe 2013, 2014, I'd have to go and pull it up, but Kent Beck tweeted something relatively benign and flippant by today's standards, of like, Oh, you know, remember, we're all paid to write production code,

not paid to write tests. So maybe think about, you know, the value of that. Right. And I remember a lot of people were like, Oh my God, Kent Beck gave up on testing, and, you know, the dream is dead. And there was a lot of gnashing of teeth over that. And I remember feeling a feeling in my heart at the time that I do not feel now.

So something definitely changed in the water.

Matt: Yeah.

Justin: There's one other example or issue with RSpec that we haven't touched on, which is, learning how to program is hard. It's a lot. And if you've learned Ruby, and you know everything about how classes and modules work, and how methods and procs and lambdas work, and how arguments and keyword arguments and block parameters work,

and you understand scoping rules, and you understand switch and case versus if and else and all this other stuff, and you always get the right number of ends at the bottom of the structures that you build, and then someone hands you RSpec, you may as well know nothing, because now you've got a DSL that is informed by 15 years of agile coaching history that is, like you said, mostly vestigial, and it is not introspectable.

If you use Solargraph or something like that, or the VS Code Ruby extension, to navigate your code base, those are no longer gonna be very helpful to you to find your way around. The scoping rules are super confusing, right? Like, if I write code that's inside of a describe block, it's going to run at a totally different time, and only one time,

as RSpec builds the tree of my tests, as opposed to if I write it inside of an it block, and then that's the actual test part, and that runs at test time, right? Like, all of these little rules, and that's before we even get into state and everything. Now, when we talk about trying to be more inclusive as an industry and lower the barrier of entry, I think that we should really look hard at the value proposition that RSpec gives us

and say, like, are we just making it needlessly difficult for newcomers to figure this thing out?
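That load-time-versus-test-time split can be sketched with a toy stand-in for RSpec's DSL. These `describe`/`it` methods are a simplified illustration written for this example, not the real gem's internals:

```ruby
# A toy sketch of RSpec-style scoping: `describe` bodies run once,
# immediately, while the example tree is being built; `it` bodies are
# merely stored and run later, at test time.
EXAMPLES = []

def describe(name)
  puts "build phase: #{name}"  # executes right now, exactly once
  yield
end

def it(name, &block)
  EXAMPLES << [name, block]    # body is captured, not executed yet
end

describe "Calculator" do
  puts "this line runs at load time, not per-test"
  it("adds")      { puts "test phase: adds" }
  it("subtracts") { puts "test phase: subtracts" }
end

# ...later, a runner walks the stored tree:
EXAMPLES.each { |_name, block| block.call }
```

The surprise for newcomers is that the `puts` inside `describe` fires before any test runs, which is exactly the kind of rule that ordinary classes and methods never make you learn.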

Matt: Yeah. And I know there's kind of an infamous Nate Berkopec tweet where he took a screenshot of the RSpec documentation, and, you know, the RSpec documentation is one of these self-referential things where the documentation is written in its own syntax, right? So, like, you...

Justin: I think it's, is it still on Relish, the Cucumber runner site?

Matt: yeah, yeah.

So, you know, Nate has written, like, given that I am not a computer, when I read the RSpec docs, then it takes me twice as long to understand as if it had, you know, just been written in English instead of a code block that describes itself. And that definitely feels like a programmer thing.

Like, you know, I need to write Ruby in Ruby, a self-executing program type thing, where it's not serving the end customer, it's more tailored for the maintainers of the project. And, yeah, I think it is interesting. Like, how could we, if we say that programming's hard and writing tests is even harder, there's so much material and time and energy given to learning to program, but hardly anything

around learning to test, by comparison.

Justin: Mm-hmm. It's hard not to listen to your explanation of the documentation and Nate's take on it and not just think that this is just another example of smart software people seeing a social problem

Matt: Yeah.

Justin: and thinking, technology can solve this. So we're gonna build this technical structure that they are going to have to rinse and repeat ad infinitum, to help them have a conversation to get this particular outcome. Whereas, you know, the harder thing is, what if programmers were either incentivized or trained to be very value oriented and very friendly?

Matt: Right, right. Yeah. If instead of, you know, learning a new programming language, we had to empathize with someone and view them as an equally valuable part of the team, despite the fact that they, you know, are not slinging code with you.

Justin: Yeah.

Matt: That doesn't sound as fun as, you know, installing a new gem that will solve all the problems.

So I think maybe in the past, I would've been the type of person that would've said Rails and/or Ruby should use RSpec as the default, because it is sort of the community de facto, and for years and years it's sort of held out, right? And Minitest is what is actually used by the two

core projects, right? By Ruby itself, and then Rails also, you know, has its own set of test utilities, but they're at the core based on Minitest, at least in the default Rails sense. And now, I mean, after our conversation and after thinking about it more, it does seem like that was probably a very unpopular decision

that was probably good, because I don't think you could ever go back. Like, if they had switched to RSpec, and now we're saying that, yeah, there's a lot of stuff that's maybe wrong with RSpec. You know, if we used the device from Men in Black and wiped everyone's memory and said, Hey, there's a new framework out called Ruby on Rails, and these are the tools used to write tests, we probably wouldn't need RSpec. But I don't think you could put the genie back in the bottle if they had switched that. So in a way, it's giving me some appreciation for what must have been a really difficult line to hold if you were the maintainers of Ruby and Rails in the early 2010s, when you've got people saying, you know, 70% of the community is using this tool.

Like, why is it not the default? And I'm sure there were probably similar arguments about onboarding and how do we make it more approachable, like, well, if everyone is installing this gem anyway, why is it not just built in?

Justin: I think one way to think about it in more concrete terms is, if you think of an application, like a Rails application, having two identical test suites, one of them in RSpec and one of them in Minitest. And just for a moment, assume a hundred percent code coverage, and all of them are kind of perfect assertions.

It is, you know, in my experience, more likely that it would be easier to take the Minitest suite and extract it into RSpec, in fact, you could probably even write a tool to generate realistic looking RSpec from it, than it is to figure out how you would take the RSpec and recombine it into kind of symmetrical looking JUnit style tests

that had good code examples. Now, part of it's because RSpec is very feature rich, but I think, at the end of the day, a DSL, a domain specific language, an acronym we've been using without really defining it, that is implemented such that its public API to you is, pass me a bunch of strings and procs,

that's a high entropy system, and it's just really hard to go to a low entropy state from that. And classes and methods are a low entropy state that are very introspectable and very useful from a tooling perspective, but also just way more cognitively, you know, something you can understand.
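As a concrete sketch of that difference, here's the same assertion written both ways, against a hypothetical `Calculator` class invented for this example (the RSpec version is shown in comments, since it requires the gem):

```ruby
# Hypothetical class under test, just for illustration.
class Calculator
  def add(a, b)
    a + b
  end
end

# RSpec style: test identity lives in strings and blocks passed to a DSL,
# so tooling has to execute the DSL to discover what the tests even are.
#
#   RSpec.describe Calculator do
#     it "adds two numbers" do
#       expect(Calculator.new.add(1, 2)).to eq(3)
#     end
#   end

# Plain class-and-method style (what Minitest::Test boils down to):
# every test is a named method on a named class, so grep, ctags,
# Solargraph, etc. can find it without running anything.
class CalculatorTest
  def test_adds_two_numbers
    Calculator.new.add(1, 2) == 3 or raise "expected 3"
  end
end

CalculatorTest.new.test_adds_two_numbers
```

Generating the second form from the first mechanically is hard precisely because the class and method names don't exist until the DSL runs; going the other way is mostly string interpolation.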

Matt: more boring, as I would say, and boring

Justin: And that boring.

Matt: and not,

Justin: Yes. Yeah, it's a liberating constraint, I think. So when we talk about, you know, is it good that Rails uses Minitest? The Ruby repo, if you go to it, still has, I don't know the state of this, I don't know the story, but has the ruby/spec specification, which is RSpec-ese.

I don't know if it's actually RSpec, but I was actually having to read some of that recently, and it was difficult for me to parse out what was going on in some cases, and that's because I'm not used to RSpec. But it did give me this exact same thought that you just shared.

And honestly, I think a lot of my complaints about Minitest were really complaints about Rails' built-in testing facilities for a long time, like through Rails 4, maybe even Rails 5. But around Rails 5, folks like Eileen and Matthew Draper, Matt Jones, Aaron, they invested a lot to make the Rails subclasses of Minitest::Test very robust, fault tolerant, feature rich. System testing came along, which was a really natural extension.

They changed how people thought about controller tests, so those were less problematic. The test runner that ships with Rails now, if you go to bin/rails test, it's competitive with RSpec in terms of being able to run a single test, or run at a particular line, or run a particular name of a test.

And so things are just a lot better now than they used to be. And so what, you know, seven, eight years ago, looking at RSpec, you'd be like, well, the CLI is really

Matt: Oh yeah, for sure. Yeah.

you would find people that would port an entire test suite so that they could have, you know, run-by-line-number, or the --only-failures option, right? It's like, that's a reason to make this bigger, more impactful change of what your underlying test suite is, you know, because it has this command line feature.
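For reference, the targeting options being discussed look roughly like this (the file paths here are made up for illustration; `--only-failures` is an RSpec flag, not a Rails one, and needs RSpec's example-status persistence configured):

```shell
# The runner that ships with Rails can target a file, a line, or a name:
bin/rails test test/models/user_test.rb        # a single file
bin/rails test test/models/user_test.rb:27     # just the test defined at line 27
bin/rails test -n /email/                      # tests whose names match /email/

# The RSpec equivalents people used to switch suites for:
#   bundle exec rspec spec/models/user_spec.rb:27
#   bundle exec rspec --only-failures
```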

Justin: And Nick Quaranto had a Minitest wrapper CLI that I have forgotten the name of, that I don't believe he continued to support. Um, but I was actually texting with Aaron just before joining today, cuz, you know, he knows we love to talk about RSpec. And I proposed to him, like, Hey, would you like to pair with me on a little CLI that just wraps Minitest?

You know, that basically does a lot of the stuff that the Rails one does, but without requiring Rails. Because I think that is the last thing, because Minitest doesn't come with any command line stuff at all. It's just this autorun-able thing; it all runs at exit. So if you know what Kernel#at_exit is, like, you know, you have minitest/autorun, and then at exit is literally when everything happens, when all your tests run.

And so it's a fragile state from a runtime perspective. So having a CLI that wraps it in its own process, with its own arguments and options and so forth, is appealing. But somebody just needs to build it, and I highly doubt it's gonna be that much work.
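The Kernel#at_exit mechanism being described fits in a few lines of plain Ruby. This is a simplified stand-in for what `require "minitest/autorun"` sets up, not Minitest's actual implementation:

```ruby
# Requiring "minitest/autorun" registers (roughly) an at_exit hook that
# collects every Minitest::Test subclass and runs it as the process exits.
# The same mechanism, stripped down:
phases = []

at_exit do
  phases << "run collected tests"  # fires last, as Ruby shuts down
  puts phases.join(" -> ")
end

phases << "load test files"        # everything before exit is just
phases << "define test classes"    # loading and class definition
# prints: load test files -> define test classes -> run collected tests
```

Because the real work is deferred to process teardown, a crash or an early `exit!` can silently skip your tests, which is the fragility a wrapping CLI process would paper over.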

Matt: Yeah. I think that raises an interesting question that I'd like to pose to you specifically. I think probably anyone that knows you from the Ruby world is familiar with the StandardRB project, that I think you're, I don't know what you'd officially title yourself as, the creator or the lead person on. But I'm kind of curious what you think it would take to basically make your own sort of test tool suite.

And I know you have started some of this. Like, how should I say this, what would it take to beat RSpec? And I think the command line stuff you talked about is one piece, and I think mocks are another. And I know you've got your own mock library, and you've got your own sort of factory/fixture answer.

And so, do you think we'll see sort of the third option that is really just a nice wrapper around Minitest, that makes it so that, you know, choosing RSpec is very clearly like a no?

Justin: I'll take that as a challenge, cuz I think I'm really good at rolling my own stuff when I don't like what's on the menu. So I've got a lot of little gems, a lot of little one-off packages everywhere. You know, you mentioned, I have a mocking library I wrote last year called Mocktail.

And it is, in my opinion, the very best test double library that we've ever had, just based on what I've learned over the years and how much I care about a very particular and arcane way to do test driven development. And, you know, I don't like factory_bot either. We could have a whole other episode just on things I've seen go wrong with factory_bot. And I've got another library called test_data that takes a totally different approach to managing test data.

And, you know, I could totally imagine myself, based on this conversation, spending my weekend building that Minitest CLI wrapper, right? To go build

Matt: And then you just need a meta-gem that, you know, installs all five of these gems at the latest version, and you're good to go.

Justin: And that's the hard part, right? I don't know at what point it becomes a stack, or just a certain number of gems. Maybe it's just a vanity on my part, but I kind of feel like I specialize in building tools for people who've been burned badly by the dominant path, because that marketing is way easier than the DHH style of marketing, with slick demos and fancy language and good looking, designed websites, you know.

Matt: It's sort of like the Sidekiq model, right? It's like, battle hardened, and, I have the scars, you know. Don't make the mistakes that I did.

Justin: Yeah. Well, and Mike has built a business on it, in part because Sidekiq is very focused and serves a real, important business need. And here, I think I've just sort of got this island of misfit toys, where I just throw all eight of my test related gems into my Gemfile, and I'm mostly worried about myself.

And I probably should take another look at what's the bigger story here, so that you can give somebody at least a batteries-included approach. But I wouldn't want anyone to leave this conversation without thinking: honestly, if you're in Rails, you already kind of have that. Like, there's stuff at the margins, but, you know, don't feel bad for just using the Rails testing stuff that's in the Rails guides.

It works pretty well. I use it. And everything else is kind of a pretty subtle, nuanced augmentation at this point, in my opinion.

Matt: Cool. After our chat, if I asked you: RSpec, do we need it, yes or no?

Justin: No

Matt: And it's gonna be a no from me. So there you have it.

Justin: I really appreciate the conversation, Matt. I think this is important stuff to tussle with. It's not like, if you use RSpec today, you're gonna throw it away tomorrow and go all Minitest. But if you're starting something new, I think it's worth thinking about this stuff and questioning our biases, our past experiences. It was really hard for me to drop RSpec,

but I'm glad I did.

Matt: Show notes, links, and a transcript can be found at yagni.fm. Today's guest was Justin Searls. You can find Justin on Twitter @searls, and I'm your host, Matt Swanson. You can find me on Twitter @_swanson. Until next time, just remember: you ain't gonna need it.
