Docs as Tests: Keeping documentation resilient to product changes with Manny Silva

Kate Mueller: [00:00:04] Welcome to The Not-Boring Tech Writer, a podcast sponsored by KnowledgeOwl. Together, we explore topics and hear from other writers to help inspire us, deepen our skills and foster our distinctly not-boring tech writing community.

Kate Mueller: [00:00:20] Hello my fellow lovely not-boring tech writers. This week I am so excited to be joined by a writer who I have a deep amount of respect for, in part because he has an absolute respect for the idea of automating what can be automated and trying to keep things up to date and accurate. Those are definitely two of my big priorities in life and priorities with my docs, which I never feel like I'm doing very well at. I'm super excited for this interview in the hopes that I get to learn some really exciting, unusual tips to make my life and my docs better. And with that, I will welcome Manny Silva to the show. Manny, welcome to the pod.

Manny Silva: [00:01:03] Thank you for having me, Kate.

Kate Mueller: [00:01:05] I'm so excited to have you here. For folks who do not know anything about you, tell us a little bit about your tech writer villain origin story. How did you get into this crazy field to begin with?

Manny Silva: [00:01:19] It started when I was but a wee lad, and my mom disassembled and reassembled computers on the kitchen countertop. She worked in sales for a now-defunct computer company, and she figured the best way to be able to sell was to literally understand them inside and out. My godfather also worked for AMD, so he would bring over parts and me and my brother would build computers. I was also a big gamer at the time. Well, still.

Kate Mueller: [00:01:51] Lifelong gamer.

Manny Silva: [00:01:52] Yes, both virtual and tabletop for anyone keeping score at home. So I approached computers through that lens as I grew up. Then I discovered I had a knack for English when I was in high school. That was a bit of an unexpected discovery, but when I got into college, I was like, "I like computers, I don't want to do software engineering. I don't want to do hardware engineering, that is not something I want to do." Which I now laugh at. But I also have this knack for English, how do I mesh these two things together? Because they seem completely separate. Then I stumbled onto tech comms and technical writing in particular.

Kate Mueller: [00:02:31] While you were still in college?

Manny Silva: [00:02:33] While I was in college, while I was doing my [unintelligible].

Kate Mueller: [00:02:34] That is impressive.

Manny Silva: [00:02:36] I am one of the oddballs who intended to go into tech writing.

Kate Mueller: [00:02:41] I think you might be the first one on the show. I have to say, gold star for you, Manny.

Manny Silva: [00:02:49] Out of my undergrad, I got an internship with Apple and I was doing tech writing for them. I ended up staying there for a good number of years while I did my Master's in Technical Communication, then I moved over to Google, where I helped start up a developer relations group. Then I moved over to Skyflow, which is where I am now, where I'm head of documentation.

Kate Mueller: [00:03:14] Since you and I have found we have weird synchronicity on this, I have to share this story. My dad also used to assemble computers on the dining room table, and one of my mother's favorite stories to humiliate and embarrass me was a story about some point when she came home and found me sitting in a diaper on the table in the midst of a whole bunch of computer pieces. She asked my dad what the heck he was doing, and he was like, "She's helping me build a computer. Kate, show mom the motherboard." I look around, and then I lift it up and I'm like, "Mudderboard!" So apparently we have this in common, Manny.

Manny Silva: [00:03:58] This is wonderful. I, for the record, am passing it along to my three boys. I'm continuing this tradition. I have a laptop that you can fully disassemble, repair and reassemble. For those listening at home, it's called a framework laptop. No, they did not sponsor this. When I got it in, the boys helped me build my laptop, and they absolutely love it.

Kate Mueller: [00:04:28] In an era where it feels like people just throw devices away when they get to a certain point, the idea of both understanding how they go together and being able to repair it, is huge. I love that.

Manny Silva: [00:04:39] Yes, completely.

Kate Mueller: [00:04:42] I did not think I would ever meet somebody else who had a similar childhood story, so I'm totally loving this. Maybe now it's more common, but when I was a little one, computers were not a thing you found everywhere.

Manny Silva: [00:04:55] No. I have very early memories of childhood games where my mom coded little menus for me and my brother, so we'd put in the correct, this is dating myself, giant floppy disks and choose the right game. It would tell us which ones to put in, and those were wonderful days. Simpler times.

Kate Mueller: [00:05:19] Indeed. Talk to me a little bit about what your role at Skyflow is at this point. You're head of docs there, right?

Manny Silva: [00:05:27] Yes, I'm head of documentation.

Kate Mueller: [00:05:28] What does that mean? Do you oversee a team? What does that look like in practice?

Manny Silva: [00:05:33] What it means is that I oversee a lot. I own, or at least have responsibility for, everything that involves words that are not marketing. That means the technical documentation that our customers see. We're a developer platform, so that's our developers, or rather our customers' developers, and also anyone who is assessing the product for potential purchase. We do all of the typical tech writer-y things like support deflection and that sort of task. Also, I am part of the developer experience group. Any changes to the APIs need to go through the developer experience group, so I have a chance to say, "No, you need to fix that before it breaks consistency with other things that we've got." I also own error strings, error codes, error messages in our APIs and SDKs. I really meant it when I said anything that involves text. It's given me flexibility because I've built tools that we use internally at Skyflow, one of which relates to testing and one of which relates to AI. As long as it is within my domain, it is within my purview. I built Skyflow's internal AI toolkit for the express purpose of lowering the barrier of entry for documentation contributions. You asked about my team; my team is me and one dedicated writer, but we solicit contributions from the entire company. So we act as editors and content curators just as much as we author new content.

Kate Mueller: [00:07:21] You'd have to be fairly successful at that, if you're doing a good job, to get those contributions to come in.

Manny Silva: [00:07:28] I like to think so.

Kate Mueller: [00:07:29] In that, if you are soliciting contributions from across the entire company and you are working in what is largely a developer relations, and probably API driven, role, are you doing a docs-as-code setup for your technical documentation?

Manny Silva: [00:07:43] I am doing a docs-as-code setup. I am probably going to be one of the fiercest docs-as-code advocates that you come across. But I will say, even right now, every tool has its place, every tool has its time. It all depends on the business needs of what you're trying to achieve, what tools are available to you, and what specialties your team has. I may do docs as code, but that doesn't necessarily mean that it's the right fit for another team.

Kate Mueller: [00:08:09] One adjacent question that has nothing to do with your work at Skyflow. One of the things I've learned doing this podcast is that different people like to call our roles by different things. Some people really like the label 'tech writer', some people prefer the more all-purpose documentarian. Some of the folks we get would not even call themselves tech writers, they'd call themselves customer experience folks or support. What label are you comfortable with? Is it tech writer, or is it something else?

Manny Silva: [00:08:41] That depends on who I'm talking to. I generally like the term 'documentarian' as an all encompassing one. If I'm talking to someone who is new to the craft, I'll generally go with 'tech writer'. But if I want to be more specific and honest with what my day to day is, I call myself a 'docs engineer'. It's all about the audience, just like our writing.

Kate Mueller: [00:09:06] It is, you have to know your audience and what's going to fit them and what's going to make sense. If I tell most of my friends I'm a tech writer, they look at me like I have antlers, so trying to explain what I actually do is always fun. You know the docs you go to when you can't figure out how to do something, and you're frustrated and you don't want to talk to a person? I write those, that's what I do. That's usually how I end up explaining it to people. They're like, "Wait, there's documentation for that? I just got frustrated and gave up." That's all I do.

Manny Silva: [00:09:38] I've actually found 'docs engineer' easier to explain, because nowadays I do end up writing a lot of code and I maintain a lot of pipelines for making all of the docs work. It's like, okay, I'm an engineer who works on docs. That's pretty straightforward.

Kate Mueller: [00:09:54] That makes sense, it is pretty straightforward. This is a really nice segue into what I'm hoping we can spend the bulk of the episode talking about, which is Docs as Tests. I think you were the person who actually invented this phrase, as far as we know.

Manny Silva: [00:10:10] I am, yes. I want to be clear, I am not the person who invented the different practices that make up the strategy overall. But yes, Docs as Tests is a strategy to keep your documentation resilient to product changes. To be able to have confidence that your content is accurate and not have to rely on customer feedback to let you know when things are broken.

Kate Mueller: [00:10:41] Which is the kind of proactive dream we all want to live in, right? The, "Please don't let a customer tell me my thing's broken. Please let me find it on my own and fix it before anybody knows that it was ever outdated." You're speaking my language.

Manny Silva: [00:10:57] There are very practical business reasons for that too. Because if a customer or a prospect, someone who might become a customer, ends up looking at your docs, maybe they have your product up in another browser window and screenshots don't match, or they can't find the button that your procedure is referring to, it is a pretty bad look. It might be something small if they're already a customer, but it might be the final straw to make them leave. At the very least, it undermines whatever trust they still had in your product. And if they're a prospect, that's a really bad look, and you just severely decreased the possibility of them converting to a paying customer.

Kate Mueller: [00:11:45] It's a huge red flag in many ways, when the documentation you're experiencing is out of sync, because that documentation is an extension of the product. I think this is a thing people sometimes forget: the experience of that documentation is an extension of the experience of the product as a whole. If your documentation is seriously out of alignment with the product, it's going to lead to a more negative product experience as a whole. A lot of folks don't make the distinction of, "Oh, that's docs rather than the product. I'll just try to figure it out in the product." They view it as a complete package, as a complete holistic thing. So if the docs experience is bad, it undermines the product experience. Conversely, no amount of good docs will save a bad product experience. They are like a careful partnership, they have to support each other. You used the word 'strategy', which I really want to focus on here. This is not about, if I'm understanding it correctly, a particular tool or necessarily a specific workflow or tech stack, it is about a strategy around how you're managing that. Am I right? Can you talk about that a little more?

Manny Silva: [00:12:56] You are correct. 'Docs as Tests' does not necessarily mean you need docs as code. In fact, they are two completely separate things. 'Docs as Tests' is a way to find the tools that work well for your current tech stack and help make sure your stuff is up to date. Whether you're doing docs as code like I am, or whether you do DITA in a CCMS, or whether you're in a Word file because that's what's available to you, you can do 'Docs as Tests' with all of these. The underlying conceit of 'Docs as Tests' is that documentation contains testable assertions about a product. Let me unpack that a little bit. Docs are assertions, or statements, that a tool is supposed to work a certain way. And because those assertions are verifiable, it's either accurate or it's not, they are testable. That means that if docs contain assertions, and assertions are testable, docs are testable. Docs inherently are tests.

Kate Mueller: [00:14:14] I feel like there needed to be an 'ergo' in that statement.

Manny Silva: [00:14:21] What's happened historically is that docs have functioned as tests informally. It's just that most of the time we've made our customers test our docs. That's where we get these customer reports, that we were just talking about, that we hate.

Kate Mueller: [00:14:40] They're basically telling you that a test failed.

Manny Silva: [00:14:43] That's exactly what they're telling you.

Kate Mueller: [00:14:45] The test of this doc's accuracy just failed because I couldn't figure out how to do the thing because something in here was wrong.

Manny Silva: [00:14:51] Yes. I have a second villain origin story for you.

Kate Mueller: [00:14:56] I love a double villain origin story, it adds so much nuance. Tell me the second one.

Manny Silva: [00:15:01] 'Docs as Tests' came about in the first place because at one point I had published a guide. I knew it was good, I stepped through every step in every procedure myself. 100% accurate, good to go. I published the doc, life was good, and I moved on to the next thing.

Kate Mueller: [00:15:24] Until?

Manny Silva: [00:15:25] Until three months later when I got a report from support that a customer said that the guide was broken. I said, "What do you mean the guide is broken? I know the guide, it's accurate." They're like, "No, the guide's broken. Here's exactly what they did and here's what happened to it." I was like, "Huh." So I went and investigated. I looked at the guide, and in fact, the guide was broken. Even though no changes had happened to the guide in the intervening three months. What ended up happening was that an underlying behavior of a specific enum in the API changed. It was very subtle, but my guide relied on that behavior and therefore the guide was now broken. I had very good lines of communication with my team at the time, so I was astounded that this slipped through. They're like, "Sorry, Manny. This was a thing that we had been meaning to fix. This was actually a bug that was fixed." I was like, "Oh, thanks for telling me."

Kate Mueller: [00:16:32] Except I was relying on that bug. It was like, "Is it a bug or is it a feature? I treated it as a feature."

Manny Silva: [00:16:38] Yeah, I had never been told it was a bug, so sorry for making the assumption. Just like our customers would. I got into a debate with my engineers. I said, "Hey folks, you've got all of your unit tests, you've got your integration tests, you've got your end to end tests, all to make sure that your code works the way it's supposed to work. What do I have for my docs? How do I make sure that my docs are up to date?" They're like, "Manny, that's what we have you here for."

Kate Mueller: [00:17:09] So you get to live in the world of lovely automated tests and I'm stuck over here in the dark ages, manually retesting and retesting my docs constantly to make sure they're up to date? That's fun.

Manny Silva: [00:17:20] I had a few very colorful words for them, which I will not repeat on recording.

Kate Mueller: [00:17:27] Just imagine a big string of bleeped out expletives right now, dear listeners.

Manny Silva: [00:17:32] More or less. We agreed to disagree, and we went our separate ways that day. But it stuck with me. Fast forward a couple of years, and I still didn't have a solution to the problem. I was slowly losing my sanity. Not because of docs, but because of my children. I have three children who are wonderful, chaotic little beings. I was on paternity leave for my youngest while trying to sleep train my middle child, which meant, for anyone who's been involved in that, no one was getting any sleep. Me most of all. I was up in the dead of night with lots of screaming children, five minute snippets between going to comfort them. I couldn't read a book, I couldn't listen to an audiobook, I couldn't do anything that required synchronous thought. So I decided, "I'm going to open up this laptop. I'm going to take a crack at this stale docs problem and just see what comes of it." So over the next five weeks, in five minute snippets, I ended up creating the MVP for a piece of software that did just that. And I've been working on it for the past three years.

Kate Mueller: [00:18:43] Five minute snippets in between screaming children. Manny, you're kind of my hero right now. I can't even imagine producing anything meaningful in five minute snippets. That's astounding. So is this Doc Detective? Is that what you ended up creating as a result of this?

Manny Silva: [00:19:00] Yes, I created Doc Detective, which is an open source tool specifically tailored for technical writers to be able to test and validate their documentation. The way that it works is it will take a procedure, let's just say written in markdown for the sake of conversation, that says: go to DuckDuckGo, type in "kittens", press Enter, and look at the search results. Maybe there's a little screenshot of search results. Then what it'll do is actually rip apart the markdown, look at all of the syntax, and identify and understand the different components of that procedure. It'll say, "Hey look, where it said DuckDuckGo, that's a URL and it was prepended by 'go to', so we actually need to navigate there." So it'll open up a web browser and it will navigate to DuckDuckGo. Then it'll say, "Okay, type 'kittens'. And 'kittens' is within quotes, so it's a literal string, so I'm going to take that and I'm going to type the string 'kittens' in the search box." So it does that. "Oh hey, it says press Enter. So I'm going to detect that and I'm going to press the Enter key to run the search." Then it says, "This screenshot image reference down here has a particular class on it that says that it's a screenshot, so I'm going to capture a screenshot of those search results and save it to file so that you can display it in your procedure." And if you've already run it once, then the second, third, nth time that you run the procedure and it captures the screenshot, it can actually compare the newly captured screenshot to the previously captured reference screenshot to see if things have changed. That way we get visual regression testing baked in.
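
For readers who want to picture it, here's a rough sketch of the kind of markdown procedure Manny is describing. The exact markup Doc Detective detects is configurable, so treat this as a shape sketch rather than canonical syntax:

```markdown
## Search for kittens

1. Go to [DuckDuckGo](https://duckduckgo.com).
2. In the search bar, type "kittens".
3. Press Enter.

![Search results](search-results.png)
```

From text like this, the tool would infer a navigation step from the "go to" phrasing and the link, a typing step from the quoted string, a key press from "Press Enter", and a screenshot capture from the image reference.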

Kate Mueller: [00:20:48] That's fantastic. But it's doing all of that basically by parsing what's in the documentation. This is not somebody sitting down and writing those steps in code, it is actually ingesting, in this case, the markdown file that contains those steps, and then it's figuring out what to do with them. That's astounding. As somebody who's done a lot of manual QA in my life, and has had to use some tools for that, and often didn't use tools because I found them really annoying to try to use, this sounds phenomenal to me.

Manny Silva: [00:21:19] For the record, today it is not using AI. And that's an intentional choice because you want your tests to be reliable. If your tests aren't reliable, then there's no point in doing them and therefore the tests need to be deterministic. AI famously is non-deterministic. So no AI in this, at least in this capacity, for now.

Kate Mueller: [00:21:43] You don't want hallucinations in your test results?

Manny Silva: [00:21:45] I would prefer not.

Kate Mueller: [00:21:48] It does rather seem like it undermines the point of doing them, but let me just make up an answer to this and hope that's okay. You'd either be chasing a lot of false failures, or having some false confidence that it was accurate when it wasn't, I'm guessing.

Manny Silva: [00:22:05] What ended up happening back then was, once I had the early version of this tool, which I could not explain as clearly as I just did now, I was talking to other writers, trying to articulate this, and they didn't understand. They didn't get it. I realized they didn't have a mental bucket to put it in. What kind of a tool is this? There is no other writing tool like this. So I had to take a step back and I had to look at the bigger picture. This is more than just me. There have been other people who have taken stabs at trying to validate their docs in various different ways. How can I bundle this all together and create a bucket for all of these tools and techniques that people have done, so that we can talk about this? And that's where 'Docs as Tests' came from. There have been people who have been doing this in various capacities for many years before I started. Even just a few years back at Write the Docs, MongoDB gave a talk about unit testing your documentation. There are folks who have been using Playwright, Cypress, these engineering level testing tools to test the flows of their documentation. Their engineering teams see them as end to end tests, if you want to get technical about it, but for the docs folks, they're just testing their docs. But those tools required engineering level skills. This is hugely valuable to everyone in our communities, and I wanted to find a way to democratize it. So in addition to coming up with a term and starting the discussions around the strategy, as I ended up calling it as a whole, I open sourced Doc Detective and I ended up writing a book on Docs as Tests.

Kate Mueller: [00:24:02] Which, at the time of recording, we are very close to official public release. By the time this airs, we will be past the official public release on the book, but give us a little shout out here. What will it be called? How can people find it?

Manny Silva: [00:24:17] The book is called 'Docs as Tests: A Strategy for Resilient Technical Documentation'. It is going to be available in ebook, print, and I'm going to be recording my own audiobook version of it. I cannot promise when that's going to be available, but you heard it here first.

Kate Mueller: [00:24:34] That's right, the queen of audio here, bringing you the piping hot tea from the tech writing community. Inside scoop. We will definitely include a link to the book in the show notes, because by the time we actually go to air for this, it'll be available. So check out the show notes if you want to check out the book.

Manny Silva: [00:24:58] It covers a lot, but in what I like to think is a very digestible format. It's divided into two parts. The first is mostly on the 'what' and the 'why' or the theory of Docs as Tests. What it takes to actually write tests. What it takes to sell this to your organization, and how to help instill a culture of docs testing. Then the second half of the book is more the 'how' or the practical side of things. There are individual chapters about different product interfaces and how you might test docs for them. Like, you have a graphical user interface, a GUI, whether it's in a web browser or a native app. How do you test docs for that? How do you test docs for APIs? How do you test docs for code interfaces like SDKs or CLIs? Giving practical tips and talking about a variety of different tools that you might use to achieve this, given whatever your team's preferences and constraints are.

Kate Mueller: [00:26:01] Sounds fantastic. Let me pick up one thread out of that from in the middle of it. You say you're talking a little bit about how to sell the organization on the strategic approach. I'm imagining that there's a fair amount of investment of time, energy and effort upfront to get this going. I'm assuming that's where the sales pitch has to come in. To say, "Hey, this is going to take us a bit to get set up, but here's why this will be worthwhile." Is that a fair assumption, or are there other elements of the sales pitch that I'm missing?

Manny Silva: [00:26:37] I actually advocate for doing your POC quietly, because some of these tools are surprisingly easy to get started with, depending on your level of comfort with investigating new tools, obviously. Take one guide, one procedure out of one guide, and say, "With this tool, I'm able to validate this, and then I can extrapolate how much effort this is going to be" to go to this whole document, to go to this whole doc suite. Then take that information to the powers that be and say, "I can validate this." It's going to give us the benefit of reliability. It's going to give us the benefit of saving support time and investigation on docs outages, on docs breakages. It's going to be able to automate all of our screenshots so that we can consistently capture more screenshots and keep them up to date, because people keep harping that they want more screenshots. There you go, that's a solution to it. There are some tools that can even record the tests as they run, so that you can output an animation that you can turn around and embed in your documentation. It just depends on what you want. What's the shiny thing that your approving stakeholder wants to see, and can you give that to them before you ask for permission?

Kate Mueller: [00:28:12] Show them that it will actually do what they want it to do, and then ask for permission. I like it.

Manny Silva: [00:28:19] Then once you get that permission, I call this the 'cupcake to wedding cake' scale. And yes, that's actually in the book.

Kate Mueller: [00:28:28] I absolutely love it. It's a great visual. I'm right there.

Manny Silva: [00:28:33] You start with the cupcake, this tiny little POC, and then you build out to a birthday cake as you expand a little. And this is where you need to start building culture. This is where you need to win over your engineering teams. Win over your support teams, your sales teams, anyone who has any stake in the docs. That's, in my experience, one of the harder parts. Because your engineering teams might come back and say, "Why are you doing your testing? We already have all of our tests. What do you need tests for?" Because they're already thinking that they have good test coverage, which they, in all likelihood, do. But here's the difference. 'Docs as Tests' is not engineering testing. You're not testing the code, you are testing the documentation. You're testing the user experience of the product. Not the way someone who wrote all sorts of test cases envisioned it when a feature was being developed, but how we're actually telling people to use the product. Like I said earlier, if you want to talk in engineering terms, these are end to end tests. But engineers hate writing end to end tests because they're very flaky, because they break so very easily. And guess what? Suddenly you can tell them, "All of this documentation that I've been writing for years, now with a little bit more investment, it becomes end to end tests." All of it becomes end to end tests so that you don't have to worry about writing them quite so much. We have even more coverage, and everything about our user experience is now validated instead of assumed.

Kate Mueller: [00:30:18] Nothing like telling somebody they don't have to do as much of the thing that they really hate doing to try to win them over to your side.

Manny Silva: [00:30:27] Exactly.

Kate Mueller: [00:30:27] It's a fantastic change management tactic for sure. I think that's a fantastic note for us to take a break on, so we will take a break and we'll be back in a few.

Kate Mueller: [00:30:37] This episode is sponsored by KnowledgeOwl, your team's next knowledge base solution. You don't have to be a technical wizard to use KnowledgeOwl. Our intuitive, robust features empower teammates of all feathers to spend more time on content and less time on administration. Learn more and sign up for a free 30-day trial at knowledgeowl.com.

Kate Mueller: [00:31:00] We are back for more about Docs as Tests with Manny Silva. Everything I'm hearing about this sounds fantastic. I love the idea of being able to automatically check my docs, particularly my screenshots, to figure out if they're accurate or inaccurate. You've sold me on that, but the operational part of my brain really wants to know what that actually looks like in practice. I was hoping you could walk us through a little bit of what you do at Skyflow and how you have implemented Docs as Tests there.

Manny Silva: [00:31:35] What I do is, when I'm writing content, or whenever somebody sends me content to include in the docs, before it gets merged I run it through Doc Detective so I can make sure that it's already accurate. So when it gets merged, great, it's already there. I have a check in my CI, or continuous integration, pipeline so that anytime a PR gets submitted, before it gets merged, the tooling already runs Doc Detective. There is an official Doc Detective GitHub action that gets run every single time a PR changes. Then when it gets merged, we know it's good to go. Once it is merged, I have Doc Detective run every single day on the entire doc set and check everything that has been configured for tests to make sure that it's all good. And if something breaks, it will send me a message in Slack. If things don't break, then it'll still send me a message in Slack just to give me a thumbs up: 'Everything's good, Manny.' I don't just test procedures. That's the bulk of what I test, and whether it's UI based procedures or API based procedures, those are validated, but I also do API contract testing with Doc Detective. As I mentioned earlier, API changes have to go through me and the rest of the devex team. Part of what that means is I own all of the OpenAPI definitions for our APIs. Doc Detective can ingest those OpenAPI definitions and read the examples for different operations. I can just say, "Hey Doc Detective, referencing this OpenAPI definition, run this operation, use this example, go. And by the way, validate that the request matches the API schema. Validate that the response I get back validates against the OpenAPI schema, and let me know if the API consumer or producer fails, and if I get the expected responses." I also do negative test cases so that I can say, "I'm intentionally trying to trigger this error. Make sure I get the correct error response, both the code and the body." So I do pretty comprehensive contract testing with Doc Detective.
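
For a sense of what that CI hookup can look like, here's a minimal, hypothetical GitHub Actions workflow. The action reference and its input below are assumptions for illustration; check the official Doc Detective GitHub Action for the real usage:

```yaml
# Hypothetical workflow: run Doc Detective on every docs PR and on a daily schedule.
name: docs-as-tests
on:
  pull_request:
    paths:
      - "docs/**"
  schedule:
    - cron: "0 6 * * *"   # daily run against the full doc set
jobs:
  test-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Action name and input are assumptions; consult the Doc Detective
      # docs for the published action and its options.
      - uses: doc-detective/github-action@v1
        with:
          input: docs/
```

A Slack notification step at the end of the job, on success or failure, would round out the 'thumbs up either way' behavior Manny describes.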

Kate Mueller: [00:34:09] That seems like it comes back a little bit to you owning all of the text. Those error messages are text that you have written and you technically own, so those are things you want to make sure are actually popping up when they're supposed to. That the correct error is being triggered and it's showing what it's supposed to show.

Manny Silva: [00:34:28] Yes, and it's also about monitoring our developer experience. Developers expect that this particular edge case returns this particular error code, and there are people relying on this behavior in production. Getting back to the bug or feature debate, this is a feature, so I'm going to make sure it continues operating like a feature. If its behavior changes, that's not a documentation issue, that's a product issue, and let's raise it with the necessary folks. One of the ways I think about 'Docs as Tests' is as a way of establishing a zero trust relationship between docs and product. Which doesn't mean that there isn't respect, but it means that we're keeping everyone honest. We are making sure that everyone is abiding by the behavioral contract that is the documentation. And I consider OpenAPI documents documentation.

Kate Mueller: [00:35:28] I trust that you're going to give me this thing, but I'm also going to check to make sure that what you gave me is what I was expecting to receive.

Manny Silva: [00:35:36] Trust but verify.

Kate Mueller: [00:35:36] Yeah, I like that approach a lot.

Manny Silva: [00:35:39] One thing that I haven't rolled out as much as I would like, but I'm working on, is we have SDKs, and in my experience developer doc sets are either API heavy or they're SDK heavy. I want both, I want all the things, but the reason we haven't been able to have all the things is the maintenance overhead, making sure that everything works as written. Well guess what, Doc Detective lets me make sure that it's all good programmatically. I have functionality in Doc Detective that I'm just starting to get into production at Skyflow, where if I have code blocks in my documentation that show JavaScript or Python or whatever, Doc Detective can take the file that contains all of it, tear the file apart, find all of the code blocks, dynamically assemble them into a script, and run the script to make sure it runs as written. This is a strategy that folks like AWS and MongoDB have been using for years, perhaps not quite so elegantly in other places. It's the unit testing of documentation that MongoDB talked about years ago, but done in a way that doesn't require engineering level skills. So that's what I do, but there are other folks who do things very differently. Docker does testing for their web based UIs using Playwright, which is admittedly an engineering level tool, but it's a tool and it meets their criteria, and it's what their QA teams already use, so that's how they were able to get engineering sign off. Why use another tool when we can just use the same one for another purpose? They have something like 80% test coverage of all of their web based procedures, last I heard. And they did it in six months. Then, like I mentioned, Raspberry Pi, AWS, and MongoDB have all validated their CLI commands and SDK code snippets by using variations on the unit testing strategy. There are a whole raft of companies that do API contract testing, but there's one other aspect, especially on the API side, that is a bit newer, that people haven't considered as much, and that's workflow testing. OpenAPI documents are really great at explaining the surface area of an API, but not really how to use it. Whereas Arazzo, which is the newer specification from the OpenAPI Initiative, actually tells you how to use multiple operations in sequence. It's essentially procedure testing for APIs. So there are new tools coming out that help support those kinds of testing workflows too.
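
To make the Arazzo idea concrete, here's a minimal sketch of a workflow description. The structure follows the Arazzo 1.0 specification, but the API, operation IDs, and response fields are hypothetical:

```yaml
# Hypothetical Arazzo workflow: create a record, then fetch it back,
# chaining the output of step one into step two.
arazzo: 1.0.0
info:
  title: Create-and-fetch walkthrough
  version: 1.0.0
sourceDescriptions:
  - name: recordsApi
    url: ./openapi.yaml   # assumed local OpenAPI definition
    type: openapi
workflows:
  - workflowId: createAndFetchRecord
    steps:
      - stepId: createRecord
        operationId: createRecord        # hypothetical operation
        successCriteria:
          - condition: $statusCode == 201
        outputs:
          recordId: $response.body#/id   # capture the new record's ID
      - stepId: fetchRecord
        operationId: getRecord           # hypothetical operation
        parameters:
          - name: id
            in: path
            value: $steps.createRecord.outputs.recordId
        successCriteria:
          - condition: $statusCode == 200
```

A tool that understands Arazzo can execute those steps in order and fail if any success criterion doesn't hold, which is the 'procedure testing for APIs' Manny describes.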

Kate Mueller: [00:38:35] I have two questions here, so let me ask what is probably the easier one first. Since implementing that, what has changed for you in terms of how you go about updating docs or even prioritizing the docs work you do? How significant has the impact of those changes been for your own workflows?

Manny Silva: [00:39:02] In some very subtle ways, it's been very impactful. Notably, I find myself writing more consistently. Because if my tooling knows how to differentiate between checking that a link is valid and actually opening a web browser and navigating to a URL, based on how I phrase the wording around a hyperlink, then I'm going to be writing much more consistently. Which is great for voice, for consistency in our output, and it makes our readers more comfortable too. So that's a net positive, and it ends up meaning that my style guide is enforced more consistently. Yes, I use Vale. Shout out to the Vale folks, awesome. Vale is not 'Docs as Tests', though it is testing your docs for style; I just want to have that differentiation there. I find that it also takes a lot of the ambiguity out for other contributors, for the folks who aren't as used to writing technical content. Because if I just say, "When you want people to go somewhere, this is how you say it," it's much simpler, much more straightforward, and it lowers the barrier of entry for giving those contributions.
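
Since Manny mentions Vale, here's the kind of style rule that can enforce that 'one phrasing' for navigation steps. This is a generic sketch in Vale's rule format, not one of Skyflow's actual rules, and the phrasings in the swap map are assumptions:

```yaml
# Hypothetical Vale substitution rule: steer contributors toward the one
# navigation phrasing that the docs-testing tooling knows how to detect.
extends: substitution
message: "Use '%s' instead of '%s' for navigation steps."
level: warning
ignorecase: true
swap:
  "navigate to": "go to"
  "browse to": "go to"
  "open the page at": "go to"
```

With a rule like this in place, a detected-test tool only ever has to recognize 'go to', and contributors get immediate lint feedback instead of discovering the convention the hard way.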

Kate Mueller: [00:40:26] It removes some of that ambient stress that a contributor would have, particularly somebody who's not super confident in their writing abilities. You just reduce the number of things that they actually have to think about, so that instead of thinking, "This is how I have to word this thing," or "Do I know how to word this thing?" and going off and doing it in some weird way that's going to be difficult for everyone involved, they can get that feedback almost immediately, similar to Vale in the linting process. To be like, "Wow, I couldn't figure out what to do with this because it wasn't worded this way." It certainly is a little bit prescriptive, but probably prescriptive in very good ways in that it's enforcing consistency, which is often a problem when you're seeking a lot of contributions from different corners. You're helping solve that problem at the same time. I promised you it was a two question thought, here's the second question. You've got tests running to validate the statements in your docs, what happens if you need to do a massive docs overhaul? Maybe you're totally changing the information architecture of the site, or the product strategy's going in a slightly different direction, so you want the docs to better reflect who the new target audience is or something like that. Does having this approach mean that you have additional work to do when you're making those kinds of broad, sweeping overhaul changes?

Manny Silva: [00:42:08] Not generally, in my experience. 'Docs as Tests' is really good at making sure that the specific procedures are accurate, but it doesn't really do anything for the more conceptual pieces. It can't say, "This is how you defined a particular term over on this other page, but you're defining it differently here." Docs as Tests doesn't do that, it just makes sure that when you say there's a button labeled 'Click', there's actually a button that says 'Click'. For IA changes, this doesn't really add any more overhead. If you're doing a big rewrite of a big guide, then you need to understand how your tooling interacts with your doc's content, which gets into some implementation considerations. There are multiple ways that you can define tests. We've been talking about what I call 'detected tests' for most of this conversation, where you have the source file, and then the tooling is able to directly extrapolate tests based on the source content. But there are times where that's not possible or feasible based on the tooling available to you. For example, KnowledgeOwl is a CMS, and there is an API that I researched where I could see you can fetch content in markdown, but natively it's not in markdown; it's a CMS.

Manny Silva: [00:43:40] So what you would have to do is figure out a tool that either hooks directly into the CMS, or a tool that hooks into the API to extract the content and extrapolate from there. Or you can write standalone tests. This is what happens a lot with engineering type tests, or engineering type tools, where you write a test that is parallel to your written content. Your content might say, go to DuckDuckGo, search kittens, press enter, here's a screenshot. Then the test, which is a completely separate file, does all of those actions. Even with something like Doc Detective, which can programmatically detect and infer those test steps, you can still write tests separately. You can say, "Here's a YAML or JSON file of each of the individual steps that I want." It relates back to this other file stored somewhere else. That way you can say, this article, this topic, whatever your chosen noun is, is tested by this other file over here. Instead of pointing your tool at the remote hosted file in your CMS, you just point it to the parallel test and run that. There are pros and cons. It certainly makes it easier because it doesn't really matter what format your content is in. You can always write standalone tests, but it's not quite as tightly coupled, so there is more risk of drift between your docs content and your test.

Kate Mueller: [00:45:21] You basically have to remember to update both. In detected tests, you're basically having it ingest the documentation and use that as the test. If it fails, you can update the doc, and then the next time it ingests and runs it, it'll just use whatever's in the updated doc. Whereas with standalone tests, you've got the doc here, you've got the test there. If the test fails and you have to update the doc, most likely you're going to have to manually update that standalone test too, because the button no longer has the same label, and you want it to actually test the correct assertion. If you keep the old test but have the new doc, it's definitely going to fail because it's expecting something different. Am I understanding the distinction between the two types?

Manny Silva: [00:46:18] You are understanding correctly. Some tools can even do a bit of a mix. Maybe there's something that you want to test in your UI that isn't represented in the actual literal procedure that you are displaying to users, and so you need to do an extra click or two. There are ways to put, what I call, inline tests in the body of the content. It's still in your documentation content, it's just not displayed to users, but a testing tool could still pick that up and include it as part of the tests that it runs. Smooth over edges that can't quite be inferred.

Kate Mueller: [00:46:59] So if I wanted to try to implement this strategy, let's say with my KnowledgeOwl documentation, what does that process look like? It sounds like I am going to have to make some decisions about tooling, but lay out for me in broad strokes what I would be looking at here.

Manny Silva: [00:47:17] First, find one article that you want to start with. Singular one, this is your cupcake.

Kate Mueller: [00:47:24] I want it to be a very sweet little article.

Manny Silva: [00:47:28] There are two ways of going about doing it, but generally the simpler the article, the better. The other option is the highest priority article, but that really ups the stakes, and most folks aren't about that. So find a simple, safe, small article. Find a single procedure in that article. And before you try to do anything fancy with detecting tests, with trying to infer things and getting all advanced in your configuration, just find a tool that you think might fit and write a standalone test for that procedure. See if you can do it, it's really that simple. Find the simplest procedure you've got. Try Playwright, try Cypress, try Doc Detective, try whatever else tickles your fancy, and see which one is easiest for you. See how far you can take it after you write a standalone test. See if there's a way you can directly integrate it into your content. For example, if it's open source, then maybe you have an engineer friend who thinks that this is a lovely idea. If the tool doesn't natively support KnowledgeOwl's CMS, they could integrate with the API and you can get an integration that way, or whatever else. Just see what it would take to hook in deeper. However far you can take it is how far you can take it. And depending on the tools you have available to you, that will decide how far you can scale it across your entire doc set.
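
If Playwright is the tool you try, the cupcake can be as small as this. A hedged TypeScript sketch; the site, selectors, and assertions are hypothetical stand-ins for whatever your article actually documents:

```typescript
// Minimal standalone docs test: walk the documented procedure step by step
// and fail if the product no longer matches what the article says.
import { test, expect } from '@playwright/test';

test('doc procedure: search DuckDuckGo for kittens', async ({ page }) => {
  // Doc step 1: "Go to DuckDuckGo."
  await page.goto('https://duckduckgo.com');

  // Doc step 2: "In the search bar, type 'kittens'."
  // The role-based selector here is a hypothetical placeholder.
  await page.getByRole('combobox', { name: /search/i }).fill('kittens');

  // Doc step 3: "Press Enter."
  await page.keyboard.press('Enter');

  // The doc asserts that a results page appears; verify that assertion.
  await expect(page).toHaveURL(/q=kittens/);

  // Refresh the screenshot embedded in the article.
  await page.screenshot({ path: 'search-results.png' });
});
```

Run it with `npx playwright test`, and the documented procedure either still works or you find out before a customer does.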

Kate Mueller: [00:49:07] I'd imagine, also, if you've already got a QA team who's using tooling, you might want to tap into them for help here. Figuring out what they're using, and testing whatever that tool is, whether it's Playwright or Cypress or whatever, might also be one of the things that you try here. To see, if we're already using this for testing not-the-docs, could we also employ it for testing the docs?

Manny Silva: [00:49:34] Exactly. And QA teams, just like docs teams, feel chronically understaffed and underappreciated. If you go up to your QA team specifically and say, "I want to help you all do what you do, but with my stuff", they're going to be ecstatic, generally. So yes, go talk to them. They can also help you figure out workarounds to really tricky questions like, my docs or my product require multifactor authentication, how do I handle that? Well, that's going to be a very difficult question because everyone handles multifactor authentication differently. How does your QA team solve that? They have to, somehow. So make friends with them and figure out their answers. And if it means using their tooling, learning their tooling to test your docs, then so be it.

Kate Mueller: [00:50:30] Or if you don't have a QA team, this is what the internet is for. Find out how other people have done that thing. How have they solved that problem with this particular tool, with whatever that type of multifactor authentication or whatever the question is?

Manny Silva: [00:50:45] If you don't have a dedicated QA team, then talk to the engineers who most own QA. Just figure out whoever's doing the task. Just like many documentarians don't call themselves technical writers, there are lots of QA folks who don't call themselves quality assurance.

Kate Mueller: [00:51:02] All the years I did QA manually, I was technically either doing support or product or docs or generally all three. QA just worked its way in because I'd be like, "I want to work up documentation for this release before it goes live." Then I'd find stuff and I'd kick it back to the engineers to say, "I think this is broken. Could you fix that?" And then that gradually developed into, "Kate does manual QA." I was like, "No, I was just trying to write the docs, but okay."

Manny Silva: [00:51:32] That actually touches on one of the other things that I feel really strongly about. Technical content owners often end up doing informal QA all the time. Docs as Tests is a way of claiming ownership of that aspect of our jobs and formalizing it, and being able to articulate to the other people on your team, to your stakeholders, to your managers. This is a thing I do. This is the value I bring to the company, to the product, to customer experience, everybody. This is something that's important and we need to make time for. Because otherwise, you're just going to be sitting there slinging issues back to the product every time you run into a bug that should have been caught a month ago.

Kate Mueller: [00:52:27] This is a conversation that comes up so often in our community: how do I demonstrate the value of the work that I'm doing? This at least helps you tap into entire teams, entire languages that people do understand. QA is a thing people do understand. The value of docs is sometimes a little bit harder to quantify, but being able to say, "We found and resolved eight bugs before we ever released," that's eight fewer bugs that an end user ever had to experience. That is quantifiable.

Manny Silva: [00:53:03] There are specific ways you can even go about optimizing for that. Like I mentioned earlier about how I do it, I run all of my tests in production. On testing accounts in production, but still in production. But also, I run my tests in sandbox environments before releases even happen. Every time there is a push to sandbox, I run all of my tests against sandbox to make sure that no unexpected changes have gone out. And guess what? You find bugs. If you can do that, you catch bugs before there's ever a chance of a customer running into them. You have all of your screenshots already up to date by the time the release is set to go out. You don't have to do that after-the-fact testing and screenshot capturing. You can even take it a step further if you're really ambitious. If there is a shared development environment, you can test against that. You can test against it daily to be like, "Hey so and so, your code that you just merged yesterday broke this procedure. Did you mean to do that?"

Manny Silva: [00:54:18] With Docs as Tests, you get to further deflect questions from customer support because you don't have people simply reporting broken procedures and asking what the right one is. You get to continue to build and maintain customer trust in your product and its overall experience. You get a deeper relationship with your engineering and QA teams. You get a deeper understanding and appreciation of what you do across the entirety of your organization. Yes, it takes a little bit of effort, of elbow grease, to get this set up. This is a very new space, the way we're talking about it now. But with that elbow grease comes trust, comes understanding. And guess what? You don't have to do yearly docs audits anymore because the docs audits are done for you every day automatically.

Kate Mueller: [00:55:13] Except maybe your conceptual docs, those you would still need. But those generally do not change as frequently as your procedural docs do.

Manny Silva: [00:55:22] I would hope not.

Kate Mueller: [00:55:23] What a nightmare. While listening to that list, I'm reminded of something one of my parents' family friends used to say. Which was, "It does you good, and helps you too, besides the benefits you get out of it." Because there's just this lovely long laundry list of advantages to this approach, which I adore. That might be a good note for us to start to wind down the episode on. Manny, are there any resources that you think are helpful that you'd like to share? It can be related to Docs as Tests, it does not have to be related to Docs as Tests. It's a good opportunity for us to shout out. We've already mentioned Doc Detective, we've already mentioned your book, but what else is out there? If somebody really wanted to dive deep on this, what else might be useful?

Manny Silva: [00:56:11] On this specifically, there's also a Docs as Tests blog that I run that does not specifically have to do with Doc Detective. We talk about all sorts of different tools and how to use them for Docs as Tests more generally. Beyond that, I cannot stress enough how much benefit I have gotten from the Good to Great books. They are business development books. It's an entire series, don't stop with the first one. It's 25 years of business research about what makes organizations of all sizes great. They focus on businesses, but you can extrapolate the lessons down to even an individual docs team. What makes a great organization an enduring organization, one that is resilient to turbulence? I have listened to all of the audiobooks. They are fantastic and they have genuinely changed how I look at leadership and organizational structure as a whole.

Kate Mueller: [00:57:13] I love that, and we've not had that recommendation yet, so that's fantastic. Thank you for sharing it.

Manny Silva: [00:57:18] My pleasure.

Kate Mueller: [00:57:19] Also, what is a great piece of advice that you've been given? It does not have to have anything to do with tech writing or documentation. Just a great piece of advice, period.

Manny Silva: [00:57:31] This I wasn't given, but I learned the hard way. Be ready to say no. Be ready to listen to someone's idea and acknowledge what is good about it, but not move forward with it unless they are willing to collaborate with you on it. Do not let them throw work at you and expect you to do it because it was a novel idea they had driving into work that morning. If someone is willing to partner with you and put in some elbow grease, then yes, go for it. As long as it is the correct time, you have the correct resources, you're not straining yourself overmuch. But if it's not the right time, if it's perhaps half-baked, perhaps you're just not in a place to do it, then say, "That is an excellent idea. Thank you for your thought and contribution. I'm going to put that on my backlog." Or, "No, not right now. I have these other priorities that I need to focus on." Telling people 'no' is not a sign of disrespect. In fact, it is a sign of respect as long as you approach it respectfully because you are showing consideration for not only them and their idea, but for yourself.

Kate Mueller: [00:58:47] When I was doing product, I had no formal training in doing product management. I had formal training in project management, which is similar but different, and I had a ton of imposter syndrome. So I started listening to this podcast called 'This is Product Management'. I still wasn't very sure that I was in a space that made sense until I listened to an episode called 'Chaos is Product Management'. In that, they share what is a fairly widely known idiom within the product management space, which is something like, 'To build something good, you have to say no to a thousand great ideas.' Sometimes that is what saying 'no' is. It's saying, "I see how brilliant that idea is. It could be amazing, but I'm building this thing right now, and if I jumped at every single brilliant idea that somebody gave me, I would never actually finish building anything." So I think it aligns. I love the idea of being prepared to say 'no' or saying 'not right now' or 'not without help' or 'not without collaboration'. I love that. Other people's brilliant ideas do not necessarily mean you have to reorganize your own priorities.

Manny Silva: [01:00:15] Focus is imperative to any great achievement.

Kate Mueller: [01:00:19] Says the man who wrote an open source tool in five minute snippets in between sleepless, screaming children. Yes, I will definitely heed advice from you on the power of focus.

Manny Silva: [01:00:31] Hey, I went back for each and every one of those five minute snippets to tackle the next problem, thank you very much. I maintained intense focus while I was comforting children.

Kate Mueller: [01:00:47] Well hey, you appreciated the time you had, I would say. Lastly, Manny, since you are such a delight to talk to, if anybody who's listening to this wants to get in touch or follow what you're doing, what is the best way for them to do that?

Manny Silva: [01:01:03] I am active on LinkedIn, please join in conversations there. Feel free to connect, I like to think I'm pretty friendly. Otherwise, I try to be available wherever technical communication discussions happen. I'm on the Write the Docs Slack. I'm in a variety of different communities. You can also follow what I do, like I mentioned, on the Docs as Tests blog. Also, Doc Detective has a public Discord server, so come join the conversations and chat with other folks who are implementing Docs as Tests with what I believe is the only tech writer focused tool to do so.

Kate Mueller: [01:01:41] Or just vulture quietly until you're ready to commit and participate.

Manny Silva: [01:01:47] Yeah, you're more than welcome.

Kate Mueller: [01:01:48] Dabble your toes into the Discord server if you're there, folks. You don't necessarily have to engage. Manny, this has been a delight. Thank you so much for reaching out and saying you'd be willing to volunteer as tribute as a guest on the show. I'm so glad you came, I very much enjoyed this conversation.

Manny Silva: [01:02:07] It was my pleasure. Thank you so much for having me, Kate.

Kate Mueller: [01:02:15] The Not-Boring Tech Writer is co-produced by Chad Timblin, our podcast Head of Operations, and me. Post-production is handled by the lovely humans at Astronomic Audio, with editing by Dillon, transcription by Madi, and general post-production support by Been and Alex. Our theme song is by Brightside Studio. Our artwork is by Bill Netherlands. You can check out KnowledgeOwl's products at knowledgeowl.com. And if you want to work with me on docs, knowledge management, coaching, or revamping an existing knowledge base, go to KnowledgeWithSass.com. That's KnowledgeWithSass.com. Until next time, I'm Kate Mueller and you are the not-boring tech writer.

Creators and Guests

Kate Mueller
Host
Kate is a documentarian and knowledge base coach based in Midcoast Maine. When she's not writing software documentation or advising on knowledge management best practices, she's out hiking and foraging with her dog. Connect with her on LinkedIn, Bluesky, or Write the Docs Slack.
Chad Timblin
Producer
Chad is the Head of Operations for The Not-Boring Tech Writer. He’s also the Executive Assistant to the CEO & Friend of Felines at KnowledgeOwl, the knowledge base software company that sponsors The Not-Boring Tech Writer. Some things that bring him joy are 😼 cats, 🎶 music, 🍄 Nintendo, 📺 Hayao Miyazaki’s films, 🍃 Walt Whitman’s poetry, 🌊 Big Sur, and ☕️ coffee. Connect with him on LinkedIn or Bluesky.
Manny Silva
Guest
Technical writer by day, engineer by night, and father everywhere in between, Manny wears many (figurative) hats. He's passionate about intuitive and scalable developer experiences, and he likes diving into the deep end as the 0th user. Here are a few things that keep him busy: Head of Docs at Skyflow, a data privacy vault company; Codifier of Docs as Tests, a tool-agnostic strategy for keeping docs and their products in sync by using doc content as product tests; Creator and maintainer of Doc Detective, an open-source doc testing framework; AI development and experimentation. He's always looking for collaborators on projects, and he loves chatting with folks, so don't hesitate to reach out.