Develop a philosophy and flow for our internal docs #4353
Moving context over from #3379.
And @CowMuon writes in response:
Continuing the conversation from #3379. @CowMuon I think the situation is meaningfully different now than it was when I established the Notion wiki, and different too from when you inherited it.

The color-demarcation system (red/yellow/green) was built to serve two functions. First, it was envisioned as a way for engineers to flag (along with comments) stale or incomplete docs that they came across organically in the course of using the docs, via an easy-to-tag, readable-at-a-glance color system. Second, I imagined it as a way to take an "agile" approach to docs, where I could publish documentation for some part of a component/file, flag it as incomplete, and come back to it later.

This system failed, IMO, because of our overwhelming push for new feature development at that time, not because of the intrinsic/formal features of the system itself. After setting up the initial docs system, on the premise that I would be able to devote one day of my work week to maintaining and expanding it, that time allocation never materialized, and there was no one around to freshen up stale docs or add missing sections to incomplete entries. (This was in large part my fault, for not advocating strongly enough for its importance, but we were juggling many priorities.) The Notion docs system also failed, I think, because a large number of stubs were added (not by me) without any intent or possibility of their ever being built upon. My new presence as a dedicated docs-handler, in addition to our new PR system for approving all new docs additions (including any stubs), should markedly ameliorate both of these problems.

I'm not attached to the legacy system, and am happy to go other directions. But these are my two takeaways:

1. I don't think the formal features of earlier systems should take the blame for what was a failure of resource allocation.
2. An overwhelmingly repetitive takeaway of my research into docs systems (both best practices and failed approaches at major software companies, and also my team-specific one-on-ones this past week) has been that doc staleness is one of the biggest issues a successful documentation system needs to solve.

We could try debundling the affordances of a status tracker (e.g. R/Y/G), and solve them independently. I could (1) set up a system for engineers to report doc problems and codebase areas in need of documentation; (2) add some entry-scoped, minimum-viable staleness tracker like a "last certified fresh" timestamp; (3) keep track of desired changes/updates to various incomplete entries, so that, per request, I can iteratively publish docs in an agile way rather than waiting until they're perfect and fully complete. I'll continue my research into how other software teams handle this, and can continue pitching alternative systems. But these are the problems in need of solving, as I see them.
Preface: If this is not the place for this comment, please delete it. I think this thread and set of thoughts are great and true. I just wanted to add two questions here:
Good questions and yes, this is the place to have these sorts of conversations! I think there are some open questions still about our inline documentation system (most likely TSDoc), but we have a firmer grasp on the wiki entries right now, so I'll focus on those.
I think quantifying "time saved" may not be possible, unfortunately. We'd have to do a really thorough audit of new engineers' onboarding processes (and of the experience of outside contributors), and I'm not sure our sample size is large enough to get an answer on a timeline quicker than twelve-ish months. It might not be a bad idea to start keeping track of engineer question-asking and blockers (e.g. new engineers reaching out to more experienced engineers or team leads, asking open questions in Slack channels, getting stuck on assigned tickets due to lack of knowledge or code legibility). But I think that will be a slow process of figuring out how to do properly, and there is of course a cost in engineer and documentarian time to tracking this info.

As for maintenance, the hope is that engineers will only have to update docs in two situations: (1) in the course of working on a ticket, they realize that existing documentation is insufficient, so as they gather the information necessary to complete their ticket, they also store it in our wiki, and the documentation is included in their PR; or (2) in the course of working on a ticket, they realize that existing documentation will be made obsolete by their changes, and they update the obsolete sections accordingly. I've been broadcasting an open offer to engineers that I'll happily hop on a one-on-one anytime they'd like help writing documentation. So, to answer your question (unfortunately in a qualitative and not a quantitative way): I think our documentation system will be successful if these two cases are, broadly speaking, the only times documentation requires engineer time.
Good to know re: creating doc surface area for outside contributors. I'd be very curious to hear @CowMuon's thoughts on macro priorities. If we do decide to begin prioritizing this additional surface area, a good first step might be auditing opened issues, and looping me into any channels/conversations where outside contributors reach out to our team for help and guidance. I'm just not very exposed to that side of Common, so I'm not sure what our needs are.
That seems fair. On this, I have one item/suggestion/thought: ideally, a special-purpose bot (similar to this link). Happy to add more on this, but very vaguely: manual editing is frictionful. Ideally a flow would be
On this point
I think for now, it's out of scope; internal dev team documentation is an "MVP" of sorts. I would not want to add this to the current scope, but rather as a second "stage" of the project, once the docs, process, and maintenance workflows are established internally. Very related to this: Farcaster does a great job on conversations around "protocol" or "domain level" improvements. See here for example: https://github.com/farcasterxyz/protocol/discussions/categories/fip-stage-4-finalized. Of course, Ethereum is the gold standard here; see EIPs. As we work on the 2.0 workstream, there are places where there are questions around interfaces, and concretely memorializing changes to the interface/domain would be desired. That said, again, this is "out of scope," but worth keeping in mind as what's beyond.
I've been looking into tools that link documentation files to documented code files, so that if a code file changes, the relevant documentation is flagged for updating. I think that could help us track things.

In terms of non-manual updating: our biggest failure mode historically with docs is that they are low quality, and become more work and clutter to use and navigate than they save. On that note, my biggest priority with docs is making sure that they're consistently excellent, and that there is no fragmentary or poorly written documentation whatsoever in our wiki. I'm working on auditing and updating our existing docs under that banner.
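As a rough illustration of how such doc-to-code linkage could work, here is a minimal TypeScript sketch. Everything in it is hypothetical (the `DocLink` shape, file names, and the injected timestamp map are assumptions, not any existing tool's API): each wiki entry declares the code paths it documents, and an entry is flagged stale whenever any linked code path was modified after the doc was.

```typescript
// Hypothetical sketch: flag docs whose linked code changed after the doc's last edit.
interface DocLink {
  doc: string;       // wiki entry path
  sources: string[]; // code files/dirs the entry documents
}

// In practice, lastModified would come from something like
// `git log -1 --format=%ct -- <path>`; here it is injected as a plain
// map of path -> timestamp so the logic stays easy to test.
function findStaleDocs(
  links: DocLink[],
  lastModified: Record<string, number>
): string[] {
  return links
    .filter(({ doc, sources }) =>
      sources.some((src) => (lastModified[src] ?? 0) > (lastModified[doc] ?? 0))
    )
    .map(({ doc }) => doc);
}

// Example: Package-Scripts.md documents scripts that changed after its last edit.
const links: DocLink[] = [
  { doc: "wiki/Package-Scripts.md", sources: ["package.json", "scripts/"] },
  { doc: "wiki/Env-Variables.md", sources: [".env.example"] },
];
const mtimes: Record<string, number> = {
  "wiki/Package-Scripts.md": 100,
  "package.json": 250, // changed after the doc was last edited
  "scripts/": 90,
  "wiki/Env-Variables.md": 300,
  ".env.example": 120,
};
console.log(findStaleDocs(links, mtimes)); // ["wiki/Package-Scripts.md"]
```

A CI job could run a check like this on each PR and open (or comment on) a docs ticket for every flagged entry, which would keep the reporting non-manual without auto-editing any docs.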
x-posting Jake's comments from #4736 discussion:
One open question on my mind right now is how to handle our many package ReadMes.

Do we duplicate these ReadMes inside the wiki? This violates DRY and makes our docs prone to falling out of sync. (Slash, it's just plain annoying to keep two copies updated.) Do we keep these ReadMes where they are, and add pointers from our wiki? I don't hate this idea: we could simply add a TOC section for package ReadMes, and link from there.

What is the boundary between "information that belongs in a ReadMe" and "information that doesn't"? Right now, our Commonwealth package ReadMe has references not only to installing and starting the app, but also to basic database commands (which are also documented in our in-progress Package-Scripts.md entry), as well as a list of .env variables, linter instructions and frontend code style, configuring custom domains, using Datadog and Heroku, setting up local testnets, etc. Some of this information isn't even documented anywhere in our main wiki yet!

This ties directly into @dillchen's comment about our two docs audiences: the internal team and outside contributors. I've generally gotten the sense that our ReadMe primarily targets outside contributors, whereas our internal docs (although accessible to outside contributors) are internally oriented. But we are better off shrinking down the ReadMe, and linking out from it to more in-depth documentation on the wiki.
Going to use this space to think aloud about our "Certified fresh" system, and about metadata generally, since I am not satisfied with "Certified fresh" as a solution.
- What are the goals of tracking metadata?
- Why is git version history insufficient?
- Why is "Certified fresh" insufficient?
What we don't want: DRIs. Per previous conversations (incl. upthread) with eng leads, we are trying to avoid DRIs, which would be one way to deal with these problems.

Proposal: change log. In this model, we would retain the "Certified fresh" syntax as a label that an engineer or documentarian can add to docs that they have recently, successfully used or verified. However, this label would be one of several standardized labels that could be provided in an entry's change log, along with a custom description.

Example change log:
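A hypothetical sketch of what such an entry-scoped change log might look like as a markdown table (the label names, dates, and handles are illustrative, not a finalized vocabulary):

```markdown
| Date       | Label           | Author      | Description                                    |
|------------|-----------------|-------------|------------------------------------------------|
| 2023-06-01 | Certified fresh | @engineer-a | Followed the setup steps end-to-end; all work. |
| 2023-05-20 | Updated         | @gdjohnson  | Rewrote the database-commands section.         |
| 2023-05-12 | Flagged stale   | @engineer-b | Env-variable list no longer matches the code.  |
```

The standardized labels make the log scannable at a glance (like the old R/Y/G colors), while the free-text description preserves the context a future reader actually needs.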
This styleguide is one of the better sets of recommendations for dev docs I've encountered. Much of the advice agrees with our current approach, but bears reiterating:
See also its guide to ReadMes.
@gdjohnson what's the status on this ticket?
@jnaviask This is an ongoing project that I work on whenever I have breathing room between higher-priority tickets. It basically involves researching existing doc systems and best practices, and trying to think through what we're optimizing for with documentation, big-picture. I wanted to be able to "think out loud" in a way that leads and engineers could follow along with, be tagged in for questions on, and participate in. Let me know what you think the best classification here is; I don't want to gunk up the sprint/ticketing system.
Reposting a conversation with @Rotorsoft here, so that I can keep thinking through different approaches to organizing our docs:
Description
Separating this from the migration ticket, #3379, since I see this task as more of a slow-burning, long-term project. This ticket's comment section will allow us to keep track of our evolving thoughts on docs philosophy and processes.
What are some key areas that need to be thought through?
- Process:
- Content:
- Relationship with the rest of the codebase: