One of the senior engineers I worked with at my last job was very picky about the difference between verification and validation activities. In testing, most design engineers will focus very heavily on verification activities which seek to answer the question "does what I built satisfy my design intent?" In contrast, validation activities seek to answer the question "did I design and build a thing that actually solves the problem?"
It's very important when you're building something that you answer both questions in your test activities. So you might write your Readme for your new project, and then as a validation activity before you build your project you shop your Readme around to the target audience of the tool and ask them if the tool sounds useful. You might also elicit feedback about how features could be more useful, or if there are other features that could be added.
From your Readme and your initial validation, you start an initial architecture of the system, laying out the pieces you think you'll need and describing how they will interoperate in broad strokes. Other developers can join you at this point and, knowing your architecture, design and implement specific pieces of the tool. All design work references the Readme as a functional specification. And all of your verification activities verify that the design was implemented and that the implementation satisfies the Readme.
As you're iterating and building new functionality, you integrate periodically and perform validation and verification against integrated versions of your system. And the verification tells you that you're following your Readme, while the validation tells you that your users indeed find the tool that you spec'ced useful.
The only real difference between Waterfall and Agile in this model is the cycle time. Waterfall has a very long cycle time in the specify-design-implement-test-validate cycle, whereas Agile has a very small cycle time. And so the chunks of the system vary in scope as well.
This is similar to Amazon's "working backwards" approach to product development. Before writing the manual, they produce another important piece of writing: the press release. It's amazing to see that they talked about this idea a decade ago and still use the same approach to drive development today.
Yes, I suppose this is similar to how copying an existing product is always easier than coming up with a new product. After the point where you have the user's manual, you're basically copying an existing product.
It could be a good way to build something when you don't know what you're building. Writing a user manual is cheaper than coding, but may force you to get pretty explicit about what the user experience will be like.
Agreed. But a README does not have to be an entire manual; you could start with describing the program and your intentions with it, along with basic operations. :-)
Incidentally, Donald Knuth (whose literate programming idea has never quite taken off, mostly because of a lack of good tools and examples IMO) was doing this even before coming up with the idea of literate programming. He wrote basically a TeX manual (or one could at least call it a design doc), completely describing what TeX "does" (he wrote it in the present tense), how it works, etc., months before writing a single line of code. (See TEXDR.AFT and TEX.ONE, published in Digital Typography.)
Later, TeX was completely rewritten between its earlier version (the one written in SAIL, aka TeX78) and the current version (written in WEB, aka TeX82), but the manual he wrote for TeX78 is still very similar to the latest version of The TeXbook. Since declaring TeX "done", he's generally been willing to make only changes that don't change The TeXbook much.
I get the impression that is supposed to be somewhat derisive. That said, I think the tools are more usable than people give them credit for. They are just slower to get started with.
In particular, I've been reading The Stanford GraphBase recently and finding that as I get used to reading more of it, the programs are getting a bit easier to understand, in ways that don't litter my mental model with tons of surface complexity.
What I mean by that is that if you look at the javadoc or "function/method" base of a complete system, it is easy to lose sight of the whole. Granted, getting started with the larger texts of some other documents can have similar problems in reverse: it is hard to get a handle on where in the system you are when you start.
You got almost the exact wrong impression. You're correct that my comment was a bit derisive, but the derision was aimed at myself. Don Knuth is a giant in our field.
I also recommend reading some of the programs he has posted to his site. I have not yet picked up the rendering book done in a literate style; it's on my ambitions list.
Have you tried using emacs org mode? It has built-in support for literate programming, and source blocks are annotated with the language so you get language-specific behavior (syntax highlighting, structural editing, etc).
I took an OS class in college where either a TA or professor would read every line of C code in the OS you turned in. They required every function to have a Doxygen-style docstring. It was also a partner-based course. Writing those docstrings before implementing the functions not only helped them be more useful, but also helped my partner and me coordinate on features/interfaces before they were done. I think documentation in general is highly underrated.
So many times I've looked at a peer's code and had a hard time reconciling what the comments and wiki articles said with what their code was attempting to do.
My father was convinced that Quark XPress was written that way: once they had the core functionality figured out, they wrote the manual, then implemented it.
The thing is, this is interesting precisely because nobody is doing it anymore. Once everybody starts doing it, i.e. "human politics" gets directly involved at every level of development, everybody will try to escape this style to win back some development freedom and stay sane.
This is pretty much de facto in Python, and though not as widely used as I'd like, there is a testing framework called doctest which allows you to write tests in the documentation.
So for a given method or class, you have a couple of paragraphs which explain how to use it, with invocations that are run as part of the test suite.
Sometimes there end up being too many acrobatics for this to be as useful as I'd like, but the basic idea is really neat, IMO.
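To illustrate, a minimal sketch (the slugify function here is hypothetical; doctest itself ships in the standard library, and the >>> examples in the docstring double as tests):

import re

def slugify(text):
    """Convert text to a URL-friendly slug.

    Lowercases the input and collapses runs of non-alphanumeric
    characters into single hyphens:

    >>> slugify("Hello, World!")
    'hello-world'
    >>> slugify("  already-a-slug  ")
    'already-a-slug'
    """
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

if __name__ == "__main__":
    # Running this file directly executes every >>> example above.
    import doctest
    doctest.testmod()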
Also a feature of Rust: every example in your docs is tested along with your unit tests (unless you explicitly turn it off for a specific example). It's also a useful way to check that you didn't change your API by mistake, or generally to make sure your docs are up to date.
Doctest exhibits this idea, but it only really works for small, self-contained utility functions - e.g. slugify a string, round a number to 3 decimal places, etc.
Doxygen should never be your sole source of documentation, although a lot of projects use it that way. It does not communicate the design philosophy behind the code, so while it will answer whether you can use an API a certain way, it doesn't answer whether that was the intended use. This becomes important because once you step outside the programmer's intent, things are less likely to work and will be more brittle.
For a public library, it also results in the infuriating situation of knowing an API will do what you want but not knowing how to obtain a parameter or class it requires.
Doxygen is a tool that programmatically generates documentation from properly formatted comments. If you use Python, it is very similar to pydoc. We never actually used the generated documentation, but we basically used the format as a style guide. For functions it looks like:
/** @brief Prints character ch with the specified color
* at position (row, col).
*
* If any argument is invalid, the function has no effect.
*
* @param row The row in which to display the character.
* @param col The column in which to display the character.
* @param ch The character to display.
* @param color The color to use to display the character.
* @return Void.
*/
void draw_char(int row, int col, int ch, int color);
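If you use Python, a rough pydoc-style equivalent might look like this (a hypothetical translation on my part; the course itself used C):

def draw_char(row, col, ch, color):
    """Print character ch with the specified color at position (row, col).

    If any argument is invalid, the function has no effect.

    Args:
        row: The row in which to display the character.
        col: The column in which to display the character.
        ch: The character to display.
        color: The color to use to display the character.
    """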
I always write a README first for my projects, even when I am working on them alone. I also always create a Makefile, even if the project doesn't use make at all: I know I can enter any of my projects and run make help to find out what it is, and if I am browsing on gitlab/github, I can read the README.
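For the curious, a minimal sketch of one common way to get a self-documenting make help target (an assumption on my part, not necessarily the exact setup described above; recipe lines must be indented with tabs):

.PHONY: help test

help:  ## List available targets with their descriptions
	@grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*## "} {printf "%-12s %s\n", $$1, $$2}'

test:  ## Run the test suite
	@echo "no tests yet"

Each target carries a ## comment, and the help recipe greps those comments back out of the Makefile to print a summary.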
When creating the project, I put in the README a general description, project goals, a TODO list, contact info (email...), and requirements (like node, ruby...). Often I write the readme and leave the project for a few hours to a few days; when I come back, I read the readme and it should "enlighten" me.
The purpose of documentation is to keep all the stakeholders on the same page. However, each stakeholder may write the document from his/her own perspective. This leads to further confusion, eventually makes people hate documentation, and thus renders the whole exercise ineffective.
For example, the same product will be paraphrased in different ways by your customer, your marketing dept, your business team, your tech team, etc.
On the contrary, Amazon has been quite successful with their 'Working Backwards' model of development because they always document from the customer's perspective. This, I believe, is the right approach.
While not exactly what the article suggests, writing the documentation for my open-source game engine [0] actually helped me improve the API itself:
When I was writing out the examples and trying to see them through the eyes of a new reader, I noticed that some simple tasks took too many lines of code, and would be a turn-off to potential users of the engine.
I felt like I should get this down to N lines, and that led me to revise the API until it could be illustrated in "prettier" examples (though there's still some work to be done).
So yes, writing out the documentation and looking at it while "away" from the project (e.g. reading it on another device, an iPad in bed in my case, rendered on GitHub etc.) can definitely help you improve other parts of the project.
My role at our agency recently has been implementing a data layer for one of our clients, and I have found our current documentation combo of Google Docs/Sheets frustrating. The inconsistencies I keep finding between pages and the lack of examples really push back my productivity on the project, and I can see that the lack of good documentation will cause many troubles down the road. I am almost tempted to pause the project and recommend to my manager that we revise the documentation workflow before any further implementation. My only struggle is finding good software/examples that can help me convince the manager that this is worth the time and a good practice to carry out throughout the company.
This is how Flask actually started. Armin wrote an April Fools' joke but had no code (or at least not much) at that point, and nobody looked at the code! They just liked the concept and the documentation, so he had to implement it afterwards :D
I just got through https://www.amazon.com/Specification-Example-Successful-Deli... which pitches a common structure (Given-When-Then) for describing behaviors and getting them out of your ticketing systems, which are not designed for long-term knowledge explanation but rather for describing active work in the short term.
BDD is almost a poisoned term at this point because it's become associated with tooling and opinionated holistic processes. But if you think of Specification by Example as readmes with a Given-When-Then structure, then you have a strategy (document before writing) combined with a language definition to ensure your strategy is executed at the right level of detail. That solves the entangled problem of what to do first (document) and at what level of detail (enough to describe all the input behaviors of whatever I'm working on).
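To make that concrete, here is a minimal sketch of Given-When-Then as plain structure in an ordinary test, with a hypothetical Cart class and no BDD tooling at all:

class Cart:
    """A tiny in-memory shopping cart (hypothetical example)."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_adding_an_item_updates_the_total():
    # Given: an empty cart
    cart = Cart()
    # When: a single item is added
    cart.add("book", 12.50)
    # Then: the total reflects that item's price
    assert cart.total() == 12.50

if __name__ == "__main__":
    test_adding_an_item_updates_the_total()
    print("ok")

The structure lives in the comments and the test's shape, not in any framework.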
Right, and the problem is that Cucumber and Gherkin lead people to try to be very clever, and then tests become some ridiculous NLP problem. The value is the readme file over a ticket, and the language definition, not the gymnastics of tying human language to an executable test file.
Many people see this as a backwards, top-to-bottom approach, while I consider it very much a bottom-to-top approach.
In all the teams I worked in, the most successful were the ones that insisted on writing wiki/readme pages for each feature and task...
Why?
1) the project manager (the person who knows why we're building it) has to do a fair amount of explaining to the developer
2) it's easy to verify that the developer understands what he/she needs to develop
3) everyone else on the team can easily keep themselves up to date
My template was something like this:
1) what has to be built
2) for whom - who is the typical user and scenario
3) who will build it and on which git branch
4) what app parts will be changed
5) any information related to deployment (database changes and similar)
Doing it upfront meant a lot to the team in terms of discussion and clarity... I would just call this Wiki development more than readme development :)
The most important aspect of software development is getting the requirements right. I suppose this is where Readmes can help. However, I'm astonished that many existing projects don't even document their underlying requirements properly!
I wish I was a better and quicker writer. I am pretty good at designing things and coding them but it's really hard for me to put the design on paper. Takes me forever. I know other people who are really good at this and it's an invaluable skill especially if you have a lead role.
I totally agree! Especially for highly experimental features/libraries. I am trying to finish a small new library today, and I'm basically following the docs I wrote a week ago:
https://github.com/franciscop/premonition
Code examples are very important in these situations.
The main issue I've found is that sometimes I like the docs I wrote, but I never do the implementation and feel like I wasted my time by documenting too much. In those cases it might be because it was just a thought experiment, a wild idea that I had to write down, or because I just have other priorities.
I am still trying to find a good balance for a requirements format. I have looked at Gherkin in a few books on specifications. The Microsoft Press book on software requirements lists every possible way to represent requirements. I am working through the requirements book Telling Stories by Ben Rinzler. I still have not found anything ideal that would be easy enough to teach to a business side that has no training in writing requirements and still provide enough detail to the technical side. I am getting close to settling on some combination of diagrams, user story statements, and use cases with Gherkin.
What's wrong with English, literal sentences and paragraphs, or a few bullet points? If you can write the readme or user-docs, you have written a significant amount of your requirements. If you can add the various edge cases, you've probably written about 99% of your requirements.
On a higher level, this reminds me of Charlie Munger's Inversion mental framework, best described in a Farnam Street post [0]. The TLDR is to not just think of all the things needed to achieve X, but also of what could make us NOT achieve X.