I hope they keep an eye on contribution statistics after this change.
I've said it before, but I think this move was a mistake.
They've thrown away the benefits of an incredible wiki-based platform (where changes were pretty much instantaneous and any of us could easily make a difference to the docs!), the (actually pretty decent) WYSIWYG editor and an overall frictionless editing experience, and replaced it with what? Some (not even Markdown) document files in a GitHub repository. I honestly believe this will dissuade people from making small, quick fixes and ultimately drive away contributors.
When I last brought this up, I was told that there were a number of contributors who had been put off by the wiki-based nature of the previous iteration and who would now be excited to contribute using Git.
I'd honestly be interested to hear from any of these people. Given that you could seamlessly log in with your GitHub account, easily author your documentation and one-click preview your changes, what exactly do you gain from having the editor gone and now being required to do all of the preview/edit/commit steps manually in a local docs environment?
Addendum: wtf, this change (or a change that happened prior to the final dump from Kuma) appears to have lost almost the entirety of one of the articles I have contributed.
Losing the entire history by starting with a dump is ridiculous. Authorship would be somewhat tricky (I suppose they could use username + email on file, but might run into issues where people don’t intend to make their email addresses public), but it’s not like it would be very hard to migrate the diff + message history to git. I guess they either don’t want to start the repo with a gazillion commits (actually that would be a good stress test for the system, but would make cloning harder for people who don’t know how to use --depth), or just don’t care?
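For the sake of argument, a minimal sketch of that replay, assuming the dump exposes per-revision content, author, timestamp and edit summary (the layout and paths here are entirely hypothetical):

    # replay each wiki revision of one page as a git commit
    for rev in dump/page/revisions/*; do
        cp "$rev/content.html" docs/page/index.html
        git add docs/page/index.html
        GIT_AUTHOR_DATE="$(cat "$rev/timestamp")" \
        git commit --author="$(cat "$rev/author") <redacted@example.org>" \
                   -m "$(cat "$rev/summary")"
    done

Whether those author emails get redacted like this or mapped to real accounts is exactly the privacy question above.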
Btw: now you need to clone the entire repository and run a dev server just to edit and preview a single page (let’s not get into super involved ways to clone only part of a git repo). You can avoid cloning by editing with GitHub’s web editor, but you can’t preview that way.
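At minimum a shallow clone keeps the download manageable; something like this (assuming the content lives at github.com/mdn/content):

    # fetch only the latest snapshot, not the full history
    git clone --depth 1 https://github.com/mdn/content.git

You still need the dev server to see rendered output, though.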
> I guess they either don’t want to start the repo with a gazillion commits
They could've cut it down to e.g. the last n changes to a page if this was a concern.
Honestly, the whole migration feels like it happened before things had been properly thought through. There are pages lying around with {{IncludeSubnav{"")}} directives in the text that don't render properly - and they put this into production.
I can't speak to MDN's tooling or culture, but... it sounds like the transition that many FOSS projects made from wiki-workflow docs (Confluence/Mediawiki/etc) to PR-workflow docs (ReadTheDocs/Github/mkdocs/etc).
Anecdotally, for the project where I contribute most... the issue of fewer contributors is real. But you do still get contributors, and PR-workflow makes other aspects easier (like broad clean-ups/reorgs/scheduling/versioning). For the contributors who do come, you also get the opportunity to socialize and engage with them during review. Overall, there are drawbacks/risks, but I'd say it was a net improvement (wrt quality/clarity of the final docs).
Maybe look at it this way: both workflows provide a way to organize open/community docs. Both workflows have positive role models. In both, you need capacity+interest for editing, for socialization, etc. If you tend to these, you can do well. But if there's neglect... then that's where you'll see the starkest differences:
* In wiki-workflow, the likely symptoms of neglect are draft-quality content, bad prose, quirky TOC, drive-by edits that are out-of-place, etc.
* In PR-workflow, the likely symptoms of neglect are slow review/feedback, older content, would-be contributors who can't assimilate to the workflow, etc.
> what exactly are you gaining from having the editor gone and now requiring a local docs environment to do all of the preview/edit/commit steps manually
Let's be real: internet points in my GitHub profile contribution heatmap.
Kidding aside tho, as a first-time contributor back then, the non-Markdown editor felt clunky to me; the spacing and the highlighting just seemed unnatural. You're right that fixes are no longer instantly visible tho, but maybe that's a fair trade-off for the extra QA.
> I think this move was a mistake. They've thrown away the benefits of an incredible wiki-based platform
Ideally, you want to cater to both the CLI-centric workflows of developers/build systems and the WebUI-centric workflows of everyone else (the old system). The Mozilla blog posts mention the JAMstack, so I'm guessing the intent was to port the server-side Django apps/editors to a browser-side framework like React or one of its alternatives.
If the design and implementation of the new platform had been perfectly executed, then only the new functionality would be apparent; both systems are essentially "wiki-based". Platform transitions are rarely seamless, as your lost content/metadata clearly shows.
> I've said it before, but I think this move was a mistake. They've thrown away the benefits of an incredible wiki-based platform (where changes were pretty much instantaneous and any of us could easily make a difference to the docs!), the (actually pretty decent) WYSIWYG editor and an overall frictionless editing experience, and replaced it with what? Some (not even Markdown) document files in a GitHub repository. I honestly believe this will dissuade people from making small, quick fixes and ultimately drive away contributors.
Yeah, I really agree with this. Having built a website which relied on people submitting PRs to add their content on GitHub - I wouldn't do it again. It's a lot of friction for both the maintainers and people contributing.
Also, keep in mind all the tooling GitHub offers. Comments with reactions, quoting, permalinks, blocking, etc.
Yes, it can be built in-house but would cost a fortune.
GitHub has incredibly high friction to get started. And I don't see the antipathy toward WYSIWYG editors; they're one of the best human interface inventions we've come up with. Having to work through a level of indirection via Markdown syntax is a pain, on top of remembering git commands.
What's cool about git as persistent storage is that you can use VS Code, Emacs, Vim, or Notepad. Or you can write a little web app that serves a WYSIWYG editor on localhost:8888 or Heroku, have it dump to the file system, and then you just need a script that takes care of submitting the PR from there.
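That last script is genuinely small; a sketch using the gh CLI (branch name and message are placeholders):

    # after the editor has dumped its edits to the working tree
    git checkout -b improve-regexp-examples
    git commit -am "Clarify an example on the RegExp page"
    git push -u origin improve-regexp-examples
    gh pr create --fill    # title/body are taken from the commit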
What I'm getting at is that we're conflating a storage system (git) with a user-friendly interface. Ideally we'd get the ubiquity and interoperability of git with a WYSIWYG editor on top. A website that lets you edit files in the browser and is backed by a git repo could be helpful. But maybe in some edge cases it'd still be too complex, and using git would be overkill there.
This, plus there are many server-side WYSIWYG Markdown editors that hide from the end user the fact that they're even writing Markdown. This is really the future IMO; you get the best of both worlds. Hopefully they'll add some extensions to Markdown, though. I hope the community will create even more as well.
I don't know if I hit an exception, but I added a small change (a single clarifying example) to the RegEx page, and it sat in the dev environment's history for ages before actually going live.
Also keep in mind that, given MDN's target audience, especially the target contributors, GitHub is less of a barrier than it would be for other platforms.
But yes, of course it'd still be good to keep an eye on the contribution statistics to validate those assumptions.
I agree. I've noticed time and time again how git-based workflows discourage me personally from making small one-time changes like typo fixes, compared to a wiki or similar. It's just so much more overhead to have to start up a CLI, edit the document, push a commit, open a pull request, etc…
This is not to say there aren't advantages to the git-based workflow, and it might be a valid choice for a technical audience like this one, but to claim it as a "huge advantage in terms of contribution workflow" seems questionable.
1. You can have both. Researchers edit parts of our site (Our World in Data) via a GUI, but under the hood it makes a commit and pushes to GitHub. My guess is MDN just hasn't gotten around to that yet?
2. GitHub and GitLab are a lot better nowadays at letting you edit Markdown docs via the web and preview the results in a few clicks. Not quite as frictionless as one click; however, if you're an editor constantly bombarded by spam, it probably saves a lot of time, since you know who is making the change and they're forced to add an explanation via the commit message.
1. That's right! But it's not really on the roadmap either. One can hope that someone in the community writes a fancy web app wrapper around GitHub's API or something.
2. Yeah, it's really, really convenient to submit a quick PR entirely in the GitHub web UI. But I admit, we have some work to do to make the previewing experience a bit better.
> They've thrown away the benefits of an incredible wiki-based platform (where changes were pretty much instantaneous and any of us could easily make a difference to the docs!), the (actually pretty decent) WYSIWYG editor and an overall frictionless editing experience, and replaced it with what? Some (not even Markdown) document files in a GitHub repository. I honestly believe this will dissuade people from making small, quick fixes and ultimately drive away contributors.
I'm only a sample of 1, but I once tried to edit a page and just couldn't find out how to do it. So I'm not sure their change will be any worse.
Interesting that they are moving to a git-based approach. In KDE, we did something very similar 3-4 months ago when we moved most of our development tutorials from various sources (wikis, an old PDF book, ...) to a new Hugo-based documentation platform at https://develop.kde.org/docs
The problem was that our documentation was bit-rotting in the wiki, and the previous MediaWiki setup wasn't working so well for keeping formatting consistent across pages, reviewing changes and organizing the content.
With the new Hugo-based software, we now have a few very helpful macros to add links to Doxygen-based documentation and include snippets of code from the library repository, plus automatic generation of navbars and navigation.
And with the GitLab web editor, we also get an easy way for people to contribute. As before, editing the content is still just one click away.
Nice. We are currently moving Our World in Data from MySQL to git as well.
Since 2017 I've moved every CMS I'm responsible for from MySQL to Git and haven't looked back. You can still preserve any frontend experiences you want, but you get to drop a huge dependency and a lot of code, and you gain a ton of new capabilities (faster syncing, easier backups, forks, patches, full audit trails, etc.). You never need to worry about migrations again or build your own change-control logic.
Yes, in KDE we also moved most of our websites to static site generators like Jekyll or Hugo. Of the more than 20 websites, only a few are still using Drupal 7 or a custom PHP framework. The last and biggest website to be migrated, https://kde.org, was a big challenge because it is a 20-year-old website with thousands of pages and translations.
But my efforts are finally paying off: we saw a 33% increase in the site's visitor count, and the current codebase makes it so much easier to work with and add new content. And this time, I blogged about it: https://carlschwan.eu/2020/10/30/kde-org-hugo.html
When I click "Edit this page" I am asked to sign in. Is there a reason for that? I'd expect to see the source and be able to submit a PR, as with a normal repo on GitLab.
Just curious: What is the benefit/reason for the readfile shortcode for displaying code snippets as opposed to just pasting the code in and using the typical syntax highlighting?
I suppose if the actual source file changes then the page would be updated as well, which is neat, but if you're specifying certain lines to read from, that could easily become a problem.
Considering how few shortcodes you're using I just felt like there must be a good reason to use it in this way and I was curious why.
The readfile shortcode was added because this sort of code inclusion was used previously, and when converting the documentation I needed something with similar semantics. Another advantage is that it lets me have compilable code examples that can be easily downloaded using GitLab's support for creating zips from directories[0].
More interesting are the doxysnippet shortcode[1], which extracts snippets from the source code, and the custom rendering of links[2], which allows this sort of markdown link: [Overlay](docs:kirigami2;OverlayDrawer)
Really glad to see this project continuing on. I do have concerns about being limited to Markdown syntax, though. While Markdown has its place on small to medium-sized projects, its simplicity quickly becomes a hindrance, and you end up falling back to HTML inside Markdown. I could see something like reStructuredText or AsciiDoc being a better fit, if not a full-blown enterprise-style DocBook or DITA system.
Not a big fan of the logo. It is cool, but it doesn't really say "web documentation" to me.
(edit) Actually no, I think it's the size and style of the logo. Singling out the top of the spear and using that would be cool, but as is, it reminds me too much of a fighting game character.
Is it? The point of Markdown is to avoid having to write out HTML tags for formatting, and in the end, HTML will be one of the target outputs anyway. Even sites that accept HTML directly do so through a JavaScript WYSIWYG editor.
The vast majority of docs.microsoft.com is written in Markdown. That project seems both very easy to contribute to and produces a great docs site.
MSDN has been around for much longer than Markdown has even existed, so I highly doubt that. Maybe the more recent stuff, which, by the way, is much, much worse from a technical POV than their older technical documentation, sadly. Compare the mess that is the .NET Core / ASP.NET MVC documentation to the mostly excellent WINAPI documentation...
Having said that, the https://docs.microsoft.com flavor of Markdown does allow for embedded HTML, like GitHub's, and enough pages still use that feature that the conversion to Markdown is arguably incomplete. It isn't a big issue in practice, however; you can update markup from HTML to Markdown along with your other changes.
* As a recent-past engineer on the Windows team, I have a rather lower opinion of the Windows API docs than you. :) The Windows developer platform has not had enough dedicated technical writers for years; our developer content teams are mainly editors of engineer- and PM-written original docs, which can lead to API doc sets with badly written pages, important missing information, or references to Windows-internal developer tools. I tried to channel my frustrations into correcting and extending my coworkers' writing, or into gently asking them to fix their omissions when I didn't have the free time to spend on the needed research.
Could you explain what you mean by hindrance? For a developer documentation site, I'm having a hard time thinking of a use case that Markdown doesn't support.
I'm having a hard time thinking of a developer documentation use case with any substantial structure that Markdown (by itself) supports well.
Any form of structured information, for example API functions and their argument types, properties of a class, or tables of historical compatibility, is already stepping outside Markdown.
You're left with either writing everything out in prose and having error-prone parsers cross-check pages against each other, accepting badly cross-referenced information that goes undetected, or using something that is Markdown plus additional markup for semantic data.
Markdown also doesn't come with typesetting features you may want, even something as basic as centering text.
Other things that are critical features for development documentation, like tables, are extensions which may or may not completely break if you ever change your Markdown renderer.
If you need more typesetting control than what Markdown allows, dropping down to HTML is always an option.
To the second point, while it's possible that content could render differently if you change renderers, I would presume the folks at Mozilla are aware of this and won't do it unless it's absolutely necessary.
Do we actually know which renderer MDN is using? I don't see this mentioned in the post. I would argue that although alternate implementations exist, Gruber's `markdown.pl` is Markdown (whatever it does, warts and all) and deviating from it is generally a bad idea if you can avoid it.
> If you need more typesetting control than what Markdown allows, dropping down to HTML is always an option.
Is it? In the public setting? HTML comes with things missing from Markdown that you want, sure, but it also comes with a full scripting environment, the ability to arbitrarily inject code and so on.
If you can drop to HTML, you have to have a sanitiser process. So then you're dropping to some-unknown-subset of HTML if the process is automatic - or you've failed to reduce the amount of effort being put on the editors if it's a human one.
For a place like MDN, this really isn't a problem.
> If you can drop to HTML, you have to have a sanitiser process. So then you're dropping to some-unknown-subset of HTML if the process is automatic - or you've failed to reduce the amount of effort being put on the editors if it's a human one.
I presume Kuma had - and Yari will have, if it doesn't already - some way to prevent unsafe injection and other dangerous things.
This isn't to say that the issue doesn't exist. Instead, I would expect it to be the exception rather than the rule. After all, Markdown and its derivatives owe their existence to this very phenomenon.
Well, the beauty of Markdown is that it doesn't specify a lot of functionality, which makes it easy to understand. But when you have many, many pages of documentation, you will need to start linking them together in meaningful ways.
This is where it becomes useful to have tools that can generalize things. For example, say you link to another page in your documentation repository: in Markdown, you create a link with either an absolute or relative path, specifying the filename and optionally an anchor on that page. If you later reorganize the folder hierarchy or rename a page, you'll need to find all the references and update them manually.
Something like reStructuredText has the "interlink" module, which lets you modularize your pages and use symbolic names instead of relative or absolute paths. Now, there are pros and cons to each approach; e.g. if you have a good set of tools, a global search and replace across documents can deal with this too. But having the flexibility of things like symbolic names and macros can make things much more manageable.
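For the search-and-replace route, even something this blunt covers most renames (the paths are placeholders):

    # rewrite every reference to a page after moving it
    grep -rl 'guides/old-page' docs/ | xargs sed -i 's|guides/old-page|guides/new-page|g'

Symbolic names just make that whole step unnecessary.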
Of course, this is a double-edged sword, in that you can keep customizing and adding macros until you have a monster in and of itself, but that can be said of any tool.
I tend to see Markdown as perfect for standalone documents, and it's especially good for formatting internet comments and the like.
Tools like DocBook and other XML processors attempt to provide the maximum amount of flexibility, at the cost of a steep learning cliff and lots of boilerplate. If implemented well, though, they can allow things like conditionally including parts of the documentation based on tags or output formats, but they definitely require extensive tooling, as opposed to Markdown and other formats that are meant to be readable in their source form.
One of the challenges I see with this logo is that it can only be used vertically, and only quite large. Here's a quick mockup of how it would look as a horizontal stack: https://i.imgur.com/S8RtqS2.png
With this arrangement, the character is leaning and pointing away from the words, and scaling the character down has made it difficult to make out the details of their pose (is that a hand? is the face empty? etc.).
Are we sure that's an actual logo meant to be used for anything besides maybe the repo, considering it's a representation of "Yari", which is just a codename?
The previous backend was codenamed "Kuma" and was represented with a bear, but wasn't displayed anywhere on the actual MDN platform.
Yes, that's definitely the balance to maintain: ease of entry vs. tooling to manage the project as it grows. The main reason I see Markdown being a hindrance here is the size of the HTML spec and the number of pages it will need.
I think this is why a lot of sites take Markdown and then add their own extensions, like "GitHub Markdown" among many other flavors. That's definitely one route, but I see something like reStructuredText or AsciiDoc as more mature and interoperable while still being relatively easy to master, in the same way as Markdown. And since both can produce DocBook output, adhering to that industry standard vastly eases any future migrations.
It's not that easy. We have 60+k pages carried over from 15 years of organic evolution. It's unstructured and messy.
A move away from HTML to some "more popular syntax" (like Markdown) is NOT easy.
I'm on MDN every day, and find it an essential resource! There's been an explosion of new Web APIs: atomics, devices, filesystems, crypto, payments, xr, and even proposals for neural nets in browser. Plenty of opportunity to become a domain expert and contribute live code ;)
- how is it acceptable to run off github for a project that wants to encourage authoring sites on a supposedly federated medium (the web)?
- do we really need an enormous "web documentation project" for something entirely man-made, and which once set out to facilitate easy self-publishing?
Even thinking about these and similar questions suggests that, rather than attempting to document the craptastic, overcomplicated web, we're probably better off leaving the web behind for good and concentrating on defining HTML+CSS subsets (such as for purely static docs, for light content apps, and so on), to distribute simple static text via alternate p2p protocols. The W3C and WHATWG have failed miserably to do so.
> - how is it acceptable to run off github for a project that wants to encourage authoring sites on a supposedly federated medium (the web)?
You may have a point here in reality, but "technically" it should be very easy to push to multiple backends (GitLab, for example). So I could see lock-in to GitHub never being more than a hypothetical problem.
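Git even supports this out of the box; for example (the GitLab URL is made up):

    # after this, every `git push` updates both hosts
    git remote set-url --add --push origin git@github.com:mdn/content.git
    git remote set-url --add --push origin git@gitlab.com:mdn/content.git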
> do we really need an enormous "web documentation project" for something entirely man-made, and which once set out to facilitate easy self-publishing?
What point are you trying to make? Should Oracle throw out its docs for Java because it's a large language that's man-made and designed for easily writing software—but lots of folks here think it's not good?
> start to concentrate on defining HTML+CSS subsets (such as for purely static docs, for light content apps, and so on), to distribute simple static text via alternate p2p protocols
And this will still require docs, and tutorials, and information about the P2P protocol, and how it's kept secure and ~anonymous, and how to facilitate discovery, and how to avoid a network partition, and how to build non-static tools like search engines, and how to securely interact with those tools, and how to do archival work, and a million other things that make that system an ecosystem...it's not easy to build a resilient distributed network that actually works.
The fact of the matter is that people use the web whether you think it's good or not, and having decent docs for it is important. Nothing is stopping you or anyone else from building an alternate web, but so far there hasn't been anything that people actually care to use.
That maybe our time is better spent taking this chance to simplify the languages of the web along use cases, rather than documenting the whole of it in an encyclopedia. The Java example is a good one, since IME tools such as javadoc, and just writing down the purpose of something and why and when it was introduced, immensely help with cleaning up your API surface. The web as it is is way too complicated when it really doesn't have to be, as witnessed by browsers vanishing.
An immediate benefit of using git over the old wiki platform - or any wiki platform, for that matter - is this: no matter what happens with whoever's in charge of maintaining Yari, the wider community retains the full ability to keep using and improving the content, including the ability to see past changes.
Before I read this post, I'd never considered the fact that using git instead of an SQL database makes it easier to collaborate on documents. Interesting!
Most document editing software (including wikis) has a pretty mediocre collaboration UX. PRs/MRs are a genius idea and I wish git was used more often for collaboration on documentation.
git has a high barrier to entry. I'm not sure you can get non-technical people to use it. Although I agree that once you know git, collaboration is great.
I agree. I know some people who are good at writing who were going to opt out of another documentation project if Git was used as the means of collaboration, because they felt it was an unnecessary barrier to entry when a wiki is so much more straightforward.
In the end we did use Git for the documentation (pressure from the devs), and in practice the non-devs tried to stay involved but ended up letting "technical" people do all the Git and diffing stuff, so it remained a practical barrier to involvement in the documents.
People who are used to Git and development in general tend to forget that a lot of tools we take for granted, including text editors, terminals and command lines, are completely alien to non-developers and not everyone wants to learn that stuff for just one project.
I've known developers to struggle with Git too (when they don't use it often, or it's only used for 1 out of 10 of their projects), so I agree it can be a fairly high barrier.
I'm assuming the intended use case will be clicking a shortcut to GitHub's integrated "edit & open a pull request" function embedded in the MDN website pages, and writing one's edit in the GitHub editor GUI, which isn't bad.
Anything closer to making a manual GitHub pull request would be incredibly unergonomic for someone without an existing CLI setup for GitHub (i.e. most developers).
I built a wiki based on Mercurial years ago; it's not too hard to just build a web interface in front of it. It was mostly a thing to learn me some Ruby at the time. In many ways the VCS is just an implementation detail, but I don't know if the MDN people are planning anything of the sort (then again, MDN is for tech people, so I don't think it's a problem there).
The big upshot is that all the hard problems, like merges and history/diffs and whatnot, are solved for you by the VCS. I don't think MediaWiki lets you "blame" anything, for example, something I've often wished for. It's just a whole class of problems you don't have to think about.
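With the docs in a git repo you get that for free; e.g. (the file path is illustrative):

    # show who last touched each line of a page, and in which commit
    git blame files/en-us/web/javascript/index.html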
The problem with crowd-sourced docs is that nobody is interested in writing out very technical details, especially for new APIs; those are just left in the dust forever. Popular stuff gets covered to death, but good luck finding the exact params for a WebRTC SDP offer.
Can you scroll to the bottom of that page and use the "Report a problem with this content on GitHub" link?
The raw content is not Markdown. It's HTML with some macros (called kumascript macros).
And yes, you can "host MDN locally" now. Before, you had to write a web scraper; now you can just iterate over the files after a `git clone`. But you might need Yari to build the raw HTML into fully formed HTML that you can open in your browser.
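For example, after a clone you can survey the tree directly (assuming the content lives at github.com/mdn/content, with one directory per document):

    # count every document in the checkout
    find content/files -name index.html | wc -l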
No. MDN is using a Microsoft platform as a front end to attract contributors to their git repo, as most open-source projects do. If a company uses Gmail, does Google control the company? GitHub is a contractor for ICE; does Microsoft control US immigration enforcement?
I don’t like Microsoft owning Github but there is a broad, easily-seen line between reasonable objection and this absurd overstatement.
> If a company uses Gmail, does Google control the company?
If the only way to interact with coworkers, customers, investors, etc. is through Gmail, then, well, yes. If you can't be an employee without a Google login, then, ummm, yes. If Google can suspend your account and lock you out of your workplace, then yes. If a Google outage means you can't work, then yes. If Google loses some email and it means you can't continue a product launch... If a Google breach means that your company has been compromised...
On the other hand, if GitHub is just one avenue for contribution, well, OK. Is this mirrored on GitLab or git.mozilla.org or somewhere else where people can contribute without a Microsoft account? If GitHub fails, is MDN just a flip of a switch (or less) away from routing around it?
MDN is one of the most important websites in my day-to-day life, so a change like this makes me nervous. Hopefully it is for the better - but I'm worried it might be for the worse.
This is just the beginning. Finally we can do the kind of web performance improvements we couldn't do on the old platform. And to boot, we gained 10-20% on the First Render metrics.
And because of the whole new way we deploy, we went from a 27% CDN hit ratio (on popular pages) to 96%. That means ~70% of the time you now gain about 300-500ms on the initial load.
And besides: before, a cold cache miss in the CDN meant a backend server had to render the page, whereas now a cold cache miss just means an S3 lookup.
It's just a vectorised image of Yukimura from Samurai Warriors[1] - this image, to be specific[2]. I don't think there was a designer; just a developer making a logo out of a character they like (or maybe it was a random image search). Pretty common in open source.
Isn't that just straight-up copyright infringement? I understand that it's common in developer tooling to pick a character and use them as a logo, but it seems like a bold move to do that with outward-facing tooling at Mozilla's scale.
Anyone else find the link underline CSS to be distracting? My brain sees it as a strikethrough due to it being high up like that, sort of messes with the reading flow.
Huh, on my desktop there isn't—thanks for pointing that out. My tablet uses Opera; I guess it doesn't support something about those links, but I'm not sure what.
Just wondering: why keep MySQL and not move to PostgreSQL?
I have been migrating all my projects to PostgreSQL, out of personal belief (most of the time), and haven't had any issues, so I'm just wondering about the thought process.
PostgreSQL is better; we're all aware of that. But it's minuscule in importance, especially if you use ORMs and other decent tooling. And the move to git isn't MySQL's fault; it's about something much bigger than that.
Umm... it's about Mozilla's rebuild of their system. I think it's a valid question, given that they changed the architecture, languages and rendering but kept the SQL server the same. So no, my question is 100% relevant.
I'm guessing the downvotes are the community just getting a bit toxic about DBs.