No. Ext4 doesn't do data journaling by default. Even when enabled, what's written to the journal is the new data that's about to be written, not the file's current contents.
1) write to journal "I'm going to overwrite this file with this data (zeros)"
2) commit journal
3) write data to file
This is typical for journaling filesystems: step 3 can be interrupted by a crash and replayed later (by re-reading the journal).
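A minimal sketch of that redo-style write path in Python (the journal format, file names, and replay logic are invented for illustration; this is not ext4's actual on-disk format):

```python
import json
import os

JOURNAL = "journal.log"   # hypothetical journal file
DATAFILE = "data.bin"     # hypothetical target file (must already exist)

def journaled_write(offset: int, payload: bytes) -> None:
    # Step 1: record the intent, i.e. a redo record: "this data goes here".
    record = {"offset": offset, "data": payload.hex()}
    with open(JOURNAL, "a") as j:
        j.write(json.dumps(record) + "\n")
        j.flush()
        os.fsync(j.fileno())  # Step 2: commit; the record is durable now.

    # Step 3: apply the write in place. A crash here is harmless:
    # replay() will simply redo the write from the journal.
    with open(DATAFILE, "r+b") as f:
        f.seek(offset)
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())

def replay() -> None:
    # Crash recovery: re-read the journal and redo every recorded write.
    with open(JOURNAL) as j, open(DATAFILE, "r+b") as f:
        for line in j:
            rec = json.loads(line)
            f.seek(rec["offset"])
            f.write(bytes.fromhex(rec["data"]))
        os.fsync(f.fileno())
```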
For filesystems with CoW data (ZFS, btrfs), the in-place data will probably not be overwritten.
Before evaluating the claims myself: The shred manual specifically claims that it is "not guaranteed to be effective" on "log-structured or journaled file systems", and specifically calls out ext3 in data=journal mode.
I would assume that the concern with ext3/4 in data=journal mode is that shred does not guarantee that the records of previous writes are evicted from the journal.
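For context, shred's per-pass behavior is roughly this in-place overwrite (a simplified Python sketch, not GNU shred's actual implementation):

```python
import os

def shred_pass(path: str, chunk: int = 1 << 20) -> None:
    """One overwrite pass: rewrite the file in place with random
    bytes and force the result to stable storage."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:   # r+b: overwrite in place, no truncation
        remaining = size
        while remaining:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # in data=journal mode this pass itself goes
                              # through the journal before landing in place
```

The overwrite passes do land in place; the question is what earlier writes may have left behind in the journal.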
In data=journal mode, data to be written is first written into the journal. Only after the journal is flushed is it written out to its final location. Therefore, a crash at any time can be recovered from by replaying the journal forward.
Note that the ext3/4 journal is a redo log, not an undo log. Old file contents are not copied into the journal on a write.
Thus, I don't see why shred should be less effective in data=journal mode compared to the other journaling modes.
CoW file systems are a different story. They don't allow you to overwrite physical file contents. You have to set the +C (FL_NOCOW) flag, which by design is only effective on a file that doesn't have any contents yet. Thus, you can't set +C on an existing file and overwrite its contents.
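For illustration, setting that flag from Python would look something like this. The ioctl numbers are the x86-64 Linux values from <linux/fs.h>; treat them as assumptions, and note the flag only takes effect while the file is still empty:

```python
import fcntl
import struct

# From <linux/fs.h> on x86-64 Linux (assumed values):
FS_IOC_GETFLAGS = 0x80086601
FS_IOC_SETFLAGS = 0x40086602
FS_NOCOW_FL     = 0x00800000   # the +C / FL_NOCOW attribute

def set_nocow(path: str) -> None:
    """Set the NOCOW attribute. Only honored (on btrfs) while the
    file is still empty; it has no effect on existing extents."""
    with open(path, "r") as f:
        buf = bytearray(struct.pack("l", 0))
        fcntl.ioctl(f.fileno(), FS_IOC_GETFLAGS, buf)
        flags = struct.unpack("l", buf)[0] | FS_NOCOW_FL
        fcntl.ioctl(f.fileno(), FS_IOC_SETFLAGS, struct.pack("l", flags))

# Usage: the file must be created empty first, e.g.
#   open("secret.bin", "x").close(); set_nocow("secret.bin")
```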
> Thus, I don't see why shred should be less effective in data=journal mode compared to the other journaling modes.
Because with data=journal, content that was previously written to a file (and made its way through the journal) might still be in there if the journal has not been replayed or garbage-collected in a while.
That's a good point I overlooked. The journal is not that big, though (< 1 GB), so doing some extra I/O after the shred should get rid of that. And when the original contents are older, it's rather unlikely they're still in the journal anyway.
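A crude sketch of that extra I/O: push more than a journal's worth of data through the filesystem so old journal blocks get recycled. The 1 GB default and the scratch path are assumptions (check dumpe2fs for the real journal size), and this only helps in data=journal mode, where file data actually passes through the journal:

```python
import os

def cycle_journal(mountpoint: str, journal_size: int = 1 << 30) -> None:
    """Write and sync more data than the journal holds, so earlier
    journal blocks (possibly containing the shredded file's old
    contents) get overwritten by newer transactions."""
    scratch = os.path.join(mountpoint, ".journal-cycle.tmp")
    chunk = b"\0" * (1 << 20)  # 1 MiB per synced write
    try:
        with open(scratch, "wb") as f:
            for _ in range(journal_size // len(chunk)):
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # force each chunk through the journal
    finally:
        os.remove(scratch)
```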
There is no reason for blocks that have been fully rewritten to be written back to the same location. In fact, it is faster to write them somewhere convenient near the write head and update the indirect block. So even though only the metadata goes through the log, block locations can change.
I don't know that I'd say there's no reason at all.
For one thing, if you have a contiguous file and you update some (but not all) bytes, putting them back in the original location allows the file to stay contiguous.
Also, if you write the data back into the original location, you don't have to update metadata such as inodes. Now, you may say that's less data, but on a spinning disk there is some threshold below which the amount of data written doesn't matter much at all and it's the number of seeks that matters more. That is, if it's a choice between a single 50 kB contiguous write or two separate 1 kB writes in different locations, the single write is probably quicker. (But this falls apart eventually, of course.)
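Putting rough numbers on that (the drive characteristics are assumed figures):

```python
# Back-of-envelope cost model for a spinning disk (assumed figures):
SEEK_MS = 8.0            # average seek + rotational latency, ms
TRANSFER_MB_S = 150.0    # sequential transfer rate, MB/s

def write_cost_ms(seeks: int, total_bytes: int) -> float:
    transfer_ms = total_bytes / (TRANSFER_MB_S * 1e6) * 1e3
    return seeks * SEEK_MS + transfer_ms

print(write_cost_ms(seeks=1, total_bytes=50_000))  # one 50 kB in-place write: ~8.3 ms
print(write_cost_ms(seeks=2, total_bytes=2_000))   # two 1 kB scattered writes: ~16.0 ms
```

The two scattered writes lose even though they move 25x less data; seeks dominate until the sequential run gets much longer.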
Whether these reasons are enough to prefer updating in place is another question, of course. But it's not like there isn't any benefit at all.