At the moment, it is still unclear whether ReFS will turn out to be more or less recoverable than NTFS. Most if not all of the improvements in ReFS (compared to NTFS) are aimed at
- better resistance to imperfect hardware (which does not follow fail-stop model), and
- better handling of large files and/or files in large numbers.
Recoverability depends heavily on implementation details. For example, a B-tree is fast, but it often turns out to be a highly vulnerable failure point, difficult to rebuild if lost.
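To illustrate why a tree-structured index concentrates risk, here is a minimal sketch (illustrative only, not ReFS internals): records in a B-tree-like index are reachable only through interior nodes, so losing a single node near the root orphans everything below it, while damage to a flat table costs only the damaged entries themselves.

```python
class Node:
    """A node of a simplified B-tree-like index (illustrative, not on-disk ReFS)."""
    def __init__(self, keys=None, children=None):
        self.keys = keys or []          # records stored at this node
        self.children = children or []  # subtrees (interior node)
        self.lost = False               # simulate unreadable sectors

def reachable(node):
    """Count records still reachable by walking down from the root."""
    if node is None or node.lost:
        return 0
    return len(node.keys) + sum(reachable(c) for c in node.children)

# Two-level index over 9 records, 3 per leaf.
leaves = [Node(keys=[i, i + 1, i + 2]) for i in (0, 3, 6)]
root = Node(children=leaves)

print(reachable(root))   # 9 -- all records reachable

leaves[0].lost = True    # one damaged leaf: the loss stays local
print(reachable(root))   # 6

root.lost = True         # damaged root: every record below is orphaned
print(reachable(root))   # 0
```

A recovery tool facing the last case has to rebuild the lost interior structure by scanning for leaf signatures, which is exactly the "difficult to rebuild" scenario described above.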
Also, measures that improve fault prevention and ensure continuous operation often degrade recovery capability. When it comes to recovery, you are often better off with a filesystem that fails early and easily; put the other way round, filesystems that crash quickly tend to be easy to recover. A typical example is the ext journal: for journaling to work, unused inodes must be zeroed immediately, which adversely affects your undelete capability.
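The ext trade-off can be sketched as follows (a simplified model, not the real ext on-disk format): the journal requires a freed inode to be consistent immediately, so its block pointers are wiped at delete time, and the same wiping is exactly what defeats a pointer-following undelete tool.

```python
class Inode:
    """Simplified inode model (illustrative, not the real ext layout)."""
    def __init__(self, block_pointers):
        self.links = 1
        self.block_pointers = list(block_pointers)

def delete(inode):
    """Delete roughly as a journaling ext filesystem would."""
    inode.links = 0
    # The journal must never replay stale references to freed blocks,
    # so the pointers are zeroed immediately rather than left behind.
    inode.block_pointers = [0] * len(inode.block_pointers)

def undelete(inode):
    """A pointer-following undelete tool: recover whatever the inode still references."""
    return [b for b in inode.block_pointers if b != 0]

ino = Inode([120, 121, 122])   # hypothetical data-block numbers
delete(ino)
print(undelete(ino))           # [] -- nothing left to follow
```

On a non-journaling filesystem the stale pointers would typically survive in the freed inode, which is why classic undelete worked there and fails on ext3/ext4.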