The most important difference is that, unlike GFS, an HDFS file has strictly one writer at any time. Bytes are always appended to the end of the writer's stream. There is no notion of "record appends" or "mutations" that are then checked or reordered; writers simply emit a byte stream, and that byte stream is guaranteed to be stored in the order written.
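The single-writer, append-only contract above can be modeled with a small sketch. This is an illustrative toy, not the HDFS implementation; the class and method names (`AppendOnlyFile`, `openForAppend`) are hypothetical stand-ins for the lease mechanism HDFS uses internally.

```java
import java.io.ByteArrayOutputStream;

/**
 * Toy model of the contract described above: at most one writer holds
 * the file at any time, and every write appends bytes in order.
 * Not the HDFS implementation; names here are hypothetical.
 */
class AppendOnlyFile {
    private final ByteArrayOutputStream data = new ByteArrayOutputStream();
    private boolean leased = false;  // at most one writer at a time

    // Acquire the single writer slot; fails if another writer holds it.
    synchronized Writer openForAppend() {
        if (leased) {
            throw new IllegalStateException("file already has a writer");
        }
        leased = true;
        return new Writer();
    }

    synchronized byte[] contents() {
        return data.toByteArray();
    }

    class Writer {
        // Bytes are always appended to the end, in the order written.
        void write(byte[] bytes) {
            synchronized (AppendOnlyFile.this) {
                data.write(bytes, 0, bytes.length);
            }
        }

        void close() {
            synchronized (AppendOnlyFile.this) {
                leased = false;  // release so the next writer may open the file
            }
        }
    }
}
```

A second `openForAppend()` before the first writer closes fails immediately, mirroring the "strictly one writer" rule.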
During name node startup, {@link SafeModeInfo} counts the number of safe blocks, i.e. those that have at least the minimal number of replicas, and calculates the ratio of safe blocks to the total number of blocks in the system, as tracked by {@link FSNamesystem#blockManager}. When the ratio reaches {@link #threshold}, it starts the SafeModeMonitor daemon, which waits until the safe mode {@link #extension} period has passed; the name node then leaves safe mode and the monitor destroys itself.
If safe mode is turned on manually, the number of safe blocks is not tracked, because the name node is not intended to leave safe mode automatically in this case. @see ClientProtocol#setSafeMode(HdfsConstants.SafeModeAction, boolean)
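The safe-mode accounting described above can be sketched as a small model. This is an illustrative assumption-laden toy, not the actual SafeModeInfo class; the field names (`threshold`, `manual`) merely echo the javadoc, and the extension-period wait handled by the SafeModeMonitor daemon is omitted.

```java
/**
 * Illustrative model of safe-mode accounting (not the real SafeModeInfo).
 * A block is "safe" once it has at least the minimal number of replicas;
 * automatic exit requires the safe/total ratio to reach the threshold,
 * and is never allowed when safe mode was entered manually.
 */
class SafeModeModel {
    private final double threshold;   // e.g. 0.999; mirrors {@link #threshold}
    private final long totalBlocks;   // total blocks known to the block manager
    private final boolean manual;     // safe mode was turned on manually
    private long safeBlocks = 0;

    SafeModeModel(double threshold, long totalBlocks, boolean manual) {
        this.threshold = threshold;
        this.totalBlocks = totalBlocks;
        this.manual = manual;
    }

    // Called as blocks reach their minimal replication; in manual
    // safe mode the count is not tracked, per the javadoc above.
    void blockReachedMinReplicas() {
        if (!manual) {
            safeBlocks++;
        }
    }

    double safeRatio() {
        return totalBlocks == 0 ? 1.0 : (double) safeBlocks / totalBlocks;
    }

    // Automatic exit: only when not manual and the ratio meets the threshold.
    boolean canLeaveAutomatically() {
        return !manual && safeRatio() >= threshold;
    }
}
```

With a threshold of 0.999 and 1000 blocks, the model stays in safe mode at 998 safe blocks and becomes eligible to leave at 999; a manually entered safe mode never becomes eligible regardless of the count.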