Disk Storage Performance

Aug 17, 2010 10:03

This post is sort of dedicated to andrewducker since he specifically asked about the disk drive in question, then encouraged me to LJ my rant about alternatives, and then posted this afternoon pointing out that it's kinda okay to LJ about anything, no matter how boring the populace may find it.

linux, geek


Comments 6

andrewducker August 17 2010, 09:19:24 UTC
You're still not going to get as much of a speed boost as having an SSD though, are you? Because the sustained read speed is still lower. Even if you load every block as fast as possible from the HD it's not going to be as fast as loading it from an SSD.


call_waiting August 17 2010, 09:28:10 UTC
Indeed it's not as fast as a pure full-size SSD, and neither is the Momentus XT - it wouldn't (I hope) waste its small 4 GB of cache on data that can be accessed by sequential reads. Nothing that you can do in software can make a conventional rotational disk as fast as an SSD, let alone faster, because it's physically impossible ;)

However, the fact is that I don't generally care about data that can be read in long sequential bursts. Copying movies around and so on, I know that's going to take a long time, so I'm prepared to make tea while it happens. What I care about is reducing the amount of time the machine takes when I want it to do something now, like opening my application, bringing up my desktop, or finding that file I've lost, and these activities are dominated by non-sequential reads, which can be improved by orders of magnitude.
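The gap between sequential and non-sequential reads is easy to demonstrate. Here's a rough Python sketch (the file name and sizes are arbitrary; note that on a real machine you'd have to drop the OS page cache between passes, otherwise the second pass just hits RAM):

```python
import os
import random
import time

BLOCK = 4096  # read granularity in bytes

def make_scratch(path, size):
    """Create a scratch file of random bytes (name and size are arbitrary)."""
    with open(path, "wb") as f:
        f.write(os.urandom(size))

def time_reads(path, offsets):
    """Read BLOCK bytes at each offset in order; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

if __name__ == "__main__":
    path, size = "scratch.bin", 16 * 1024 * 1024
    make_scratch(path, size)
    offsets = list(range(0, size, BLOCK))
    seq = time_reads(path, offsets)   # sequential order: the head barely moves
    random.shuffle(offsets)
    rnd = time_reads(path, offsets)   # random order: every read is a seek
    print(f"sequential: {seq:.3f}s  random: {rnd:.3f}s")
    os.remove(path)
```

With caches actually dropped (e.g. via /proc/sys/vm/drop_caches on Linux), the random pass on a rotational disk is typically tens of times slower than the sequential one; on an SSD the two are much closer, which is exactly the asymmetry this whole discussion is about.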


andrewducker August 17 2010, 09:31:46 UTC
Ok, the question then is: why has nobody done this before? Because you're right, an indexed file system does sound like a winning proposition.

Additionally - I've been wondering why file deletes are slow. Not incredibly slow, but deleting a few thousand files takes a while. I'd have thought that it would be as simple as deleting the entries for those files from an index, which should be near instantaneous.


call_waiting August 17 2010, 10:03:39 UTC
Ok, the question then is: why has nobody done this before?

Indeed. I've been unable to find any references to anything like this. Many people have concentrated on solving the problem within the filesystem, by reducing internal fragmentation and the seeks needed for indexing, but nobody seems to have attempted to tackle it from below. That's why the XT is so innovative, while ReadyDrive (or whatever MS called it) is, frankly, so pants.
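For illustration, "tackling it from below" amounts to a block cache that knows nothing about the filesystem above it. A toy Python sketch, where the LRU policy and all the names are my own assumption, not the XT's actual firmware logic:

```python
from collections import OrderedDict

class BlockCache:
    """Toy block-level cache: a small fast store (think the XT's 4 GB of
    NAND) in front of a slow backing disk. Purely illustrative."""

    def __init__(self, backing, capacity_blocks):
        self.backing = backing         # slow store: block number -> bytes
        self.capacity = capacity_blocks
        self.cache = OrderedDict()     # fast store, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, blockno):
        if blockno in self.cache:
            self.hits += 1
            self.cache.move_to_end(blockno)   # mark as recently used
            return self.cache[blockno]
        self.misses += 1
        data = self.backing[blockno]          # slow path: a disk seek
        self.cache[blockno] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data
```

The point is that this layer sees only block numbers, so it speeds up the hot, randomly-read blocks (boot files, application binaries, directory metadata) regardless of which filesystem sits above it.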

Additionally - I've been wondering why file deletes are slow. Not incredibly slow, but deleting a few thousand files takes a while. I'd have thought that it would be as simple as deleting the entries for those files from an index, which should be near instantaneous.

There are two factors there: one is that the information which needs to be updated can be spread around the place. Deleting a file, you need to update the MFT to mark the MFT entry as free, update the free space map to say that there's some free space that wasn't there before, and update the containing directory ( ... )
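A toy model of those updates, just to show how the writes multiply (the structure names are illustrative, loosely NTFS-flavoured, not a real on-disk layout):

```python
class ToyFS:
    """Toy model of why deletes aren't free: each unlink dirties several
    separate on-disk structures, each potentially its own seek."""

    def __init__(self):
        self.mft = {}              # file id -> list of allocated blocks
        self.free_map = set()      # blocks known to be free
        self.dirs = {}             # directory name -> set of file ids
        self.metadata_writes = 0   # count of dirtied structures

    def create(self, dirname, fid, blocks):
        self.mft[fid] = blocks
        self.dirs.setdefault(dirname, set()).add(fid)

    def unlink(self, dirname, fid):
        blocks = self.mft.pop(fid)       # 1: mark the MFT record free
        self.metadata_writes += 1
        self.free_map.update(blocks)     # 2: return blocks to the free map
        self.metadata_writes += 1
        self.dirs[dirname].discard(fid)  # 3: update the containing directory
        self.metadata_writes += 1
```

Deleting a few thousand files therefore means thousands of small metadata updates, and if those structures live in different places on the platter and the updates aren't batched, each one can cost a seek.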


