Chuck said:
I initialized a 4 TB drive with GPT, then put a single NTFS partition on
it with Win7 disk management. The result shows as a 4 TB drive.
I use this drive to record video from NASA select.
Unfortunately Windows 7 eventually corrupts this drive.
Some years ago I had the same problem with a 3 TB drive.
An older motherboard running Linux has no problems storing data on 3 TB
drives.
I am not trying to boot Windows from a 4 TB drive. I just want Windows
to store the data without corruption. Surely someone at MS must be
aware of what Windows 7 is doing.
I can paint you a scenario.
On WinXP, if I write a lot of NTFS data, memory fragmentation seems
to happen, such that the percentage of CPU used by the file system
grows over time. (Take note of CPU usage now, when the recording
session has just started, then check back in an hour or two and
see whether the percentage of CPU is higher than it used to be.
If it is, you're in trouble. If the CPU percentage isn't rising,
then maybe that OS doesn't have this bug.)
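If you'd rather have a log than eyeball Task Manager, the typeperf
tool that comes with Windows (XP Pro and later, I believe) can record
CPU usage to a CSV while the capture runs. The counters and intervals
here are just my suggestion; the second counter watches the System
process, which is where the kernel-side file system work (lazy writer
and friends) tends to show up.

typeperf "\Processor(_Total)\% Processor Time" ^
         "\Process(System)\% Processor Time" -si 60 -sc 480 -o cpulog.csv

That samples once a minute for eight hours. If the numbers creep
upward through the log, you've caught the problem in the act.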
After writing continuously for around 8 hours, this becomes so bad
under WinXP that I end up with a "delayed write failure" event. That
means the write attempt was so slow (bandwidth drops to such a low
level) that it timed out (didn't complete in 5 seconds or whatever).
This problem doesn't seem to exist on Windows 8. I'm not
really sure about Windows 7. My laptop is the machine with the
Windows 7 install, and it doesn't have a lot of I/O options for me
to test with. (Testing a large disk over USB2 would suck.)
I would recommend using a program like "dd" to test I/O
on your computer, and see if you can reproduce the problem
that way. It will write at a fair rate, without using
a lot of CPU while doing so (dd.exe at least).
http://www.chrysocome.net/dd
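If I remember right, that build of dd has a --list option, which is
handy for double-checking which drive letters and physical devices it
can see before you point a big write at one of them:

dd --list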
The largest file I've written with that program would be around
500GB, to my 2TB disk. And that worked fine in Windows 8.
The command below would write a file of roughly 4TB (you may need to
trim the count a little, since a "4 TB" drive formats out to a bit
under 4,000,000 MiB of usable space). Try something in that range
and see how long it runs before dying. Check for a delayed write
failure in Event Viewer (or even on the screen).

dd if=/dev/zero of=K:\testbig.bin bs=1048576 count=4000000
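If you'd rather check the System log from a command prompt, Vista and
later have wevtutil (it's not on XP). Event ID 50 from the Ntfs source
is the usual "Delayed Write Failed" record, if I have that right:

wevtutil qe System /q:"*[System[Provider[@Name='Ntfs'] and (EventID=50)]]" ^
         /rd:true /f:text /c:20

That prints the 20 most recent matching entries as text.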
No matter what test cases you run, it's going to take a while.
You can use SMART statistics to evaluate drive physical health, but
problems like this can't all be blamed on the drive. For example,
the free version of HDTune can display SMART stats.
http://www.hdtune.com/files/hdtune_255.exe
In this example, "Reallocated Sector Count" data column is 0,
and "Current Pending Sector Count" data column is 0 as well. That
tells me the disk is OK. Even though there are "yellow marks"
in this screenshot, they're for things which involve a
mis-interpretation of the SMART data. No program is perfect
at this sort of thing. And when I use the free version of
that program, the free version doesn't receive any updates
over time.
http://img94.imageshack.us/img94/2460/hdtunesample.gif
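If you'd prefer a command-line tool instead, the Windows build of
smartmontools can dump the same attribute table. This is just an
alternative I'm tossing in, and the device name depends on which
physical disk it is (/dev/sda here is only a guess at the first one):

smartctl -A /dev/sda

Same rule of thumb applies: raw values of 0 for Reallocated Sector
Count and Current Pending Sector Count mean the platters themselves
probably aren't the problem.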
HTH,
Paul