SSD, How do I...

Paul

BillW50 said:
Have you ever figured how long it takes to wear out an SSD? I was worried
about this at first, and on my XP machine I reduced the writes down to 400MB
per day through tweaks. Then I figured out that it would take over 4000
years to wear it out.
<<snip>>

One thing you're assuming, though, is that there are no other failure
mechanisms besides "write wear". Consider the lowly DRAM module
as an example. DRAM chips aren't supposed to have a wear-out mechanism.
They should last forever. And yet, I've had seven of twelve "generic"
memory modules fail on me. That means the lifetime was compromised
by a chip quality issue - perhaps the chip surface was
contaminated during the manufacturing process. For some chips,
that leads to failures a couple of years after they're fabricated.

So while we can pretend these "lifetime" calculations have some
predictive value, you're forgetting the human element, and
the possibility that the electronics product you buy is populated
with "floor sweepings" quality chips. There's no end to the
shenanigans that could be happening inside or outside the
factory to make that happen. If money is involved,
someone will eventually take a shortcut.

Who can forget the motherboards made years ago that
shipped with "empty" cache chips soldered to the motherboard :)
The chips had no silicon die inside, because someone figured
out that customers would not easily be able to tell.
If there's a way to cheat, someone will find that way.

Paul
 
Char Jackson

Have you ever figured how long it takes to wear out an SSD? I was worried
about this at first, and on my XP machine I reduced the writes down to 400MB
per day through tweaks. Then I figured out that it would take over 4000
years to wear it out.

This Windows 8 slate tablet has a Samsung 128GB MLC SSD. And on all of my
Windows machines, Hard Drive Sentinel normally reports that about 6GB is
written per day. Using 10GB per day in the wear-level formula, it will
take 300 years to wear this one out.
Check back in with us in 300 years. Your drive will be worn out and
the other guys will still have 3700 years left on theirs. Who will be
laughing then?
Thus trying to reduce the amount of writes will give you what? Less
performance and that is about it.
I don't have any SSDs here to play with, but I think I agree with the
statement above. I'd probably just let Windows enable TRIM and disable
defrag, and be done with it.
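
For anyone who wants to redo the wear arithmetic being traded above, here
is a minimal Python sketch. The rated P/E cycle count and the
write-amplification factor are illustrative assumptions, not vendor
figures; the answer scales directly with whatever rating you plug in.
(And on Windows 7 and later, "fsutil behavior query DisableDeleteNotify"
reporting 0 confirms TRIM is enabled.)

# Back-of-envelope SSD lifetime estimate. The P/E cycle rating and
# write-amplification factor are assumptions for illustration only.
capacity_gb = 128          # drive capacity
pe_cycles = 3000           # assumed program/erase rating (MLC-era)
write_amplification = 1.5  # assumed controller overhead
daily_writes_gb = 10       # host writes per day, as measured above

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_host_writes_gb / (daily_writes_gb * 365)
print(f"~{years:.0f} years at {daily_writes_gb} GB/day")  # ~70 with these inputs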
 
BillW50

Paul said:
One thing you're assuming, though, is that there are no other failure
mechanisms besides "write wear".
<<snip>>
If there's a way to cheat, someone will find that way.
Yes, there are many other things to worry about besides the wear level.
And that was my whole point. ;-)

Although since you mentioned it, I do have about 12 computers from the
eighties that work just fine. Although one has a Y2K bug and doesn't
allow you to set the date past December 31, 1999. In the 90's, I wasn't
so lucky. All but two died an early death. And I don't know how long
the survivors are going to hang on.
 
Paul

I've just tested Macrium Reflect Free (so I could compare
to my experience with the Easeus Partition Master Home Edition tool).

I redid my test setup. I put a "Data" partition on the disk, before
installing Windows 7, to offset the partitions as a test for the
tool chain. After installing Windows 7, I used the "shrink" function
in Windows 7, pretending I was shrinking the partition so it would fit
on my new SSD.

Source disk:

+-----------+------------------+---------+----------------------------------+
| Data D: | System Reserved | Win7 C: | Unallocated (after shrinking C:) |
+-----------+------------------+---------+----------------------------------+

Whether I select the convenient "back up stuff related to the OS" option
in Macrium, or I back up System Reserved + Win7 C:, the results are the same.

1) Files backed up successfully.
2) MBR seems to be preserved. Boot code is present. (Destination
disk was zeroed before usage.) Boot flag set on SR, so it would boot.

What I'd hoped for as a result, was this:

(partition 1) (partition 2)
+------------------+---------+
| System Reserved | Win7 C: |
+------------------+---------+

The actual destination disk looks like this.

(partition 2) (partition 3)
+-----------+------------------+---------+----------------------------------+
| unalloc. | System Reserved | Win7 C: | unalloc. |
+-----------+------------------+---------+----------------------------------+

By keeping the partition slot numbers (the old PowerQuest Partition Magic
trick), the BCD (the Windows 7 equivalent of boot.ini) doesn't need to be
corrected. Which means the destination disk did boot correctly when booted
by itself (as if I were "testing my new SSD").

The alignment seems to have been preserved. I suppose everything was
preserved in fact (as the product name is "Reflect" after all :) ).

So this one rates a "works better than Easeus Partition Master Home Edition",
but it still isn't perfect, in that it didn't attempt to produce my
"ideal result", like this:

(partition 1) (partition 2)
+------------------+---------+
| System Reserved | Win7 C: |
+------------------+---------+

To use Macrium with the SSD, you might benefit from:

1) Back up the source disk, before messing with it, to a completely
separate backup disk. On Windows 7, the System Image function could do this.
After all of this is over, you could reinsert the original hard
drive and put everything back on it as it was before step (2).
This step is also useful if the whole procedure goes into the toilet on you.
2) Delete the recovery partition. Shift SR and C: to the left. Shrink C:
to remove excess space. Make sure the combined size of SR and C: is small
enough to fit on the SSD. (Easeus could do that, perhaps.)
3) Fire up Macrium. Burn the Linux recovery CD it comes with (15MB or so).
Test that the Linux recovery CD boots and presents you with a
recovery menu. The Linux disc doesn't handle RAID arrays (or so it
claims). Using the Linux recovery disc requires no knowledge of
Linux. The CD boots right into the Macrium GUI, and all the good
stuff happens in there. When you quit the Macrium GUI, the CD will
prompt for a reboot (no escaping to Linux).
4) With your restoration path tested in (3), now it's time to do the backup.
Back up SR and C: to a new external backup drive (separate from the drive
in step 1, for safety).
5) Remove the internal hard drive, install the blank SSD.
6) Boot Macrium Linux recovery CD. Enter the menu. Select the backup
image sitting on your external drive. Restore.
7) Resulting SSD partition table is slightly screwy.

Partition1 <empty> [the old Recovery Partition used to live here...]
Partition2 Copy of System Reserved (100MB), boot flag set.
Partition3 Copy of C:
Partition4 <empty>

To correct that would require a tool such as BCDedit or EasyBCD to
correct the BCD contents for a boot from partition 1, then
PTEDIT32.exe (Run As Administrator) to move the partition definitions
down one notch. Then the setup would be logically consistent. (A way
to inspect the slot order yourself is sketched after this post.)

You don't have to do the correction in step 7. If you choose not to, then
at some future date, adding two more partitions would result in the
partition table looking like this (out of spatial order). This is
only potentially dangerous if you use a partition management tool
and don't keep this "state of disorder" in mind.

Partition1 (future third partition)
Partition2 Copy of System Reserved (100MB), boot flag set.
Partition3 Copy of C:
Partition4 (future fourth partition)

Since the SSD is so small, I doubt the need for additional
partitions would exist, and leaving it like this would
likely be fine for the life of the SSD.

Partition1
Partition2 Copy of System Reserved (100MB), boot flag set.
Partition3 Copy of C:
Partition4

Still looking for a "perfect" solution :) But this is close enough
for me to stop testing now. I could live with the slight mess.
Many things are alright about the setup.

HTH,
Paul
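
For anyone curious to see the "slot order" Paul describes without firing
up PTEDIT32, here is a minimal Python sketch that reads the four MBR
partition-table slots from a raw disk image and prints them in slot order.
The image path is a hypothetical placeholder; on Windows, run as
Administrator against something like r"\\.\PhysicalDrive1" instead.

import struct

DISK = "disk.img"  # hypothetical path; adjust for your system

with open(DISK, "rb") as f:
    sector0 = f.read(512)

assert sector0[510:512] == b"\x55\xaa", "no MBR boot signature"

# The MBR holds four 16-byte partition entries starting at offset 446.
for slot in range(4):
    entry = sector0[446 + slot * 16 : 462 + slot * 16]
    boot_flag, ptype = entry[0], entry[4]
    lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
    if ptype == 0:
        print(f"Partition{slot + 1} <empty>")
    else:
        flag = " (boot flag set)" if boot_flag == 0x80 else ""
        print(f"Partition{slot + 1} type=0x{ptype:02x} "
              f"start_lba={lba_start} sectors={num_sectors}{flag}")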
 
Gene Wirchenko

[snip]
Who can forget the motherboards made years ago that
shipped with "empty" cache chips soldered to the motherboard :)
The chips had no silicon die inside, because someone figured
out that customers would not easily be able to tell.
If there's a way to cheat, someone will find that way.
I had not heard of that one. I did hear of ECC memory being
faked.

Sincerely,

Gene Wirchenko
 
Yousuf Khan

Have you ever figured how long it takes to wear out an SSD? I was worried
about this at first, and on my XP machine I reduced the writes down to 400MB
per day through tweaks. Then I figured out that it would take over 4000
years to wear it out.
Yes, nowadays I think the concern is overblown myself. After all, there
are some laptops being sold with nothing but SSD storage, so they
obviously can only put their swapfiles on the SSD. But I made the
decision to move the swapfiles off of the SSD early on, when I first got
the device, after having read all of the warnings about wear rates, etc.
So it had me spooked. Having now used an SSD, I know that they aren't
quite as fragile as all of those helpful websites would have you imagine.

However, there are still good reasons to minimize the write pressure on
storage devices in general, and especially the system boot device, not
necessarily just SSDs. Even before I had an SSD, I had taken control
over swapfile placement in my system, switching it from the fully
automatic system default to a more semi-automatic arrangement, where the
swapfiles are placed on multiple separate hard disks. This ensures that
paging activity will not contribute too much to the overall
busy-ness of the system disk. The system disk is usually the
busiest disk in a system, at least in Windows, as all OS/program/paging
activity is concentrated on that disk, and often even data activity. If
you have more than one disk, you move as much stuff as you can off of
that disk. One of the easiest things to move off the system disk is the
swapfile. As it turns out, I have 6 internal drives in my system,
including the SSD, so I also split the swapfile over multiple disks, so
that no single disk ever gets too busy servicing paging activity.
You can measure how busy a disk is by running the Windows 7 Resource Monitor
and checking the disk activity tab. One of the sub-windows will show you
the disk queue length of all of your disks. The disk queue length is an
instantaneous view of the number of requests waiting on any particular
disk at any given moment. If the disk queue length is
under 1.00, then it's good; if it's over 1.00, that means there's
more than one request waiting on that disk at once; oftentimes,
you'll see the number hit over 5.0, 10.0, etc.! That's really bad.
However, an SSD is so fast that it will rarely go even near 1.0, let
alone greater than that; rarely, but that doesn't mean that it
can't get there. So keeping the swapfile off the system boot device,
whatever it is, is not a bad idea.
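
If you'd rather sample this from a script than watch Resource Monitor,
here is a rough Python sketch using psutil. Note that psutil does not
expose the queue-length counter itself, so this approximates busy-ness
from the time each disk spent servicing I/O between two samples.

import time
import psutil

INTERVAL = 5  # seconds between samples

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for disk, b in before.items():
    a = after[disk]
    # read_time/write_time are cumulative milliseconds spent on I/O
    busy_ms = (a.read_time - b.read_time) + (a.write_time - b.write_time)
    print(f"{disk}: ~{busy_ms / (INTERVAL * 1000):.0%} busy")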

On another note, I wouldn't completely trust SSD manufacturers' claims
about the service life of their products. After all, it's not been so
long since CDs, then DVDs, were first introduced with claims that
they'd last 100 or more years. I'm sure we all have some of these discs
that are now completely unreadable. How did they disintegrate so fast?
I'm pretty sure I'm not 100+ years old already, so it must be because
their estimates were bullshit. I trust that my SSD will last a long
time, but I'm not counting on it to last decades.

Yousuf Khan
 
Rob

Yousuf Khan said:
Yes, nowadays I think the concern is overblown myself.
<<snip>>
I trust that my SSD will last a long time, but I'm not counting on it
to last decades.
You can look at it another way: since the SSD is full of chips like your
computer's RAM, how much work does your PC's RAM do before it fails?
 
Paul

Rob said:
You can look at it another way: since the SSD is full of chips like your
computer's RAM, how much work does your PC's RAM do before it fails?
But the mechanism is different.

Type    Mechanism                        Pros              Cons
-----   ------------------------------   ---------------   ------------------
Flash   Charge placed on floating gate   Lasts ten years   Hard on the gate
DRAM    Charge placed on a capacitor     Easy on the cap   Lasts milliseconds

With DRAM, the stuff works about as well as a sieve holding a cup of
water. It's constantly draining away. You could lose a bit value
in DRAM in milliseconds, which is why the rows and columns of
memory bits are visited every 7.8 microseconds, to be "recharged".
Even when a computer "sleeps" (S3), the DRAM modules remain
powered, and internal digital logic runs the recharge counter
every 7.8 microseconds. So there's a "constant scanning process"
inside your DRAM module, to "help it remember".
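
The 7.8 microsecond figure falls out of the usual refresh spec: all rows
must be refreshed within a 64 ms retention window, so with 8192 rows (a
typical count; it varies by device) the refresh commands get spread
evenly, as this quick calculation confirms:

retention_window_ms = 64   # typical DRAM retention spec
rows = 8192                # assumed rows to refresh; varies by device
print(f"{retention_window_ms * 1000 / rows:.1f} us per row refresh")  # 7.8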

Since the capacitors in DRAM are "ordinary", and no fancy quantum
tunneling method is used, there's nothing to wear out. It
doesn't matter that you're doing 800 million store operations
per second for the whole of a year. It doesn't hurt anything.

Flash memory uses a quantum mechanical effect. The charge
really doesn't have a path to get to the floating gate,
but quantum tunneling provides a way. The article here
shows an example of the "mumbling to themselves" that
flash chip designers do. DRAM design, by comparison,
doesn't involve quite as many sleepless nights.

http://www.semiconductoronline.com/doc.mvc/Flash-Memory-Characterization-Part-I-0002

Paul
 
BillW50

With DRAM, the stuff works about as well as a sieve holding a cup of
water. It's constantly draining away. You could lose a bit value
in DRAM in milliseconds, which is why the rows and columns of
memory bits are visited every 7.8 microseconds, to be "recharged".
I discovered this first by accident back in the 80's. I was using a DRAM
drive to store the code I was writing. And once I powered down and
forgot to save it first. I hit the power button back on and it was all still
intact in the DRAM. ;-)

I did some more testing, and anywhere between 0 to 4 seconds without
power, the DRAM still had 100% of it. Between 5 to 7 seconds was iffy.
Sometimes it was all there and sometimes there was lots of corruption.
But after 8 seconds, there was nothing left that was recoverable.

I too have heard that flash memory can only hold data for 10 years. But I
have heard these claims many times before and have found many to be
wrong. For example, I have one flash drive that is much older than 10
years. Since it is only 32MB in size, it was only useful to me back
in the DOS days, and it sits in my junk drawer. And I have a funny
feeling it still boots DOS just fine. ;-)

Remember, you can't count flash that has had power in the last 10 years,
since wear leveling will move and thus refresh the memory once again.
Although for flash without wear leveling, it wouldn't matter whether it
had power or not, as long as you didn't touch the older files.

By the way, I used to use bubble memory in the past too. Where did that go?
 
Lee Waun

Yousuf Khan said:
I did a clone of my OS from my old HDD to my new SSD, and it's exactly
what was needed to speed things up. Why increase your workload?

Yousuf Khan
Yes, I also cloned my 750GB hard drive to this Crucial 128GB SSD. I was
only using 44GB of hard drive space, so the cloning was trouble-free. I am
using this machine to type this message. I presently have 80GB of free
space on the SSD.

I used the free Macrium Reflect software to clone the hard drive.
 
BillW50

Yousuf Khan said:
Yes, nowadays I think the concern is overblown myself.
<<snip>>
I trust that my SSD will last a long time, but I'm not counting on it
to last decades.
Ah... yes, I understand. My history and use of flash drives are a bit
different. I've been using them since they became available. I didn't
use them for Windows until March of '08 though. And they didn't come in
large sizes back then, but 2GB, 4GB, 8GB, and 16GB mostly. I never
bought the 2GB ones for Windows, since 2GB is too small for XP SP2. It
would hold Windows 2000, but without updates, so that was out too. So I
bought 4GB, 8GB, and 16GB ones.

And 4GB was still pretty tight for XP SP2. You had to pick the
applications you planned to use very carefully. And to gain more room on
the SSD (and longevity), we maxed out the RAM and turned off the
swapfile altogether. This is interesting in itself. XP runs fine
without a swapfile until you get close to 200MB of RAM still free.
Anything under that and the system starts to pause and freeze, and if it
drops even lower (it will, since Windows is starving for RAM to use and
grabs everything it can find), you get total lockups. I still have these
SSDs, and they still work fine today.
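
As a small sketch of that "watch the free RAM" habit, this Python loop
(using psutil) warns when available memory falls below a threshold. The
200MB figure is Bill's observation for swapfile-less XP, not a hard rule.

import time
import psutil

THRESHOLD_MB = 200  # observed danger line for swapfile-less XP, per above

while True:
    avail_mb = psutil.virtual_memory().available / (1024 * 1024)
    if avail_mb < THRESHOLD_MB:
        print(f"warning: only {avail_mb:.0f}MB free; expect pauses")
    time.sleep(10)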

Now the most I ever ran was three drives per system. I have no idea what
life is like with your 6 drives. ;-) Although instead of splitting up
the swapfile among many different drives, I would run them in
pairs in RAID mode. I have no idea if you could run all six in one RAID
array or not. But if you could, talk about super speed. You could
transfer a whole DVD movie from drive to RAM in maybe 2 seconds,
assuming of course the I/O chips can handle the bandwidth. ;-)
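
The "maybe 2 seconds" figure checks out under ideal assumptions, as this
quick calculation shows; the per-drive throughput is a guess at SATA-era
SSD speeds, and real RAID 0 scaling won't quite be linear.

dvd_gb = 4.7              # single-layer DVD
per_drive_mb_s = 400      # assumed sequential read per SSD
drives = 6
array_mb_s = per_drive_mb_s * drives   # ideal RAID 0 striping
print(f"~{dvd_gb * 1024 / array_mb_s:.1f} s to read a DVD")  # ~2.0 s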

Ah, service life? I've seen so many things that weren't expected to go
the distance but did, many times over. Then some things can't even
come close to their expected lifespan. I think many times they really
don't know. ;-)
 
Gene Wirchenko

[snip]
I had not heard of that one. I did hear of ECC memory being
faked.
Excuse me. Parity memory.

Sincerely,

Gene Wirchenko
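
For context on what "fake parity" meant: real parity memory stores a
ninth bit per byte, computed as below; the fake modules reportedly used a
logic chip to generate the bit on the fly at read time, so the check
always passed and real errors went undetected.

def even_parity_bit(byte: int) -> int:
    """Return the bit that makes the total count of 1s even."""
    return bin(byte).count("1") % 2

assert even_parity_bit(0b10110000) == 1  # three 1s -> store a 1
assert even_parity_bit(0b10110100) == 0  # four 1s  -> store a 0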
 
sothwalker

You think a clean install would go faster using an external USB drive
enclosure than with the drive right in the laptop?
No, I don't. Just guessing, but I think it might take longer.
You just need to clone your existing drive to the new SSD, unless of
course your Windows install is so borked that seemingly nothing works
properly.
I understand that I can clone, but what I want is a clean install.
 
Bob I

I understand that I can clone, but what I want is a clean install.
Fastest would be to put the new drive in the box and install there, so
the proper drivers are used the first time round.
 
