36 Comments
greggm2000 - Saturday, April 17, 2010 - link
Anand,

Are you considering testing with an encrypted (TrueCrypt) partition on your SandForce controller drives? Given that the SandForce controller compresses data before writing it to flash, and given that encrypted data is not particularly compressible, I'd be quite interested to see how that affects performance, including in comparison to non-SandForce drives. I'm sure I'm not the only one.
Thanks for the great reviews!
VoidQ - Saturday, April 17, 2010 - link
Is an SSD a good fit for TrueCrypt encryption? I seem to recall that wear leveling doesn't agree with TrueCrypt. See below: http://www.truecrypt.org/docs/wear-leveling
greggm2000 - Saturday, April 17, 2010 - link
I think it depends on the intended use. If you put unencrypted data on an SSD and then encrypt it, there may still remain blocks that contain unencrypted data (thus defeating the purpose). If, on the other hand, it's a fresh drive and you put a fresh TrueCrypt partition onto it, then the drive should only ever see encrypted data, and thus it shouldn't matter.

Even as a boot drive, encryption may be worthwhile, as long as you encrypt it before using it for anything sensitive. IMO of course; I don't claim to be any kind of expert on this.
cactusdog - Sunday, April 18, 2010 - link
Hahaha, why do you guys need to encrypt data? Do you work for KAOS, the international organisation of rottenness?? Hahaha, just wondering.....

RollerBoySE - Sunday, April 18, 2010 - link
Well, there are actually quite a few of us here from the corporate world. And we use the information from AnandTech to make decisions on how to use SSDs in employees' PCs.

Believe it or not, TrueCrypt is actually quite commonly used to protect company secrets. The most common disk encryption being rolled out nowadays is probably Microsoft's BitLocker. If BitLocker doesn't work well with an SSD, that SSD won't be considered for corporate use.
So I would really like Anand to test with both Truecrypt and Bitlocker.
greggm2000 - Sunday, April 18, 2010 - link
And for those in the non-corporate world, it's a handy thing (especially for laptops) as an anti-theft measure (i.e. your laptop gets stolen), or if the drive fails and you need to send it in to be replaced. It's nice not to have to worry whether your personal info is floating around out there somewhere.

niva - Monday, April 19, 2010 - link
Umm, double-encrypting data is not going to result in some data being unencrypted.

I too want to hear about this. I don't encrypt partitions, but I use TrueCrypt, and this is the first time I've heard of potential issues with SSDs.
greggm2000 - Monday, April 19, 2010 - link
Double encryption?!? I don't think anyone is talking about that. If you read up on how wear levelling works on SSDs, you'll understand the security concern.

Performance with the SandForce controller in comparison to other drives (such as the Intel drives) is my own question, because of how the SandForce controller appears to work. Theory is no substitute for testing, however. It could be that SandForce have taken this into account somehow and performance remains high.
Kary - Monday, April 19, 2010 - link
I think he is referring to TrueCrypt on the drive with BitLocker from Windows on top of it... double-secured data... I think it was also meant as a joke :)

vol7ron - Tuesday, April 20, 2010 - link
Double encryption is not the concern; however, your statement is wrong. Depending on the algorithms involved, encrypting data twice could potentially revert it back to its original state.

The concern, though, is about security. If TrueCrypt's header information is not physically overwritten, then the old header would remain somewhere on the disk. If there was a breach, an attacker could still mount and access the old data. This may or may not mean something to the average user, but if you have highly sensitive data, it is a big concern.
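As an aside, the "encrypting twice can give back the original" case is real for some constructions. A toy Python sketch (purely illustrative: a repeating-key XOR stream is nothing like the AES-XTS that TrueCrypt and BitLocker actually use):

```python
# Toy illustration: XOR with the same repeating key is its own inverse,
# so "encrypting" twice with the same key restores the plaintext.
# Real disk encryption (e.g. AES in XTS mode) does NOT collapse like this.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR transform to data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"company secrets"
key = b"k3y"

ciphertext = xor_cipher(plaintext, key)   # looks scrambled
restored = xor_cipher(ciphertext, key)    # second pass undoes the first

assert ciphertext != plaintext
assert restored == plaintext
```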
shawkie - Saturday, April 17, 2010 - link
Anand,

I think you should also seriously reconsider your use of Iometer on these drives.
jimhsu - Saturday, April 17, 2010 - link
Is there a reason for this particularly? Reproducibility? What alternative multi-threaded benchmark would you suggest?

synt4x - Saturday, April 17, 2010 - link
I too have been wondering if Iometer is making these SF controllers look better than they are. What does the data written by the random-write tests actually look like? If the writes all look similar, the SF controller might easily compress them, increasing speed dramatically. But if the data being written really is all random bytes, then I suppose there's no cause to worry about it.

Though I think it's worth looking into.
TGressus - Sunday, April 18, 2010 - link
As much as I want to stay clear of the conspiracy theories, a quick glance at the Iometer SCM shows revision 28, dated 2010-03-26 05:41:08 UTC by allenwa:

"Implemented improved random data algorithm and changed default MBps to be in decimal to align to industry standards."
http://iometer.svn.sourceforge.net/viewvc/iometer?...
I can understand that AT would prefer at least a release candidate for a full review. Perhaps an additional run of just 4K random writes compiled with patch revision 28 or newer might provide a useful comparison to build 2008-06-22-rc2.
That being said, the Iometer project is open source. If anyone doubts the validity of the test then they could simply review the code and back up the speculation with proof.
shawkie - Sunday, April 18, 2010 - link
I have checked the 2008 RC2 and its random data generation is terrible. Each write consists of a single byte repeated. The only randomness is between one write and the next. I haven't checked the new patch.

shawkie - Sunday, April 18, 2010 - link
The new patch does produce properly random data (although it looks like it requires an option to be set somewhere): http://iometer.svn.sourceforge.net/viewvc/iometer/...
The problem I foresee with this version is that SandForce will argue that it puts their controller at a disadvantage, because typical user data will be at least somewhat compressible. Perhaps there should be an option where half of each write is completely random and the other half is a single byte repeated. That should give a compression ratio of about 2:1.
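For anyone who wants to sanity-check these fill patterns, running them through a general-purpose compressor gives a rough feel for the ratios involved. A Python sketch using zlib as a stand-in (SandForce's actual algorithm is not public, so the exact numbers will differ):

```python
import os
import zlib

def ratio(buf: bytes) -> float:
    """Uncompressed size divided by zlib-compressed size."""
    return len(buf) / len(zlib.compress(buf))

SIZE = 64 * 1024

repeated = b"\xa5" * SIZE                        # old Iometer style: one byte repeated
rand_buf = os.urandom(SIZE)                      # fully random, essentially incompressible
half_half = os.urandom(SIZE // 2) + b"\xa5" * (SIZE // 2)  # the 50/50 mix suggested above

print(f"repeated byte : {ratio(repeated):7.1f}:1")   # huge ratio: trivially compressible
print(f"pure random   : {ratio(rand_buf):7.2f}:1")   # just under 1:1 (framing overhead)
print(f"half and half : {ratio(half_half):7.2f}:1")  # roughly 2:1
```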
bhassel - Sunday, April 18, 2010 - link
I would question the assertion that most user data is compressible. I would think it's fairly common for a drive to be filled with photos, videos, music, or game data files. All of those are typically already compressed. (Even the newer Office file formats are compressed.)

shawkie - Sunday, April 18, 2010 - link
Actually, I think I would tend to agree with you. I know that most of the data on my hard disk is compressed music, video or images. Either that or it's video game installations, which, as far as I know, are mostly compressed music, video and images.

gfody - Tuesday, April 20, 2010 - link
+1

The majority of user files are going to be in formats that naturally reduce entropy and will not compress well, if at all. Whatever encoding they use is likely to run into its worst-case scenario some percentage of the time, leading to an expansion of the data to write. This is easy to explain to the layperson: zip a zip and it gets bigger, not smaller! There are many other reasons why data compression below the filesystem is a bad idea and has failed in the past.
Files aside, what about constant OLTP load? I find it hard to believe that low level disk activity from any properly normalized database is going to respond well to entropy encoding. Especially if leaf-node-level compression is already being leveraged by the RDBMS.
Data compression at this level will do little more than exploit an assumption made by virtually all synthetic benchmarks - that it's the volume of data that's important and not the content.
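The "zip a zip" point above is easy to demonstrate with any general-purpose compressor; a short Python sketch:

```python
import os
import zlib

# Stand-in for already-compressed media files: high-entropy random bytes.
payload = os.urandom(256 * 1024)

once = zlib.compress(payload)    # first "zip"
twice = zlib.compress(once)      # zip the zip

# Neither pass finds redundancy, so each only adds framing overhead.
assert len(once) > len(payload)
assert len(twice) > len(once)
print(len(payload), len(once), len(twice))
```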
vol7ron - Tuesday, April 20, 2010 - link
It depends on the algorithm used.

Another user posted that if you "zip a zip" it gets bigger. This happens, but it depends on what's being zipped and what level of zipping has occurred. Most likely the zip engine is using the same algorithm, so the compression has already reached its maximum potential, leaving the size to increase due to overhead.
There are different levels of encryption/compression to consider. Generally, the higher the level, the more processing is involved, but the smaller the resulting file/storage size. On the other hand, lower levels are faster to process, but leave the file larger.
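The level tradeoff is easy to see with zlib, where level 1 is the fastest and level 9 compresses hardest; a small sketch:

```python
import zlib

# Highly redundant sample data so the level difference is visible.
data = b"the quick brown fox jumps over the lazy dog " * 2000

fastest = zlib.compress(data, 1)   # level 1: least effort, largest output
smallest = zlib.compress(data, 9)  # level 9: most effort, smallest output

assert len(smallest) <= len(fastest) < len(data)
print(len(data), len(fastest), len(smallest))
```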
Compression engines use a combination of pattern recognition and math. My guess is that HDs would look for different patterns than applications do, so an HD could further compress an already compressed file.
I'd like to know more about the SSD compression engine that is being used before saying it doesn't do anything on already compressed data.
gfody - Wednesday, April 21, 2010 - link
They mention in the patent that the compression technique is non-conventional, but don't describe the technique they use: http://www.freepatentsonline.com/7058769.html

Also interesting: the system was apparently designed for spinning disks.
synt4x - Sunday, April 18, 2010 - link
Though I guess Anand doesn't use the latest SVN version of Iometer; from the Corsair Force review:

"Using the 6-22-2008 build of Iometer I ran a 3 minute long 2MB sequential test over the entire span of the drive."
So the random data written is easily compressible if it's all 4K chunks of ones or zeroes, right?
TGressus - Sunday, April 18, 2010 - link
Excellent follow-through, shawkie.

It would seem that the most probable resolution will be for Anand to get hold of the custom SandForce builds of Iometer he referred to in the Corsair Force review comments.
Synthetics aside, I'm most interested in how the Anandtech Storage Benches might differ with FW 3.05.
Wrish - Saturday, April 17, 2010 - link
Similar to the TrueCrypt question, I wonder what performance degradation may appear with NTFS volume compression. Some of us like to make the most of expensive flash!

Lonyo - Saturday, April 17, 2010 - link
+1 for that.

Can't say I use NTFS compression much on mechanicals, but on SSDs it would definitely be more worthwhile.
Jaybus - Monday, April 19, 2010 - link
That would vary quite a bit. I/O to and from the drive itself is reduced, but the compression and decompression is a CPU-intensive function and isn't dependent on the drive. I suspect the ratio of compressed vs. uncompressed speed will be the same on a given system, regardless of the drive.

Kary - Monday, April 19, 2010 - link
Yeah, but there is a difference between putting the (CPU-intensive?) task of compression on the drive's CPU (I guess that's how they are doing compression... I'm still confused by this, because I would expect it to need RAM for dictionaries too) vs. a dual-, quad-, hex-... core main CPU on the motherboard.

Also, I've watched the CPU during NTFS compression. It is either HDD-bound, or Microsoft has something in there to slow it down, because it doesn't peg out a CPU core.
Mr Perfect - Saturday, April 17, 2010 - link
Anand,

SandForce's original marketing line was that you could buy their expensive controller and then pair it up with cheap flash that wouldn't normally be up to the task of SSD use. Are you seeing any evidence of this when opening these drives up? The idea of using cheap flash makes me nervous, even if it's mitigated by the controller.
Thanks,
MP
529th - Saturday, April 17, 2010 - link
Did anyone see the 50GB version of the Vertex LE???

What's that all about?!?
BanditWorks - Saturday, April 17, 2010 - link
I'm looking forward to the full review! For me, I am more willing to sacrifice some performance in exchange for more reliability/durability and a cheaper price. Speed is nothing without stability!

Hopefully SandForce will be able to deliver on their promises. But I know it's in good hands with Anand torture-testing them for us. ;)
Keep up the great work! It does not go unnoticed my friend. :)
vol7ron - Sunday, April 18, 2010 - link
I can't wait to see the outcome. Could it be that affordable, high-performance drives are upon us?

vol7ron
'nar - Sunday, April 18, 2010 - link
With the kind of leverage Anand has with the industry, I am very pleased that he has taken on the SSD movement. I feel that I have gotten more answers than I would have had a less "famous" analyst taken this on. Kudos, Anand! It keeps me coming back for more. I was not ready to take the SSD plunge until I read these articles; now I've got the OCZ Vertex LE and am glad I did!

capeconsultant - Sunday, April 18, 2010 - link
I agree with BanditWorks. What good is speed if I feel like I have to back it up every 5 minutes due to poor stability? :)

Ol'Bud - Wednesday, April 21, 2010 - link
#1) I'm MrClarke. I haven't used an HDD in almost two years.

This is the first one I ordered:

OCZ Core Series OCZSSD2-1C128G 2.5" 128GB SATA II Internal Solid State Drive (SSD)
Order #: 86561631
Invoice #: 38140508
Submitted: 8/15/2008 10:23:20 AM
Customer PO: Mr.Clarke's PC
So, stop being afraid of new technological advances and quit over-analyzing the data. Step to the plate, take a swing, and hope you get a home run with the one you choose. All new devices go through a lot of testing and refining of their internal hardware components and their firmware.

Only one SSD came out of a disk check empty, reporting that it was no longer a G.Skill 128GB SSD but a JMicron loader. All it took was a new firmware flash to get it back to full status, ready to store data and be used as a boot drive if I wanted.

I have had more trouble with Western Digital and Seagate drives than with any of the SSDs. And that's tough for me to tell you, because I own Seagate (STX) securities.
I own:
3 G.Skill 128s,
3 OCZ 30s,
5 OCZ 120s,
2 Imation 128s,
1 Patriot Torqx 64,
5 Patriot Torqx 128s,
2 Patriot Torqx 256s,
and now, lately, 2 Crucial C300s, which are 128GB SATA III SSDs, because I bought a motherboard that supports USB 3.0 and SATA III (6.0 Gb/s).
So, here's a thought: since I can now buy SSDs in SATA III and they are backwards compatible with SATA II (right now I'm on a Gigabyte board that is only SATA II, and it is being driven by a Crucial C300 SATA III), why would I bother anymore to buy an SSD that tops out at SATA II?

Let the dollars make sense to you all. The value now is in buying drives for the boards you are going to buy next, and if you do not buy a SATA III motherboard, you made an error in judgment.

When I run the AS SSD benchmark on these drives, the results reveal the performance. No more slow stuff for you if you step out, take the swing, and get a home run by ordering your own SSD.
One of my SSDs was in a machine in Johnstown, Pennsylvania. It was on loan to a friend who is a Hardware Master at HWBOT.org, and he was benching and overclocking and making some very big improvements on his previous scores done with conventional HDDs.

His house burned down on the last Sunday of November 2009, at 06:15 in the morning. He got 4 others out of the house, got his face burned and his hair caught on fire, and was fighting the fire with a garden hose when the fire departments arrived. He's a fireman. All his hardware was burned to a crisp.

He and I spoke on the phone, and he told me he found my OCZ 30GB SSD in the debris. He washed it off with Cascade, dried it out, polished the SATA II signal cable contacts and the power connections, and it booted a motherboard up the street at his cousin's house, where he was now living.

He insisted on replacing it, as the insurance company told him to. He sent an OCZ S2 60GB SSD, so now I own one of those also.

If you want to verify the house fire, Google "house fire Bayush Street Johnstown, Pennsylvania" and look for the link to the Johnstown Tribune-Democrat. The chimney leaked hot gasses into the attic and ignited the structure; total loss. The house is being rebuilt and close to being ready for occupancy.

So the story is about an SSD that still worked after being subjected to more abuse than any HDD could endure and come back from. Don't over-analyze the stuff and get scared of making a move in the right direction.

Prices are coming down and will continue to do so. How do I know? I'm the tightest shopper you will ever meet; it has to be a real good dollar value, with FREE SHIPPING, for me to buy it.

MrClarke
Central Point, Or. U.S.A.
maxhdrm - Wednesday, April 21, 2010 - link
Actually, use of SSDs in the corporate world is few and far between, especially among Fortune 500 companies. Computer purchases are typically long-term contracts containing the minimal specs necessary for the average user/employee. SSDs have yet to be adopted on a large scale by the PC/Mac/business communities and are therefore deemed financially unnecessary.

Very little encryption takes place for the average user, because user group policies are put in place to prevent access to sections of networks deemed unnecessary for the user. TrueCrypt is more of an industry standard than BitLocker and far more robust. This is something that developers in an R&D unit or executives might use, but the average manager/supervisor with any kind of MBA typically has no clue how to utilize this technology. Corporations also reserve the right to remotely scan a user's system if deemed necessary, so again encryption is less likely.

With the advent of wireless/wired network solutions for home/office use like HP and Drobo, why would anyone need to partition any drive anymore? I find it to be an antiquated idea that has outlived its usefulness. Even laptops that have a single drive can access these networks on the fly. Portable flash (thumb) drives that can encrypt data, such as the IronKey, are also a solution. There is also (albeit still debated) cloud computing. That technology alone is a far more financially sound option for corporate expenditures compared to upgrading systems to SSDs and TrueCrypt licenses. In essence, partitioning SSDs, or any drive, is simply an idea best suited to the past. In any event, encryption can be hacked using brute-force attacks anyway.
naviscan2 - Sunday, December 5, 2010 - link
Being quite new to SSD tech pros & cons, I need some answers, please:

* How important is flashing the latest firmware first (even if there isn't any posted yet)?
* Is over-provisioning necessary with the newest SF122x controllers? And if so, is doing it while partitioning with Win7 OK? (Say, skipping 10GB of valuable space?)
* Is secure erasing with HDDErase 3.3 (before setting AHCI) still necessary?
* Knowing how damaging unnecessary writes are to NAND, how safe for the life of my SSDs would it be to, say, put two new SF1200 SSDs in RAID 0?
* Ditto, how damaging would defragging my SSDs be? (I am a defrag maniac...)
My first SSD is (not installed yet): Patriot Inferno 60GB (part #PI60GS25SSDR)
My Win7 rig shows a Windows Performance Index of 6.0 due to 2 WD Raptors in RAID 0, with everything else at 7.6, hence the decision to try a boot SSD...
Any answers and in any order will be highly appreciated!