EEVblog Electronics Community Forum
General => General Technical Chat => Topic started by: legacy on May 02, 2019, 03:57:41 pm
-
Bought it only recently, never used it before, but I know that back in 2004 there were serious problems with its DMA.
I am now on Linux kernel v4.16 and v5.1-rc7: there are still serious problems with the DMA.
I am on experimental machines (PowerPC and HPPA); I have never tested on x86, but I can see the kernel driver is full of comments and warnings about resetting the DMA at every step.
On non-x86 machines the big problem is that you cannot use PCI cards that need to be initialized by the BIOS, so forget any card that carries x86 code in its ROM.
Basically, forget every professional HW-RAID controller.
The HPT374 seems to be good in this respect because it doesn't need any x86 BIOS, so it can work as "software RAID" via the Linux kernel; RAID0 and RAID1 are well supported. But you cannot trust it, because if the DMA fails your data end up corrupted, which is exactly what I am observing in my test environment.
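To give an idea of the kind of check involved, a minimal Python sketch along these lines is enough to spot silent corruption on an array (the mount point and sizes are just placeholders, not a description of my actual setup):
[code]
#!/usr/bin/env python3
"""Toy silent-corruption check: write known random blocks to a file on the
array, then re-read and compare SHA-256 digests. Mount point and sizes are
placeholders only."""
import hashlib
import os

MOUNT_POINT = "/mnt/raid-test"      # placeholder: wherever the md array is mounted
TEST_FILE = os.path.join(MOUNT_POINT, "dma_check.bin")
BLOCK_SIZE = 1 << 20                # 1 MiB per block
BLOCK_COUNT = 256                   # 256 MiB of test data

def write_blocks():
    """Write random blocks and remember a digest of each one."""
    digests = []
    with open(TEST_FILE, "wb") as f:
        for _ in range(BLOCK_COUNT):
            block = os.urandom(BLOCK_SIZE)
            digests.append(hashlib.sha256(block).digest())
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # push the data through the page cache
    return digests

def verify_blocks(digests):
    """Re-read the file and return the indices of blocks whose digest changed."""
    bad = []
    with open(TEST_FILE, "rb") as f:
        for i, expected in enumerate(digests):
            if hashlib.sha256(f.read(BLOCK_SIZE)).digest() != expected:
                bad.append(i)
    return bad

if __name__ == "__main__":
    expected = write_blocks()
    # Drop the page cache here (echo 3 > /proc/sys/vm/drop_caches, as root)
    # so the verify pass actually goes through the controller's DMA path.
    corrupted = verify_blocks(expected)
    print("corrupted blocks:", corrupted or "none")
[/code]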
-
What is your experience? Anyone playing with it? :popcorn:
-
Never had personal experience with HighPoint controllers, but I did use a 3ware RAID card back in the "good old days" of having a dual-CPU (two Xeons) workstation. Before the card I had been using Linux software RAID, until it failed hard with all the data gone.
The 3ware card was used in stripe mode to improve read/write speeds (no SSDs back then, mind you). The improvement wasn't impressive by any means. The card was from the "affordable" range of 3ware's line-up, hence the somewhat disappointing performance. And it also failed once. Data was also gone.
Conclusions? Do backups, regardless of whether you have RAID or not. Software RAID is quite a fragile thing, prone to failure. A proper hardware RAID controller is way better, but still susceptible to epic fails (even if the controller has an on-board backup battery). My brother manages a company's servers ("you can't afford it" grade servers) and, regardless of all the measures, they still have occasional failures. So IMO RAID is a tricky business - you have to know what you're doing.
-
Bought it only recently, never used it before, but I know that back in 2004 there were serious problems with its DMA.
I am now on Linux kernel v4.16 and v5.1-rc7: there are still serious problems with the DMA.
I am on experimental machines (PowerPC and HPPA); I have never tested on x86, but I can see the kernel driver is full of comments and warnings about resetting the DMA at every step.
On non-x86 machines the big problem is that you cannot use PCI cards that need to be initialized by the BIOS, so forget any card that carries x86 code in its ROM.
Basically, forget every professional HW-RAID controller.
The HPT374 seems to be good in this respect because it doesn't need any x86 BIOS, so it can work as "software RAID" via the Linux kernel; RAID0 and RAID1 are well supported. But you cannot trust it, because if the DMA fails your data end up corrupted, which is exactly what I am observing in my test environment.
-
What is your experience? Anyone playing with it? :popcorn:
I've personally not had a lot of joy with the HighPoint RAID controllers (I've previously used the RocketRAID series). For x86 they were "ok" but not something I would recommend, and on non-x86 ... you are in a world of hurt and misery (my experience is based on SPARC and POWER).
Basically, *ANYTHING* that uses a binary "BLOB" of firmware loaded by the driver is usually a big red flag for a whole bunch of reasons (this applies not only to RAID cards but also to Ethernet NICs that support features like layer 2/3 offloading, intelligent serial cards and so on).
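As a quick sanity check on a running system, something like the rough Python sketch below lists the firmware blobs a given kernel module declares via MODULE_FIRMWARE (the module names are only examples, and drivers that embed their blob some other way won't show up):
[code]
#!/usr/bin/env python3
"""Rough sketch: list the firmware blobs a kernel module declares via
MODULE_FIRMWARE, using `modinfo -F firmware`. Module names are examples only;
drivers that embed their blob some other way will not show up here."""
import subprocess
import sys

def firmware_blobs(module):
    """Return the firmware file names a module declares, or None if not found."""
    try:
        out = subprocess.run(
            ["modinfo", "-F", "firmware", module],
            capture_output=True, text=True, check=True,
        ).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None
    return [line for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for mod in sys.argv[1:] or ["bnx2", "hptiop"]:   # example module names
        blobs = firmware_blobs(mod)
        if blobs is None:
            print(f"{mod}: module not found")
        elif blobs:
            print(f"{mod}: requests {', '.join(blobs)}")
        else:
            print(f"{mod}: no firmware declared")
[/code]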
As someone else has mentioned, the 3WARE cards were typically a lot better (I used to use the 9000 series - e.g. the 9500) and they were ok. Support for OSes such as Linux/BSD/Solaris/etc. was typically fine, but the 3WARE RAID cards typically *HAD* to have the onboard battery backup module fitted, otherwise you couldn't enable WRITE-BACK (vs WRITE-THROUGH) caching, and without that you took a pretty big performance hit on anything even remotely write-intensive if you were using RAID5.
The other gotcha was that you often *HAD* to use the 3WARE-supported disk enclosures, which let the card talk to the enclosure and detect when a disk was inserted or removed, so that hot-swap notification and device stop/start actually worked.
These days my own storage is via ZFS, and any "smart" controllers I have effectively just present all disks as "dumb" devices so that ZFS can manage them (e.g. Compaq RAID controllers, DELL PERC).
-
Conclusions? Do backups, regardless of whether you have RAID or not.
Rarely have truer words been spoken ....
Snapshots, RAID and replication are *NO* substitute for a good, reliable set of backups (complement - yes, replace - no).
Software RAID is quite a fragile thing, prone to failure. A proper hardware RAID controller is way better, but still susceptible to epic fails (even if the controller has an on-board backup battery). My brother manages a company's servers ("you can't afford it" grade servers) and, regardless of all the measures, they still have occasional failures. So IMO RAID is a tricky business - you have to know what you're doing.
Yes and no on this one. Basic software "mirroring" I would probably agree with you on, but *NOT* if we are talking about, say, ZFS.
... that one is a completely different ball game, and I have personal experience of ZFS identifying silent data corruption coming from EMC SANs that their own inbuilt H/W controllers on the SPs were unable to detect (granted, these are old-generation SANs - we're talking the EMC CLARiiON CX400 and CX500 range here, which are now nearly 15 years old in the case of the CX400s).
For complex RAID topologies, I now *prefer* ZFS over specific H/W RAID cards for a bunch of reasons - hardware flexibility being the main one.
... this said by someone whose own business ditched a CLARiiON CX500 in favour of a ZFS storage system with 2 x 24-bay SATA enclosures and a H/W RAID controller on an HP server with a bunch of SSDs, the whole lot configured in a plethora of hybrid storage pools, many years ago.
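For anyone wondering how ZFS manages to catch what the controller misses, the principle is simply end-to-end checksums kept separately from the data and verified on every read. A toy Python illustration of that idea (this is only the principle, in no way how ZFS actually stores things):
[code]
"""Toy illustration of end-to-end checksumming: every block's checksum is kept
apart from the block and verified on every read. This is only the principle,
not how ZFS actually lays anything out on disk."""
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}   # block id -> data (what the "disk" returns)
        self.sums = {}     # block id -> checksum, kept separately "up the tree"

    def write(self, block_id, data):
        self.blocks[block_id] = bytes(data)
        self.sums[block_id] = hashlib.sha256(data).digest()

    def read(self, block_id):
        data = self.blocks[block_id]
        if hashlib.sha256(data).digest() != self.sums[block_id]:
            # ZFS would try another mirror/parity copy here and repair this one.
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

if __name__ == "__main__":
    store = ChecksummedStore()
    store.write("blk0", b"important data")
    # Simulate a bit flip that the disk and controller happily return as good:
    store.blocks["blk0"] = b"importent data"
    try:
        store.read("blk0")
    except IOError as err:
        print("caught:", err)
[/code]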
-
Perhaps it would be possible to execute the BIOS ROM of those controllers with an x86 emulator before loading the driver ;)
Perhaps somebody has even implemented it already.
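At the very least it is easy to find out what kind of code a card's expansion ROM actually contains. A rough Python sketch that reads the ROM through sysfs and decodes the code-type byte of each image's PCI Data Structure (the device address is only an example, and you need root):
[code]
#!/usr/bin/env python3
"""Rough sketch: read a PCI card's expansion ROM through sysfs and report the
code type of each ROM image (x86 BIOS, Open Firmware, EFI, ...). The device
address is an example; run as root."""
import struct
import sys

CODE_TYPES = {0x00: "x86 BIOS", 0x01: "Open Firmware (FCODE)",
              0x02: "HP PA-RISC", 0x03: "EFI"}

def read_rom(bdf):
    rom_path = f"/sys/bus/pci/devices/{bdf}/rom"
    with open(rom_path, "w") as f:
        f.write("1")                 # enable ROM access before reading
    try:
        with open(rom_path, "rb") as f:
            return f.read()
    finally:
        with open(rom_path, "w") as f:
            f.write("0")             # disable it again

def list_images(rom):
    offset = 0
    while offset + 0x1A <= len(rom) and rom[offset:offset + 2] == b"\x55\xAA":
        # Offset 0x18 of the ROM header points to the PCI Data Structure.
        (pcir_off,) = struct.unpack_from("<H", rom, offset + 0x18)
        pcir = offset + pcir_off
        if rom[pcir:pcir + 4] != b"PCIR":
            break
        code_type = rom[pcir + 0x14]
        (img_len_512,) = struct.unpack_from("<H", rom, pcir + 0x10)
        print(f"image at 0x{offset:x}: {CODE_TYPES.get(code_type, 'unknown')}")
        if rom[pcir + 0x15] & 0x80:  # indicator bit 7 set = last image
            break
        offset += img_len_512 * 512

if __name__ == "__main__":
    list_images(read_rom(sys.argv[1]))  # e.g. "0000:02:04.0" (example address)
[/code]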
-
For complex RAID topologies, I now *prefer* ZFS over specific H/W RAID cards for a bunch of reasons - hardware flexibility being the main one.
Never played with ZFS; it looks very promising. Although I do remember playing around with the Solaris OS - had to abandon it though, due to some of the hardware lacking support.
We are so tempted to move to FC (Fibre Channel) adapters made by IBM for their POWER4-5 machines, since those do not have any PC BIOS on the card, but this implies we have to find, buy, and test a PCI-X FC adapter and an FC-to-SAS bridge, which is something like 400 Euro.
400 Euro - that's ouch! :)
As a side note, back in the days of me having the 3ware card I was very much interested in hackintoshing (OS X running on a PC) and managed to find OS X driver source code for the 7000 series cards (named Escalade). The source code was for Tiger (OS X 10.4.x), hence it should run on Power Macs. I can provide the code if you are interested.
-
...experimental on Linux, and more than experimental on Linux running on an experimental platform. In short, forget it ever working out of the box; it will for sure put up a lot of "strong fighting".
Isn't Linux all about "forcing it to work" using the CLI and a lot of cursing? :)
It might be interesting, but consider that we (my team) primarily need to support Linux on HPPA.
I'll attach the source code to this post anyway. Just in case anybody needs it.
-
It had a blob of stuff wrapped into a kernel-module.
Yep, that's the reason why I dropped HighPoint a long time ago. Those binary blobs might protect a company's IP, but they cause a lot of additional hassle and sometimes headaches for the admin when updating Linux kernels.
-
Can someone confirm that the Adaptec AAR-1420SA only ever offered proprietary (binary-only) drivers?