Well, compared to a relatively simple ARM processor optimised for super low power, even an i3 is quite a power hog. For comparison: the QNAP NAS I have consumes a little over 7 watts max, including the hard drive, according to the specs. But I'm sure this number is too high, as I put a low-power (2.9W idle), 5400 rpm hard drive in it.
Then the vendor does something user-unfriendly after the device goes out of support and all those savings are gone. Or worse, the device fails and the proprietary RAID has effectively ransomwared your data. At the least, make sure there's a decent aftermarket firmware community for it (even if you have no initial plans to use it) and that there's a way to read the data off by connecting the disks to a regular PC.
I've got an old Drobo NAS here at home that is 15 years old. Now that Drobo is out of business and support is no longer available, I need to replace it with something more modern before it fails and takes my data with it.
Rather than buying another commercial NAS, I've been thinking about building a FreeBSD box, adding a bunch of disks, and configuring them as a ZFS pool. Has anyone done this? Is it a good idea?
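For reference, the sort of setup I have in mind, as a minimal sketch: six SATA disks at hypothetical device nodes ada0 through ada5 and a pool named "tank":

    # create a RAID-Z2 pool (survives any two simultaneous disk failures)
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

    # one dataset per share, with cheap compression enabled
    zfs create -o compression=lz4 tank/media
    zfs set sharenfs=on tank/media

    # check pool health
    zpool status tank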
Sorry, but this remark makes zero sense. A NAS is a device you physically own and have access to. Unless the hard drive(s) fail(s) or you lose the password, there is no way you can get locked out of your data even if the vendor ceases to exist. It is not cloud storage!
And if a NAS does fail, you just buy another one and restore the data from a backup.
99% of NAS devices run Linux anyway, so the chance that you can't access the hard drives from a Linux computer is next to zero.
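If you ever do need to pull data off that way, it's usually Linux md RAID, often with LVM layered on top (Synology and QNAP both do something like this, as far as I know). A rough sketch from a Linux rescue box; device and volume names are placeholders:

    # scan the attached disks and assemble any md RAID arrays found on them
    mdadm --assemble --scan

    # if the vendor layered LVM on top, activate the volume groups
    vgscan
    vgchange -ay

    # mount read-only to be safe (md0 is a placeholder)
    mount -o ro /dev/md0 /mnt/nas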
If it wasn't for Intel insisting that ECC DRAM was an "enterprise feature" and making it impossible or unnecessarily costly to implement for consumer CPUs, it would be used in every PC.
Random bit flips caused by cosmic rays are not the stuff of legend; they really do happen. As the size of each individual bit in a memory chip gets smaller and smaller, the chance of a bit flip increases. Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality. A bit flip that corrupts data in memory before ZFS has checksummed it and written it to disk will never be detected, until you discover that the file in question is corrupted. The original authors of ZFS say you should use ECC DRAM. Those guys know what they are talking about.
My TrueNAS has ECC DRAM and I wouldn't even think of building one without it. I also based it on an AMD CPU for this generation of the hardware as I didn't want to pay Intel's premium for something which is an essential feature.
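Note that ZFS only protects data from the moment it is checksummed. A regular scrub verifies every block on disk against its checksum, but nothing can retroactively fix a block that was already corrupted in RAM. A sketch, assuming a pool named "tank":

    # walk every block in the pool and verify it against its checksum
    zpool scrub tank

    # report any checksum errors and the affected files
    zpool status -v tank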
Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality.
To be fair, when things fuck up, it's usually Microsoft's fault. I've been chasing weird and wonderful issues for weeks.
Sorry, but this remark makes zero sense. A NAS is a device you physically own and have access to. Unless the hard drive(s) fail(s) or you lose the password, there is no way you can get locked out of your data even if the vendor ceases to exist. It is not cloud storage!
https://www.bleepingcomputer.com/news/security/critical-rce-bug-in-92-000-d-link-nas-devices-now-exploited-in-attacks/
If it wasn't for Intel insisting that ECC DRAM was an "enterprise feature" and making it impossible or unnecessarily costly to implement for consumer CPUs, it would be used in every PC.
I read somewhere that ECC is a standard feature of DDR5. Has there been any independent verification that that's actually the case for all DDR5?
But who is crazy enough to put a NAS on the internet? That in itself is a big no. And chances are there will be more security issues with your self-built PC-based NAS compared to an off-the-shelf product, which should have a minimal attack surface to begin with. If you need remote access to a NAS, do it via a VPN router / VPN client.
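For example, a single WireGuard tunnel terminated on the router (or on the NAS host itself) keeps everything off the open internet behind one UDP port. A minimal sketch of the server side; the keys, addresses, and the FreeBSD wireguard-tools config path are placeholders/assumptions:

    # /usr/local/etc/wireguard/wg0.conf
    [Interface]
    Address = 10.0.8.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # the one laptop/phone allowed in
    PublicKey = <client-public-key>
    AllowedIPs = 10.0.8.2/32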
I built a NAS a few years ago and opted for a HW RAID controller. I chose an Avago MegaRAID SAS 9361-16i, as I felt it had the most room for future upgrades. Yes, it added about $1K to the total cost, but it was totally worth it in my opinion.
I have had good results with the Areca RAID controllers I have used. I was originally planning on moving all of my bulk storage to a separate TrueNAS system when I built my Ryzen workstation, but instead upgraded my hardware RAID to an Areca 1680IX with 8x 14TB drives in RAID6, plus a couple of 2TB RAID10 volumes.
I like being able to boot the system from the RAID controllers, hence the two 2TB RAID10 volumes. But Windows sometimes gets into a mode after updates that breaks booting from a volume that requires added drivers, so I am looking more favorably on a dumb controller and Storage Spaces, since that makes it easier to replace the drives to increase the storage.
I tested booting and operating from 4 SATA SSDs in hardware RAID10 and it was not any faster. It doubled the storage and added redundancy, but had other disadvantages. The hardware RAID controllers are not fast enough to take good advantage of SSDs.
No surprise there. I have a 4-lane M.2 PCIe SSD in my PC, and it shows a transfer rate of 1 GB/s when reading. I doubt there are any cheap RAID controllers which support that kind of throughput. IMHO, RAID in the sense of having disks in parallel is only useful for increasing throughput from hard drives with spinning platters.
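Back-of-the-envelope numbers, ballpark figures rather than measurements:

    one 7200 rpm disk, sequential:  ~200 MB/s
    10 GbE line rate:               ~1.25 GB/s
    disks to saturate 10 GbE:       1250 / 200 = 6-7 striped drives
    one PCIe 3.0 x4 NVMe SSD:       up to ~3.5 GB/s on its own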
Yeah, hardware RAID controller cards are not a very good idea anymore.
They are just another potential point of failure, and when they do fail you are in for a world of hurt getting things running again. The striping on your drives might not be compatible with some other RAID card you have lying around, so you had better swap it with an identical one. Then you have to set it up correctly so it recognizes the array again; if you do something particularly stupid and have it attempt an array rebuild with the wrong configuration, it might even nuke your data, etc.
And what do you get for using a RAID card? Usually it is performance. However, drives have evolved and CPUs are much more powerful these days, so in a lot of cases it is actually SLOWER to use a hardware RAID card. You can get very good performance from software RAID solutions. Just buy a simple SAS HBA card and throw a ZFS array at those drives on a modern CPU and you will get plenty of performance. No hardware configuration is needed either: the HBA card can be replaced with any other HBA card by just sticking it in and booting the machine up, and as long as the OS can see the drives, it just works (see the sketch below). All of this is performant enough to saturate a 10G connection.
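That portability is concrete, because a ZFS pool stores its own metadata on the disks themselves. Moving it to a different box or HBA is just this, assuming a pool named "tank":

    # on the old box (skip if it already died)
    zpool export tank

    # on the new box, with the disks on whatever HBA the OS can see
    zpool import tank

    # if the pool was never cleanly exported, force the import
    zpool import -f tank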
Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality.
To be fair, when things fuck up, it's usually Microsoft's fault. I've been chasing weird and wonderful issues for weeks.
Yes. The above was actually quite funny. The probability of a "bit flip" due to cosmic rays crashing your machine is thousands of times lower than that of a software bug.