Author Topic: Replacement NAS  (Read 1962 times)

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26958
  • Country: nl
    • NCT Developments
Re: Replacement NAS
« Reply #25 on: April 15, 2024, 07:51:54 am »
Well, compared to a relatively simple ARM processor optimised for super low power, even an i3 is quite a power hog. For comparison: the Qnap NAS I have consumes a little over 7 watts max., including the hard drive, according to the specs. But I'm sure the real number is lower, as I put a low-power (2.9 W idle), 5400 rpm hard drive in it.
Then the vendor does something unfriendly to the user after the device goes out of support and all those savings are gone. Or worse, the device fails and the proprietary RAID effectively has ransomwared your data. At the least, make sure there's a decent aftermarket firmware community for it (even if you have no initial plans to use it) and that there's a way to read off the data by connecting the disks to a regular PC.
Sorry, but this remark makes zero sense. A NAS is a device you physically own and have access to. Unless the hard drive(s) fail(s) or you lose the password, there is no way you can get locked out of your data even if the vendor ceases to exist. It is not cloud storage! And if a NAS does fail, you just buy another one and restore the data from another backup. 99% of NAS devices run Linux anyway, so the chance that you can't access the hard drive from a Linux computer is next to zero. However, the chance that the hard drive fails well before the NAS itself is close to 100% anyway.
« Last Edit: April 15, 2024, 08:49:17 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4957
  • Country: si
Re: Replacement NAS
« Reply #26 on: April 15, 2024, 08:01:38 am »
Unraid user for almost a decade here, and I would recommend it for home use.

Yes, TrueNAS is the superior solution if you are after performance, but for home use it is often overkill.

While Unraid does cost money (it is a lifetime license that includes all future upgrades), it is an easy-to-use solution where you don't need to know what you are doing, while making it very difficult to suffer a catastrophic loss of data. Unlike real RAID setups, it just glues disks together at the filesystem level and uses parity drives to guard against disk failures. You can throw random disks in there and they will join the storage pool: no need for same-size drives, no need to restripe over the new drives, etc. It gives a similar level of protection to RAID, but if a RAID array degrades to the point where parity can no longer recover your data, all of that data has been practically nuked. Here there is still hope: each drive in the array has its own filesystem and is still individually readable in any Linux machine, allowing you to recover whatever data is left. Heck, you can even mix and match filesystems: have some of the array disks be BTRFS, some XFS, some EXT4. It doesn't matter as long as Linux can read it. You can also spin down the disks in the array that are not needed and only spin up the disk the file is on. And you can do VMs and Docker containers and all the other stuff that TrueNAS does.
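For anyone wondering how a single parity drive can rebuild a failed disk without striping, it is essentially just XOR across the data drives. A toy sketch of the principle only (not Unraid's actual code; real arrays of course work block by block across whole disks):

Code: [Select]
# Single-parity idea: the parity drive holds the XOR of all data drives,
# so any ONE failed drive can be rebuilt from the survivors plus parity.
from functools import reduce

data_drives = [b"\x10\x20\x30", b"\x01\x02\x03", b"\xaa\xbb\xcc"]  # toy "drives"

def xor_blocks(blocks):
    # XOR the corresponding bytes of every block together
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

parity = xor_blocks(data_drives)

# Pretend drive 1 died: rebuild it from the remaining drives plus the parity.
survivors = [d for i, d in enumerate(data_drives) if i != 1]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_drives[1]
print("rebuilt drive 1:", rebuilt.hex())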

If Unraid is so good, why aren't more people using it? Well... performance.
Because it is not actual RAID, you don't get the performance boost of RAID. Read performance is equal to the read speed of a single HDD, and writing is even slower because it also has to read before it writes. This is a deal breaker for enterprise use where storage servers get seriously big, but for home use it is usually good enough. Modern HDDs do around 150 MB/s sequential read/write, which is faster than a 1 Gbit Ethernet connection anyway, so the speed doesn't matter (unless you are one of the rare few with a 2.5G or 10G LAN). Write speed can also be improved with an SSD cache. And if you want speed and have 10G networking, you can still spend some extra money on an all-SSD NAS that will beat the pants off any regular HDD NAS.
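To put numbers on that Gigabit bottleneck claim, a quick back-of-the-envelope check (the 150 MB/s figure is the rough one above, not a benchmark):

Code: [Select]
# Does a single modern HDD already saturate the network link?
HDD_SEQ_MBPS = 150  # rough sequential throughput of a modern HDD, in MB/s

for name, link_gbit in [("1GbE", 1), ("2.5GbE", 2.5), ("10GbE", 10)]:
    link_mbps = link_gbit * 1000 / 8  # Gbit/s -> MB/s, ignoring protocol overhead
    limited_by = "network" if HDD_SEQ_MBPS > link_mbps else "disk"
    print(f"{name}: link ~{link_mbps:.0f} MB/s vs disk ~{HDD_SEQ_MBPS} MB/s -> {limited_by}-limited")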
 

Offline M0HZH

  • Regular Contributor
  • *
  • Posts: 206
  • Country: gb
    • QRPblog
Re: Replacement NAS
« Reply #27 on: April 15, 2024, 09:29:35 am »
I prefer a tried, tested & well-supported off-the-shelf NAS for my critical data (both personal and for my small business), so I run a Synology as the main NAS with scheduled backups to another local Synology & to Amazon Glacier. I haven't lost a thing in 10 years, and apart from the initial entry cost of a Synology unit, the running costs are quite low: power-efficient, low failure rate (except for a few known models), good resale value.

For replaceable / less important data, OpenMediaVault running on some old / low-power hardware works, but generally there are tradeoffs: you trade time, effort, reliability and long-term power usage for a lower initial cost, performance and flexibility. It's not necessarily better.

I find the more advanced / custom NAS solutions (including TrueNAS) beyond what a typical home/SMB user needs.
 

Offline woofy

  • Frequent Contributor
  • **
  • Posts: 337
  • Country: gb
    • Woofys Place
Re: Replacement NAS
« Reply #28 on: April 15, 2024, 09:54:51 am »
I've got an old Drobo NAS here at home that is 15 years old. Now that Drobo is out of business and support is no longer available, I need to replace it with something more modern before it fails and takes my data with it.

Rather than buying another commercial NAS, I've been thinking about building a FreeBSD box, adding a bunch of disks, and configuring them as a ZFS pool. Has anyone done this? Is it a good idea?

I think it's worth re-quoting the OP's post. This is a new-build replacement home NAS, and TrueNAS Core is exactly what is being requested. TrueNAS and ZFS are proven technologies, and despite some posts here, I can find no evidence that ZFS is unreliable. I don't count social media opinions as evidence. My own experience of TrueNAS is overwhelmingly positive: I've been running TrueNAS here at home since the pandemic in 2020, almost 4 years now, and even longer at work where we have two machines running TrueNAS. We've had plenty of power failures in that time but never any TrueNAS issues.
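If you do build a ZFS box yourself rather than use TrueNAS (which schedules this for you from its GUI), the one habit worth automating is a regular scrub plus a health check. A minimal sketch of the idea, assuming a pool named "tank" (a placeholder) and the standard zpool tools on the PATH:

Code: [Select]
# Minimal ZFS maintenance sketch: kick off a scrub and flag unhealthy pools.
# "tank" is a placeholder pool name; adjust to your own pool.
import subprocess

POOL = "tank"

def start_scrub(pool: str) -> None:
    subprocess.run(["zpool", "scrub", pool], check=True)

def all_pools_healthy() -> bool:
    # "zpool status -x" only reports pools with problems and prints
    # "all pools are healthy" when everything is fine.
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True, check=True).stdout
    return "all pools are healthy" in out

if __name__ == "__main__":
    start_scrub(POOL)
    print("OK" if all_pools_healthy() else "Pool problem -- check 'zpool status'!")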

As far as RAID is concerned, I wouldn't bother. There's no performance gain of any significance; this is a home NAS. RAID may be great for commercial use, where a hot swap can restore the system without downtime, but for a home server it's pointless. In any case, you still need a separate backup.

And on power: with idle consumption in the 10 W region, does it matter? My own home NAS is (for the last few weeks) an N100 mini PC with a single 2TB SSD for the data. Power consumption is around 10 W. To put that in perspective, that's 100 hours of operation per kWh, or 87.6 kWh/yr. At 7.5p/kWh that's £6.57 a year! I can't even buy a couple of pints of beer for that.
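For anyone wanting to plug in their own wattage and tariff, the sum is simply this (the 10 W and 7.5p/kWh figures are the ones above):

Code: [Select]
# Yearly running cost of an always-on box: watts -> kWh/year -> cost.
POWER_W = 10                 # idle draw quoted above
TARIFF_GBP_PER_KWH = 0.075   # 7.5p/kWh

kwh_per_year = POWER_W * 24 * 365 / 1000           # 87.6 kWh/yr
cost_per_year = kwh_per_year * TARIFF_GBP_PER_KWH  # ~£6.57/yr
print(f"{kwh_per_year:.1f} kWh/yr -> £{cost_per_year:.2f} per year")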

Offline unseenninja

  • Contributor
  • Posts: 18
  • Country: se
Re: Replacement NAS
« Reply #29 on: April 15, 2024, 10:14:12 am »
If it wasn't for Intel insisting that ECC DRAM was an "enterprise feature" and making it impossible or unnecessarily costly to implement for consumer CPUs, it would be used in every PC.

A random bit flip caused by a cosmic ray is not the stuff of legends; they really do happen. As the size of each individual bit in a memory chip gets smaller and smaller, the chance that a bit flip might happen increases. Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality. A bit flip which corrupts data in memory before ZFS has checksummed it and written it to disk will never be detected until you discover that the file in question is corrupted. The original authors of ZFS say you should use ECC DRAM. Those guys know what they are talking about.

My TrueNAS has ECC DRAM and I wouldn't even think of building one without it. I also based it on an AMD CPU for this generation of the hardware as I didn't want to pay Intel's premium for something which is an essential feature.
 

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5688
  • Country: au
Re: Replacement NAS
« Reply #30 on: April 15, 2024, 11:28:25 am »
Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality.

To be fair, when things fuck up, it's usually Microsoft's fault. I've been chasing weird and wonderful issues for weeks.
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9029
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Replacement NAS
« Reply #31 on: April 15, 2024, 12:27:22 pm »
Sorry, but this remark makes zero sense. A NAS is a device you physically own and have access to. Unless the hard drive(s) fail(s) or you lose the password, there is no way you can get locked out of your data even if the vendor ceases to exist. It is not cloud storage!
https://www.bleepingcomputer.com/news/security/critical-rce-bug-in-92-000-d-link-nas-devices-now-exploited-in-attacks/
With a standard PC running an open source server distro, all you have to do is update it. With a lot of proprietary ARM systems, you're stuck hoping the vendor continues support.
Quote
And if a NAS does fail, you just buy another one and restore the data from another backup.
If you're trying to get back data that has changed since the last run of the next-level backup, having to buy another NAS from the same vendor is pretty much the definition of ransomware.
Quote
99% of NAS devices run Linux anyway, so the chance that you can't access the hard drive from a Linux computer is next to zero.
Unless it uses some proprietary RAID to support "advanced features". Hence the reason to do some research and make sure a tool exists to read the array off with a standard PC.
If it wasn't for Intel insisting that ECC DRAM was an "enterprise feature" and making it impossible or unnecessarily costly to implement for consumer CPUs, it would be used in every PC.

A random bit flip caused by a cosmic ray is not the stuff of legends; they really do happen. As the size of each individual bit in a memory chip gets smaller and smaller, the chance that a bit flip might happen increases. Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality. A bit flip which corrupts data in memory before ZFS has checksummed it and written it to disk will never be detected until you discover that the file in question is corrupted. The original authors of ZFS say you should use ECC DRAM. Those guys know what they are talking about.

My TrueNAS has ECC DRAM and I wouldn't even think of building one without it. I also based it on an AMD CPU for this generation of the hardware as I didn't want to pay Intel's premium for something which is an essential feature.

I read somewhere that ECC is a standard feature of DDR5; has there been any independent verification that it's actually the case for all DDR5?
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26958
  • Country: nl
    • NCT Developments
Re: Replacement NAS
« Reply #32 on: April 15, 2024, 12:53:56 pm »
Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality.

To be fair, when things fuck up, it's usually Microsoft's fault. I've been chasing weird and wonderful issues for weeks.
I disagree. I have quite a bit of background in supplying reliable PCs (and making PCs reliable) and in my experience most of the problems in PCs are due to crappy hardware and / or drivers. Windows will run well for prolonged periods of time on good quality hardware.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26958
  • Country: nl
    • NCT Developments
Re: Replacement NAS
« Reply #33 on: April 15, 2024, 01:02:35 pm »
Sorry, but this remark makes zero sense. A NAS is a device you physically own and have access to. Unless the hard drive(s) fail(s) or you lose the password, there is no way you can get locked out of your data even if the vendor ceases to exist. It is not cloud storage!
https://www.bleepingcomputer.com/news/security/critical-rce-bug-in-92-000-d-link-nas-devices-now-exploited-in-attacks/
But who is crazy enough to put a NAS on the internet? I mean, that in itself is a big no. And chances are there will be more security issues with your self-built PC-based NAS compared to an off-the-shelf product, which should have a minimal attack surface to begin with. IF you need remote access to a NAS, do it via a VPN router / VPN client.
« Last Edit: April 15, 2024, 01:07:10 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16630
  • Country: us
  • DavidH
Re: Replacement NAS
« Reply #34 on: April 15, 2024, 02:31:53 pm »
If it wasn't for Intel insisting that ECC DRAM was an "enterprise feature" and making it impossible or unnecessarily costly to implement for consumer CPUs, it would be used in every PC.

The real cost is in system validation with the BIOS and operating system.  At least AMD allows it, even if it is unsupported in most cases.  Unfortunately, for whatever reason, AMD disables ECC on their CPUs that have built-in graphics, at least up until recently, except for the Pro versions which are not generally available.  When I built my little server, I could have bought a Pro CPU from the Chinese grey market, but the price premium was about the same as a cheap graphics card for a server that normally has no monitor attached, so I went with the graphics card instead.

Quote
As the size of each individual bit in a memory chip gets smaller and smaller, the chance that a bit flip might happen increases.

Whether a bit is affected depends on the density of the charge rather than the amount.  An ionizing radiation strike distributes charge across a large volume, so if the bits are physically smaller, they pick up less charge.  DRAM designs have improved density by storing equal or slightly less charge in smaller volumes, so the charge density goes up for each bit and it becomes less susceptible.  In practice the result has been that radiation susceptibility leveled off several DRAM generations ago for a given amount of RAM, but of course system memory requirements still increased, so systems do become more vulnerable, just not nearly as much as originally expected.

Quote
My TrueNAS has ECC DRAM and I wouldn't even think of building one without it. I also based it on an AMD CPU for this generation of the hardware as I didn't want to pay Intel's premium for something which is an essential feature.

The last Intel system I built for myself with ECC was a Pentium 4, which I still have.  Everything since has been AMD because of better ECC support.  I tried figuring out what I needed to build an Intel ECC system a couple years ago when I built my Ryzen workstation, and it was too complicated and questionable, and the Intel system would have doubled the cost of the motherboard.  High AMD motherboard prices became reasonable compared to even higher Intel motherboard prices.

Quote
I read somewhere that ECC is a standard feature of DDR5; has there been any independent verification that it's actually the case for all DDR5?

It is, and it is not.  All DDR5 uses ECC internally to provide a limited amount of protection, but errors are only corrected when data is read out, and no scrubbing takes place.  This has to be the case because scrubbing every time that a row is opened would cost too much power.  How often rows can be opened is already limited by power concerns.

Normal DDR5 implements two 32-bit memory channels per DIMM, but ECC DDR5 implements two 40-bit memory channels per DIMM, which has nothing to do with the internal ECC protection.  I assume this means the chips will be 8 bits wide, so one channel takes either 4 or 5 chips, and a single-rank DIMM will use 8 or 10 chips.
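To make those widths concrete, here is the chip-count arithmetic for x8 parts (the assumption in the paragraph above; x4-organised DIMMs come out differently):

Code: [Select]
# DDR5 DIMM geometry with x8 DRAM chips: two subchannels per DIMM,
# 32 data bits each, plus 8 extra ECC bits per subchannel on ECC DIMMs.
CHIP_WIDTH = 8   # x8 parts assumed
SUBCHANNELS = 2  # DDR5 splits each DIMM into two subchannels

for kind, bits in [("non-ECC", 32), ("ECC", 40)]:
    chips_per_subchannel = bits // CHIP_WIDTH
    chips_per_rank = chips_per_subchannel * SUBCHANNELS
    extra = bits / 32 - 1
    print(f"{kind}: {chips_per_subchannel} chips/subchannel, "
          f"{chips_per_rank} chips per single-rank DIMM, {extra:.0%} extra bits")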

But who is crazy enough to put a NAS on the internet? I mean, that in itself is a big no. And chances are there will be more security issues with your self-built PC-based NAS compared to an off-the-shelf product, which should have a minimal attack surface to begin with. IF you need remote access to a NAS, do it via a VPN router / VPN client.

Some people are dumb, inexperienced, or desperate enough to expose the Remote Desktop Protocol or SMB port so that they can reach their system remotely.  A VPN is definitely the way to go, and is what I have always done in the past.
« Last Edit: April 15, 2024, 02:35:08 pm by David Hess »
 

Offline luudee

  • Frequent Contributor
  • **
  • Posts: 274
  • Country: th
Re: Replacement NAS
« Reply #35 on: April 15, 2024, 02:36:35 pm »
I built a NAS a few years ago and opted for a HW RAID controller. I chose an Avago MegaRAID SAS 9361-16i, as I felt it had the most room for future upgrades. Yes, it added about $1K to the total cost, but it was totally worth it in my opinion.

I also installed an Intel 10G X550T Ethernet card.

I changed all fans to Noctua high-performance fans.

Running Ubuntu; not very optimized, but it does its job quite well.

Attached are some pics of my monster!

Cheers,
rudi

 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16630
  • Country: us
  • DavidH
Re: Replacement NAS
« Reply #36 on: April 15, 2024, 02:58:49 pm »
I built a NAS a few years ago and opted for a HW RAID controller. I chose an Avago MegaRAID SAS 9361-16i, as I felt it had the most room for future upgrades. Yes, it added about $1K to the total cost, but it was totally worth it in my opinion.

I have had good results with the Areca RAID controllers that I have used.  I was originally planning on moving all of my bulk storage to a separate TrueNAS system when I built my Ryzen workstation, but instead upgraded my hardware RAID to an Areca 1680IX with 8x14TB drives in RAID6, plus a couple of 2TB RAID10 volumes.

I like being able to boot the system from the RAID controllers, hence the two 2TB RAID10 volumes, but Windows sometimes gets into a mode after updates that breaks booting from a volume that requires added drivers, so I am looking more favorably on a dumb controller and Storage Spaces because it is easier to replace the drives to increase the storage.

I tested booting and operating from 4 SATA SSDs in hardware RAID10 and it was not any faster.  It doubled the storage and added redundancy, but had other disadvantages.  The hardware RAID controllers are not fast enough to take good advantage of SSDs.
 

Offline luudee

  • Frequent Contributor
  • **
  • Posts: 274
  • Country: th
Re: Replacement NAS
« Reply #37 on: April 15, 2024, 03:03:43 pm »
I built a NAS a few years ago and opted for a HW RAID controller. I chose an Avago MegaRAID SAS 9361-16i, as I felt it had the most room for future upgrades. Yes, it added about $1K to the total cost, but it was totally worth it in my opinion.

I have had good results with the Areca RAID controllers that I have used.  I was originally planning on moving all of my bulk storage to a separate TrueNAS system when I built my Ryzen workstation, but instead upgraded my hardware RAID to an Areca 1680IX with 8x14TB drives in RAID6, plus a couple of 2TB RAID10 volumes.

I like being able to boot the system from the RAID controllers, hence the two 2TB RAID10 volumes, but Windows sometimes gets into a mode after updates that breaks booting from a volume that requires added drivers, so I am looking more favorably on a dumb controller and Storage Spaces because it is easier to replace the drives to increase the storage.

I tested booting and operating from 4 SATA SSDs in hardware RAID10 and it was not any faster.  It doubled the storage and added redundancy, but had other disadvantages.  The hardware RAID controllers are not fast enough to take good advantage of SSDs.

Hi David,

Yeah, for that exact reason Windows is VERBOTEN in my office!  >:D

Cheers,
rudi
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26958
  • Country: nl
    • NCT Developments
Re: Replacement NAS
« Reply #38 on: April 15, 2024, 03:43:53 pm »
I tested booting and operating from 4 SATA SSDs in hardware RAID10 and it was not any faster.  It doubled the storage and added redundancy, but had other disadvantages.  The hardware RAID controllers are not fast enough to take good advantage of SSDs.
No surprise there  ;D . I have a 4 lane M.2 PCIe SSD in my PC. It shows a transfer rate of 1GB/s when reading. I doubt there are any cheap RAID controllers which support that kind of throughput. IMHO RAID as in having disks in parallel is only useful to increase throughput from hard drives (with spinning disks).
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4957
  • Country: si
Re: Replacement NAS
« Reply #39 on: April 15, 2024, 04:12:27 pm »
Yeah hardware RAID controller cards are not a very good idea anymore.

They are just another potential point of failure, and when they do fail you are in for a world of hurt getting things running again. The striping on your drives might simply not be compatible with some other RAID card you have lying around, so you had better swap it with an identical one. Then you have to set it up correctly so it recognizes the array again; if you do something particularly stupid and have it attempt an array rebuild with the wrong configuration, it might even nuke your data, etc.

And what do you get for using a RAID card? Usually it is performance. However, these days drives have evolved and CPUs are much more powerful, so in a lot of cases it is actually SLOWER to use a hardware RAID card. You can get very good performance from software RAID solutions these days: just buy a simple SAS HBA card and throw a ZFS array at those drives on a modern CPU and you will get plenty of performance. No hardware configuration is needed either; the HBA card can be replaced with any other HBA card by just sticking it in and booting the machine up, and as long as the OS can find the drives it just works. All of this is performant enough to saturate a 10G connection.

If you are going for speed, then go for NVMe SSDs: you can get 5000 MB/s from a single drive, so no RAID is even needed to go fast. And if you do have a crazy 100G home LAN, you can still RAID multiple drives together and actually saturate such a connection. If you can afford 100G networking, you can afford the SSDs and a server capable of pushing those bytes around fast enough.
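Same back-of-the-envelope maths as for the spinning disks, this time for fast NVMe drives against the link speed (assuming striping scales linearly, which is optimistic but fine for a rough estimate):

Code: [Select]
# How many ~5000 MB/s NVMe SSDs does it take to saturate a network link?
import math

NVME_MBPS = 5000  # sequential throughput of a single fast NVMe SSD, MB/s

for link_gbit in (10, 25, 100):
    link_mbps = link_gbit * 1000 / 8           # Gbit/s -> MB/s
    drives = math.ceil(link_mbps / NVME_MBPS)  # assumes linear striping
    print(f"{link_gbit}G link ~{link_mbps:.0f} MB/s -> {drives} drive(s) needed")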
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16630
  • Country: us
  • DavidH
Re: Replacement NAS
« Reply #40 on: April 15, 2024, 05:37:02 pm »
I tested booting and operating from 4 SATA SSDs in hardware RAID10 and it was not any faster.  It doubled the storage and added redundancy, but had other disadvantages.  The hardware RAID controllers are not fast enough to take good advantage of SSDs.

No surprise there  ;D . I have a 4 lane M.2 PCIe SSD in my PC. It shows a transfer rate of 1GB/s when reading. I doubt there are any cheap RAID controllers which support that kind of throughput. IMHO RAID as in having disks in parallel is only useful to increase throughput from hard drives (with spinning disks).

I should have said that my 20 year old hardware RAID controllers, which were fast for the time, cannot take good advantage of the speed of SSDs.  More recent hardware RAID controllers can definitely take advantage of SATA/SAS SSD speeds.  NVMe SSDs are another thing entirely.

Yeah hardware RAID controller cards are not a very good idea anymore.

They still have their place, like if you want to boot from a redundant volume.

Quote
They are just another potential point of failure, and when they do fail you are in for a world of hurt getting things running again. The striping on your drives might simply not be compatible with some other RAID card you have lying around, so you had better swap it with an identical one. Then you have to set it up correctly so it recognizes the array again; if you do something particularly stupid and have it attempt an array rebuild with the wrong configuration, it might even nuke your data, etc.

I have not had any trouble moving my RAID sets between my different Areca cards, but that is one reason I like them.  I have been picking them up on Ebay for cheap, and refurbishing them for my own use.

Quote
And what do you get for using a RAID card? Usually it is performance. However, these days drives have evolved and CPUs are much more powerful, so in a lot of cases it is actually SLOWER to use a hardware RAID card. You can get very good performance from software RAID solutions these days: just buy a simple SAS HBA card and throw a ZFS array at those drives on a modern CPU and you will get plenty of performance. No hardware configuration is needed either; the HBA card can be replaced with any other HBA card by just sticking it in and booting the machine up, and as long as the OS can find the drives it just works. All of this is performant enough to saturate a 10G connection.

That is what I thought, and why I ran performance tests using my old workstation.  TrueNAS was faster than Windows Storage Spaces by a little bit, but my old Areca hardware RAID controllers were faster than TrueNAS.  I only ended up using Storage Spaces because of the Samba problem that I mentioned earlier, and because Storage Spaces is more flexible with swapping and upgrading drives.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14506
  • Country: fr
Re: Replacement NAS
« Reply #41 on: April 15, 2024, 06:42:06 pm »
Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality.

To be fair, when things fuck up, it's usually Microsoft's fault. I've been chasing weird and wonderful issues for weeks.

Yes. The above was actually quite funny. The probability of a "bit flip" due to cosmic rays crashing your machine is thousands of times lower than that of a software bug.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26958
  • Country: nl
    • NCT Developments
Re: Replacement NAS
« Reply #42 on: April 15, 2024, 07:56:29 pm »
Most people experience bit flips as random blue screens of death and put it down to micros~1's software quality.

To be fair, when things fuck up, it's usually Microsoft's fault. I've been chasing weird and wonderful issues for weeks.

Yes. The above was actually quite funny. The probability of a "bit flip" due to cosmic rays crashing your machine is thousands of times lower than that of a software bug.
You can make fun of cosmic rays, and it could be classified as far-fetched, but radiation isn't the only possible problem. A poor power supply / power distribution design or slightly flaky memory will cause mysterious problems as well. When I first got my previous PC, it would trip up every now and then when doing a long-winded (30 minute) compilation run. Sometimes it would compile OK, sometimes not. In the end I let memtest run with the most extensive tests for a good part of a day until it did find a memory failure. After some more runs, narrowing down the test and memory area, I managed to pinpoint it to a faulty memory module. After exchanging the memory module for a good one, the compilation process succeeded every time. Needless to say, my current PC has ECC memory.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

