File systems!

Marco:

--- Quote from: SiliconWizard on May 30, 2023, 04:41:50 am ---At least for home or small business use.

--- End quote ---
For a hobby, time is free; for a small business, storage hasn't cost enough in decades to justify anything more than mirroring.

JeremyC:

--- Quote from: Ed.Kloonk on May 30, 2023, 06:46:13 am ---
--- Quote from: JeremyC on May 30, 2023, 05:18:45 am ---From my experience ZFS does not perform at its best on Linux, but does much better on BSD.

--- End quote ---

I'll bite. Why?

--- End quote ---

Back in ~2015 I was researching alternative and redundant storage solutions.
On the same hardware, ZFS on Linux couldn't handle ~10M IOPS, while BSD (FreeNAS) on that same hardware didn't have any problems. Solaris 11 performed best, but it requires a license in production…
We used 45 SAS drives (WD Gold) in RAID 1+0 (each mirror had 4 drives for redundancy) and 8 × Samsung SSDs for the L2ARC.
Solaris 11 was the king, second and acceptable was FreeNAS (BSD), and ZFS on Linux was intermittently choking...
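For reference, a pool along those lines would look roughly like this in zpool terms (a sketch only; device names are invented, and the real pool would repeat the mirror group once per set of four disks):

```shell
# Hypothetical striped-mirror ("RAID 1+0") pool with 4-way mirrors and
# SSDs as L2ARC read cache. Device names are made up for illustration.
zpool create tank \
    mirror da0 da1 da2 da3 \
    mirror da4 da5 da6 da7 \
    cache nvd0 nvd1

# An odd disk out can be attached as a hot spare:
zpool add tank spare da8
```

Each 4-way mirror survives three disk failures within the set, at the cost of 75% of raw capacity.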

magic:

--- Quote from: Marco on May 30, 2023, 01:22:30 pm ---For a hobby, time is free; for a small business, storage hasn't cost enough in decades to justify anything more than mirroring.

--- End quote ---
Maybe for OS boot disks and small servers.

In high-capacity storage arrays the cost of disks is still the deciding factor, and parity is very much used, although it is typically doubly redundant parity these days.

Hardware RAID5/6 controllers with battery backed cache and all that jazz are also still being made, sold and bought.
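In ZFS terms, that doubly redundant parity would be something like raidz2 (again just a sketch with invented device names, not a tuned layout):

```shell
# Hypothetical double-parity vdev: six data disks plus two parity disks,
# so any two drives in the group can fail without data loss -- roughly
# the software equivalent of hardware RAID6. Device names are made up.
zpool create bigarray \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7
```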

paulca:
I just switched roles at work from a project with an HDFS filesystem consisting of several dozen petabytes over 5000 nodes.  Thousands of cores and terabytes of online RAM.

Mostly the disks were "node local" SAS arrays or "rack local" SAS arrays.  Due to it being a BigData/compute cluster, the disks stay with the cores.

That wasn't even the scary part of the architecture.  It was the "near real time" SQL caching layer which got used by end business customers.

Somehow it managed to provide almost-normal SQL query latency (< 1 minute) from that HDFS cluster.

I believe it runs on a sub-cluster with thousands of terabytes of RAM.  Whatever it is, it replaced an IBM Netezza data warehouse on IBM Power servers, so I expect it's a beast.  Direct "Hadoop"-based queries to the same distributed on-disk data would take orders of magnitude longer.

DiTBho:

--- Quote from: JeremyC on June 02, 2023, 04:28:29 am ---On the same hardware, ZFS on Linux couldn't handle ~10M IOPS

--- End quote ---

which one? zfs-kmod or zfs-fuse?  :o :o :o

if it was the FUSE one, well ... the userspace <-> kernelspace round trip on every request adds some overhead
